ion7-core

Silicon-level LuaJIT FFI bindings for llama.cpp

$ luarocks install ion7-core

ion7-core gives LuaJIT direct, zero-overhead access to llama.cpp.

Model loading, decode, the KV cache, sampler chains, custom samplers, threadpool sharing, speculative decoding, training, and GBNF grammar constraints are all driven from Lua via FFI plus a small libcommon bridge.
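What "direct FFI access" looks like in the simplest case can be sketched with two calls from llama.cpp's C API. The declarations below are hand-copied for illustration; real code should mirror the vendored `llama.h`, since signatures vary across llama.cpp versions, and the block only runs where `libllama` is actually installed:

```lua
local ffi = require("ffi")

-- Minimal declarations lifted from llama.h (check the vendored header;
-- the C API changes between llama.cpp releases).
ffi.cdef[[
void llama_backend_init(void);
void llama_backend_free(void);
]]

-- Load the shared library and drive it directly from Lua.
local llama = ffi.load("llama")
llama.llama_backend_init()
llama.llama_backend_free()
```

Everything heavier (decode loops, sampler chains) follows the same pattern: `ffi.cdef` the declarations, `ffi.load` the library, call C functions as ordinary Lua functions.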

The rockspec build vendors llama.cpp as a submodule and compiles it together with `ion7_bridge.so`. Backend selection is driven by the `ION7_BACKEND` environment variable:

ION7_BACKEND=cpu (default - pure CPU, AVX2 / NEON)
ION7_BACKEND=vulkan (cross-vendor GPU)
ION7_BACKEND=cuda (NVIDIA - also reads ION7_CUDA_ARCH)
ION7_BACKEND=rocm (AMD)
ION7_BACKEND=metal (Apple Silicon)
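The selection reduces to a defaulted environment lookup; a minimal shell sketch of the resolution step (illustrative, not the actual rockspec code):

```shell
# Sketch: resolve the backend the way the build does conceptually.
# ION7_BACKEND falls back to cpu when unset or empty.
BACKEND="${ION7_BACKEND:-cpu}"
echo "backend: ${BACKEND}"
```

In practice the variable is set on the install command line, e.g. `ION7_BACKEND=vulkan luarocks install ion7-core`.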

The build leaves `libllama`, `libggml*`, and `ion7_bridge` under `<rocktree>/share/lua/<ver>/ion7/core/_libs/`.

The FFI loader probes that directory automatically; no LD_LIBRARY_PATH or ION7_LIBLLAMA_PATH gymnastics required.
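The probing can be pictured with stock LuaJIT primitives alone. The sketch below uses `package.searchpath` and `ffi.load` by absolute path; the module filename and `.so` suffix are assumptions (the layout and loader source may differ, and macOS would use `.dylib`):

```lua
local ffi = require("ffi")

-- Find where ion7.core was installed, e.g.
-- <rocktree>/share/lua/5.1/ion7/core.lua (exact layout is an assumption).
local mod_path = assert(package.searchpath("ion7.core", package.path))

-- Derive the bundled library directory next to the module.
local libdir = mod_path:gsub("%.lua$", "") .. "/_libs/"

-- Loading by absolute path means LD_LIBRARY_PATH never enters the picture.
local llama = ffi.load(libdir .. "libllama.so")
```

Loading by absolute path is what makes the rock relocatable: the libraries travel with the installed module rather than with the system linker configuration.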

Versions

0.1.0beta4-1 (15 hours ago, 2 downloads)

Dependencies

lua >= 5.1
lua-cjson >= 2.1

Dependency for

ion7-grammar, ion7-llm
