$ luarocks install ion7-core

ion7-core gives LuaJIT direct, zero-overhead access to llama.cpp.
Model loading, decoding, KV-cache management, sampler chains, custom samplers, threadpool sharing, speculative decoding, training, and GBNF grammar constraints are all driven from Lua via the FFI plus a small `libcommon` bridge.
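A minimal end-to-end sketch of what that looks like from Lua. Note that every name below (`ion7.core`, `load_model`, `sampler_chain`, `generate`) is an illustrative assumption, not ion7-core's documented API:

```lua
-- Hypothetical sketch: module and function names are assumptions,
-- not ion7-core's real API surface.
local core = require("ion7.core")

-- Load a GGUF model; the FFI loader resolves libllama from the bundled libs.
local model = core.load_model("models/llama-3-8b-q4.gguf")

-- Build a sampler chain (e.g. top-k -> top-p -> temperature) and decode.
local sampler = core.sampler_chain{ top_k = 40, top_p = 0.9, temp = 0.8 }
local text = core.generate(model, "Why is the sky blue?", {
  sampler    = sampler,
  max_tokens = 64,
})
print(text)
```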
The rockspec build vendors llama.cpp as a git submodule and compiles it together with `ion7_bridge.so`. Backend selection is driven by the `ION7_BACKEND` environment variable:
- `ION7_BACKEND=cpu` (default: pure CPU, AVX2/NEON)
- `ION7_BACKEND=vulkan` (cross-vendor GPU)
- `ION7_BACKEND=cuda` (NVIDIA; also reads `ION7_CUDA_ARCH`)
- `ION7_BACKEND=rocm` (AMD)
- `ION7_BACKEND=metal` (Apple Silicon)
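For example, a CUDA build can be requested at install time by prefixing the `luarocks` command with the environment variables the rockspec reads (the `86` compute-capability value here is illustrative, not a required setting):

```shell
# Select the backend at install time; the rockspec build reads these variables.
ION7_BACKEND=cuda ION7_CUDA_ARCH=86 luarocks install ion7-core

# Or a cross-vendor GPU build via Vulkan:
ION7_BACKEND=vulkan luarocks install ion7-core
```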
The build leaves `libllama`, `libggml*`, and `ion7_bridge` under `<rocktree>/share/lua/<ver>/ion7/core/_libs/`.
The FFI loader probes that directory automatically, so no `LD_LIBRARY_PATH` or `ION7_LIBLLAMA_PATH` gymnastics are required.
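The probing can be pictured roughly as follows. This is an illustrative reimplementation of the idea under LuaJIT, not the loader's actual code: locate the installed module through the standard package search path, derive the sibling `_libs/` directory, and `ffi.load` the libraries by absolute path.

```lua
local ffi = require("ffi")

-- Find where ion7.core was installed via the standard Lua search path,
-- then derive the sibling _libs/ directory next to the module file.
local path   = assert(package.searchpath("ion7.core", package.path))
local libdir = path:gsub("[^/\\]+$", "") .. "_libs/"

-- Pick the platform-specific shared-library suffix.
local ext = ffi.os == "OSX"     and ".dylib"
         or ffi.os == "Windows" and ".dll"
         or ".so"

-- Load libllama by absolute path: no LD_LIBRARY_PATH needed.
local llama = ffi.load(libdir .. "libllama" .. ext)
```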