    # Avoid issues when wine is installed.
    sudo su -c 'echo 0 > /proc/sys/fs/binfmt_misc/status'
And:

    # Capture the entirety of the instructions to obtain the input length.
    readonly INSTRUCT=$(
      join "${PATH_PREFIX_SYSTEM}" "${PATH_PROMPT_SYSTEM}" "${PATH_PREFIX_SYSTEM}"
      join "${PATH_SUFFIX_USER}" "${PATH_PROMPT_USER}" "${PATH_SUFFIX_USER}"
      join "${PATH_SUFFIX_ASSIST}" "/dev/null" "${PATH_SUFFIX_ASSIST}"
    )

    # Quote the expansion so embedded newlines survive word splitting;
    # -c below is sized from the prompt's character count, which is a
    # rough upper bound on its token count.
    (
      echo "${INSTRUCT}"
    ) | ./llamafile \
      -m "${LINK_MODEL}" \
      -e \
      -f /dev/stdin \
      -n 1000 \
      -c "${#INSTRUCT}" \
      --repeat-penalty 1.0 \
      --temp 1.5 \
      --silent-prompt > output.txt
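Note that join above is not coreutils join(1); it reads like a small user-defined helper that emits a prompt file wrapped between two delimiter files. A minimal sketch, assuming that is all it does (the definition below is my guess, not from the original comment):

    # Hypothetical helper matching the call sites above: print the
    # contents of three files (opening wrapper, body, closing wrapper).
    join() {
      cat "$1" "$2" "$3"
    }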
> sudo su -c 'echo 0 > /proc/sys/fs/binfmt_misc/status'

Please don't recommend this. If binfmt_misc is enabled, it's probably for a reason, and disabling it will break things. It would break a .NET/Mono app I have installed, for example; it's definitely not just Wine.
If binfmt_misc is causing problems, the proper solution is to register the executable type; https://github.com/mozilla-ai/llamafile#linux describes the steps.
I made myself a package containing /usr/bin/ape and the following /usr/lib/binfmt.d/ape.conf:
    :APE:M::MZqFpD::/usr/bin/ape:
    :APE-jart:M::jartsr::/usr/bin/ape:
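On a systemd distro, files in /usr/lib/binfmt.d/ are read by systemd-binfmt, so the registration can be applied and verified without a reboot (a sketch; the entry names match the config above):

    # Re-read /usr/lib/binfmt.d/*.conf and register the entries.
    sudo systemctl restart systemd-binfmt.service
    # A registered entry appears under /proc/sys/fs/binfmt_misc/,
    # showing its status, interpreter, offset, and magic.
    cat /proc/sys/fs/binfmt_misc/APE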
The plan in the announcement:

1. Curate a continuously extended inventory of prebuilt llamafiles for models as they are released
2. Create both flexible builds (with dynamic backend loading for CPU and CUDA) and slim, minimalist builds
3. Upstream as much as they can into llama.cpp and partner with the project

It seems people have moved on from Llamafile. I doubt Mozilla AI is going to bring it back.
This announcement didn't even come with a new code commit, just a wish. https://github.com/mozilla-ai/llamafile/commits/main/
The llamafile UX (CLI interface and web server with chat to quickly interact with the model) is great and makes it easy to download and play with a local LLM.

However, I fail to see use cases where I would build a solution on a llamafile. If I want to play with multiple models, I don't need the binary attached to the model data. If I want to play with a model on multiple operating systems, I'm fine downloading the llamafile tool binary for each platform separately from the model data (in fact, on Windows one has to download llamafile.exe separately anyway because of the OS's 4 GB limit on executable files).

So Cosmopolitan is great tech, and the llamafile command (the "UX for a model" part) is great, but I'm not convinced of the value of Cosmopolitan applied here.
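(For illustration, the decoupled workflow is just the platform binary plus separately downloaded weights; the model filename below is only an example:)

    # Generic llamafile binary with external weights passed via -m.
    ./llamafile -m mistral-7b-instruct-v0.2.Q5_K_M.gguf -p 'Say hello.'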
rvz•3mo ago
I don't know if you were informed, but you do realize jart is no longer at Mozilla and now works at Google?
dingnuts•3mo ago
what the fuck is wrong with this website