Llamafile Returns

https://blog.mozilla.ai/llamafile-returns/
137•aittalam•3mo ago

Comments

jart•3mo ago
Really exciting to see Mozilla AI starting up and I can't wait to see where the next generation takes the project!
bsenftner•3mo ago
People are so uninformed, they don't know you are Mozilla AI's star employee.
rvz•3mo ago
s/are/was

I don't know if you were informed, but jart is no longer at Mozilla and is now at Google.

setheron•3mo ago
jart, you are back at Google?
jart•3mo ago
Yeah Google liked llamafile so much that they asked me to help them improve the LLM on their website too.
setheron•3mo ago
I agree that the work they put out is A+. Whenever I see content produced by jart, it's always amazing.
dingnuts•3mo ago
sorry, I'm out of the loop. is this thread glazing a celebrity member commenting on an announcement from his own team to create hype?

what the fuck is wrong with this website

dboon•2mo ago
it sounds like you are
behindsight•3mo ago
Great stuff. I'm working on something around agentic tooling and hope to collaborate with Mozilla AI in the future, as they share the same values I have.
throawayonthe•3mo ago
go get that investor money i guess?
apitman•3mo ago
This is great news. Given the proliferation of solid local models, it would be cool if llamafile had a way to build your own custom versions with the model of your choice.
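(For what it's worth, earlier llamafile releases already described a workflow along these lines: take the generic engine binary and embed your own GGUF weights into it with the project's zipalign tool. A rough sketch from memory of the README; treat the filenames, tool path, and flags as assumptions to verify against the current repo:)

```shell
# Sketch, not verified against the current repo: build a custom llamafile
# by appending your own weights to the generic engine binary.
cp llamafile mymodel.llamafile
# zipalign ships in the llamafile source tree; -j0 stores the weights
# uncompressed so they can be memory-mapped directly at run time.
zipalign -j0 mymodel.llamafile mymodel.Q4_0.gguf
./mymodel.llamafile
```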
swyx•3mo ago
justine tunney gave a great intro to Llamafile at AIE last year if it helps anyone: https://www.youtube.com/watch?v=-mRi-B3t6fA
synergy20•3mo ago
How is this different from Ollama? For me, the more (and the more open) the merrier.
ricardobeat•3mo ago
Ollama is a model manager and a pretty interface for llama.cpp; llamafile is a cross-platform packaging tool for distributing and running individual models, also based on llama.cpp.
thangalin•3mo ago
Tips:

    # Avoid issues when wine is installed.
    sudo su -c 'echo 0 > /proc/sys/fs/binfmt_misc/status'
And:

    # Capture the entirety of the instructions to obtain the input length.
    readonly INSTRUCT=$(
      join "${PATH_PREFIX_SYSTEM}" "${PATH_PROMPT_SYSTEM}" "${PATH_PREFIX_SYSTEM}"
      join "${PATH_SUFFIX_USER}" "${PATH_PROMPT_USER}" "${PATH_SUFFIX_USER}"
      join "${PATH_SUFFIX_ASSIST}" "/dev/null" "${PATH_SUFFIX_ASSIST}"
    )

    # Quote the expansion so the newlines in the prompt are preserved.
    echo "${INSTRUCT}" | ./llamafile \
      -m "${LINK_MODEL}" \
      -e \
      -f /dev/stdin \
      -n 1000 \
      -c "${#INSTRUCT}" \
      --repeat-penalty 1.0 \
      --temp 1.5 \
      --silent-prompt > output.txt
chrismorgan•3mo ago
> # Avoid issues when wine is installed.

> sudo su -c 'echo 0 > /proc/sys/fs/binfmt_misc/status'

Please don’t recommend this. If binfmt_misc is enabled, it’s probably for a reason, and disabling it will break things. I have a .NET/Mono app installed that it would break, for example—it’s definitely not just Wine.

If binfmt_misc is causing problems, the proper solution is to register the executable type. https://github.com/mozilla-ai/llamafile#linux describes steps.

I made myself a package containing /usr/bin/ape and the following /usr/lib/binfmt.d/ape.conf:

  :APE:M::MZqFpD::/usr/bin/ape:
  :APE-jart:M::jartsr::/usr/bin/ape:
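(For context on those two lines: binfmt_misc entries follow the format `:name:type:offset:magic:mask:interpreter:flags`; type `M` matches the magic bytes — `MZqFpD` and `jartsr` are the two signatures APE binaries start with — and hands the file to `/usr/bin/ape` as interpreter. If you don't want a package or systemd-binfmt, the same registrations can be made for the current boot only, assuming `/usr/bin/ape` is installed and you have root:)

```shell
# One-shot registration; cleared on reboot. The binfmt.d file shown in the
# comment above is the persistent equivalent.
echo ':APE:M::MZqFpD::/usr/bin/ape:' | sudo tee /proc/sys/fs/binfmt_misc/register
echo ':APE-jart:M::jartsr::/usr/bin/ape:' | sudo tee /proc/sys/fs/binfmt_misc/register
```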
michaelgiba•3mo ago
I’m glad to see llamafile being resurrected. A few things I hope for:

1. Curate a continuously extended inventory of prebuilt llamafiles for models as they are released.

2. Create both flexible builds (with dynamic backend loading for CPU and CUDA) and slim minimalist builds.

3. Upstream as much as they can into llama.cpp and partner with the project.

michaelgiba•3mo ago
Crazier ideas would be:

- extend the concept to also have some sort of “agent mode” where the llamafiles can launch with their own minimal file system or isolated context

- detailed profiling of main supported models to ensure deterministic outputs
njbrake•3mo ago
Love the idea!
FragenAntworten•3mo ago
The Discord link is broken, in that it links to the server directly rather than to an invitation to join the server, which prevents new members from joining.
njbrake•3mo ago
Fixed, thank you!
benatkin•3mo ago
> As the local and open LLM ecosystem has evolved over the years, time has come for llamafile to evolve too. It needs refactoring and upgrades to incorporate newer features available in llama.cpp and develop a refined understanding of the most valuable features for its users.

It seems people have moved on from Llamafile. I doubt Mozilla AI is going to bring it back.

This announcement didn't even come with a new code commit, just a wish. https://github.com/mozilla-ai/llamafile/commits/main/

dolmen•3mo ago
Cosmocc and Cosmopolitan are remarkable technical achievements and llamafile made me discover them.

The llamafile UX (CLI interface and web server with chat to quickly interact with the model) is great and makes it easy to download and play with a local LLM.

However, I fail to see use cases where I would build a solution on a llamafile. If I want to play with multiple models, I don't need the binary attached to the model data. If I want to play with a model on multiple operating systems, I'm fine downloading the llamafile tool binary for each platform separately from the model data (in fact, on Windows one has to download llamafile.exe separately anyway because of an OS limit on executable file size).

So Cosmopolitan is great tech, the llamafile command (the "UX for a model" part) is great, but I'm not convinced by the value of Cosmopolitan applied here.

romperstomper•3mo ago
While this is very cool and llamafiles are quite universal, there is still a nuance on Windows: a Windows executable is limited to 4 GB. Since LLM models tend to be quite large, this limit is reached pretty fast. For such cases the llamafile.exe tool will be required (which is also universal and runs everywhere). And at that point it could just as well be the llama.cpp tools, which are released for all platforms, plus the LLM model file itself.
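(The arithmetic behind that ceiling is worth making explicit: the 4 GiB cap applies to the whole .exe, engine plus embedded weights, so even a 4-bit-quantized 7B model — roughly 4 GiB of weights on its own — cannot ship as a single Windows executable. A small sketch of the check; the ~30 MiB engine size is an assumption, not a measured figure:)

```shell
# Does engine + embedded weights fit under the Windows 4 GiB executable cap?
pe_limit=$((4 * 1024 * 1024 * 1024))

fits_in_single_exe() {
  # $1 = engine size in bytes, $2 = weights size in bytes
  [ $(( $1 + $2 )) -le "$pe_limit" ]
}

engine=$((30 * 1024 * 1024))        # assumed ~30 MiB engine binary
small=$((2 * 1024 * 1024 * 1024))   # 2 GiB of weights: fits
big=$((4100 * 1024 * 1024))         # ~4 GiB of weights: does not fit

fits_in_single_exe "$engine" "$small" && echo "2 GiB model: single exe OK"
fits_in_single_exe "$engine" "$big" || echo "4 GiB model: run llamafile.exe -m model.gguf instead"
```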