frontpage.

Show HN: Strange Attractors

https://blog.shashanktomar.com/posts/strange-attractors
241•shashanktomar•5h ago•29 comments

The Profitable Startup

https://linear.app/now/the-profitable-startup
35•doppp•1h ago•10 comments

S.A.R.C.A.S.M: Slightly Annoying Rubik's Cube Automatic Solving Machine

https://github.com/vindar/SARCASM
94•chris_overseas•5h ago•17 comments

Futurelock: A subtle risk in async Rust

https://rfd.shared.oxide.computer/rfd/0609
286•bcantrill•12h ago•127 comments

Why Should I Care What Color the Bikeshed Is?

https://www.bikeshed.com/
21•program•1w ago•11 comments

Introducing architecture variants

https://discourse.ubuntu.com/t/introducing-architecture-variants-amd64v3-now-available-in-ubuntu-...
183•jnsgruk•1d ago•116 comments

Viagrid – PCB template for rapid PCB prototyping with factory-made vias [video]

https://www.youtube.com/watch?v=A_IUIyyqw0M
83•surprisetalk•4d ago•27 comments

Addiction Markets

https://www.thebignewsletter.com/p/addiction-markets-abolish-corporate
214•toomuchtodo•11h ago•193 comments

My Impressions of the MacBook Pro M4

https://michael.stapelberg.ch/posts/2025-10-31-macbook-pro-m4-impressions/
145•secure•18h ago•199 comments

Active listening: the Swiss Army Knife of communication

https://togetherlondon.com/insights/active-listening-swiss-army-knife
35•lucidplot•4d ago•15 comments

Hacking India's largest automaker: Tata Motors

https://eaton-works.com/2025/10/28/tata-motors-hack/
159•EatonZ•3d ago•52 comments

Use DuckDB-WASM to query TB of data in browser

https://lil.law.harvard.edu/blog/2025/10/24/rethinking-data-discovery-for-libraries-and-digital-h...
153•mlissner•11h ago•41 comments

A theoretical way to circumvent Android developer verification

https://enaix.github.io/2025/10/30/developer-verification.html
105•sleirsgoevy•8h ago•72 comments

How We Found 7 TiB of Memory Just Sitting Around

https://render.com/blog/how-we-found-7-tib-of-memory-just-sitting-around
123•anurag•1d ago•28 comments

Perfetto: Swiss army knife for Linux client tracing

https://lalitm.com/perfetto-swiss-army-knife/
105•todsacerdoti•16h ago•10 comments

Kerkship St. Jozef, Antwerp – WWII German Concrete Tanker

https://thecretefleet.com/blog/f/kerkship-st-jozef-antwerp-%E2%80%93-wwii-german-concrete-tanker
14•surprisetalk•1w ago•1 comment

Fungus: The Befunge CPU (2015)

https://www.bedroomlan.org/hardware/fungus/
9•onestay42•3h ago•1 comment

New analog chip that is 1k times faster than high-end Nvidia GPUs

https://www.livescience.com/technology/computing/china-solves-century-old-problem-with-new-analog...
6•mrbluecoat•40m ago•2 comments

Signs of introspection in large language models

https://www.anthropic.com/research/introspection
119•themgt•1d ago•64 comments

Nix Derivation Madness

https://fzakaria.com/2025/10/29/nix-derivation-madness
156•birdculture•14h ago•57 comments

Show HN: Pipelex – Declarative language for repeatable AI workflows

https://github.com/Pipelex/pipelex
81•lchoquel•3d ago•15 comments

Value-pool based caching for Java applications

https://github.com/malandrakisgeo/mnemosyne
3•plethon•1w ago•0 comments

The cryptography behind electronic passports

https://blog.trailofbits.com/2025/10/31/the-cryptography-behind-electronic-passports/
145•tatersolid•17h ago•92 comments

Photographing the rare brown hyena stalking a diamond mining ghost town

https://www.bbc.com/future/article/20251014-the-rare-hyena-stalking-a-diamond-mining-ghost-town
17•1659447091•5h ago•2 comments

Sustainable memristors from shiitake mycelium for high-frequency bioelectronics

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0328965
109•PaulHoule•15h ago•55 comments

AI scrapers request commented scripts

https://cryptography.dog/blog/AI-scrapers-request-commented-scripts/
195•ColinWright•13h ago•147 comments

Llamafile Returns

https://blog.mozilla.ai/llamafile-returns/
103•aittalam•2d ago•18 comments

Leaker reveals which Pixels are vulnerable to Cellebrite phone hacking

https://arstechnica.com/gadgets/2025/10/leaker-reveals-which-pixels-are-vulnerable-to-cellebrite-...
220•akyuu•1d ago•152 comments

Pangolin (YC S25) is hiring a full stack software engineer (open-source)

https://docs.pangolin.net/careers/software-engineer-full-stack
1•miloschwartz•11h ago

Apple reports fourth quarter results

https://www.apple.com/newsroom/2025/10/apple-reports-fourth-quarter-results/
143•mfiguiere•1d ago•202 comments

Llamafile Returns

https://blog.mozilla.ai/llamafile-returns/
103•aittalam•2d ago

Comments

jart•1d ago
Really exciting to see Mozilla AI starting up and I can't wait to see where the next generation takes the project!
bsenftner•7h ago
People are so uninformed, they don't know you are Mozilla AI's star employee.
rvz•6h ago
s/are/were

I don't know if you were informed, but jart is no longer at Mozilla and is now at Google Inc.

setheron•1h ago
jart, you are back at Google?
setheron•1h ago
I agree that the work they put out is A+. Whenever I see content produced by jart, it's always amazing.
behindsight•9h ago
Great stuff. I'm working on something around agentic tooling and hope to collaborate with Mozilla AI in the future, as they share the same values I have.
throawayonthe•7h ago
go get that investor money i guess?
apitman•7h ago
This is great news. Given the proliferation of solid local models, it would be cool if llamafile had a way to build your own custom versions with the model of your choice.
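
For what it's worth, the original llamafile README already describes roughly this workflow: you take the bare llamafile engine and append a GGUF weights file plus a .args file to it with the bundled zipalign tool. A sketch from memory follows; the tool names, flags, default arguments, and the model filename are assumptions on my part and may differ in the mozilla-ai fork.

    # Default arguments to bake into the executable, one per line
    # (the model filename here is only an example).
    printf '%s\n' -m mymodel.Q4_K_M.gguf --host 0.0.0.0 > .args

    # Start from the bare llamafile engine binary, then append the weights
    # and .args as uncompressed zip entries (zipalign ships with llamafile).
    cp llamafile mymodel.llamafile
    zipalign -j0 mymodel.llamafile mymodel.Q4_K_M.gguf .args

    # The result is a single self-contained executable.
    ./mymodel.llamafile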
swyx•7h ago
Justine Tunney gave a great intro to Llamafile at AIE last year, if it helps anyone: https://www.youtube.com/watch?v=-mRi-B3t6fA
synergy20•6h ago
How is this different from Ollama? For me, the more/open the merrier.
ricardobeat•5h ago
Ollama is a model manager and pretty interface for llama.cpp; llamafile is a cross-platform packaging tool to distribute and run individual models, also based on llama.cpp.
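
In day-to-day use the difference looks roughly like this; the model tag and llamafile filename below are illustrative examples, not anything from the article:

    # Ollama: a resident service that pulls and manages models for you.
    ollama run llama3.2        # fetches the model on first use, then drops into a chat

    # llamafile: one self-contained executable per model, nothing to install.
    chmod +x Llama-3.2-1B-Instruct.Q6_K.llamafile
    ./Llama-3.2-1B-Instruct.Q6_K.llamafile    # serves a local web UI by default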
thangalin•5h ago
Tips:

    # Avoid issues when wine is installed.
    sudo su -c 'echo 0 > /proc/sys/fs/binfmt_misc/status'
And:

    # Capture the entirety of the instructions to obtain the input length.
    readonly INSTRUCT=$(
      join "${PATH_PREFIX_SYSTEM}" "${PATH_PROMPT_SYSTEM}" "${PATH_PREFIX_SYSTEM}"
      join "${PATH_SUFFIX_USER}" "${PATH_PROMPT_USER}" "${PATH_SUFFIX_USER}"
      join "${PATH_SUFFIX_ASSIST}" "/dev/null" "${PATH_SUFFIX_ASSIST}"
    )

    (
      echo "${INSTRUCT}"  # quote to preserve newlines in the captured prompt
    ) | ./llamafile \
      -m "${LINK_MODEL}" \
      -e \
      -f /dev/stdin \
      -n 1000 \
      -c ${#INSTRUCT} \
      --repeat-penalty 1.0 \
      --temp 1.5 \
      --silent-prompt > output.txt
chrismorgan•1h ago
> # Avoid issues when wine is installed.

> sudo su -c 'echo 0 > /proc/sys/fs/binfmt_misc/status'

Please don’t recommend this. If binfmt_misc is enabled, it’s probably for a reason, and disabling it will break things. I have a .NET/Mono app installed that it would break, for example—it’s definitely not just Wine.

If binfmt_misc is causing problems, the proper solution is to register the executable type; https://github.com/mozilla-ai/llamafile#linux describes the steps.

I made myself a package containing /usr/bin/ape and the following /usr/lib/binfmt.d/ape.conf:

  :APE:M::MZqFpD::/usr/bin/ape:
  :APE-jart:M::jartsr::/usr/bin/ape:
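
For a one-off setup without building a package, I believe the same magic can be written straight to the binfmt_misc register file, and the binfmt.d drop-in above is picked up by systemd-binfmt; the commands below assume ape is installed at /usr/bin/ape.

  # Register the APE format directly for the current boot (no package needed).
  sudo sh -c 'echo ":APE:M::MZqFpD::/usr/bin/ape:" > /proc/sys/fs/binfmt_misc/register'

  # Or, after installing the /usr/lib/binfmt.d/ape.conf above, reload it with:
  sudo systemctl restart systemd-binfmt.service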
michaelgiba•5h ago
I’m glad to see llamafile being resurrected. A few things I hope for:

1. Curate a continuously extended inventory of prebuilt llamafiles for models as they are released.
2. Create both flexible builds (with dynamic backend loading for CPU and CUDA) and slim minimalist builds.
3. Upstream as much as they can into llama.cpp and partner with the project.

michaelgiba•5h ago
Crazier ideas would be:

- extend the concept to also have some sort of “agent mode” where the llamafiles can launch with their own minimal file system or isolated context
- detailed profiling of main supported models to ensure deterministic outputs
njbrake•5h ago
Love the idea!
FragenAntworten•3h ago
The Discord link is broken, in that it links to the server directly rather than to an invitation to join the server, which prevents new members from joining.
benatkin•2h ago
> As the local and open LLM ecosystem has evolved over the years, time has come for llamafile to evolve too. It needs refactoring and upgrades to incorporate newer features available in llama.cpp and develop a refined understanding of the most valuable features for its users.

It seems people have moved on from Llamafile. I doubt Mozilla AI is going to bring it back.

This announcement didn't even come with a new code commit, just a wish. https://github.com/mozilla-ai/llamafile/commits/main/