
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
143•theblazehen•2d ago•42 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
668•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•33 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
17•kaonwarb•3d ago•19 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
28•jesperordrup•4h ago•16 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
223•dmpetrov•14h ago•118 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
331•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•6 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
150•SerCe•10h ago•138 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
183•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

Controlling Language and Diffusion Models by Transporting Activations

https://machinelearning.apple.com/research/transporting-activations
90•2bit•10mo ago

Comments

turnsout•10mo ago
Super interesting. You can see why Apple would be interested in strictly controlling output. I wonder if any of this work found its way into the Image Playground.
scorps•10mo ago
It's amusing to me that humans seem to have this same problem ("Do not think of a pink elephant!")
sampton•10mo ago
A multimodal LLM is the true solution, but Apple is probably looking for something they can run on-device, at least on the current generation of devices.
azeirah•10mo ago
True solution to the problem stated in the article? They're talking about fine-grained control over model outputs, how would multimodality help there?
roro_7•10mo ago
I could be wrong, but I feel this may partially go against a basic fact about intelligence that was recently stated by Ilya (and is arguably common sense): the more intelligent the model, the harder it is to control. You can remove elephants and force other basic behavioral changes, but the strength of these models' artificial free will (so to speak) is correlated with their intelligence, and this technique does not reduce that, so it will come out in other ways. If you do manage to control a model fully, you will have a model as dumb as a brick. The whole point of intelligent machines is their independent thought: the more intelligent the model, the more independent thinking will emerge.
hiddencost•10mo ago
s/fact/hypothesis/
antonkar•10mo ago
The intelligence is just a static geometric shape in an LLM file (only GPUs "choose" and "shape-change" in that shape).

So the maximal intelligence is actually not an agent at all (it has zero agency itself), it's a place. You can imagine the final direct democratic simulated multiverse; that's the final absolute super-intelligence. It has all the agents inside of it, while it itself is static spacetime. Agents (like us and others) are 3D and dynamic, while the multiverse is 4D static spacetime. Everything already happened, so there is no future, only the past; you can forget something to relive it.

While maximal agency (=shape-changing) is actually the Big Bang, it has almost zero intelligence (it's a dot) but infinite potential future intelligence (can become a multiversal simulation).

throwaway290•10mo ago
The error is thinking there is "thought" at all, forget about "independent". Don't anthropomorphize what ain't human.
danielbln•10mo ago
Are you saying animals don't think?
anileated•10mo ago
Given an octopus[0] is worse at mimicking human output than an LLM, either you decide that the LLM has surpassed the octopus in thought capability and should enjoy higher protections against abuse and violence, or you decide that thought is irrelevant when exhibiting capability to mimic human output. (As a third option, you could decide that abusing a human-like thinking being is OK, but let’s assume you would not choose this path.)

[0] A protected species for its sentience.

imranq•10mo ago
This just seems like a fancy way of describing LoRA? At the end of the day you are still learning weights based on a described set of outputs and then applying them to inference
antonkar•10mo ago
There is an idea for the unicorn AI safety startup to get currently almost 100% unprotected (from AI botnet) consumer GPUs into a cloud to get Google-level security (each GPU can bring you $30-1500 in profits per month, you can share it with the user, the user can play GPU game from any device, use any free or paid AI model, everything really becomes better, you can include a 5g modem), here's the full proposal (the author is probably dyslexic) https://melonusk.substack.com/p/notes-on-euto-principles-and...
vessenes•10mo ago
OK - basic plan here, which I feel I may have read (just called something like a concept LoRA on r/stablediffusion?):

1. Any concept you're interested in, get inputs with and without it. For images: 100 with, say a pink elephant, 100 without.

2. Calculate the difference between these two sets of activations, represented as an "Optimal Transport Map".

3. Apply the map at the desired strength, and voila - you don't have a pink elephant anymore. These can stack.

There are lots of obvious and interesting applications here in LLMs - there's some research showing that LLMs have honesty/dishonesty parameter groupings, for instance.

But, I can't really figure out what this OT map is. Is it a single layer tensor? Is it multidimensional? If it's the size of the original model (which they say it is not), then I understand how to apply it - just add weights and rerun. If it's not a copy, where and when is this map applied? Another way to say this is, how is this different than calculating the average difference and storing it in a low-rank adapter? I have no idea.
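To make the plan above concrete, here is a toy numpy sketch of per-dimension linear activation transport. It fits an affine map per hidden unit that moves the "with concept" activation distribution onto the "without concept" distribution, then applies it at an interpolation strength. The function names, the affine (mean/std matching) simplification, and the synthetic data are my assumptions for illustration, not the paper's exact estimator:

```python
import numpy as np

def fit_linear_transport(src, tgt):
    """src, tgt: (n_samples, d) activations. Returns per-dimension
    (a, b) so that a * x + b maps src's mean/std onto tgt's."""
    mu_s, sd_s = src.mean(0), src.std(0) + 1e-8
    mu_t, sd_t = tgt.mean(0), tgt.std(0) + 1e-8
    a = sd_t / sd_s
    b = mu_t - a * mu_s
    return a, b

def apply_transport(x, a, b, lam=1.0):
    """Interpolate between identity (lam=0) and full transport (lam=1)."""
    return (1 - lam) * x + lam * (a * x + b)

# Synthetic "activations": 100 samples of a 4-unit layer.
rng = np.random.default_rng(0)
src = rng.normal(2.0, 1.0, size=(100, 4))   # e.g. "with pink elephant"
tgt = rng.normal(0.0, 0.5, size=(100, 4))   # e.g. "without"

a, b = fit_linear_transport(src, tgt)
moved = apply_transport(src, a, b, lam=1.0)
# By construction, moved now has tgt's empirical mean and std.
```

On this reading, the map is just a pair of d-dimensional vectors (a, b) per intervened layer, not a copy of the model: it is applied to activations at inference time, which would be how it differs from baking an average difference into a low-rank weight adapter.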

bradneuberg•10mo ago
This looks like an important breakthrough: basically a non-RLHF mechanism to focus and restrict deep nets.
sva_•10mo ago
Paper: https://arxiv.org/abs/2410.23054v1