
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
529•klaussilveira•9h ago•146 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
859•xnx•15h ago•518 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
72•matheusalmeida•1d ago•13 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
180•isitcontent•9h ago•21 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
182•dmpetrov•10h ago•79 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
294•vecti•11h ago•130 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
69•quibono•4d ago•12 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
343•aktau•16h ago•168 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
338•ostacke•15h ago•90 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
434•todsacerdoti•17h ago•226 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
237•eljojo•12h ago•147 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
13•romes•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
373•lstoll•16h ago•252 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
6•videotopia•3d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
41•kmm•4d ago•3 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
220•i5heu•12h ago•162 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
91•SerCe•5h ago•75 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
62•phreda4•9h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•82 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
38•gfortaine•7h ago•10 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
127•vmatsiiako•14h ago•53 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
18•gmays•4h ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
261•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1029•cdrnsf•19h ago•428 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
55•rescrv•17h ago•18 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
83•antves•1d ago•60 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
18•denysonique•6h ago•2 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
5•neogoose•2h ago•1 comment

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
109•ray__•6h ago•54 comments

Controlling Language and Diffusion Models by Transporting Activations

https://machinelearning.apple.com/research/transporting-activations
90•2bit•10mo ago

Comments

turnsout•10mo ago
Super interesting. You can see why Apple would be interested in strictly controlling output. I wonder if any of this work found its way into the Image Playground.
scorps•10mo ago
It's amusing to me that humans seem to have this same problem ("Do not think of a pink elephant!")
sampton•10mo ago
A multimodal LLM is the true solution, but Apple is probably looking for something they can run on-device, at least on the current generation of devices.
azeirah•10mo ago
True solution to the problem stated in the article? They're talking about fine-grained control over model outputs; how would multimodality help there?
roro_7•10mo ago
I could be wrong, but I feel this may partially go against a very basic fact about intelligence that Ilya recently stated (and that is common sense): the more intelligent the model, the harder it is to control. You can remove elephants and force other basic behavioral changes, but the strength of these models' artificial free will (so to speak) is correlated with their intelligence, and this technique does not reduce that intelligence, so it will come out in other ways. If you do manage to control a model fully, you will have a model as dumb as a brick. The whole point of intelligent machines is their independent thought: the more intelligent the model, the more independent thinking will emerge.
hiddencost•10mo ago
s/fact/hypothesis/
antonkar•10mo ago
The intelligence is just a static geometric shape in an LLM file (only GPUs "choose" and "shape-change" within that shape).

So the maximal intelligence is actually not an agent at all (it has zero agency itself); it's a place. You can imagine the final direct-democratic simulated multiverse: that's the final absolute superintelligence. It has all the agents inside of it, while it itself is static spacetime. Agents (like us and others) are 3D and dynamic, while the multiverse is 4D static spacetime. Everything has already happened, so there is no future, only the past; you can forget something to relive it.

Maximal agency (= shape-changing), meanwhile, is the Big Bang: it has almost zero intelligence (it's a dot) but infinite potential future intelligence (it can become a multiversal simulation).

throwaway290•10mo ago
The error is thinking there is "thought" at all, forget about "independent". Don't anthropomorphize what ain't human.
danielbln•10mo ago
Are you saying animals don't think?
anileated•10mo ago
Given that an octopus[0] is worse at mimicking human output than an LLM, either you decide that the LLM has surpassed the octopus in thought capability and should enjoy stronger protections against abuse and violence, or you decide that thought is irrelevant when judging the capability to mimic human output. (As a third option, you could decide that abusing a human-like thinking being is OK, but let's assume you would not choose that path.)

[0] A protected species for its sentience.

imranq•10mo ago
This just seems like a fancy way of describing LoRA? At the end of the day you are still learning weights based on a described set of outputs and then applying them at inference.
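The contrast with LoRA can be sketched in a toy example (a hedged sketch, not the paper's method: `mean_shift_vector` and `steer` are my names, and this mean-shift variant is much simpler than the paper's transport maps). LoRA learns a low-rank weight update during training; steering-style methods instead compute a statistic from paired activations and apply it at inference, without changing any weights.

```python
def mean_shift_vector(acts_with, acts_without):
    """Average activation difference between concept / no-concept inputs."""
    n, d = len(acts_with), len(acts_with[0])
    return [sum(a[j] for a in acts_with) / n
            - sum(a[j] for a in acts_without) / len(acts_without)
            for j in range(d)]

def steer(activation, shift, strength=1.0):
    """Nudge one activation vector along the shift, scaled by strength."""
    return [x + strength * s for x, s in zip(activation, shift)]
```

The difference the comment gestures at: an adapter changes what the network computes everywhere, while the steering step is a post-hoc edit whose strength can be dialed at run time.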
antonkar•10mo ago
There is an idea for a unicorn AI-safety startup: bring currently almost 100% unprotected (from AI botnets) consumer GPUs into a cloud to get Google-level security. Each GPU can bring $30-1500 in profit per month, which you can share with the user; the user can play GPU games from any device and use any free or paid AI model, so everything genuinely gets better (you can even include a 5G modem). Here's the full proposal (the author is probably dyslexic): https://melonusk.substack.com/p/notes-on-euto-principles-and...
vessenes•10mo ago
OK - the basic plan here, which I feel I may have read before (called something like a concept LoRA on r/stablediffusion?):

1. Take any concept you're interested in and collect inputs with and without it. For images: 100 with, say, a pink elephant, and 100 without.

2. Calculate the difference between the two resulting sets of activations, represented as an "Optimal Transport Map".

3. Apply the map at the desired strength, and voila - no more pink elephant. These maps can stack.

There are lots of obvious and interesting applications here in LLMs - there's some research showing that LLMs have honesty/dishonesty parameter groupings, for instance.

But I can't really figure out what this OT map is. Is it a single-layer tensor? Is it multidimensional? If it's the size of the original model (which they say it is not), then I understand how to apply it - just add the weights and rerun. If it's not a copy, where and when is the map applied? Put another way: how is this different from calculating the average difference and storing it in a low-rank adapter? I have no idea.
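One possible reading of the "map" (an assumption on my part, not confirmed by the comment or the abstract): an independent 1D transport per activation dimension, estimated by sorting the two samples, which would make the stored map just two vectors per neuron rather than anything model-sized. A minimal sketch under that assumption (function names are mine):

```python
import bisect

def estimate_transport(src, dst):
    """Pair the sorted samples: in 1D, sorting the two sets gives the optimal transport map."""
    return sorted(src), sorted(dst)

def apply_transport(x, transport, strength=1.0):
    """Move activation x toward its transported value, interpolated by strength."""
    src_sorted, dst_sorted = transport
    # Locate x's rank in the source sample and read off the matching target value.
    i = bisect.bisect_left(src_sorted, x)
    i = min(i, len(src_sorted) - 1)
    target = dst_sorted[i]
    return (1.0 - strength) * x + strength * target
```

On this reading, "applying the map" happens at inference on hidden activations, weights untouched, which would explain both the small storage cost and the ability to stack several maps.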

bradneuberg•10mo ago
This looks like an important breakthrough: basically a non-RLHF mechanism for focusing and restricting deep nets.
sva_•10mo ago
Paper: https://arxiv.org/abs/2410.23054v1