
Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
230•theblazehen•2d ago•66 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
694•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
962•xnx•20h ago•553 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
5•AlexeyBrin•59m ago•0 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
130•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
66•videotopia•4d ago•6 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
53•jesperordrup•5h ago•24 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
36•kaonwarb•3d ago•27 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
10•matt_d•3d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
233•dmpetrov•16h ago•124 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
32•speckx•3d ago•21 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
335•vecti•17h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
502•todsacerdoti•23h ago•244 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
385•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
300•eljojo•18h ago•186 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•185 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
8•__natty__•3h ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
422•lstoll•21h ago•282 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
68•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
19•1vuio0pswjnm7•1h ago•5 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
264•i5heu•18h ago•215 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
63•gfortaine•13h ago•28 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1076•cdrnsf•1d ago•460 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
39•gmays•10h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
298•surprisetalk•3d ago•44 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
154•vmatsiiako•20h ago•72 comments

Weird CPU architectures, the MOV only CPU (2020)

https://justanotherelectronicsblog.com/?p=771
123•v9v•5mo ago

Comments

gsliepen•4mo ago
The Intel architecture is already Turing complete when you just use MOV instructions: https://github.com/xoreaxeaxeax/movfuscator. Of course, you don't even need instructions at all: https://news.ycombinator.com/item?id=5261598
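The core trick, sketched here in C rather than x86 asm (an illustrative toy, not movfuscator output): a condition becomes an array index, so control flow turns into pure data movement.

    #include <stdio.h>

    int main(void) {
        int on_false = 7, on_true = 42;
        int *targets[2] = { &on_false, &on_true };   /* lookup table of addresses */
        int cond = 1;                                /* condition, 0 or 1 */
        int result = *targets[cond];                 /* an "if" done purely by moves */
        printf("%d\n", result);                      /* prints 42 */
        return 0;
    }
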
QuadmasterXLII•4mo ago
While this is true, I suspect a spec-compliant implementation of the x86 MOV instruction would use many more transistors than OP's entire CPU.
crest•4mo ago
Of course, but you don't have the toy CPU under your desk or in your laptop running at several GHz, nor are you likely to find it in a target that really needs a cute hack to obscure your exploit.
mk_stjames•4mo ago
I came back to reply with just this. Christopher Domas's conference talk on the movfuscator is legendary:

https://www.youtube.com/watch?v=R7EEoWg6Ekk

aleph_minus_one•4mo ago
> The Intel architecture is already Turing complete when you just use MOV instructions

No physically existing architecture is Turing-complete, since every CPU can (by physics) only access a finite amount of memory, which means its state space is finite, as opposed to the infinite state space of a Turing machine.

jdiff•4mo ago
But that's not a very useful definition, so we usually don't bother enforcing that constraint.
psychoslave•4mo ago
Looks like an interesting read, thank you @v9v.

Just as my night was drifting through a meditative sleep about basing ontological models on change as the fundamental building block. Identity is such a brittle choice of foundation, even if it's a great tool in many other situations.

pyinstallwoes•4mo ago
Many ancient cultures use behavior as identity. It certainly has a charm.
lioeters•4mo ago
Was it the Navajos whose language doesn't have nouns, only verbs? A noun is a kind of illusion of eternal identity. A chair is only chair-ing for the moment as a configuration of matter that was doing something else before, and will fall apart and transform into doing something else in the future.
bradrn•4mo ago
IIRC Navajo has a pretty robust noun-verb distinction. However there definitely are other languages where nouns and verbs behave very similarly, e.g. most famously Salishan languages. That said, there don’t seem to be any natural languages in which nouns and verbs are completely indistinguishable — there’s always some minor difference in how they behave.
psychoslave•4mo ago
I don't think the noun as a grammatical class is an issue, all the more if we take for granted that grammars themselves are mere inferences modeling what's happening on average when some utterance is produced, or at least a very simplified representation of a learned version of the utterance.

It might become more problematic when using a term such as "substantive", which can connote ontological beliefs about the nature of the word, what it refers to, or their relationship.

English is already very generous with converting between word types without morphological change. I've heard Mandarin doesn't even have that kind of strict lexical typology bound to every item in the vocabulary, though to be transparent I haven't checked the details.

Thanks anyway for the hints about the other languages.

PaulHoule•4mo ago
If you're interested in co-designing a CPU together with its software, a TTA is an attractive way to do it, particularly in that it's easy to design it so you can do more than one MOV at the same time and thus get explicit parallelism.

The tough part, though, is that memory is usually slow: you have to wait an undetermined number of cycles for data to come back from DRAM, and while one operation is blocked, all the other operations are blocked.

I guess you could have something like this with a fancy memory controller that could be programmed explicitly to start fetching ahead of time, so data is available when it is needed, at least most of the time.
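A minimal sketch of what a transport-triggered step function could look like in C, with an invented port map where writing the ALU's trigger port is what fires the add (illustrative only, not the article's design):

    #include <stdint.h>
    #include <stdio.h>

    /* Invented port map: moving a value to ALU_TRIG triggers the add. */
    enum { REG0, REG1, ALU_A, ALU_TRIG, ALU_RES, NPORTS };

    typedef struct { uint8_t src, dst; } Move;   /* one instruction = one move */

    static uint16_t port[NPORTS];

    static void step(Move m) {
        uint16_t v = port[m.src];
        port[m.dst] = v;
        if (m.dst == ALU_TRIG)                   /* the move's side effect does the work */
            port[ALU_RES] = (uint16_t)(port[ALU_A] + v);
    }

    int main(void) {
        port[REG0] = 2;
        port[REG1] = 3;
        Move prog[] = {
            { REG0, ALU_A },     /* move first operand in */
            { REG1, ALU_TRIG },  /* move second operand; trigger computes the sum */
            { ALU_RES, REG0 },   /* move result out */
        };
        for (size_t i = 0; i < sizeof prog / sizeof prog[0]; i++)
            step(prog[i]);
        printf("%u\n", (unsigned)port[REG0]);    /* prints 5 */
        return 0;
    }

The whole "instruction set" is the single move; everything else is a side effect of which port you move to, and widening the instruction word to carry two moves per cycle gives the explicit parallelism described above.
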

mrob•4mo ago
How would you handle context switching? You've got a whole lot of exposed state scattered throughout the whole CPU.
PaulHoule•4mo ago
By not doing it. The ideology here is that, from the point of view of embedded systems, general-purpose computing took numerous wrong turns from the 1950s to the present.

I thought this through back when I was doing embedded projects with the AVR-8, namely display controllers for persistence-of-vision displays. Something like this doesn't have an OS, so you don't need to do context switching for the purposes of the OS.

It was practical to write C code for this, but I didn't really like it. Code like this doesn't need the stack or the affordances that C calling conventions provide; the data structures needed to display a scene live only for the scope of the scene; and you have 32 registers, which is a lot, enough that you can allocate 8 for the interrupt handler and still have plenty left over for the main loop.

I was wargaming my paths forward if I needed more power. The obvious route, which I probably would have taken, is the portable C route via an ARM part such as an STM32. Yet I liked the AVR-8 a lot, and I also considered going to an FPGA board on which you could instantiate an AVR-8 soft core clocked higher than any real hardware AVR-8 and also put an accelerator behind it.

The FPGA + TTA + co-designed software route came up at this point. Notably, any kind of concurrency, parallelism, and extra context can be baked into the "hardware". Adding a few registers is much cheaper than adding superscalar features, and adding another MOV slot to the instructions is pretty cheap if you want more parallelism, with the caveat that it could be hard to prevent blocking. If the requirements change, it's a frickin' FPGA: you can add something to it or take something away.

What would put the whole idea on wheels is a superoptimizing compiler that could design both the CPU and the code that runs on top of it.

cmrdporcupine•4mo ago
I would just have multiple cores, with communication between them happening over a central shared hub, like the Parallax Propeller MCUs. If you want concurrency, push your new task onto a separate core.

Still, the problem is that writing a compiler for such a system would suck.

cmrdporcupine•4mo ago
I played around making a TTA-ish thing as part of learning Verilog some years ago. It's a neat idea: https://github.com/rdaum/simple_tta
PaulHoule•4mo ago
Exactly. It's easier than developing a CPU the normal way, and it offers the possibility of making something with unique capabilities, as opposed to the mostly boring option of revisiting the Z-80 [1] or the near certainty of getting bogged down trying to implement a modern high-performance CPU and getting pipelining, superscalar execution, and all that to work.

[1] with the caveat that extending that kind of chip to support a larger address space and simple memory protection is interesting to me

spicybright•4mo ago
I've always loved quirky CPU designs like this, and having one laid out in logic gates is amazing.

I'm having trouble running the file, though: it's missing a chip, "74181.dig". Can you point me to where to download that, or add it to the repo?

v9v•4mo ago
I'm not the author of the post itself, but the 74181 chip seems to be defined in the simulator: https://github.com/hneemann/Digital/blob/master/src/main/dig...
spicybright•4mo ago
Unsure why the program wasn't picking up the chips. I just moved the lib folder from the Digital repo into the MOVputer folder and it worked. Thanks!
zyxzevn•4mo ago
There was a CMOVE architecture around 1990 (from Israel), I think. It was very similar. Sadly, I could not find it on the internet.

MOVE architectures may work best in digital signal processors, because the data flow is almost constant in such processors.

I invented my own version of the move-only architecture (around 1992), but focused on speed. Here is my idea:

1. The CPU only moves data within the CPU, like from one register to another. So all moves are extremely fast.

2. The CPU is separated into different units that can do work independently. Each unit has different input and output ports. The ports and registers are connected via a bus.

3. The CPU can have more buses and thus do more moves at the same time. If the output data is not ready, the instruction will wait.

Example instruction: OUT1 -> IN1, OUT2 -> IN2. With 32 bits, this would give 8 units with 32 ports each.

Example of a set of units and ports:
Control unit: (JUMP_to_address, CALL_to_address, RETURN_with_value, +conditionals)
Memory unit: (STORE_Address, STORE_Value, READ_Address, READ_Value)
Computation unit: (Start_Value, ADD_Value, SUB_Value, MUL_Value, DIV_Value, Result_Value)
Value unit: (Value_from_next_instruction, ZERO, ONE)
Register unit: (R0 ... R31)

It is extremely flexible. I also came up with a minimalist 8-bit version. One could even "plug in" different units for different systems. Certain problems could be solved by adding special ports, which would work like special instructions.

I did not continue the project because people did not understand the bus architecture (like a PCI bus). If you try to present it as a logic-gate schematic (like in the article), the units make the architecture look more complicated than it actually is.
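A sketch of how the 32-bit encoding described above might pack, assuming 3 bits of unit number and 5 bits of port number per address (the field layout is invented for illustration):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One port address = 3-bit unit + 5-bit port -> 8 bits, so two
       source->destination moves fit in one 32-bit instruction word. */
    typedef struct { uint8_t unit, port; } PortAddr;

    static uint8_t pack_addr(PortAddr a) {
        return (uint8_t)(((a.unit & 0x07u) << 5) | (a.port & 0x1Fu));
    }

    static uint32_t pack_insn(PortAddr src1, PortAddr dst1,
                              PortAddr src2, PortAddr dst2) {
        return ((uint32_t)pack_addr(src1) << 24) |
               ((uint32_t)pack_addr(dst1) << 16) |
               ((uint32_t)pack_addr(src2) << 8)  |
                (uint32_t)pack_addr(dst2);
    }

    int main(void) {
        /* OUT1 -> IN1, OUT2 -> IN2: two simultaneous moves in one word */
        PortAddr out1 = {2, 5}, in1 = {3, 0}, out2 = {2, 6}, in2 = {4, 1};
        printf("0x%08" PRIX32 "\n", pack_insn(out1, in1, out2, in2));
        return 0;
    }
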

Joker_vD•4mo ago
Sounds similar to TIS-100, but with even more special-purpose units.
neuroelectron•4mo ago
Seems to me this would entirely eliminate many classes of exploits.
Bratmon•4mo ago
Why would that be?
Lerc•4mo ago
I have been playing around with my own design, initially inspired by the Gigatron, but it seems to have diverged somewhat. The ALU is the same and the address unit is enhanced, but a lot of the rest, the program counter and instruction decode, ended up completely different. I shuffled the Harvard architecture to be more like an instruction cache: only 16 bytes of instruction memory, with long jumps triggering a full instruction-memory load from RAM.

Going for a transport-triggered architecture for additional features seems like a fairly easy win. I kind of started designing one before I realised that's what the design was. The Gigatron has to do some unreasonably hard work for a few operations, like shift right, which is an operation that can fundamentally be done with just wires once you have a mechanism to provide the input and fetch the output.

Definitely not knocking the Gigatron, though. Every limitation it has is there because it saved a chip; as something minimal to build upon, it's pretty cool.

BertoldVdb•4mo ago
This architecture is good for data-path applications, but not really for control flow (e.g., think how expensive a context switch would be).
drob518•4mo ago
Yea, at best this is useful for deeply embedded applications where you need a tiny bit of programmability and where implementation size in gates and cost are at a premium. It's something you can stuff into a programmable logic device of some sort that you already have in the design. Beyond that, it's interesting from an academic perspective for studying minimalist computing architectures, but not practical.
noam_k•4mo ago
I'm surprised the article doesn't mention OpenASIP [0], which not only helps you define the architecture, but also provides RTL synthesis and a working (if not always useful) compiler.

[0] http://openasip.org/

Animats•4mo ago
That's actually useful as a minimal machine.

It's possible to have a one-instruction machine where the one instruction does a subtract, store, and branch if negative. But it's not very useful. This register-oriented thing is something someone might put inside an FPGA.

This is the device-register mindset, where you do everything by storing into device registers, applied as a CPU architecture.
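For reference, a toy interpreter for that kind of one-instruction machine; this sketch uses SUBLEQ (subtract and branch if <= 0), the most common variant, with a hand-assembled program that adds two numbers:

    #include <stdio.h>

    int main(void) {
        /* SUBLEQ: each instruction is three cells a, b, c:
           mem[b] -= mem[a]; if (mem[b] <= 0) jump to c, else fall through.
           A negative program counter halts. The program below computes B += A. */
        int mem[] = { 12,14,3,  14,13,6,  14,14,9,  14,14,-1,  /* code */
                      2, 3, 0 };                     /* A=2, B=3, Z=0 at 12..14 */
        int pc = 0;
        while (pc >= 0) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
        printf("2 + 3 = %d\n", mem[13]);             /* prints 5 */
        return 0;
    }

Even addition takes three instructions routed through a scratch cell, which is why these machines are fun but, as noted, not very useful.
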

quuxplusone•4mo ago
https://esolangs.org/wiki/OISC