frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
460•klaussilveira•6h ago•112 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
800•xnx•12h ago•484 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
154•isitcontent•7h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
149•dmpetrov•7h ago•65 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
48•quibono•4d ago•5 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
24•matheusalmeida•1d ago•0 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
89•jnord•3d ago•11 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
259•vecti•9h ago•122 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
326•aktau•13h ago•157 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
199•eljojo•9h ago•128 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
322•ostacke•13h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
405•todsacerdoti•14h ago•218 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
332•lstoll•13h ago•240 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
20•kmm•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
51•phreda4•6h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
113•vmatsiiako•11h ago•36 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
192•i5heu•9h ago•141 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
150•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
240•surprisetalk•3d ago•31 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
3•romes•4d ago•0 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
990•cdrnsf•16h ago•417 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
23•gfortaine•4h ago•2 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
45•rescrv•14h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
61•ray__•3h ago•18 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
36•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•57 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•2h ago•1 comment

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
40•nwparker•1d ago•10 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
21•MarlonPro•3d ago•4 comments

Notes on the Intel 8086 processor's arithmetic-logic unit

https://www.righto.com/2026/01/notes-on-intel-8086-processors.html
110•elpocko•2w ago

Comments

kens•2w ago
Author here for all your 8086 questions...
bcrl•2w ago
Thanks for publishing your blog! The articles are quite enlightening, and it's interesting to see how semiconductors evolved in the '70s, '80s and '90s. Having grown up in this time, I feel it was a great time to learn as one could understand an entire computer, but details like this were completely inaccessible back then. Keep up the good work knowing that it is appreciated!

A more personal question: is your reverse engineering work just a hobby or is it tied in with your day to day work?

kens•2w ago
Thanks! The reverse engineering is just for fun. I have a software background, so I'm entirely unqualified to be doing this :-) But I figure that if I'm a programmer, I should know how computers really work.
gruturo•2w ago
Awesome article Ken, I feel spoiled! It's always nice to see your posts hit HN!

Out of curiosity: Is there anything you feel they could have done better in hindsight? Useless instructions, or inefficient ones, or "missing" ones? Either down at the transistor level, or in high-level design/philosophy (the segment/offset mechanism creating 20-bit addresses out of two 16-bit registers with thousands of overlaps sure comes to mind - if not a flat model, but that's asking too much of a 1979 design and its transistor limitations, I guess)?
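
(For concreteness, a minimal C sketch of the real-mode address calculation being referred to; the example values are made up just to show two segment:offset pairs aliasing the same physical byte:)

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real-mode address: physical = segment * 16 + offset, truncated to 20 bits. */
    static uint32_t phys(uint16_t seg, uint16_t off) {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }

    int main(void) {
        /* Many different segment:offset pairs alias the same physical byte. */
        printf("%05X\n", (unsigned)phys(0x1234, 0x0005)); /* prints 12345 */
        printf("%05X\n", (unsigned)phys(0x1000, 0x2345)); /* prints 12345 */
        return 0;
    }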

Thanks!

kens•2w ago
That's an interesting question. Keep in mind that the 8086 was built as a stopgap processor to sell until Intel's iAPX 432 "micro-mainframe" processor was completed. Moreover, the 8086 was designed to be assembly-language compatible with the 8080 (through translation software) so it could take advantage of existing software. It was also designed to be compatible with the 8080's 16-bit addressing while supporting more memory.

Given those constraints, the design of the 8086 makes sense. In hindsight, though, considering that the x86 architecture has lasted for decades, there are a lot of things that could have been done differently. For example, the instruction encoding is a mess and didn't have an easy path for extending the instruction set. Trapping on invalid instructions would have been a good idea. The BCD instructions are not useful nowadays. Treating a register as two overlapping 8-bit registers (AL, AH) makes register renaming difficult in an out-of-order execution system. A flat address space would have been much nicer than segmented memory, as you mention. The concept of I/O operations vs memory operations was inherited from the Datapoint 2200; memory-mapped I/O would have been better. Overall, a more RISC-like architecture would have been good.

I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, one could certainly give them a lot of advice!

gruturo•2w ago
> I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, one could certainly give them a lot of advice!

Thanks for capturing my feeling very precisely! I was indeed thinking about what they could have done better with approximately the same number of transistors and the benefit of a time traveler :) And yes, the constraints you mention (8080 compatibility, etc.) do limit their leeway, so maybe we'd have to point the time machine a few years earlier and influence the 8080 first.

mjevans•1w ago
What's that military adage? Something along the lines of 'planned to win the (prior) war'?

There were also the needs of the moment. Wasn't the 8086 a 'drop-in' replacement for the 8080, and also (offhand recollection) limited by the number of pins on some of its package options? This was still an era when it was common even for multiple series of computers from a single vendor to have incompatible architectures that required, at the very least, recompiling software if not whole new programs.

bonzini•1w ago
As someone who did assembly coding on the 8086/286/386 in the '90s, I found the xH and xL registers quite useful for writing efficient code. Maybe 64-bit mode should have gotten rid of them completely, though, rather than only when a REX prefix is used.

AAA/AAS/DAA/DAS were used quite a lot by COBOL compilers. These days ASCII and BCD processing doesn't use them, but it takes very fast data paths (the microcode sequencer in the 8086 was pretty slow), large ALUs, and very fast multipliers (to divide by constant powers of 10) to write efficient routines.
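
(For readers who haven't seen the BCD instructions, a rough C sketch of what ADD followed by DAA computes for two packed-BCD bytes; the AF/CF flag handling is omitted, so this is an approximation rather than the exact instruction semantics:)

    #include <stdint.h>

    /* Simplified packed-BCD add of two 2-digit values (0x00..0x99), roughly
       what ADD followed by DAA produces on the 8086; the decimal carry out
       (results >= 100) is simply dropped here. */
    static uint8_t bcd_add(uint8_t a, uint8_t b) {
        unsigned sum = a + b;
        if (((a & 0x0F) + (b & 0x0F)) > 9)  /* low digit overflowed past 9 */
            sum += 0x06;
        if (sum > 0x99)                     /* high digit overflowed past 9 */
            sum += 0x60;
        return (uint8_t)sum;                /* e.g. bcd_add(0x45, 0x38) == 0x83 */
    }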

I/O ports have always been weird though. :)

rogerbinns•2w ago
Did the byte order they picked make things simpler or more complex? It is notable that the new RISC designs not much later all started out big-endian, implying that it is simpler. Can you even tell the endianness from the dies?
kens•2w ago
The byte order doesn't make much difference. The more important difference compared to a typical RISC chip is that the 8086 supports unaligned memory access. So there's some complicated bus circuitry to perform two memory accesses and shuffle the bytes if necessary.
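
(A rough C model of that split, assuming a bus that can only fetch aligned 16-bit words; the 8086's bus interface unit does the equivalent in hardware:)

    #include <stdint.h>

    /* Model of a 16-bit bus that can only fetch words at even addresses. */
    static uint16_t fetch_aligned(const uint8_t *mem, uint32_t even_addr) {
        return (uint16_t)(mem[even_addr] | (mem[even_addr + 1] << 8));
    }

    /* Read a 16-bit little-endian value at any address: one bus cycle if the
       address is even, two bus cycles plus a byte shuffle if it is odd. */
    static uint16_t read_word(const uint8_t *mem, uint32_t addr) {
        if ((addr & 1) == 0)
            return fetch_aligned(mem, addr);
        uint16_t first  = fetch_aligned(mem, addr - 1);  /* wanted byte in high half */
        uint16_t second = fetch_aligned(mem, addr + 1);  /* wanted byte in low half */
        return (uint16_t)((first >> 8) | ((second & 0xFF) << 8));
    }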

To understand why the 8086 uses little-endian, you need to go back to the Datapoint 2200, a 1970 desktop computer / smart terminal built from TTL chips (since this was pre-microprocessor). RAM was too expensive at the time, so the Datapoint 2200 used Intel shift-register memory chips along with a 1-bit serial ALU. To add numbers one bit at a time, you need to start with the lowest bit to handle carries, so little-endian is the practical ordering.
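
(A small C sketch of why LSB-first is the natural order for a 1-bit serial ALU: the carry has to ripple from the low bit upward:)

    #include <stdint.h>

    /* Bit-serial addition, one bit per step starting from the least significant
       bit, the way a 1-bit-ALU machine like the Datapoint 2200 had to do it.
       Starting from the high end would not work, because carries ripple upward. */
    static uint16_t serial_add(uint16_t a, uint16_t b) {
        uint16_t sum = 0;
        unsigned carry = 0;
        for (int i = 0; i < 16; i++) {
            unsigned ai = (a >> i) & 1, bi = (b >> i) & 1;
            sum |= (uint16_t)((ai ^ bi ^ carry) << i);  /* sum bit for this position */
            carry = (ai & bi) | (carry & (ai ^ bi));    /* carry into the next position */
        }
        return sum;
    }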

Datapoint talked to Intel and Texas Instruments about replacing the board full of TTL chips with a single-chip processor. Texas Instruments created the TMX1795 processor and Intel slightly later created the 8008 processor. Datapoint rejected both chips and continued using TTL. Texas Instruments tried to sell the TMX1795 to Ford as an engine controller, but they were unsuccessful and the TMX1795 disappeared. Intel, however, marketed the 8008 chip as a general-purpose processor, creating the microprocessor as a product (along with the unrelated 4-bit 4004). Since the 8008 was essentially a clone of the Datapoint 2200 processor, it was little-endian. Intel improved the 8008 with the 8080 and 8085, then made the 16-bit 8086, which led to the modern x86 line. For backward compatibility, Intel kept the little-endian order (along with other influences of the Datapoint 2200). The point of this history is that x86 is little-endian because the Datapoint 2200 was a serial processor, not because little-endian makes sense. (Big-endian is the obvious ordering. Among other things, it is compatible with punch cards where everything is typed left-to-right in the normal way.)

variaga•2w ago
Big-endian matches the way we commonly write numbers, but if you have to deal with multiple word widths or greater-than-word-width math, I find little-endian much more straightforward, because LE has the invariant that bit value = 2^bit_index and byte value = 2^(8*byte_index).

E.g. a 1 in bit 7 on a LE system always represents 2^7 for 8/16/32/64-bit (or whatever) word widths.

This is emphatically not true in BE systems, and as evidence I offer that IBM (natively BE), MIPS (natively BE), and ARM (natively LE but with a BE mode) all have different mappings of bit and byte indices/lanes at larger word widths, while all LE systems assign the bit/byte lanes the same way.

Using the bit 7 example

- IBM 8-bit: bit 7 is in byte 0 and equal to 2^0

- IBM 16-bit: bit 7 is in byte 0 and equal to 2^8

- IBM 32-bit: bit 7 is in byte 0 and equal to 2^24

- MIPS 16-bit: bit 7 is in byte 1 and equal to 2^7

- MIPS 32-bit: bit 7 is in byte 3 and is equal to 2^7

- ARM 32-bit BE: bit 7 is in byte 0 and is equal to 2^31

Vs. every single LE system, regardless of word width

- bit N is in byte (N//8) and is equal to 2^N

(And of course none of these match how ethernet orders bits/bytes, but that's a different topic)
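
(A quick C check of that invariant, assuming it runs on a little-endian host: byte N of a stored value always equals (value >> (8*N)) & 0xFF, whatever the word width:)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* On a little-endian host, byte N of a stored value is always
           (value >> (8*N)) & 0xFF, regardless of the word width. */
        uint32_t v32 = 0x12345678;
        uint64_t v64 = 0x123456789ABCDEF0ULL;
        uint8_t b[8];

        memcpy(b, &v32, sizeof v32);
        for (int n = 0; n < 4; n++)
            printf("v32 byte %d: %02X == %02X\n", n, (unsigned)b[n],
                   (unsigned)((v32 >> (8 * n)) & 0xFF));

        memcpy(b, &v64, sizeof v64);
        for (int n = 0; n < 8; n++)
            printf("v64 byte %d: %02X == %02X\n", n, (unsigned)b[n],
                   (unsigned)((v64 >> (8 * n)) & 0xFF));
        return 0;
    }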

mjevans•1w ago
Not that it typically matters in a practical sense (unless you're writing to a register for a device)...

However, I've always viewed little-endian as 'bit 0' being at the leftmost / lowest part of the string of bits, while big-endian's 'bit 0' is all the way to the right / at the highest address (but the smallest power).

If encoding or decoding an analog value, it makes sense to begin with the biggest bit first - but that mostly matters in a serial / output sense, not for machine-word transfers, which (at least in that era) were parallel (today, of course, we have multiple high-speed serial links between most chips, sometimes in parallel for wide paths).

Aside from the reduced complexity of aligned-only access, forcing the bus to a machine word naturally also aligns / packs fractions of that word on RISC systems, which tended to be the big-endian systems.

From that logical perspective it might even make sense to think of the RAM not in units of bytes but rather in units of whole machine words, which might be partly accessed by a fractional value.

unixhero•1w ago
I did not know the 8086 had microcode. What was in it?
kens•1w ago
It had the micro-instructions for most of the machine instructions, but a few simple instructions were implemented directly in hardware.

https://www.reenigne.org/blog/8086-microcode-disassembled/