1) He released it from MIT to avoid suspicion.
2) After he was convicted, he went from Cornell to Harvard to complete his Ph.D.
3) He became an assistant professor at MIT after that.
He had to be really spectacular/have crazy connections to still be able to finish his training at a top program and get a job at the institution he tried to frame.
His dad's also a badass and super fun to talk to. I've never talked to the son, though I'd love to some day.
"Computer crime" definitely was, though.
What you didn't have back then was financial fraud on the scale that happens today, where even nominal damages run into 8-9 figures.
But looking into the specifics again after all these years [1], I read:
"The N.S.A. wanted to clamp a lid on as much of the affair as it could. Within days, the agency’s National Computer Security Center, where the elder Morris worked, asked Purdue University to remove from its computers information about the internal workings of the virus."
and that CERT at CMU was one response to the incident [2].
So there is a whiff of the incident being steered away from public prosecution and towards setting up security institutions.
Robert Morris did get a felony conviction, three years' probation, and a $10K fine. As for HN users, aside from pg, Cliff Stoll has a minor role in the story.
[1] https://archive.nytimes.com/www.nytimes.com/times-insider/20...
Interesting random factoid: RTM's research in the early 2000s was on Chord [1], one of the earliest distributed hash tables. Chord inspired Kademlia [2], which later went on to power LimeWire, Ethereum, and IPFS. So his research at MIT actually has had a bigger impact, in terms of combined market cap, than most YC startups have.
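For the curious, here's a minimal sketch of the core idea Chord formalized (illustrative only, not the actual Chord code): node IDs and keys are hashed onto a circular identifier space, and each key is stored at its successor, the first node clockwise from the key.

```c
/* Toy sketch of a Chord-style ring (illustrative, not the real Chord):
 * node IDs and keys live on a small circular ID space, and each key is
 * owned by its "successor", the first node at or after it on the ring. */
#include <stdio.h>
#include <stdint.h>

#define RING_BITS 8                     /* tiny 8-bit ring for illustration */
#define RING_SIZE (1u << RING_BITS)

/* IDs of the nodes currently in the ring, kept sorted. */
static const uint32_t nodes[] = { 10, 60, 120, 200 };
static const int num_nodes = sizeof(nodes) / sizeof(nodes[0]);

/* Successor of a key: the first node ID >= key, wrapping around the ring. */
static uint32_t successor(uint32_t key)
{
    for (int i = 0; i < num_nodes; i++)
        if (nodes[i] >= key)
            return nodes[i];
    return nodes[0];                    /* wrapped past the largest node ID */
}

int main(void)
{
    const uint32_t keys[] = { 5, 42, 130, 250 };
    for (int i = 0; i < 4; i++) {
        uint32_t k = keys[i] % RING_SIZE;
        printf("key %3u -> node %3u\n", (unsigned)k, (unsigned)successor(k));
    }
    return 0;
}
```

Chord's actual contribution is the finger table, which lets each node answer a lookup in O(log n) hops while only tracking O(log n) other nodes; Kademlia keeps the same store-at-the-closest-node idea but measures closeness with XOR distance instead of ring position.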
MIT really respects good hacks and good hackers. It was probably more effective than sending in some PDF of a paper.
Oooof in light of Aaron Swartz. He plugged directly into a network switch that was in an unlocked and unlabelled room at MIT so he could download faster and faced "charges of breaking and entering with intent, grand larceny, and unauthorized access to a computer network".
MIT really didn't lift a finger for this either.
>Swartz's attorneys requested that all pretrial discovery documents be made public, a move which MIT opposed
It's very hard to extract Robert Tappan Morris from the context of his father being an extremely powerful man when trying to figure out how he managed to get away with what he did.
He was and is very smart. This is not disputed. He was 23 at the time. Not exactly a child.
The worm was surprisingly elaborate, containing three separate remote exploits.
It probably took a few weeks to build and test.
So sabotaging thousands of at the time very expensive network connected computers was a very deliberate action.
I posit that he likely did it to become famous and perhaps even successful, feeling safe with his dad’s position. And it worked. He did not end up in prison. He ended up cofounding Viaweb and YCombinator.
Clifford Stoll, author of The Cuckoo's Egg, wrote that "Rumors have it that [Morris] worked with a friend or two at Harvard's computing department (Harvard student Paul Graham sent him mail asking for 'Any news on the brilliant project')".
Has pg commented on this?
While we're much more conscientious and better at security than we were way back then, things are certainly not totally secure.
The best answer I have is the same as what a bio professor told me once about designer plagues: it hasn't happened because nobody's done it. The capability is out there, and the vulnerability is out there.
(Someone will chime in about COVID lab leak theories, but even if that's true that's not what I mean. If that happened it was the worst industrial accident in history, not an intentional designer plague.)
It's most obviously paralleled by Samy Kamkar's MySpace worm, which exploited fairly similar too-much-trust territory.
https://en.wikipedia.org/wiki/Botnet#Historical_list_of_botn...
https://en.wikipedia.org/wiki/Blaster_(computer_worm)
https://en.wikipedia.org/wiki/SQL_Slammer
https://en.wikipedia.org/wiki/Sasser_(computer_worm)
Bill Gates sent out the "Trustworthy Computing" memo to harden Windows and make it somewhat secure.
Essentially, Windows used to be trivial to exploit, in that every single service was by default exposed to the internet, full of very trivial buffer overflows that dovetailed nicely into remote code execution.
Since then, Windows has stopped exposing everything to the internet by default and added a firewall, fixed most buffer overflows in entry points of these services, and made it substantially harder to turn most vulnerabilities into the kind of remote code execution you would use to make simple worms.
>better at security than we were way back then
In some ways this is dramatically understated. Now the majority of malware comes from getting people to click on links, targeted attacks that drop it, piggybacking in on infected downloads, and other forms of just getting the victim to run your code. Worms and botnets are either something you "willingly" install through "free" VPNs, or they target absolutely broken and insecure routers.
The days when simply plugging a computer into the internet would result in it immediately being infected and trying to infect 100 other computers, with no interaction, are pretty much gone. For all the bitching about forced updates and UAC and other security measures, they basically work.
Had RTM actually RTM'd, the world might be a bit different than it is today.
He did do us all a service; people back then didn't seem to realize that buffer overflows were a security risk. The model people had then, including my old boss at one of my first jobs in the early 80s, was that if you fed a program invalid input and it crashed, this was your fault, because the program had a specification or documentation and you didn't comply with it.
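To make that mindset concrete, here is a minimal sketch of the kind of code it produced (illustrative only, not the actual fingerd source, though it's roughly the shape of the hole the worm used against fingerd): a request read into a fixed-size stack buffer with no length check, where "invalid" over-long input doesn't just crash the program, it overwrites the saved return address with attacker-supplied bytes.

```c
/* Illustrative only: not the actual fingerd source, but the same class of
 * bug.  Historically fingerd used gets(), which has since been removed
 * from the C standard; the scanf("%s", ...) below is the same kind of
 * unbounded read. */
#include <stdio.h>

static void handle_request(void)
{
    char line[512];                 /* fixed-size buffer on the stack */

    /* No bound on the read: anything past 512 bytes spills over saved
     * registers and the return address.  Under the "you fed it invalid
     * input, so the crash is your fault" model this is a usage error; in
     * reality it hands control of the process to whoever wrote the input. */
    if (scanf("%s", line) != 1)
        return;

    printf("looking up user: %s\n", line);
}

int main(void)
{
    handle_request();
    return 0;
}
```

The fix is mechanical once you treat it as a security bug rather than a documentation violation: bound the read, e.g. scanf("%511s", line) or fgets(line, sizeof line, stdin).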
It was Thomas Lopatic and 8lgm that really lit a fire under this (though likely they were inspired by Morris' work). Lopatic wrote the first public modern stack overflow exploit, for HPUX NCSA httpd, in 1995. Later that year, 8lgm teased (but didn't publish --- which was a big departure for them) a remote stack overflow in Sendmail 8.6.12 (it's important to understand what a big deal Sendmail vectors were at the time).
That 8lgm tease was what set Dave Goldsmith, Elias Levy, San Mehat, and Pieter Zatko (and presumably a bunch of other people I just don't know) off POC'ing the first wave of public stack overflow vulnerabilities. In the 9-18 months surrounding that work, you could look at basically any piece of privileged code, be it a remote service or an SUID binary or a kernel driver, and instantly spot overflows. It was the popularization with model exploits and articles like "Smashing The Stack" that really raised the alarm people took seriously.
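For a sense of what "instantly spot overflows" meant in practice, here's a sketch of the canonical pattern in privileged code of that era (a hypothetical SUID utility, not any specific historical program): data the invoking user controls copied into a fixed stack buffer with strcpy.

```c
/* Hypothetical SUID-root utility (not a specific historical program),
 * showing the one-glance overflow pattern: attacker-controlled data, here
 * an environment variable, copied into a fixed stack buffer with no bound. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char term[64];
    const char *env = getenv("TERM");   /* fully controlled by the invoking user */

    if (env == NULL)
        env = "dumb";

    strcpy(term, env);                  /* unbounded copy into a 64-byte buffer */
    printf("terminal type: %s\n", term);

    /* A bounded copy closes the hole:
     *   snprintf(term, sizeof term, "%s", env);
     */
    return 0;
}
```

In an SUID binary that overflow is a local root; in a network daemon the same pattern is remote code execution, which is why the same casual code reading turned up both kinds of bugs.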
That 7-year gap is really wild when you think about it, because during that period, in which people jealously guarded fairly dumb bugs, like an errant pipe filter input to the calendar manager service that ran by default on SunOS shelling out to commands, you could have owned literally any system on the Internet, so prevalent were the bugs. And people blew them off!
I wrote a thread about this on Twitter back in the day, and Neil Woods from 8lgm responded... with the 8.6.12 exploit!
Not much was happening in the Eng and CS buildings on campus (except for those that had to deal with the worm).
Since it's all locked up, I just reboot the big VAX single-user - that takes about 10 minutes, so I also start on a couple of the Suns. You have to realize that everything, including desktops, runs sendmail in this era, and when some of these machines come up they are OK for a sec and then sendmail starts really eating into the CPU.
I'm pretty bleary-eyed, but I walk around restarting everything single-user and taking sendmail out of the rc scripts. The TMC applications engineer comes in around 7 and gets me a cup of coffee. He manages to get someone to pick up in Cambridge, and they tell him it's happening everywhere.
Great course by the way.
On a sidenote, what did you do after the course?
It is an amazing course though!
60k computers (mostly at institutions) in 20 countries
I'm not dunking on Paul Graham here. If you know anything about me, if anything, this is a point in his favor. :)
Thanks for the answer, I'll check out the book.
However, I think this is a solvable problem, and I started solving it a while ago with decent results:
https://github.com/Hello1024/shared-tensor
When someone gets this working well, I could totally see a distributed AI being tasked with expanding its own pool of compute nodes by worming into things, developing new exploits, and sucking up more training data.
krustyburger•3h ago
1988
canucker2016•2h ago
- a github repo containing "the original, de-compiled source code for the Morris Worm" - see https://github.com/agiacalone/morris-worm-malware
- a high level report about the worm - see https://www.ee.torontomu.ca/~elf/hack/internet-worm.html
cgriswald•1h ago
However, the article has been updated, so only the HN title has this flaw.