My old friend and Usenet denizen Laura Creighton was the one who wrote the device driver and verified the story. (Not that you should trust anything ESR writes in his polluted version of the Jargon File, but Laura says it's true, and she's trustworthy.)
Always mount the scratch monkey (1987) (edp.org) | 34 points by pook on May 7, 2010
https://news.ycombinator.com/item?id=1327146
baha_man on May 7, 2010
http://tafkac.org/faq2k/animal_618.html
"At this writing, the Jargon File claims the incident actually happened, at Toronto in 1979 or 1980, and that the sysadmin on duty was actually interviewed. The account doesn't provide enough details to track down an independent account, however.
Current University of Toronto sysadmins have expressed skepticism. For one thing, in almost all versions of the story, including the ostensibly documented one in the Jargon File, the computer is a VAX; at the time a VAX would have been a very unusual platform for this kind of data acquisition (they used PDP-11s). The Toronto zoology department has never been licensed to work with primates; the only section of the university that could have done experiments of this nature was the School of Medicine. Investigation continues."
ableal on May 7, 2010
Actually, the original story does say it was Medicine buying the Vax and doing the experiment, with Zoo helping. The VAX 11-780 was the hot machine of 1979/80 ("a 1 MIPS beast!", I heard). I've seen Laura Creighton's name on Python lists - I believe she's on LinkedIn. The author's "Statistically Invalid Sampling of My Life" (http://edp.org/index.htm) is also pretty amusing - don't think he'd need to embellish that incident.
P.S. The story is, for instance, alluded to here: http://mail.python.org/pipermail/python-list/2002-May/756589... [broken link not on archive dot org], without either rebuttal or elaboration.
----
Laura's done a lot of work on PyPy, which I described in this email (c. 2007):
>Laura and three other members of the PyPy team are doing a benevolent world domination tour, and visiting the bay area soon. I think you would enjoy meeting each other and talking about PyPy!
>Laura does all kinds of other interesting stuff, with python in particular and computers and reality in general, like writing a device driver to interface monkeys to a VAX. (Google "always mount a scratch monkey"!)
https://www.python.org/about/success/strakt/
>Laura Creighton has 20 years experience in software training, and Human Factors Engineering. She is a founder of AB Strakt, and a founder and Treasurer of The Python Business Forum, an international non-profit trade association for businesses which develop in Python.
https://www.mhonarc.org/archive/html/nmh-workers/2017-10/msg...
>Yes, that is me. I actually think that Medicine had an 11/something-or-other dual space machine that was running Berkeley 2.98 bsd, not an 11/780 vax, but otherwise, seems correct enough to me.
I had to research silly Vedic nonsense on the Hinduism Stack Exchange. While this one changes how your CPU behaves (rather than relying on the stars to do it for you), shubhcron waits for the right auspicious time to run your cron jobs (or other processes).
https://news.ycombinator.com/item?id=46585825
DonHopkins 22 days ago
From the HN discussion of "Motive.c: The Soul of the Sims (1997) (donhopkins.com)":
https://news.ycombinator.com/item?id=14997725
https://www.donhopkins.com/home/images/Sims/
https://news.ycombinator.com/item?id=15002840
DonHopkins on Aug 13, 2017 | on: Motive.c: The Soul of the Sims (1997)
The trick of optimizing games is to off-load as much of the simulation as possible from the computer into the user's brain, which is MUCH more powerful and creative. Implication is more efficient (and richer) than simulation.
During development, when we first added Astrological signs to the characters, there was a discussion about whether we should invent our own original "Sim Zodiac" signs, or use the traditional ones, which have a lot of baggage and history (which some of the designers thought might be a problem).
Will Wright argued that we actually wanted to leverage the baggage and history of the traditional Astrological signs of the Zodiac, so we should just use those and not invent our own.
The way it works is that Will came up with twelve archetypal vectors of personality traits, one for each of the twelve Astrological signs. When you set a character's personality traits, the game looks up the archetype with the smallest Euclidean distance to the character's personality and displays that as their sign. But there was absolutely no actual effect on their behavior.
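A minimal sketch of that nearest-archetype lookup in Python; the five traits (neat, outgoing, active, playful, nice) are the ones mentioned further down in this thread, and the archetype values here are invented placeholders rather than the game's actual tuning:

```python
import math

# The five Sims personality traits (neat, outgoing, active, playful, nice),
# each scored 0-10. These archetype vectors are invented placeholders,
# not Will Wright's actual numbers.
ZODIAC_ARCHETYPES = {
    "Aries":  (5, 8, 6, 7, 3),
    "Taurus": (7, 4, 3, 3, 7),
    "Gemini": (4, 7, 8, 7, 5),
    # ... nine more signs in the same form ...
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def zodiac_for(personality):
    """Return the sign whose archetype is nearest to the personality vector.

    Purely cosmetic: the sign is displayed in the UI but never consulted
    by any behavior code, exactly as described above.
    """
    return min(ZODIAC_ARCHETYPES,
               key=lambda sign: euclidean(ZODIAC_ARCHETYPES[sign], personality))

print(zodiac_for((6, 5, 4, 4, 8)))  # "Taurus" with these placeholder vectors
```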
That decision paid off almost instantly and measurably in testing, after we implemented the user interface for showing the Astrological sign in the character creation screen, without writing any code to make their sign affect their behavior: The testers immediately started reporting bugs that their character's sign had too much of an effect on their personality, and claimed that the non-existent effect of astrological signs on behavior needed to be tuned down. But that effect was totally coming from their imagination!
They should call them Astrillogical Signs!
DonHopkins on Aug 13, 2017
The create-a-sim user interface hid the corresponding astrological sign for the initial all-zero personality you first see before you've spent any points, because that would be insulting to 1/12th of the players (implying [your sign] has zero personality)!
https://www.youtube.com/watch?v=ffzt12tEGpY
https://news.ycombinator.com/item?id=46584625
gwern 22 days ago | on: Show HN: What if AI agents had Zodiac personalitie...
I think my takeaway is that you are seeing mostly mode-collapse here. There is a high consistency across all of the supposedly different personalities (higher than the naive count would indicate; remember that the stochastic nature of responses will inflate the number of 'different' responses, since OP doesn't say anything about sampling a large number of times to get the true response).
DonHopkins 21 days ago
You are right about mode-collapse -- and that observation is exactly what makes this interesting. In my other comment here, I described The Sims' zodiac from 1997: Will Wright computed signs from personality via Euclidean distance to archetypal vectors, displayed them cosmetically, and wrote zero behavioral code. The zodiac affected nothing. Yet testers reported bugs: "The zodiac influence is too strong! Tune it down!"
Your "mode-collapse with stochastic noise" is the same phenomenon measured from the other direction. In The Sims: zero computed difference, perceived personality. In this LLM experiment: minimal computed difference, perceived personality. Same gap.
Will called it the Simulator Effect: players imagine more than you simulate. I would argue mode-collapse IS the Simulator Effect measured from the output side.
But here is where it becomes actionable: one voice is the wrong number of voices.
ChatGPT gives you the statistical center -- mode-collapse to the bland mean. The single answer that offends no one and inspires no one. You cannot fix this with better prompting because it is the inevitable result of single-agent inference.
Timothy Leary built MIND MIRROR in 1985 -- psychology software visualizing personality as a circumplex, based on his 1950 PhD dissertation on the Interpersonal Circumplex. The Sims inherited this (neat, outgoing, active, playful, nice). But a personality profile is not an answer. It is a lens.
The wild part: in 1970, Leary took his own test during prison intake, gamed it to get minimum security classification (outdoor work detail), and escaped by climbing a telephone wire over the fence. The system's own tools became instruments of liberation.
https://github.com/SimHacker/moollm/tree/main/skills/mind-mi...
MOOLLM's response: simulate an adversarial committee within the same call. Multiple personas with opposing propensities -- a paranoid realist, an idealist, an evidence prosecutor -- debating via Robert's Rules. Stories that survive cross-examination are more robust than the statistical center.
https://github.com/SimHacker/moollm/tree/main/skills/adversa...
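For concreteness, here is a minimal sketch of what that committee pattern could look like as a single prompt, in Python. The persona names, stances, and deliberation wording are hypothetical illustrations, not MOOLLM's actual skill definitions (those are in the linked repo):

```python
# Hypothetical sketch: assemble one "adversarial committee" prompt for a
# single LLM call. Personas and rules are illustrative, not MOOLLM's skills.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    stance: str

COMMITTEE = [
    Persona("Paranoid Realist", "assume the plan fails; name the failure mode"),
    Persona("Idealist", "argue for the most ambitious defensible version"),
    Persona("Evidence Prosecutor", "demand a source or a test for every claim"),
]

def committee_prompt(question: str) -> str:
    """Ask the model to voice each persona in turn, then have a chair keep
    only the claims that survived rebuttal (a rough stand-in for the
    Robert's Rules deliberation described above)."""
    roles = "\n".join(f"- {p.name}: {p.stance}" for p in COMMITTEE)
    return (
        f"Question: {question}\n\n"
        f"Debate this as a committee. Personas:\n{roles}\n\n"
        "Each persona speaks once, then rebuts one other persona. "
        "Finally, as Chair, report only the conclusions that survived rebuttal."
    )

print(committee_prompt("Should we ship the feature flag this week?"))
```

One appeal of running it inside a single call, as described above, is that the extra voices add no extra API calls; whether they add real information is the entropy question below.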
I wrote this up with links into the project:
https://github.com/SimHacker/moollm/blob/main/designs/sims-a...
The bigger project is MOOLLM -- treating the LLM as eval() for a microworld OS. K-lines, prototype-based instantiation, many-voiced deliberation. The question I keep wrestling with: mode-collapse as limitation vs feature. The Sims exploited it. MOOLLM routes around it.
Would value your take on the information-theoretic framing -- particularly whether multi-agent simulation actually increases effective entropy or just redistributes it.
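For what it's worth, one concrete way to frame that question: sample each condition many times (as gwern notes above, a handful of samples mostly measures noise) and compare the empirical entropy of the response distributions. A minimal sketch, with placeholder samples and a deliberately crude normalization:

```python
import math
from collections import Counter

def empirical_entropy(responses):
    """Shannon entropy (bits) of the distribution over distinct responses,
    estimated from repeated samples of the same prompt."""
    counts = Counter(r.strip().lower() for r in responses)  # crude normalization
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Placeholder samples; in practice these would come from many repeated calls
# to the single-voice baseline and to the committee prompt.
single_voice = ["Ship it.", "Ship it.", "ship it.", "Ship it, carefully."]
committee    = ["Ship behind a flag.", "Delay a week.", "Ship it.", "Ship behind a flag."]

print("single voice:", empirical_entropy(single_voice))
print("committee:   ", empirical_entropy(committee))
```

If the committee's entropy is higher and the extra mass lands on answers that survive cross-examination, that would support "increases effective entropy"; if it just smears the same few answers across personas, that would support "redistributes it."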
https://github.com/SimHacker/moollm
The MOOLLM Eval Incarnate Framework: Skills are programs. The LLM is eval(). Empathy is the interface. Code. Graphics. Data. One interpreter. Many languages. The Axis of Eval.
https://github.com/SimHacker/moollm/blob/main/designs/MOOLLM...
Apart from trolling, I am really curious what other useful functionality sched_ext enables. One of the primary reasons CFS was replaced with EEVDF was that EEVDF has a better set of defaults that don't require tuning or patching, which is exactly the kind of customization sched_ext simplifies.