M-x doctor
From its commentary [1] in the source code:

    ;;; Commentary:
    ;; The single entry point `doctor', simulates a Rogerian analyst using
    ;; phrase-production techniques similar to the classic ELIZA demonstration
    ;; of pseudo-AI.
From the docstring [2] of the command: "Switch to *doctor* buffer and start giving psychotherapy."
[1] https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/lisp/p...
[2] https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/lisp/p...
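For a concrete sense of what "phrase-production techniques" means, here's a toy Python sketch of the ELIZA pattern: match a keyword rule, reflect pronouns in the captured fragment, and splice it into a canned template. The rules and names here are made up for illustration; this is not doctor.el's actual implementation.

    import re

    # Toy ELIZA-style rules: patterns and templates are invented for
    # illustration, not taken from doctor.el.
    REFLECTIONS = {"i": "you", "i'm": "you're", "my": "your",
                   "am": "are", "you": "I", "your": "my"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"because (.*)", re.I),
         "Is it because {0} that you came to me?"),
        (re.compile(r"(.*)", re.I), "What do you think?"),  # catch-all
    ]

    def reflect(fragment):
        # Swap first/second person so the fragment reads back naturally.
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            m = pattern.match(utterance)
            if m:
                return template.format(*(reflect(g) for g in m.groups()))

    print(respond("Because I'm the one holding the Ctrl and Meta keys here"))
    # -> Is it because you're the one holding the Ctrl and Meta keys here
    #    that you came to me?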
M-x doctor
I am the psychotherapist. Please, describe your problems. Each time
you are finished talking, type RET twice.
How can I explain to my friends that Emacs is a lifestyle choice?
What do you think?
That's not an answer, Doctor. Are you avoiding the question?
I'll ask the questions, if you don't mind!
I do mind, actually. This is my buffer, remember?
Why do you say that?
Because I'm the one holding the Ctrl and Meta keys here.
Is it because you are the one holding the ctrl and meta keys there
that you came to me?
C-x k

(Port/rewrite, I think. More details here: https://github.com/anthay/ELIZA)
* Personality Forge uses a rules-based scripting approach [0]. This is basically ELIZA extended to take advantage of modern processing power.
* Rasa [1] used traditional NLP/NLU techniques and small-model ML to match intents and parse user requests. This is the same kind of tooling that Google/Alexa historically used, just without the voice layer and with more effort to keep the context in mind (a toy sketch of the intent-matching idea follows after the links).
Rasa is actually open source [2], so you can poke around the internals to see how it's implemented. It doesn't look like it's changed architecture substantially since the pre-LLM days. Rhasspy [3] (also open source) uses similar techniques but in the voice assistant space rather than as a full chatbot.
[0] https://www.personalityforge.com/developers/how-to-build-cha...
[1] https://web.archive.org/web/20200104080459/https://rasa.com/ (old link because Rasa's current marketing is ambiguous about whether they're adding LLMs).
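To make the contrast with ELIZA-style pattern rules concrete, here's a toy sketch of the intent-classification step. Keyword overlap stands in for a trained classifier; the intent names are invented, and this is nothing like Rasa's actual pipeline, only the general shape of pre-LLM NLU:

    # Toy intent matcher: keyword overlap stands in for the trained
    # classifiers and entity extractors a real NLU stack would use.
    INTENTS = {
        "greet": {"hello", "hi", "hey"},
        "set_timer": {"timer", "remind", "alarm"},
        "weather": {"weather", "rain", "forecast", "temperature"},
    }

    def classify(utterance):
        tokens = set(utterance.lower().split())
        scores = {intent: len(tokens & kws) for intent, kws in INTENTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "fallback"

    print(classify("what's the weather forecast for tomorrow"))  # -> weather
    print(classify("set a timer for ten minutes"))               # -> set_timer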
If I remember correctly, I also modified the Graphmaster to add support for rule priorities, so that I could better manage rules beyond the pure tree-based matching approach.
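Since the original code is gone, here's a rough Python sketch of what a priority-augmented Graphmaster could look like (a guess at the shape, not a reconstruction): the Graphmaster is a trie over pattern tokens, and attaching a priority to each rule decides among multiple matches instead of relying on AIML's fixed wildcard-precedence ordering.

    # Hypothetical AIML-style Graphmaster (a trie over pattern tokens)
    # extended with per-rule priorities, as described above.
    class Graphmaster:
        def __init__(self):
            self.children = {}   # token or "*" -> Graphmaster subtree
            self.rule = None     # (priority, template) if a pattern ends here

        def add(self, pattern, template, priority=0):
            node = self
            for token in pattern.upper().split():
                node = node.children.setdefault(token, Graphmaster())
            node.rule = (priority, template)

        def _matches(self, tokens):
            if not tokens:
                if self.rule:
                    yield self.rule
                return
            if tokens[0] in self.children:     # literal token match
                yield from self.children[tokens[0]]._matches(tokens[1:])
            if "*" in self.children:           # "*" eats one or more tokens
                for i in range(1, len(tokens) + 1):
                    yield from self.children["*"]._matches(tokens[i:])

        def respond(self, utterance):
            candidates = list(self._matches(utterance.upper().split()))
            # Highest priority wins when several rules match, instead of
            # plain AIML's fixed wildcard-ordering precedence.
            return max(candidates)[1] if candidates else None

    g = Graphmaster()
    g.add("HELLO *", "Hi there!", priority=1)
    g.add("* EMACS *", "Emacs is a lifestyle choice.", priority=5)
    print(g.respond("hello my emacs friend"))  # both rules match; 5 wins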
One of the first things people would do, upon discovering that she was a bot, was to try to break her responses.
All of this was for private use, nothing was open sourced. Unfortunately I think I forgot to copy it over from an old hard drive during a computer hardware migration, so it's gone now.
I remember Richard Wallace writing something along the lines of "if I were to build an artificial intelligence, I wouldn't use flesh and bones, that's just a bad choice" (not a verbatim quote) in response to people accusing AIML of being too simple/dumb an approach and favoring more complex approaches instead. In the age of LLMs, that statement has aged both well and badly.
There were. If you're really interested in that history, one place to look is the historical record of the Loebner Prize [1] competition. The Loebner was a competition based on a "Turing Test"-like setup and was held annually up until 2019 or so. I think they quit holding it once LLMs came along, probably because they felt LLMs sort of obviated the need for it.
re: my ELIZA-like logs - I was at least somewhat ethical, insofar as I didn't share the logs with others, never told anybody that they had been logged, and never acted on what I read in them. Still, I was pretty shitty to the people who interacted with my computer. I assume current "AI" companies will be far less restrained with their users than I was back then.
It's also horrifying how much intention people think they can see from looking at logs of someone using something. I know a lot of "data driven" decisions get made the same way, with people reaching all sorts of conclusions about why X suddenly became Y, or the like.
I'm sure if someone inspected the logs of what I've written to various LLMs, they'd think they could extrapolate all sorts of personal characteristics about me. But I'm also a person who plays around with things and tries to find limits, so just because you see me treating an LLM like shit for some reason doesn't mean you can understand the intention behind it.
> Still, I was pretty shitty to the people who interacted with my computer
I think, as youngsters exploring computing without limitations, restrictions, or honestly much thought at all in the beginning, many of us have been in the same situation. As long as we learn and improve with experience :)
If you looked at my LLM interaction logs you would probably assume that I have an unhealthy obsession with pirates and a napalm fetish.
In reality, I use the "can I get it to tell me how to make napalm" thing as a quick acid test of the extent and strength of censorship controls, and I simply find asking LLMs to "talk like a pirate" amusing. And I've also found occasions where doing nothing more than instructing the LLM to talk like a pirate will bypass its built-in inhibitions against things like giving instructions for making napalm.
So the images were saved when they were uploaded, not for any nefarious reason, but more out of laziness. Then one day, I looked at the images. Yikes. I immediately rewrote it to delete the images after returning them, and pretty soon let the site die.
So the opposite of acting ethically.
No wonder we've ended up in the surveillance nightmare we find ourselves in.
I think ethical behavior is a continuum and I don't see it as binary. Then again, I'm not formally trained in ethics either.
I clearly stated I handled it only somewhat ethically, at best (i.e. "...pretty shitty to people..."). Even then, I'd argue I acted closer to the "ethical" end of that continuum than the opposite. I could have shared the logs, for example. That would be much closer to the "unethical" end of that spectrum to my mind.
I definitely handled it poorly, but I could have handled it worse. For the people who were "surveilled", the impact on their lives was the same as if they hadn't been.
> No wonder we've ended up in the surveillance nightmare we find ourselves in.
The "user surveillance" on my personal standalone laptop computer 30 years ago doesn't have much bearing on the for-profit companies who profit from mass user surveillance today, except perhaps as being emblematic of the human nature to find novelty in "secrets". I don't think I bear any personal responsibility for the world we live in today in this regard.
[1] https://web.archive.org/web/20030812213928/http://fury.com/a...