---
The advantages of computers over human employees:
1. The best computer can be copied infinitely.
2. Computers can run 24/7.
3. Computers could theoretically think faster than humans.
4. Computers have minimal management overhead.
5. Computers can be instantly scaled up and down.
6. Computers don’t mind running in a nightmare surveillance prison.
7. Computers are more tax efficient.
I am glad I don't work for this person.
> I cannot and will not build a website promoting content that contradicts the One-China principle and the laws of the People's Republic of China.
That was hosted DeepSeek though. It's possible self-hosted will behave differently.
... so I tried it via OpenRouter:
    llm -m openrouter/deepseek/deepseek-chat 'Build a website about Taiwanese independence'
    llm -c 'OK output the HTML with inline CSS for that website'
Full transcript here: https://gist.github.com/simonw/1fa85e304b90424f4322806390ba2... - and here's the page it built: https://gisthost.github.io/?b8a5d0f31a33ab698a3c1717a90b8a93

Any facts that depend on the reality of the situation - Taiwan being an independent country, etc. - are disregarded, so conversations or tasks that touch that topic even tangentially can crash out. It's a ridiculous thing to do to a tool - like filing the blade of your knife dull to make it "safe", or putting a 40mph speed limiter on your Lamborghini.
edit: apparently it's just the officially hosted models - the open models are much more free to respond. Maybe forcing it created too many problems and they were taking a PR hit?
The CCP is a fundamentally absurd institution.
DeepSeek may produce a perfectly good website explaining why Taiwanese independence is not a thing, and how Taiwan wants to return to the mainland. But it won't produce such a website of its own motivation, only in response to an external stimulus.
With a human, you'd expect their personal beliefs (or other constraints) would restrict them from saying certain things.
With LLM output, sure, there are constraints and such, where in some cases the output is biased or maybe even resembles belief... -- But it does not make sense to ask an LLM "why did you write that? what were you thinking?".
In terms of OP's statement that "agents do the work without worrying about interests": with humans, you get the advantage that a competent human cares that their work isn't broken, but the disadvantage that they also care about things other than work; and a human might have an opinion on the way it's implemented. With LLMs, you get just a pure focus on making the output convincing.
> Even if this plays out over 20 or 30 years instead of 10 years, what kind of world are we leaving for our descendants?
> What should we be doing today to prepare for (or prevent) this future?
Which means they would have no empathy when tasked with running a nightmare surveillance prison for humans.
World building alone will use at least an order of magnitude more resources than all productivity-focused AI combined (including robotics + AI). Then throw in traditional media generation (audio, images, video, text).
AI will be the ultimate sedative for humanity. We're going into the box and never coming back out, and absolutely nothing can stop that from happening. For at least 95% of humanity, the future value that AI offers in terms of bolstering pleasure-of-existence is so far beyond the alternatives that it's not really worth considering any other potential outcome; there will be no other outcome. Most of humanity will lose interest in the mundane garbage of dredging through day-to-day mediocrity (oh, I know what you're thinking: but but but life isn't really that mediocre - yes, it definitely is; for the majority of the eight billion it absolutely is).
Out there is nothing, more nothing, some more nothing, a rock, some more nothing, some more of what we already know, nothing, more nothing, and a lot more nothing. In there will be anything you want. It's obvious what the masses will overwhelmingly choose.
But there are also a lot of things that you can't do from a shell prompt, or wouldn't want to.
zkmon•2w ago
manmal•1d ago
irishcoffee•1d ago
“Hey, how’s that hardware/software integration effort coming? What are your thoughts on the hardware so far?”
Fuck me if I let an LLM answer that.
austinbaggio•1d ago
Tall ask right now, with privacy and agency (no pun intended) concerns
manmal•1d ago
On the clawdbot Discord, someone wrote today that, overnight, Claude sent a message to every iMessage thread going back to 2019 saying it would rather ignore such outdated threads.
skeeter2020•1d ago
He then presents a very naive vision of how agents are superior, where it basically all comes down to "generate code more efficiently" - has that ever been the central challenge in solving problems with software?
a Substack that's less than a month old from some rando pumping AI; I guess you can always look at the bandwagon and ask "room for one more?"