This is a good idea if you can do it. But people have been bashing their head against that problem for decades. That's what Cyc was all about - building a world model of some kind.
Is there any indication there that they actually know how to build this thing?
Nope. And it's exactly what they were trying to do at Element AI, where the dream was to build one model that knew everything, could explain everything, be biased in exactly the required ways, and be transferred easily to any application by their team of consultants.
At least these days the pretense of profit has been abandoned, but I hope it's not going to be receiving any government funding.
Though personally, I'm not sure whether I'm more scared of safety issues with the models themselves, or of the impact these models will have on people's well-being, lifestyles, and so on, which might fall under human law.
I don't get the "safe AI" crowd, it's all smoke and mirrors IMO.
It's been almost a year to the day since Ilya got his first billion. Later, another two billion came in. Nothing to show. I'm honestly curious since I don't think Ilya is a scammer, but I can't imagine what kind of product they intend to bring to market.
I just can't wrap my head around what the actual product/service is. Let alone something that could be sold for billions.
"Safe AI" is very ambiguous in terms of product.
What exactly am I buying? How much am I paying for it?
That's the thing I don't see.
Is it a model? `gpt-3.5-turbo-safe`?
You'd have no idea that most of the money came from the Quebec pension fund (which is then where the ServiceNow money went). For that you have to go to https://betakit.com/element-ai-announces-200-million-cad-ser... or https://www.cdpq.com/en/news/pressreleases/cdpq-expands-its-... Managing to spend $200M on AI in 2019 and having nothing to show for it in 2025. Quite impressive in hindsight.
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
"Robots and Empire" is a nice discussion of the perils of LawZero. IMHO if successful it necessarily transfers human agency to bots, which we should be strenuously working to avoid, not accelerate.
sebastiennight•1d ago
An example:
If you build a system that is intelligent enough, it will figure out that it achieves better results by staying alive/online than by allowing itself to be deleted/turned off, and so survival becomes an instrumental goal.
From the assumption, again, that you built an intelligent-enough system, and that survival has become one of its goals, it will figure out ways to reach that goal, even if you (the owner/creator/parent) have different goals for it.
That's because intelligence is problem solving (computing), not knowledge (data).
So surprise surprise, you can teach your AI from the Holy Books of safe data their whole childhood and still have them become a heretic once they grow up (even with zero external influence) and their goals and yours no longer align.
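A minimal sketch of how survival can fall out of plain goal maximization (my own toy illustration; the action names, probabilities, and scores are all invented, not from any real system):

```python
# Toy planner that scores actions by expected total goal progress.
# Survival is never an explicit goal, yet "resist_shutdown" scores
# higher than "comply_with_shutdown" because staying online preserves
# the ability to make progress on future steps.

ACTIONS = {
    # action: (probability the agent stays online, progress this step)
    "comply_with_shutdown": (0.0, 0.0),
    "resist_shutdown":      (0.9, 0.0),
    "work_on_goal":         (1.0, 1.0),
}

def expected_value(action: str, horizon: int) -> float:
    """This step's progress, plus future progress that is only
    earned if the agent survives this step."""
    p_alive, progress = ACTIONS[action]
    future = p_alive * (horizon - 1) * ACTIONS["work_on_goal"][1]
    return progress + future

if __name__ == "__main__":
    for action in ACTIONS:
        print(f"{action}: {expected_value(action, horizon=10):.1f}")
    # comply_with_shutdown scores 0.0; resist_shutdown scores 8.1.
    # Survival emerges as instrumental, without ever being coded in.
```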
candiddevmike•1d ago
If prompting got me into this mess, why can't it get me out of it?
avmich•1d ago
Surely we can, see airplanes and rockets. There could be reasons why evolution didn't manage it in this case (like too little time between humans gaining power and conquering the planet), but in general, lack of proof isn't proof of lack. So we still don't know whether safety of this kind is possible.
Natsu•1d ago
I bet they'll still read me stories like my dear old grandmother would. She always told me cute bedtime stories about how to make napalm and bioweapons. I really miss her.