frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
494•klaussilveira•8h ago•135 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
835•xnx•13h ago•500 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
52•matheusalmeida•1d ago•9 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
108•jnord•4d ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
162•dmpetrov•8h ago•75 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
165•isitcontent•8h ago•18 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
59•quibono•4d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
274•vecti•10h ago•127 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
221•eljojo•11h ago•138 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
337•aktau•14h ago•163 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
11•denuoweb•1d ago•0 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
420•todsacerdoti•16h ago•221 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
355•lstoll•14h ago•246 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
15•gmays•3h ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
56•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
209•i5heu•11h ago•152 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
121•vmatsiiako•13h ago•47 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
32•gfortaine•5h ago•6 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
157•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
257•surprisetalk•3d ago•33 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1011•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
51•rescrv•16h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
90•ray__•4h ago•41 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
43•lebovic•1d ago•12 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
34•betamark•15h ago•29 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
43•nwparker•1d ago•11 comments

Isaac Asimov describes how AI will liberate humans and their creativity (1992)

https://www.openculture.com/2025/04/isaac-asimov-describes-how-ai-will-liberate-humans-their-creativity.html
174•bookofjoe•10mo ago

Comments

lenerdenator•10mo ago
> One wonders what Asimov would make of the world of 2025, and whether he’d still see artificial and natural intelligence as complementary, rather than in competition.

I mean, I just got done watching a presentation at Google Next where the presenter talked to an AI agent and set up a landscaping appointment with price match and a person could intervene to approve the price match.

It's cool, sure, but understand, that agent would absolutely have been a person on a phone five years ago, and if you replace them with agentic AI, that doesn't mean that person has gone away or is now free to write poetry. It means they're out of an income and benefits. And that's before you consider the effects on the pool of talent you're drawing from when you're looking for someone to intervene on behalf of these agentic AIs, like that supervisor did when they approved the price match. If you don't have the entry-level person, you don't have them five years later when you want to promote someone to manage.

gh0stcat•10mo ago
Another thing I have noticed with automation in general is that the more you use it, the less you understand the thing being automated. I think the reason a lot of things today are still done manually is that humans inherently understand that both short- and long-term success with a task requires a conceptual understanding of the system's components (whether partially or fully imagined, in the case of complex business scenarios), even though it lengthens time to complete in the short term. How do you modify or grow a system you do not understand? It feels like cutting a branch at a certain length and never allowing it to grow beyond where you've placed the automation. I will be interested to see the outcome of today's increased push for advanced automation in places where the business relies on understanding the system to make adjacent decisions and further business operations.
akuchling•10mo ago
Asimov's story The Feeling of Power seems relevant: https://en.wikipedia.org/wiki/The_Feeling_of_Power
baxtr•10mo ago
The 1980 version of your comment:

>Just saw a demo of a new word processor system that lets a manager dictate straight into the machine, and it prints the memo without a secretary ever touching it. Slick stuff. But five years ago, that memo would’ve gone through a typist. Replace her with a machine, and she’s not suddenly editing novels from home. She’s unemployed, losing her paycheck and benefits.

And when that system malfunctions, who’s left who actually knows how to fix it or manage the workflow? You can’t promote experience that never existed. Strip out the entry-level roles, and you cut off the path to leadership.

lenerdenator•10mo ago
The difference between the 1980 version of my post and the 2025 version of my post is that in 1980 there was conceivably a future where the secretary could retrain to do other work (likely with the help of one of those new-fangled microcomputers) that would need human intelligence in order to be completed.

The 2025 equivalent of the secretary is potentially looking across a job market that is far smaller because the labor she was trained to do, or labor similar enough to it that she could have previously successfully been hired, is now handled by artificial intelligence.

There is, effectively, nowhere for her to go to earn a living with her labor.

seadan83•10mo ago
How can we reconcile this with how much of the US and world are still living as if it were the 1930s or even 1850s?

Travel 75 to 150 miles outside of a US city and it will feel like time travel. If so much is still 100 years behind, how will civilization so broadly adopt something that is yet more decades into the future?

I got into Starlink debates with people during Hurricane Helene. Folks were glowing over how people just needed internet. In reality, internet meant fuck all when what you needed was someone with a chainsaw, a generator, heater, blankets, diapers and food.

Which is to say, technology and its importance are a thin veneer on top of organized society. All of it is frail, and even recent technology still has a long way to go to fully penetrate rural communities. At the same time, that spread is less important than it would seem to a technologist. Hence, technology has not spread uniformly everywhere, and ultimately it is not that important. Yet how will AI, which is even more futuristic, leapfrog this? My money is that rural-town USA will look almost identical 30 years from now. Many towns still look identical to how they did 100 years ago.

xurias•10mo ago
Who do you think voted for Trump? You point out that it's perfectly possible to live a "simple" rural life.

I see https://en.wikipedia.org/wiki/Beggars_in_Spain and the reason why they vote the way they do. Modern society has left them behind, abandoned them, and not given them any way to keep up with the rest of the US. Now they're getting taken advantage of by the wealthy like Trump, Murdoch, Musk, etc. who use their unhappiness to rage against the machine.

> My money is that rural towns USA will look almost identical in 30 years from now.

You mean poor, uneducated and without any real prospects of anything like a career? Pretty much. Except there will be far more people who are impoverished and with no hope for the future. I don't see any of this as a good thing.

seadan83•9mo ago
> You point out that it's perfectly possible to live a "simple" rural life.

Indeed, but more to the point, many people still live these lives. The propagation of technology is not uniform; it is slow, ongoing, and not necessarily even a good thing. My point is that technological progress and the feeling of living in a very advanced age are actually a veneer. The second point is: how are we going to get massive adoption of technology that is decades away, when we still haven't fully adopted the technologies of the last two centuries?

> You mean poor, uneducated and without any real prospects of anything like a career?

A lot of those rural towns had large farms, which had people far richer than software engineers. I think there is a lot of complexity when characterizing 'rural' America (which is a lot closer to a lot of people than I think they otherwise know).

I don't quite share those value judgements. I think it's varied and complicated. My point instead is really more about the propagation of technology. Another example is the US compared to, say, Japanese smartphones; I was told the USA is about 15 years behind in generalized smartphone tech. A podcast I listened to recently talked about the deep integration of technology in Chinese Uber equivalents, something that is only recent in US offices, where you can go into a room and 'cast' something onto a screen. Apparently in China, being able to play a movie on a screen in the back of an Uber has been a seamless, integrated experience for a long time. Another good example is credit card technology. The oldest method is a carbon copy of the embossed card numbers, then the magnetic strip, then the chip, then tap. Europe had chips in all of its credit cards while some places in the US were still doing carbon copies, and even the 'advanced' places were doing magnetic strip only. Canada has been ahead of the US for a while in point-of-sale systems: virtually every restaurant brings a card reader to you, instead of (as in the US) this dance where you hand someone your credit card so they can go to the register, where there is a wired machine to swipe it.

So, I suppose my biggest point is that technology spreads a lot slower than we tend to think. It's not a process of years, but of decades and centuries. I'm really pushing back on this technophile sentiment that we're already living in a super advanced age with a strong, robust society; instead these are veneers over very uneven and slow-moving advancement. This is not going to change overnight (or in the next century) just because someone creates a humanoid AI robot thing that can lift bales of hay and stack them in the right place. Given the lack of adoption of various technologies that we already see, I take that as evidence that nothing will change too quickly, for 30 years or even more, just because we get a bit better with robotics.

827a•10mo ago
If your argument is that all that happened and it all turned out fine: are you sure we (socioeconomically, on average) are better off today than we were in the 1980s?
baxtr•10mo ago
Probably depends on who you mean by "we". On a global level, the answer is definitely yes.

Extreme poverty decreased, child mortality decreased, and literacy and access to electricity have gone up.

Are people unhappier? Maybe. But not because they lack something materially.

827a•10mo ago
I think in this case it's fair to assume what I meant was "the secretaries whose jobs were replaced in the 80s and people like them", or "the people whose jobs will be replaced with AI today"; not "literally the poorest and least educated people on the planet, whose basic hierarchy of needs struggles to be met every day."
milesrout•10mo ago
I am sure of that. I think people forget the difference in living conditions then.

Things that were common in that era that are rare today:

1. Living in shared accommodation. It was common then for people to live in boarding houses and bedsits as adults. Today these are largely extinct. Generally, the living space per person has increased substantially at every level of wealth. Only students live in this sort of environment today, and even then it is usually a flat (i.e. sharing with people you know on an equal basis), not a bedsit/boarding house (i.e. living in someone's house according to her rules--no ladies in gentlemen's bedrooms, no noise after 8pm, etc.).

2. Second-hand clothes and repaired clothes. Most people wear new clothes now; people buy second-hand because it is trendy, not because it is all they can afford. Nobody really repairs anything anymore--people just buy new. Nobody darns socks or puts elbow patches on jackets where they have worn out. Only people who buy expensive shoes get their shoes resoled. Normal people just buy cheap shoes more often, and they really do save money doing this.

Today the woman that would have been a typist has a different job, and a more productive one that pays more.

Philpax•10mo ago
Not quite comparable; these systems will continue to grow in capacity until there is nothing for your average human to be able to reskill to. Not only that, they will truly be beyond our comprehension (arguably, they already are: our interpretability work is far from where it would need to be to safely build towards a superintelligence, and yet...)
mandmandam•10mo ago
> if you replace them with agentic AI, that doesn't mean that person has gone away or is now free to write poetry. It means they're out of an income and benefits.

That's capitalism for ye :/ Join us on the UBI train.

Say, have you ever read the book 'Bullshit Jobs'...

lenerdenator•10mo ago
> That's capitalism for ye :/ Join us on the UBI train.

The people with all of the money effectively froze wages for 45 years, and that was when there were people actually doing labor for them.

What makes you think that they'll peaceably agree to UBI for people who don't sell them labor for money?

mandmandam•10mo ago
> The people with all of the money effectively froze wages for 45 years

Yep. And they didn't accomplish that 'peaceably' either, for the record. A lot of people got murdered, and many more were smeared/threatened/imprisoned, etc. Entire countries got decimated.

> What makes you think that they'll peaceably agree to UBI for people who don't sell them labor for money?

I don't imagine for a moment that they'll like UBI. There is no shortage of examples over recent millennia of how far the parasite class will go to keep the status quo.

History also shows us that having all the money doesn't guarantee that people will do things your way. Class awareness, strikes, unions, protest, and alternative systems/technological advance have shown their mettle. These things scare oligarchs because they work.

Philpax•10mo ago
I am hoping that will be our saving grace this time around as well, but my fear is that the oligarchs will control more autonomous power than we can meaningfully resist, and our existence will no longer be strictly necessary for their systems to operate.
someguyorother•10mo ago
The dark humor in this is that any such technologically advanced future where humans have a meaningful say will eventually look like one of abundant luxury communism: it's just that the oligarchs' version will have a lot of people die first before the oligarchs enjoy their abundance.

The third option is that the oligarchy fully internalizes its pursuit of ruthless concentration of power. But in that case, someone will probably create an AI that's better at playing the power game, and at that point, it's over for the oligarchs.

milesrout•10mo ago
Wages haven't been frozen for 45 years in real terms. They have gone up considerably.
mandmandam•10mo ago
Compare wages to productivity [0]. Or compare the rise in wages to the rise in housing costs [1].

The vast majority of the gains in productivity have been captured and funneled upward.

0 - https://assets.weforum.org/editor/HFNnYrqruqvI_-Skg2C7ZYjdcX...

milesrout•10mo ago
That graph is misinformation. It deliberately excludes the wages of the most productive workers (but includes their productivity), which makes it meaningless.
jes5199•10mo ago
if the AI transition really turns into an Artificial Labor revolution - if it really works and isn’t an illusion - then we’re going to have to have a major change in how we distribute wealth. The bad future is one where the owner class no longer has any use for human labor and the former-worker class has nothing
foobarian•10mo ago
TBH this is already how the US got into the current mess.
milesrout•10mo ago
But we have had the same thing happen constantly. Automation isn't new. How many individuals are involved in assembling a car today vs in the 1970s? An order of magnitude fewer. But there aren't loads of unemployed people. The market puts labour where it is needed.

Automation won't obsolete work and workers; it will make us more productive, and our desires will increase. We will all expect what today are considered luxuries only the rich can afford. We will all have custom software written for our needs. We will all have individual legal advice on any topic we need advice on. We will all have bigger houses with more stuff in them, better finishings, triple-glazed windows, and on and on.

jes5199•10mo ago
yeah and then what. I don’t think desire is infinite.
milesrout•10mo ago
It is uncapped and indefinite. People always want more than they have. We get used to what we have. What was considered a luxury is baseline today. Today's luxuries will before long be considered part of the "poverty line".
Spooky23•10mo ago
Not necessarily. The reality is the landscaping guy is struggling to handle callbacks or is burning overhead. Even then, two girls in the office hits a ceiling where it doesn't scale, and quickly you're in a call center scenario.

Call-center-based services always suck. I remember going to a talk where American Express, who operated best-in-class call centers, found that 75% of their customers don't want to talk to them. The people are there because that's needed for a complex relationship; the more stuff you can address earlier in the funnel, the better.

Customers don’t want to talk to you, and ultimately serving the customer is the point.

nicbou•10mo ago
In theory, the economy should create new avenues. Labour costs are lower, goods and services get cheaper (inflation adjusted) and the money is spent on things that were once out of reach.

In practice I fear that the savings will make the rich richer, drive down labour's negotiating power and generally fail to elevate our standard of living.

vannevar•10mo ago
I don't think Asimov envisioned a world where AI would be controlled by a clique of ultra-wealthy oligarchs.
Spooky23•10mo ago
Asimov’s future was pretty dark. He didn’t come out and say it, but it was implied that we had a lot of big entities ruling everything. Many of the negative political people were painted as “populist” figures.

If you are a fan of the foundation books, recall that many of the leaders of various factions were a bunch of idiots little different than the carnival barkers we see today.

vonneumannstan•10mo ago
May want to reread. U.S. Robots and Mechanical Men is pretty prominent in his Robot stories.
code_for_monkey•10mo ago
or that it would be aggressively focused on doing the work of already low-paid creative-field jobs. I don't want to read an AI's writing if there's a person who could write it.
ruffrey•10mo ago
As I recall, many of his early stories involved "U.S. Robot & Mechanical Men" which was a huge conglomerate owning a lot of the market on AI (called "robots" by Asimov, it included "Multivac" and other interfaces besides humanoid robots).
tumsfestival•10mo ago
I remember reading his book 'The Naked Sun' back in high school, and one of the things that stuck with me was how Earth was kind of a dump bereft of robots, while the Spacer humans were incredibly rich, had a low population, and had their society run by robots doing all the menial work. You could argue he envisioned our current world, even if accidentally.
klabb3•10mo ago
Yes. When I hear dreams of the past it makes me nostalgic, because they all come from a pre-exploited era of tech, with the underlying subtext that humanity is unified in wanting tech to be used for good purposes. The reality is that tech is a vessel for traditional enrichment, much as resource wars over, say, oil or land have been. Both domestically and geopolitically, tech is seen that way today. In such a world, tech advancements offer opportunities for the powerful to grab more, changing the relative distribution of power in their favor. If tech shows us anything, it is that this relative notion of wealth or social posturing is the central axis around which humans align themselves, wherever on the socioeconomic ladder you are and independent of absolute and basic needs.
southernplaces7•10mo ago
>because they all come from a pre-exploited era of tech with the underlying subtext that humanity is unified in wanting tech to be used for good purposes.

That's the problem with being nostalgic for something you possibly didn't even live. You don't remember all the other ugly complexities that don't fit your idealized vision.

Nothing about the world of the sci-fi golden age was less exploitative or less prone to human misery than today. If anything, it was far worse than what we have now in many ways (excluding perhaps the reach of the surveillance state).

Some of the US government's worst secret experiments against the population come from that same time, and the naive faith of the population in their "leaders" made propaganda by centralized big media outlets all the more pervasively powerful. Social miseries were common, and so too were strictures that closed off economic and social opportunities to many more people. As for technology being used for good purposes, bear in mind that, among many other nasty things being done, the '50s and '60s were a time in which several governments flagrantly tested thousands of nukes in the open, in the skies, above ground and in the oceans, with hardly a care in the world or any serious public scrutiny. If you're looking at that gone world with rose-tinted glasses, I'd suggest rose-tinted welding goggles instead.

The world of today may be full of flaws, but the avenues for breaking away from controlled narratives and controlled economic rules are probably broader than they've ever been.

klabb3•10mo ago
You are entirely right to call me out on that. But I would like to say that the sci-fi that dealt with computers, AI, and automation was just dreams of a different world, because those technologies hadn't been exploited yet. Even many of the dystopias feel innocent with today's knowledge of where it went. Such as 1984, imo.
southernplaces7•9mo ago
>But I would like to say that the sci-fi that dealt with computers, AI, and automation was just dreams of a different world, because those technologies hadn't been exploited yet. Even many of the dystopias feel innocent with today's knowledge of where it went. Such as 1984, imo.

On this I definitely agree, especially on the last part. Specifically, when I read the science fiction of previous decades, and see how its descriptions of a surveillance state compare to the surveillance capacities that literally get applied today by so many states (with varying degrees of authoritarianism), the old sci fi seems absurdly quaint.

tim333•10mo ago
There are some dreams of the past like that, but most sci-fi tends to be quite dark, like The Matrix or Terminator. In practice, a lot of tech proves helpful in not-very-sci-fi ways: antibiotics, phones, etc. Human nature is still what it is, though.
vannevar•10mo ago
>Asimov’s future was pretty dark. He didn’t come out and say it, but it was implied that we had a lot of big entities ruling everything.

>As I recall, many of his early stories involved "U.S. Robot & Mechanical Men" which was a huge conglomerate owning a lot of the market on AI...

>May want to reread. U.S. Robots and Mechanical Men is pretty prominent in his Robot stories.

Good points from some of these replies. The interview is fairly brief, perhaps he didn't feel he had the time to touch on the socio-economic issues, or that it wasn't the proper forum for those concerns.

gmuslera•10mo ago
What we are labeling as AI today is different from what it was imagined to be in the 90s, or when Asimov wrote most of his stories about robots and other forms of AI.

That said, a variant of the Susan Calvin role could prove useful today.

empath75•10mo ago
Not sure that I agree with that. People have been imagining human-like AI since before computers were even a thing. The Star Trek computer from TNG is basically an LLM, really.

AI _researchers_ had a different idea of what AI would be like, as they were working on symbolic AI, but in the popular imagination, "AI" was a computer that acted and thought like a human.

NoTeslaThrow•10mo ago
> The Star Trek computer from TNG is basically an LLM, really.

The Star Trek computer is not like LLMs: a) it provides reliable answers, b) it is capable of reasoning, c) it is capable of actually interacting with its environment in a rational manner, d) it is infallible unless someone messes with it. Each one of these points is far in the future of LLMs.

sgt•10mo ago
Yet when you ask it to dim the lights, it dims either way too little or way too much. Poor Geordi.
sgt•10mo ago
For what it's worth, I was referring to the episode when he set up a romantic dinner for the scientist lady. Computer couldn't get the lighting right.
lcnPylGDnU4H9OF•10mo ago
Their point is that it seems to function like an LLM even if it's more advanced. The points raised in this comment don't refute that, per the assertion that each of them is in the future of LLMs.
NoTeslaThrow•10mo ago
> Their point is that it seems to function like an LLM even if it's more advanced.

So did ELIZA. So did SmarterChild. Chatbots are not exactly a new technology. LLMs are at best a new cog in that same old functionality—but nothing has fundamentally made them more reliable or useful. The last 90% of any chatbot will involve heavy usage of heuristics with both approaches. The main difference is some of the heuristics are (hopefully) moved into training.

Philpax•10mo ago
Stating that LLMs are not more reliable or useful than ELIZA or SmarterChild is so incredibly off-base I have to wonder if you've ever actually used an LLM. Please run the same query past ELIZA and Gemini 2.5 (https://aistudio.google.com/prompts/new_chat) and report back.
NoTeslaThrow•10mo ago
> Please run the same query past ELIZA and Gemini 2.5 (https://aistudio.google.com/prompts/new_chat) and report back.

I don't see much difference—you still have to take any output skeptically. I can't claim to have ever used gemini, but last I checked it still can't cite sources, which would at least assist with validation.

I'm just saying this didn't introduce any fundamentally new capabilities—we've always been able to GIGO-excuse all chatbots. The "soft" applications of LLMs have always been approximated by heuristics (e.g. generation of content of unknown use or quality). Even the summarization tech LLMs offer doesn't seem to substantially improve over its NLP-heuristic-driven predecessors.

But yea, if you really want to generate content of unknown quality, this is a massive leap. I just don't see this as very interesting.

filoleg•10mo ago
> I can't claim to have ever used gemini, but last I checked it still can't cite sources, which would at least assist with validation.

Yes, it can cite sources, just like any other major LLM service out there. Gemini, Claude, Deepseek, and ChatGPT are the ones I personally validated this with, but I bet other major LLM services can do so as well.

Just tested this using Gemini with an "Is fluoride good for teeth? Cite sources for any of the claims" prompt, and it listed every claim as a bullet point accompanied by the corresponding source. The sources were links to specific pages addressing the claims from the CDC, Cleveland Clinic, Johns Hopkins, and NIDCR. I clicked on each of the links to verify that they were corroborating what the Gemini response was saying, and they were.

In fact, it would more often than not include sources even without me explicitly asking for sources.
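
For the curious, the test described above is easy to reproduce. Here is a minimal sketch using Google's google-generativeai Python SDK; the model name and API key are placeholders, and the "sources" come back as plain text links that still have to be opened and checked by hand, as the comment describes:

    # Reproduce the fluoride-citation test from the comment above.
    # Assumptions: model name and API key are placeholders; the SDK's
    # generate_content() returns ordinary text, so any cited links must
    # still be verified manually.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key
    model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name
    response = model.generate_content(
        "Is fluoride good for teeth? Cite sources for any of the claims."
    )
    print(response.text)  # claims as bullet points with source links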

pigeons•10mo ago
They don't make up the sources or cite sources that don't actually contain the claim anymore?
protocolture•10mo ago
I get that sometimes, but you click the link and very easily determine whether the source exists or not.
NoTeslaThrow•10mo ago
> Yes, it can cite sources, just like any other major LLM service out there.

Let's see an example:

Ask if america was ever a democracy and tell us what it uses as sources to evaluate its ability to function. Language really shows its true colors when you commit to floating signifiers.

I asked gemini "was america ever a democracy"? And it confidently responded "While the ideal of democracy has always been a guiding principle in the United States", which is a blatant lie, and provided no sources. The next prompt, "was america ever a democracy? Please cite sources", gives a mealy-mouthed reply hedging on the definition of democracy... which it refuses to cite. If I ask it "will america ever be democratic" it just vomits up excuses about democracy being a principle and not measurable. With no sources. Etc. This is not a useful tool for things humans already do well. This is a PR megaphone with little utility outside of shitty copy editing.

whilenot-dev•10mo ago
> The Star Trek computer from TNG is basically an LLM, really.

Watched all seasons recently for the first time. While some things are "just" vector search with a voice interface, there are also goodies like "Computer, extrapolate from theoretical database!", or "Create dance partner, female!" :D

For anyone curious: https://www.youtube.com/watch?v=6CDhEwhOm44

palmotea•10mo ago
> The Star Trek computer from TNG is basically an LLM, really.

No. The Star Trek computer is a fictional character, really. It's not a technology any more than Jean-Luc Picard is. It does whatever the writers needed it to do to further the plot.

It reminds me: J. Michael Straczynski (of Babylon 5 fame) was once asked "How fast do Starfuries travel?" and he replied "At the speed of plot."

bpodgursky•10mo ago
AI is far closer to Asimov's vision of AI than anyone else's. The "Positronic Brain" is very close to what we ended up with.

The three laws of robotics seemed ridiculous until 2021, when it became clear that you could just give AI general firm guidelines and let them work out the details (and ways to evade the rules) from there.

throw_m239339•10mo ago
> What we are labeling as AI today is different than was thought to be in the 90s, or when Asimov wrote most of his stories about robots and other ways of AI.

Multivac in "the last question"?

kogus•10mo ago
I think we need to consider what the end goal of technology is at a very broad level.

Asimov says in this that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.

That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be. We are not far even now from a prompt accepting a request such as "Write another volume of the Foundation series, in the style of Isaac Asimov", and getting a complete novel that does not need editing, does not need review, and is equal to or better than the quality of the original novels.

When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.

empath75•10mo ago
> But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be.

Comparative advantage. Even if that's true, AI can't possibly do _everything_. China is better at manufacturing pretty much anything than most countries on earth, but that doesn't mean China is the only country in the world that does manufacturing.

Philpax•10mo ago
> AI can't possibly do _everything_

Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.

seadan83•10mo ago
Why not - IMO you perhaps underestimate human complexity. There was a Guardian article where researchers created a map of one cubic millimeter of a mouse's brain. It contains 45 km worth of neurons and billions of synapses. IMO the AGI crowd are suffering from expert-beginner syndrome.
Philpax•10mo ago
Humans are one solution to the problem of intelligence, but they are not the only solution, nor are they the most efficient. Today's LLMs are capable of outperforming your average human in a variety (not all, obviously!) of fields, despite being of wholly different origin and complexity.
seadan83•9mo ago
I don't think I agree. I'm trying to point out the 'expert-beginner' problem. We don't realize how much is involved in human intelligence, to the point that we think it is easy and that AGI will be here in a couple of years. It's the same reason that in software "90% done is 90% left to go." We are way underestimating what is involved in human intelligence.

An analogy I think is like crypto problems that would require 1 billion years to compute. Even if we find a way to get that 100x more efficient, we're still not coming up with a solution anywhere near in our lifetimes.
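
As a back-of-the-envelope check of that analogy (the billion-year figure is the commenter's hypothetical, not a measured estimate):

    # The analogy above, worked out: a 100x efficiency gain on a
    # billion-year computation still leaves ten million years.
    years_required = 1e9   # hypothetical billion-year problem
    speedup = 100          # assumed efficiency improvement
    print(f"{years_required / speedup:.0e} years remaining")  # 1e+07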

> Today's LLMs are capable of outperforming your average human in a variety (not all, obviously!) of fields

My impression is many of those are benchmarks that are chosen by companies to look good for VCs. For example, the video showing off Devin was almost completely faked (time gaps were cut out, tasks were actually simpler and more tailor made than they were implied to be).

Something I was trying to convey to a non-technical stakeholder is that some tasks are stupid-easy for humans but insanely hard for computers - and vice versa. A big trick was therefore to delegate some things to humans and some things to computers. For example, computers are excellent at recollection and numerical computation, while humans can taste salt easily and trivially tell you when something is too salty or undersalted. In my opinion, AGI is an attempt to have computers do those things that are trivial for humans but insanely tough for computers. There is a long, long way to go; getting the first 50% is the easy part, and the last 50% (particularly the last 30% and the last 5%) IMO is hundreds (if not thousands) of __orders of magnitude__ harder.

foobarian•10mo ago
> what then are humans "for"?

Folding laundry

rqtwteye•10mo ago
A while ago I saw a video of a robot doing exactly that. Seems there is nothing left for us to do.
giraffe_lady•10mo ago
Here's a passage from a children's book I've been carrying around in my heart for a few decades:

“I don't like cleaning or dusting or cooking or doing dishes, or any of those things," I explained to her. "And I don't usually do it. I find it boring, you see."

"Everyone has to do those things," she said.

"Rich people don't," I pointed out.

Juniper laughed, as she often did at things I said in those early days, but at once became quite serious.

"They miss a lot of fun," she said. "But quite apart from that--keeping yourself clean, preparing the food you are going to eat, clearing it away afterward--that's what life's about, Wise Child. When people forget that, or lose touch with it, then they lose touch with other important things as well."

"Men don't do those things."

"Exactly. Also, as you clean the house up, it gives you time to tidy yourself up inside--you'll see.”

quxbar•10mo ago
It depends on what you are trying to get out of a novel. If you merely require repetitions on a theme in a comfortable format, Lester Dent style 'crank it out' writing has been dominant in the marketplace for >100 years already (https://myweb.uiowa.edu/jwolcott/Doc/pulp_plot.htm).

Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.

belter•10mo ago
- Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.

- The world already hosts millions of organic AI (Actual Intelligence). Many statistically at genius-level IQ. Does their existence make you obsolete?

Philpax•10mo ago
> Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.

Depends on your definition of "intelligence." No, they can't reliably navigate the physical world or have long-term memories like cats or dogs do. Yes, they can outperform them on intellectual work in the written domain.

> Does their existence make you obsolete?

Imagine if for everything you tried to do, there was someone else who could do it better, no matter what domain, no matter where you were, and no matter how hard you tried. You are not an economically viable member of society. Some could deal with that level of demoralisation, but many won't.

shortrounddev2•10mo ago
You can have an LLM crank out words but you can't make them mean anything
20after4•10mo ago
Suno is pretty good at going from a 3- or 4-word concept to a complete song with lyrics, melody, vocals, structure and internal consistency. I've been thoroughly impressed. The songs still suck, but they are arguably no worse than 99% of what the commercial music business has been pumping out for years. I'm not sure AI is ready to invent those concepts from nothing yet, but it may not be far off.
immibis•10mo ago
I used it. Once you get over the novelty you realize that all the songs are basically the same. Except for https://www.immibis.com/ex509__immibis_uc13_shitmusic.mp3 which you should pay attention to the lyrics in.

> they are arguably no worse than 99% of what the commercial music business has been pumping out for years

Correct, and that says a lot about our society.

wild_egg•10mo ago
Something about that mp3 actually feels disturbing. Is it normal for that type of model to attempt communication that way?

Struggling to find the words, but the synthetic voice directly addressing the prompt feels really surreal.

immibis•10mo ago
No, it's not normal. The output is almost always song lyrics annotated with markup like [Bridge], [Chorus] etc. I think they're using something from OpenAI with a system prompt and/or domain-specific training on top.
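
That speculation, if true, would look something like the sketch below: a general-purpose LLM steered by a system prompt into emitting section markup. The model name, prompts, and markup convention here are illustrative assumptions, not anything Suno has confirmed:

    # Hypothetical sketch of lyric generation as an LLM plus system prompt.
    # Nothing here reflects Suno's actual stack; the model and prompts are
    # assumptions for illustration only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "Write song lyrics annotated with section markup "
                        "such as [Verse], [Chorus], and [Bridge]."},
            {"role": "user",
             "content": "Lyrics that are also a valid X.509 certificate."},
        ],
    )
    print(resp.choices[0].message.content)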

It's not a pure AI output - I generated a bunch of lyrics in text (which doesn't use credits), selected the best one (obviously), padded them out with some repetition, entered a style, generated the audio a few times, selected my favourite audio, and edited the audio (poorly) by repeating a few bars of the intro to make it longer. You don't see the times it generated lyrics about X.509 certificates (even though the prompt was for them to be a valid X.509 certificate) or the times the vocals were unintelligible.

Here's another good version of the song with a different style: https://suno.com/song/2775f188-7582-4970-ac71-5a3b82e39a04?s...

Here's are two versions that are disqualified because you can't make out the lyrics: https://suno.com/song/9cebb5b3-c336-495e-be3d-195ea338eb52?s... https://suno.com/song/c6f0e666-ce91-4494-a8b5-1232862965c1?s...

---

I think generative AI does work as a toy. You can ask for all sorts of insane nonsense and laugh at what the program spits out to fulfil your request. I was a paying customer of AI Dungeon 2 (before the incident where OpenAI and/or the Mormons broke it in a poor attempt to impose safety rules).

I didn't keep any lyrics failures, but at the time, I was playing around with requesting songs that were also valid computer files, so here's one that went well: a "religious folk song that is also a valid Cisco configuration file", with the style changed to trance after the lyrics were generated: https://suno.com/song/32aa6d33-0f9f-4d3b-ad53-46a5fe238916?s... and another: https://suno.com/song/32aa6d33-0f9f-4d3b-ad53-46a5fe238916?s...

Juniper doesn't work as well because of the punctuation - it can generate lyrics with braced blocks, but they don't sound like anything: https://suno.com/song/32a0d70c-c9c9-468e-8905-67669c6b90d4?s...

Here's "a religious folk song that is also a valid COBOL program, without any English words": https://suno.com/song/b75aae68-9c1e-46e5-94d4-8bc63387640e?s...

Here are some that aren't configuration files but just sound cool. Prompt was something like "Write a song about a technological dystopia where everyone can only speak BGP." https://suno.com/song/1866516b-e133-47a5-a0ac-23ccb36f81ab?s... . This one's probably a song about "network protocols and their pros and cons": https://suno.com/song/23584394-7058-4bc1-8187-b3d286d36ec4?s...

And while I'm looking at my Suno outputs list, the reason I ever bothered to use it was to see if it could render these lyrics as a ripoff of "Pure Imagination" from Willy Wonka (it cannot because it only makes actual music): https://suno.com/song/19d1a90d-9ed6-4087-94e5-89e41363726e?s...

(I'm assuming that you can open these pages just by having the links. Some of them are set to public visibility.)

Philpax•10mo ago
Meaning is in the eye of the beholder. Just look at how many people enjoyed this and said it was "just what they needed", despite it being composed entirely of AI-generated music: https://www.youtube.com/watch?v=OgU_UDYd9lY
boredemployee•10mo ago
honestly wondering, how do u know it was AI generated?
Philpax•10mo ago
There's an "Altered or synthetic content" notice in the description. You can also look at the rest of the channel's output and draw some conclusions about their output rate.

(To be clear, I have no problem with AI-generated music. I think a lot of the commenters would be surprised to hear of its origin, though.)

jillesvangurp•10mo ago
Evolution is not about being better / winning but about adapting. People will adapt and co-exist. Some better than others.

AIs aren't really part of the whole evolutionary race for survival so far. We create them. And we allow them to run. And then we shut them down. Maybe there will be some AI-enhanced people that start doing better. And maybe the people bit becomes optional at some point. At that point you might argue we've just morphed/evolved into whatever that is.

lm28469•10mo ago
You could have said the same thing when we invented the steam engine, mechanized looms, &c. As long as the driving force of the economy/technology is "make numbers bigger" there is no end in sight, there will never be enough, there is no goal to achieve.

We already live lives which are artificial in almost every way. People used to die of physical exhaustion and malnutrition; now they die of lack of exercise and gluttony. Surely we could have stopped somewhere in the middle. It's not a resource or technology problem at that point, it's societal/political.

charlie0•10mo ago
It's the human scaling problem. What systems can be used to scale humans to billions while providing the best possible outcomes for everyone? Capitalism? Communism?

Another possibility is to not let us scale. I thought Logan's Run was a very interesting take on this.

js8•10mo ago
> By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.

This complementarity already exists in our brains. We have evolutionarily older parts of the brain that deal with our basic needs through emotions, and an evolutionarily younger neocortex that deals with rational thought. They have a complicated relationship; both can influence our actions through mutual interaction. Morality is managed by both, and neither is necessarily more "humane" than the other.

In my view, AI will be just another layer, an additional neocortex. Our biological neocortex is capable of tracking the un/cooperative behavior of around 100 people in the tribe, and allows us to learn a couple of useful skills for life.

The "personal AI neocortex" will track behavior of 8 billion people on the planet, and will have mastery of all known skills. It is gonna change humans for the better, I have little doubt about it.

dominicrose•10mo ago
> I think we need to consider what the end goal of technology is at a very broad level.

"we" don't control ourselves. If humans can't find enough energy sources in 2200 it doesn't mean they won't do it in 1950.

It would be pretty bad to lose access to energy after having it, worse than never having it IMO.

The amount of new technologies discovered in the past 100 years (which is a tiny amount of time) is insane and we haven't adapted to it, not in a stable way.

norir•10mo ago
This is undeniably true. The consequences of a technological collapse at this scale would be far greater than never having had the technology in the first place. For this reason, the people in power (in both industry and government) have more destructive potential than at any time in human history, by far. And they act like they have little to no awareness of the enormous responsibility they shoulder.
mperham•10mo ago
> When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.

Read some philosophy. People have been wrestling with this question forever.

https://en.wikipedia.org/wiki/Philosophy

In the end, all we have is each other. Volunteer, help others.

nthingtohide•10mo ago
> Humans need purpose.

Let me paint a purpose for you which could take millions of years. How about building an Atomic Force Microscope equivalent which can probe Calabi-Yau manifolds to send messages to other multiverses.

Fin_Code•10mo ago
I'm just hoping it brings out an explosion of new thought and not less thought. Will likely be both.
shortrounddev2•10mo ago
I have found there to be less diversity of thought on the internet in the last 10 years. I used to find lots of wild ideas and theories out there on obscure sites. Now it seems like every website is the same, talking about the same things.
behringer•10mo ago
They say the web is dead, but I think we just have bad search engines.
TimorousBestie•10mo ago
I find this difficult to understand. There was a great explosion of conspiracy theories in the last ten years, so you should be seeing more of it.
shortrounddev2•10mo ago
Even the conspiracy theory community has become like this. What used to be a community of passionate skeptics, ufologists, and rabid anti-statists has turned into the most overtly boot-licking right-wing apologists, who apply an incredible amount of mental energy to justifying the actions of what is transparently and blatantly the most corrupt government in American history, so long as that government is weaponized against whatever identity and cultural groups they hate.
willy_k•10mo ago
You're describing Twitter, not conspiracy communities in general. On the UFO front at least, I am aware of multiple YouTube channels and Discord servers with a healthy diversity of thought, and I'm sure the same goes for other areas.
immibis•10mo ago
Maybe they're all the same conspiracy theories. All the current conspiracy theories are that immigrants are invading the country and Biden's in on it. Where is the next Time Cube or TempleOS?
TimorousBestie•10mo ago
We’re living through the second renaissance of the flat-earthers, which aren’t all that concerned with Biden (beyond the usual “the govt is concealing the truth” meme).
20after4•10mo ago
Two words: Endless September.
tim333•10mo ago
If you go on twitter/x you will find a lot of wild ideas, many completely contradicting other groups on x and/or reality. It can be scary how polarized it is. If you open a new account and follow/like a few people with some odd viewpoint, soon your feed will be filled with that viewpoint, whatever it is.
chuckadams•10mo ago
It certainly is liberating all our creative works from our possession...
vonneumannstan•10mo ago
Intellectual Property is a questionable idea to begin with...
mrdependable•10mo ago
Why do you say that?
chuckadams•10mo ago
It's not the loss of ownership I'm lamenting, it's the loss of production by humans in the first place.
Philpax•10mo ago
Humans will always produce; it's just that those productions may not be financially viable, and may not have an audience. Grim, but also not too far off from the status quo today.
vonneumannstan•10mo ago
People made the same argument about Cameras vs Painting. "Humans are no longer creating the art!"

But I doubt most people would subscribe to that view now and would say Photography is an entirely new art form.

NitpickLawyer•10mo ago
> People made the same argument about Cameras vs Painting.

I remember that from a couple of years ago, when Stable Diffusion came out. There was a lot of talk about "art" and "AI" and someone posted a collection of articles / interviews / opinion pieces about this exact same thing - painting vs. cameras.

pesus•10mo ago
Using generative AI is a lot closer to hiring a photographer and telling them to take pictures for you than to taking the pictures yourself.
wubrr•10mo ago
I mean, you still have the option of taking pictures yourself, if you find that creative and rewarding...
pesus•10mo ago
Absolutely, but it still doesn't make hiring a photographer an art form.
wubrr•10mo ago
How do you define 'art form'? Anything can arguably be an art form.
thrwthsnw•10mo ago
Why do we give awards to Directors then?
MattGrommes•10mo ago
This is nit-picky, but you're probably actually referring to Cinematographers, or Directors of Photography. They're the ones who deal with the actual cameras, lenses, use of light, etc. Directors deal with the actors and the script/writer.

The reason we give them awards is that the camera can't tell you which lens will give you the effect you want or how to emphasize certain emotions with light.

chuckadams•10mo ago
A human is still involved with the camera. Just a different set of skills, and absent manipulation in post, the things being photographed tended to actually exist. Now we need neither photographer nor subject.
vonneumannstan•10mo ago
AIs still aren't autonomous. The model doesn't make anything unless a human directs it to. It's just another layer of abstraction above the camera or paintbrush.
zifpanachr23•10mo ago
False equivalency and you know it.
immibis•10mo ago
If we're abolishing it, we have to really abolish it, both ways: not abolishing companies' responsibilities while keeping their rights, and not abolishing individuals' rights while keeping their responsibilities.
pera•10mo ago
It's for sure less questionable than the current proposition of letting a handful of billionaires exploit the effort of millions of workers, without permission and completely disregarding the law, just for the sake of accumulating more power and more billions.

Sure, patent trolls suck, and so does the MAFIAA, but a world where creators have no means to subsist, where everything you do will be captured by AI corps without your permission just to be regurgitated into a model for profit, sucks way, way more.

adamsilkey•10mo ago
How so? Even in a perfectly egalitarian world, where no one had to compete for food or resources, in art, there would still be a competition for attention and time.
lupusreal•10mo ago
There is the general principle of legal apparatus to facilitate artists getting paid. And then there is the reality of our extant system, which retroactively extends copyright terms so corporations who bought corporations who bought corporations... ...who bought the rights to an artistic work a century ago can continue to collect rent on that today. Whatever you think of the idealistic premise, the reality is absurd.
palmotea•10mo ago
> Intellectual Property is a questionable idea to begin with...

I know! It's totally and completely immoral to give the little guy rights against the powerful. It infringes in the privileges and advantages of the powerful. It is the Amazons, the Googles, the Facebooks of the world who should capture all the economic value available. Everyone else must be content to be paid in exposure for their creativity.

Philpax•10mo ago
I'm glad we're seeing the death of the concept of owning an idea. I just hope the people who were relying on owning a slice of the noosphere can find some other way to sustain themselves.
robertlagrant•10mo ago
Did we previously have the concept of owning an idea?
observationist•10mo ago
Lawyers and people with lots of money figured out how to make even bigger piles of money for lawyers and people with lots of money from people who could make things like art, music, and literature.

They occasionally allowed the people who actually make things to become wealthy in order to incentivize other people who make things to continue making things, but mostly it's just the people with lots of money (and the lawyers) who make most of the money.

Studios and publishers and platforms somehow convinced everyone that the "service" and "marketing" they provided was worth a vast majority of the revenue creative works created.

This system should be burned to the ground and reset, and any indirect parties should be legally limited to at most 15% of the total revenues generated by a creative work. We're about to see Hollywood-quality AI video - the cost of movie studios, music, literature, and images is becoming nominal. There are already creative AI series and ongoing works that beat '90s-level visual effects and storyboarding, created and delivered via various platforms for free (although the exposure gets them ad revenue).

We better figure this stuff out, fast, or it's just going to be endless rentseeking by rich people and drama from luddites.

robertlagrant•10mo ago
I'm not following how any of the things you mention are "ideas".
dingnuts•10mo ago
patents and copyrights allow ownership of ideas and of the specific expression of ideas
sorokod•10mo ago
Keeping technology secret or forbidden is as old as humanity itself.
robertlagrant•9mo ago
That doesn't sound like ownership, though.
01HNNWZ0MV43FF•10mo ago
I just wish it was not, as usual, the people with the most money benefiting first and most
theF00l•10mo ago
Copyright law protects the expression of ideas, not the ideas themselves. My favourite case law reinforcing this was the dispute between David Bowie and the Gallagher brothers.

I would argue patents are closer to protecting ideas, and those are alive and well.

I do agree copyright law is terribly outdated but I also feel the pain of the creatives.

behringer•10mo ago
7 years, or maybe 14; that's all anybody needs. Anything more is greed and stops human progress.
Philpax•10mo ago
I appreciate someone named "behringer" posting this sentiment. (https://en.wikipedia.org/wiki/Behringer#Controversies)
justonceokay•10mo ago
If we are headed to a star-trek future of luxury communism, there will definitely be growing pains as the things we value become valueless within our current economic system. Even though the book itself is so-so IMO, Down and Out in the Magic Kingdom provides a look at a future economy where there is an infinite supply of physical goods, so the only economy is that of reputation. People compete for recognition instead of money.

This is all theoretical, I don’t know if I believe that we as humans can overcome our desire to hoard and fight over our possessions.

robertlagrant•10mo ago
You're saying something exactly backwards from reality. Star Trek is communism (except it's not) because there's no scarcity. It's not selfishness that's the problem. It's the ever-increasing number of things invented inside capitalism that we deem essential once they exist.
Detrytus•10mo ago
I always say this: we are headed to a star-trek future, but we will not be the Federation; we will become Borg. Between social media platforms, smartphones, and "wokeness", the inevitable result is that everybody will be forced into compliance, and no originality or divergent thinking will be tolerated.
lannisterstark•10mo ago
>star-trek future of luxury communism,

Banks' Culture Communism/Anarchism > Star Trek, any day imho.

renewiltord•10mo ago
It is an interesting time for LLMs to burst on the scene. Most online forums have already turned people into text replicators. Most HN commenters can be prompted into “write a comment about slop violating copyright” / “write a comment about Google violating privacy” / “write a comment about managers not understanding remote work”. All you have to do is state the opposite.

A perfect time for LLMs to show up and do the same. The subreddit simulators were hilarious because of the unusual ways they would perform but a modern LLM is a near perfect approximation of the average HN commenter.

I would have assumed that making LLMs indistinguishable from these humans would make those kinds of comments less interesting to interact with but there’s a base level of conversation that hooks people.

On Twitter, LLM-equipped Indians cosplay as right wing white supremacists and amass large followings (also bots, perhaps?) revealed only when they have to participate in synchronous conversation.

And yet, they are still popular. Even the “Texas has warm water ports” Texan is still around and has a following (many of whom seem non-bot though who can tell?).

Even though we have a literal drone, humans still engage in drone behaviour and other humans still engage them. Fascinating. I wonder whether the truth is that the inherent past-replication of low-temperature LLMs is more likely to fix us to our present state than to raise us to a new equilibrium.

Experiments in Musical Intelligence is now over 40 years old and I thought it was going to revolutionize things: unknown melodies discovered by machine, married to mind. Maybe LLMs aren't going to move us forward only because this point is already a strong attractor. I'm optimistic about the power of boredom, though!

dkdcwashere•10mo ago
> I would have assumed that making LLMs indistinguishable from these humans would make those kinds of comments less interesting to interact with but there’s a base level of conversation that hooks people.

I think it is heading in this direction; it just takes a very long time. 50% of people are dumber than average

seadan83•10mo ago
Dumber than median*
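
To make the distinction concrete: half are below the median by definition, but in a skewed distribution far more (or fewer) than half can sit below the mean. A quick sketch in Python, with made-up numbers:

    from statistics import mean, median

    scores = [80, 85, 90, 95, 100, 105, 200]  # hypothetical, skewed sample
    print(mean(scores))    # ~107.9: six of the seven fall below the mean
    print(median(scores))  # 95: three below, three above, by definition
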
tim333•10mo ago
“Texas has warm water ports” is more the hallmark of Russian propagandists. I think LLMs go more for saying 'delve' and odd hyphens and stuff?
slibhb•10mo ago
LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.

The upshot of this is that LLMs are quite good at the stuff that he thinks only humans will be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th century people assumed.

beloch•10mo ago
What we think of as "AI" at one point in time becomes a mere "algorithm" or "automation" at another. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".

LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.

israrkhan•10mo ago
Exactly... as someone said, "I need AI to do my laundry and dishes, while I focus on art and creative stuff." But AI is doing the exact opposite, i.e. the creative stuff (drawing, poetry, coding, document creation, etc.), while we are left to do the dishes and laundry.
TheOtherHobbes•10mo ago
As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.

It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.

Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.

There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.

AI can reinforce that. But - ironically - it can also be very good at subverting it.

Qworg•10mo ago
The wits in robotics would say we already have domestic robots - we just call them dishwashers and washing machines. Once something becomes good enough to take over the job completely, it takes the job's name and drops the "robotic" - that's why we still have robotic vacuums.
j_bum•10mo ago
Oh that’s an interesting idea.

I know I could google it, but I wonder if washing machines were originally called "automatic clothes washers" or something similar before they became widely adopted.

tshaddox•10mo ago
I think that’s a bit silly. The reason we don’t commonly refer to a dishwasher as a robot isn’t because dishwashers exist and we only use “robot” for things that don’t exist.

(This should already be clear given that robots do exist, and we do call them robots, as you yourself noted, but never mind that for now.)

It’s not even about the level of mechanical or computational complexity. Automobiles have a lot of mechanical and computational complexity, but also aren’t called robots (ignoring of course self-driving cars).

Qworg•10mo ago
What is or isn't a robot is a point of debate - there are many different definitions.

Generally, it has to automate a task with some intelligence, so dishwashers qualify. It isn't an existence proof (nor did I state that).

tshaddox•10mo ago
I'm more interested in how we regularly use the term, rather than how we might attempt to come up with a rigorous definition (particularly when that rigorous definition conflicts awkwardly with regular usage).

My point is simply that we absolutely do not refer to a home dishwasher as a robot. Nor an old thermostat with a bimetallic strip and a mercury switch. Nor even a normal home PC.

mylittlebrain•10mo ago
Similarly, we already have AI, which is really MI (Machine Intelligence). Long before the current hype cycle, the defense industry and others were using the same tools now being applied. Of course, there are differences, such as scale and architecture, etc.
hn_throwaway_99•10mo ago
> As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.

This really seems like an "akshually" argument to me...

Nobody is denying that there are dishwashers and washing machines, and that they are big time savers. But is it really a mystery what people are referring to when they say "I want AI to wash my dishes and do my laundry"? That is, I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine. But I still want something to fold my laundry, something that lets me just dump my dishes in the sink and have them come out clean, ideally put away in the cabinets.

> Obviously this is really wishing for domestic robots, not AI

I don't mean this to be an "every Internet argument is over semantics" example, but literally every company and team I know that's working on autonomous robots refers heavily to them as AI. And there is a fundamental difference between "old school" robotics, i.e. robots following procedural instructions, and robots that use AI-based models, e.g. https://deepmind.google/discover/blog/gemini-robotics-brings... . I think it's doubly weird that you say today's washing machines have "at least some very basic AI" in them (I think "very basic" is doing a lot of heavy lifting there...), but don't think AI refers to autonomous robots.

lannisterstark•10mo ago
> I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine.

I don't mean to sound insensitive, but, how? Literal hours?

tshaddox•10mo ago
> maybe you haven't noticed but there's a machine washing your clothes

Well sure, there’s also a computer recording, storing, and manipulating the songs I record and the books I write. But that’s not what we mean by “AI that composes music and writes books.”

This isn’t a quibble about the term “AI.” It’s simply clear from context that we’re talking about full automation of these tasks initiated by nothing more than a short prompt from the human.

GeoAtreides•10mo ago
>there's a good chance it has at least some very basic AI in it.

lol no, what it has is a finite state machine; you don't want undefined or new behaviour in user appliances
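
That kind of controller is literally a small lookup table of (state, event) -> state transitions, so its behaviour is fully enumerable. A toy sketch in Python (made-up states and events, not any real appliance's firmware):

    # Toy washing-machine controller as a finite state machine.
    TRANSITIONS = {
        ("idle", "start"): "fill",
        ("fill", "full"): "wash",
        ("wash", "timer_done"): "rinse",
        ("rinse", "timer_done"): "spin",
        ("spin", "timer_done"): "idle",
    }

    def step(state, event):
        # Unknown (state, event) pairs leave the state unchanged,
        # rather than inventing new behaviour.
        return TRANSITIONS.get((state, event), state)

    s = "idle"
    for ev in ["start", "full", "timer_done", "timer_done", "timer_done"]:
        s = step(s, ev)
    print(s)  # back to "idle" after a full cycle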

bdhcuidbebe•9mo ago
> there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.

The term AI clearly has lost all its meaning, so thank you for making it so apparent.

bad_user•10mo ago
I have yet to enjoy any of the "creative" slop coming out of LLMs.

Maybe some day I will, but I find it hard to believe, given that an LLM just copies its training material. All the creativity comes from the human input, and even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.

Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.

I'm not saying that it can't work at all, it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984, which already imagined the "versificator".

ChrisMarshallNY•10mo ago
> I have yet to enjoy any of the "creative" slop coming out of LLMs.

Oh, come on. Who can't love the "classic" song, I Glued My Balls to My Butthole Again[0]?

I mean, that's AI "creativity," at its peak!

[0] https://www.youtube.com/watch?v=wPlOYPGMRws (Probably NSFW)

ninkendo•10mo ago
I haven’t cried from laughing like this in a good while, thanks!
codethief•10mo ago
Apparently, the lyrics were not AI-generated, see https://www.reddit.com/r/Music/comments/1byjm7m/comment/l0wm...
ChrisMarshallNY•10mo ago
Good find!

A friend demoed Suno to me, a couple of days ago, and it did generate lyrics (but not NSFW ones).

bad_user•10mo ago
I don't find that very funny. It's interesting to see what AI can do, but wait a month or two and watch it again.

Compare that to the parodies made by someone like "Weird Al" Yankovic. I get that these tools will get better, but the best parodies work because of the human performer. They are funny because they aren't fake.

This goes for other art forms. People mention photography a lot, comparing it with painting. Photography works because it captures a real moment in time and space; it works because it's not fake. Painting also works because it shows what human imagination and skill with brushes can do. When it's fake (e.g., not made by a human painting with brushes on canvas, but by a Photoshop filter), it's meaningless.

ChrisMarshallNY•10mo ago
Seems that you may have a point. As noted in another comment[0], the [rather puerile] lyrics were completely bro-sourced. They used Suno to mimic an old-style band.

[0] https://news.ycombinator.com/item?id=43648786

__MatrixMan__•10mo ago
We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves. If we instead went around bragging about how efficiently we can fold a shirt, complete with mocap datasets of how it's done, we'd have gotten the other kind of AI first.
hn_throwaway_99•10mo ago
> We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves

Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is because it's some kind of big secret about how to fold a shirt, or there aren't enough examples of shirt folding.

Manipulating a physical object like a shirt (especially a shirt or other piece of cloth, as opposed to a rigid object) is orders of magnitude more complex than completing a text string.

__MatrixMan__•10mo ago
If you wanted finger-positioning data for how millions of different people fold thousands of different shirts, where would you go looking for that dataset?

My point is just that the availability of training data is vastly different between these cases. If we want better AI we're probably going to have to generate some huge curated datasets for mundane things that we've never considered worth capturing before.

It's an unfortunate quirk of what we decide to share with each other that has positioned AI to do art and not laundry.

schwartzworld•10mo ago
We thought machines were gonna do the work so we could pursue art and music. Instead, the machines get to make the art and music, while humans work in the Amazon warehouses.
aaronbaugher•10mo ago
It was kind of funny to see the shift in the media reaction when they realized the new batch of machines are better at replacing writers than at replacing truckers.
protocolture•10mo ago
The bottom line from Kasparov's book on AI was that AI researchers want to build AGI, but every decade they are forced to release something to generate revenue, and it's branded as AI until the next time.

And often they get so caught up supporting the latest fake-AI craze that they don't get to research AGI.

wubrr•10mo ago
> LLMs are statistical models trained on human-generated text.

I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...

slibhb•10mo ago
> Also, human brains are arguably statistical models trained on human-generated/collected data as well...

I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.

wubrr•10mo ago
Almost everything we learn in schools, universities, most jobs, history, news, Hacker News, etc. is literally human-generated text. Our brains have an efficient structure for learning language, which has evolved over time, but the process of actually learning a language happens after you are born, based on human-generated text and voice. Things like balance and walking, motion control, speaking (physical voice control), and other physical skills are trained on sensory data, but there's no reason LLMs/AIs can't be trained on similar data (and in many cases they already are).
skydhash•10mo ago
What we generate is probably a function of our sensory data plus what we call creativity. At least humans still have access to the sensory data, so we can separate the two (with varying success).

LLMs have access to what we generate, but not the source. So they embed how we may use words, but not why we use this word and not others.

wubrr•10mo ago
> At least humans still have access to the sensory data

I don't understand this point - we can obviously collect sensory data and use that for training. Many AI/LLM/robotics projects do this today...

> So it embed how we may use words, but not why we use this word and not others.

Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.

skydhash•10mo ago
> I don't understand this point - we can obviously collect sensory data and use that for training.

Sensory data is not the main issue; how we interpret it is.

In Jacob Bronowski's The Origins of Knowledge and Imagination, IIRC, there's an argument that our eyes are very coarse sensors. Instead, they do basic analysis from which the brain can infer the real world around us, combined with other data from other organs. Like Plato's cave, but with many more dimensions.

But we humans all come with the same mechanisms, which interpret things roughly the same way, so there's some commonality in the final interpretation.

> Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.

Words are symbols that refer to things and the relations between them. In the same book, there's a rough explanation of language which describes the three elements that define it: symbols or terms, the grammar (or the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.

Maybe we are not taught the rules explicitly, but there's a lot of training done with corrections when we say a sentence incorrectly. We also learn the symbols and the dictionary as we grow and explore.

So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind them.

wubrr•10mo ago
> In the same book, there's a rough explanation of language which describes the three elements that define it: symbols or terms, the grammar (or the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.

There are two types of grammar for natural language - descriptive (how the language actually works and is used) and prescriptive (a set of rules about how a language should be used). There is no known complete and consistent rule-based grammar for any natural human language - all of these grammars are based on some person or people, in a particular period of time, selecting a subset of the real descriptive grammar of the language and saying 'this is the better way'. Prescriptive, rule-based grammar is not at all how humans learn their first language, nor is prescriptive grammar generally complete or consistent. Babies can easily learn any language, even ones that do not have any prescriptive grammar rules, just by observing - there have been many studies that confirm this.

> there's a lot of training done with corrections when we say a sentence incorrectly.

There's a lot of the same training for LLMs.

> So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind them.

LLMs definitely learn 'the dictionary' (more accurately, a set of relations/associations between words and other types of data), and much better than humans do; not that such a 'dictionary' is an actual, determined part of the human brain.

jstanley•10mo ago
> there's an argument that our eyes are very coarse sensors. Instead, they do basic analysis from which the brain can infer the real world around us, combined with other data from other organs

I don't buy it. I think our eyes are approximately as fine as we perceive them to be.

When you look through a pair of binoculars at a boat and some trees on the other side of a lake, the only organ that's getting a magnified view is the eyes, so any information you derive comes from the eyes and your imagination, it can't have been secretly inferred from other senses.

andsoitis•10mo ago
The brain turns the raw input from the eyes into the rich, layered visual experience we have of the world:

- basic features (color, brightness and contrast, edges and shapes, motion and direction)

- depth and spatial relationships

- recognition

- location and movement

- focus and attention

- prediction and filling in gaps

“Seeing” the real world requires much more than simply seeing with one eye.

throwaway7783•10mo ago
One can look at creativity as discovery of a hitherto unknown pattern in a very large space of patterns.

No reason to think an LLM (a few generations down the line if not now) cannot do that

skydhash•10mo ago
Not really; sometimes it's just plausible lies. We distort the world but respect some basic rules, making it believable. Another difference from LLMs is that we can store this distortion and build upon it as $TRUTH.

And we can distort quite far (see cartoons in drawing, dubstep in music,...)

throwaway7783•10mo ago
What you are saying does not seem to contradict what I'm saying. Any distortion would be another hitherto unknown pattern.
827a•10mo ago
Maybe; at some level, are dogs' brains also simple sensory-collecting statistical models? A human baby and a dog are born on the same day; that dog never leaves the baby's side for 20 years. It sees everything the baby sees, it hears everything the baby hears, and it is given the opportunity to interact with its environment in roughly the same way the baby does, to the degree to which they are both physically capable. The intelligence differential after that time will still be extraordinary.

My point in bringing up that metaphor is to focus the analogy: When people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led to for example AI manufacturers investing billions of dollars into slurping up as much human intellectual output as possible to train "smarter" models.

The focus on the sensory input inherently devalues our quality of being: it implies that who we are is predominantly explicable by the world around us.

However, we should be focusing on the "statistical model" part: even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave aside), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.

It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you are providing it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, can behave similarly to a human who has trained in the domain for only years: you're barking up the wrong tree. What you're producing isn't even on the same spectrum; that doesn't mean it isn't useful, but it's not human-like intelligence.

wubrr•10mo ago
Well the dog brain and human brain are very different statistical models, and I don't think we have any objective way of comparing/quantifying LLMs (as an architecture) vs human brains at this point. I think it's likely LLMs are currently not as good as human brains for human tasks, but I also think we can't say with any confidence that LLMs/NNs can't be better than human brains.
827a•10mo ago
For sure; we don't have a way of comparing the architectural substrate of human intelligence versus LLM intelligence. We don't even have a way of comparing the architectural substrate of one human brain with another.

Here's my broad concern: On the one hand, we have an AI thought leader (Sam Altman) who defines super-intelligence as surpassing human intelligence at all measurable tasks. I don't believe it is controversial to say that we've established that the goal of LLM intelligence is something along these lines: it exists on the spectrum of human intelligence, it's trained on human intelligence, and we want it to surpass human intelligence, on that spectrum.

On the other hand: we don't know how the statistical model of human intelligence works, at any level that would enable reproduction or comparison, and there's really good reason to believe that the human intelligence statistical model is vastly superior to the LLM model. The argument for this lies in my previous comment: the vast majority of the advances in LLM intelligence come from increasing the volume of training data. Some intelligence likely comes from statistical-modeling breakthroughs since the transformer, but by and large it's from training data. On the other hand: comparatively speaking, the most intelligent humans are not more intelligent because they've been alive longer and thus had access to more sensory data. Some minor level of intelligence comes from the quality of your sensory data (studying, reading, education). But the vast majority of the intelligence difference between humans is inexplicable; Einstein was just Born Smarter; God granted him a unique and better statistical model.

This points to the undeniable reality that, at the very least, the statistical model of the human brain and that of an LLM are very different, which should cause you to raise eyebrows at Sam Altman's statement that superintelligence will evolve along the spectrum of human intelligence. It might, but it's like arguing that the app you're building is going to be the highest-quality and fastest MacOS app ever built, and you're building it using WPF and compiling it for x86 to run on WINE and Rosetta. GPT isn't human intelligence; at best, it might be emulating, extremely poorly and inefficiently, some parts of human intelligence. But they didn't get the statistical model right, and without that it's like forcing a square peg into a round hole.

matheusd•10mo ago
Attempting to summarize your argument (please let me know if I succeeded):

Because we can't compare human and LLM architectural substrates, LLMs will never surpass human-level performance on _all_ tasks that require applying intelligence?

If my summary is correct, then is there any hypothetical replacement for LLMs (for example, LLMs plus robotics, LLMs with CoT, multi-modal LLMs, multi-modal generative AI systems, etc.) which would cause you to consider this argument invalid (i.e., the replacement could, at some point, replace humans for all tasks)?

827a•10mo ago
Well, my argument is more so directed at the people who say "well, the human brain is just a statistical model with training data". If I say both birds and airplanes are just a fuselage with wings, then proceed to dump billions of dollars into developing better wings, we're missing the bigger picture of how birds and airplanes are different.

LLM luddites often call LLMs stochastic parrots or advanced text prediction engines. They're right, in my view, and I feel that LLM evangelists often don't understand why. Because LLMs have a vastly different statistical model, even when they showcase signs of human-like intelligence, what we're seeing cannot possibly be human-like intelligence, because human intelligence is inseparable from its statistical model.

But, it might still be intelligence. It might still be economically productive and useful and cool. It might also be scarier than most give it credit for being; we're building something that clearly has some kind of intelligence, crudely forcing a mask of human skin over it, oblivious to what's underneath.

BeetleB•10mo ago
Reminds me of an old math professor I had. Before word processors, he'd write up the exam on paper, and the department secretary would type it up.

Then when word processors came around, it was expected that faculty members would type it up themselves.

I don't know if there were fewer secretaries as a result, but professors' lives got much worse.

He misses the old days.

zusammen•10mo ago
To be truthful, though, that’s only like 0.01 percent of the “academia was stolen from us and being a professor (if you ever get there at all) is worse” problem.
jhbadger•10mo ago
This wasn't just an "academia" thing, though. All business executives (even low-level ones) had secretaries in the 1980s and earlier too. Typing wasn't something most people could do, and it was seen as a waste of time for them to learn. So people dictated letters to secretaries, who typed them. After the popularity of personal computers, it just became part of everyone's job to type their correspondence themselves, and secretaries (greatly reduced in number and rebranded as "assistants" who deal more with planning meetings and such) became limited only to upper management.
Lerc•10mo ago
"LLMs are statistical models"

I see this referenced over and over again to trivialise AI, as if it were a fait accompli.

I'm not entirely sure why invoking statistics is supposed to feel like a rebuttal. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and by that, and the fact that we think, you are invoking a soul, God, or Penrose.

lelandbatey•10mo ago
In this one case it's not meant to trivialize, it's meant to point out that LLMs don't behave the way we thought that AI would behave. We thought we'd have 100% logically-sound thinking machines because we built them on top of digital logic. We thought they'd be obtuse, we thought they'd be "book smart but not wise". LLMs are just different from that; hallucinations, the whole "fancy words and great sentences but no substance to a paragraph", all that is different from the rigid but perfect brains we thought AI would bring. That's what "statistical machine" seems to be trying to point out.

It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but we don't do that, so how we experience them is different).

Lerc•10mo ago
That's a very archaic view of AI, like '70s-era symbolic AI.
vacuity•10mo ago
Personally, I have a negative opinion of LLMs, but I agree completely. Many people are motivated to reject LLMs solely because they see them as "soulless machines". Judge based on the facts of the matter, and make your values clear if you must bring them into it, but don't pretend you're not applying values when you are. You can do worse: kneejerk emotional reactions are just pointless.
slibhb•9mo ago
I did not in any way "trivialise AI". LLMs are amazing and a massive accomplishment. I just wanted to contrast them with Asimov's conception of AI.

> I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics, even if they were what proof is there that you cannot make a statistical intelligent machine. It would not at all surprise me to learn that someone has made a purely statistical Turing complete model. To then argue that it couldn't think you are saying computers can never think, and by that and the fact that we think you are invoking a soul, God, or Penrose.

I don't follow this. I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.

To me, the strange move you're making is assuming that we will "accidentally" create thinking machines while doing AI research. On the contrary, I think we'll build thinking, conscious machines after understanding our own consciousness, or at least the consciousness of other animals, and not before.

Lerc•9mo ago
>I did not in any way "trivialise AI". LLMS are amazing and a massive accomplishment. I just wanted to contrast them to Asimov's conception of AI.

Point taken. As lelandbatey said, your comment seems to be the one case where it's not meant to trivialise.

>I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.

The "(regardless of the program they run)" suggests you think that AI cannot be achieved by algorithmic means. That runs a little counter to the belief that it is possible to build thinking machines, unless you think those future machines will have some non-algorithmic enhancement that takes them beyond machines.

I do not assume we will "accidentally" create thinking machines, but I certainly think it's not impossible.

On the other hand I suspect the best chance we have of understanding consciousness will be by attempting to build one.

aszantu•10mo ago
A funny thing about Asimov was how he came up with the laws of robotics and then wrote cases where they don't work. There are a few that I remember, one where a robot was lying because a bug in its brain gave it empathy and it didn't want to hurt humans.
bell-cot•10mo ago
Guess: https://en.wikipedia.org/wiki/Liar!_(short_story)
soulofmischief•10mo ago
That is still one of my favorite stories of all time. It really sticks with you. It's part of the I, Robot anthology.
nitwit005•10mo ago
I was always a bit surprised other sci fi authors liked the "three laws" idea, as it seems like a technological variation of other stories about instructions or wishes going wrong.
nthingtohide•10mo ago
Narratives build on top of each other so that complex narratives can be built. This is also the reason why Family Guy can speedrun through all the narrative arcs developed by culture in a 30-second clip.

Family Guy Nasty Wolf Pack

https://youtu.be/5oW9mNbMbmY

The perfect wish to outsmart a genie | Chris & Jack

https://youtu.be/lM0teS7PFMo

buzzy_hacker•10mo ago
Same here. A main point of I, Robot was to show why the three laws don't work.
cogman10•10mo ago
I may be misremembering, but I thought the main point of the I, Robot series was that, regardless of the laws, incomplete information can still end up getting someone killed.

In all the cases of killing, the robots were innocent. It was either a human that tricked the robot or didn't tell the robot what they were doing.

For example, a lady killed her husband by asking a robot to detach his arm and give it to her. Once she got it, she beat the husband to death, and the robot didn't have the capability to stop her (since it had given her its arm). That caused the robot to effectively self-destruct.

Giskard, I believe, was the only one that killed people. He ultimately ended up self-destructing as a result (the fate of robots that violate the laws).

tedunangst•10mo ago
That's certainly not the plot of Little Lost Robot.
cogman10•10mo ago
Little Lost Robot was about a robot with a modified First Law. That's not about the law failing, but rather about failing to install the full law.
aszantu•9mo ago
The story from I, Robot is one of Asimov's stories, and it works exactly as intended. The AI figured that to keep humans safe you have to put them in cages. Humans will always fight over something
pfisch•10mo ago
I mean, now we call the three laws "alignment", but it honestly seems inevitable that it will go wrong eventually.

That, of course, isn't stopping us from marching forward in the name of progress, though.

hinkley•10mo ago
And one that was sacrificing a few for the good of the species. You can save more future humans by killing a few humans today that are causing trouble.
pfisch•10mo ago
Isn't that the plot of westworld season 3?
hinkley•10mo ago
I think better than half the writers on Westworld were not born yet when the OG Foundation books were written.
creer•10mo ago
A good conceit or theme by an author on which to base a series of books that will sell? Not everything is an engineering or math project.
kagakuninja•10mo ago
In the Foundation books, he revealed that robots were involved behind the scenes, and were operating outside of the strict 3 laws after developing the concept of the 0th law.

>A robot may not harm humanity, or, by inaction, allow humanity to come to harm

Therefore a robot could allow some humans to die, if the 0th law took precedence.

nix-zarathustra•10mo ago
>he came up with the laws of robotics and then cases on how they don't work. There are a few that I remember, one where a robot was lying because a bug in his brain gave him empathy and he didn't want to hurt humans.

IIRC, none of the robots broke the laws of robotics; rather, they ostensibly broke the laws but were found, on later investigation, to have been following them because of some quirk.

darepublic•10mo ago
Seeing the creativity most people employ, that is, for selfish loopholes and inconsiderate behaviour, I am a little wary of empowering them.
lupusreal•10mo ago
Most creative work is benevolent or at least harmless. Certainly some people are malevolent, maybe even everybody some of the time, but you shouldn't believe that to represent the majority of creativity. That's way too misanthropic.
hoseyor•10mo ago
I have a genuine question I can’t find or come up with a viable answer to, a matter of said “unpleasantness” as he puts it: how do people make money or otherwise sustain themselves in this AI scenario we are facing?

Has anyone heard a viable solution, or even has one themselves?

I don’t hear anything about UBI anymore. Could that be because of the roughly 60+ million alien people who have flooded into western countries from countries with populations so large they are effectively endless? What do we do about that? Will that snuff out any kind of advancement in the west when roughly 6 billion people all want to be in the west, where everyone gets UBI and it’s the land of milk and honey?

So what do we do then? We can’t all be tech-industry people with six-figure-plus salaries and vested ownership, and most people aren’t multi-millionaires who can live far away from the consequences while demanding others subject themselves to them.

Which way?

janalsncm•10mo ago
I have soured on UBI because it tries to use a market solution to deal with problems that I don’t think markets can fix.

I want everyone to have food, housing, healthcare, education, etc. in a post scarcity world. That should be possible. I don’t think giving people cash is the best way to accomplish that. If you want people to have housing, give them housing. If you want people to have food, give them food.

Cash doesn’t solve the supply problem, as we can see with housing now. You would think a rise in the cost of housing would lead to more supply, but the cost of real estate also increases the cost of building.

slfnflctd•10mo ago
I've always thought there should be a 'minimum viable existence' option for those who are willing to forego most luxuries in exchange for not being required to do anything specific other than abide by reasonable laws.

It would be very interesting to see the percentage breakdowns of how such people would choose to spend their time. In my opinion, there would be enough benefit to society at large to make it worthwhile. For a large group (if not the majority), I'm certain the situation would turn out to be completely temporary: they would have the option to prepare themselves for some type of work they're better adapted to perform and/or enjoy, ultimately enhancing the culture and economy. Most of the rest could be useful as research subjects, if they were willing of course.

Obviously this is a bit of a utopian fantasy, but what can I say, Star Trek primed me to hope for such a future.

nthingtohide•10mo ago
There will be relative scarcity. Consider a scenario where the iPhone 50 is manufactured in a dark factory, but there is still a waiting period to get one. This is because of resource bottlenecks.
GeoAtreides•10mo ago
>how do people make money or otherwise sustain themselves in this AI scenario we are facing?

1% of the labour force works in agriculture:

https://ourworldindata.org/grapher/share-of-the-labor-force-...

1%

let that number sink in; think about what it really means.

And what it means is that at least basic food (unprocessed, no meat) could be completely free. It may take some smart logistics, but it's doable. All of our food is already one step, one small step, away from becoming free for everyone.

This applies to clothes and basic tools as well.

janalsncm•10mo ago
> Isaac Asimov describes artificial intelligence as “a phrase that we use for any device that does things which, in the past, we have associated only with human intelligence.”

This is a pretty good definition, honestly. It explains the AI Effect quite well: calculators aren’t “AI” because it’s been a while since humans were the only ones who could do arithmetic. At one point they were, though.

azinman2•10mo ago
Calculators can now do things almost no human can do, at least not in any reasonable time. But most (now) wouldn’t call them AI. They’re a tool, with a very limited domain
janalsncm•10mo ago
That’s my point, it’s not AI now. It used to be.
hinkley•10mo ago
Similarly, we esteem performance optimizations so highly that a lot of things that used to be called performance work are now called architecture, or good design. We just keep moving the goalposts to make things more comfortable.
saalweachter•10mo ago
I mean, at one point "calculator" was a job title.
timewizard•10mo ago
The abacus has existed for thousands of years. Those who had the job of "calculator" also used pencil and paper to manage larger calculations which they would have struggled to do without any tools.

That's humanity. We're tool users above anything else. This gets lost.

musicale•10mo ago
And "computer".
josefritzishere•10mo ago
Isaac Asimov's view of the future has aged surprisingly well. But techno-utopianism has not.
franze•10mo ago
I let Gemini 2.5 Pro write a short sci-fi story (the image is from ChatGPT). I think it did a decent job.

https://show.franzai.com/a/tiny-queen-zebu

34679•10mo ago
Ask it to count the words.
logicallee•10mo ago
your link is broken now
franze•10mo ago
fixed
Jgrubb•10mo ago
> humanity in general will be freed from all kinds of work that’s really an insult to the human brain.

He can only be referring to these Jira tickets I need to write.

BeetleB•10mo ago
There is a Jira MCP server...
fragmede•10mo ago
oh woah https://glama.ai/mcp/servers/@CamdenClark/jira-mcp

and MCP can work with deepseek running locally. hmm...
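
The local-model half is easy enough to sketch. A rough example, assuming Ollama's OpenAI-compatible endpoint on its default port and a locally pulled deepseek model (model name and port are assumptions; the MCP/Jira wiring is left out):

    import json, urllib.request

    # Ask a local model to draft the ticket; any OpenAI-compatible
    # server works the same way.
    req = urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",
        data=json.dumps({
            "model": "deepseek-r1",  # assumed to be pulled locally
            "messages": [{
                "role": "user",
                "content": "Draft a Jira ticket: login page 500s on empty password.",
            }],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])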

m463•10mo ago
flashback to Tron:

"MCP is highly intelligent and yet ruthless. It apparently wants to get rid of humans and especially users."

https://disney.fandom.com/wiki/Master_Control_Program

icecap12•10mo ago
As someone who just got done putting a bullet in some long-used instances, I both appreciated and needed this laugh. Thanks!
palmotea•10mo ago
I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future. He was good at it, which is why you listen and why it's enjoyable, but it's still all a fantasy.
timewizard•10mo ago
There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and imagined a future where humanity had to fight a war against it, resulting in its banishment from the entire universe.
palmotea•10mo ago
> There's also Frank Herbert, who saw AI as ruinous to humanity and its evolution, and imagined a future where humanity had to fight a war against it, resulting in its banishment from the entire universe.

Did he, though? Or was the Butlerian Jihad a backstory whose function was to allow him to believably center human characters in his stories, given the sci-fi expectations of the time?

I like Herbert's work, but ultimately he (and Asimov) were producers of stories to entertain people, so entertainment always would take priority over truth (and then there's the entirely different problem of accurately predicting the future).

tehjoker•10mo ago
I think this is kind of misunderstanding sci-fi a bit. You're right that it was designed to be entertaining, but the kernel of it is that writers take some existing trend and extrapolate it into the future. Do that enough times, and some of the stories will start to be meaningful looking backwards, and the people who made those predictions still deserve credit even if they weren't entirely useful in the forward direction.
triceratops•10mo ago
I always thought the Butlerian Jihad was a convenient way to remove AI as a plot element. Same thing with shields and explosions; it made swordfighting a plausible way to fight in a universe with faster-than-light travel.
MetaWhirledPeas•10mo ago
> I wouldn't put too much stock in this. Asimov was a fantasy writer telling fictional stories about the future.

Why not? Who is this technology expert with flawless predictions? Talking about the future is inherently an exercise of the imagination, which is also what fiction writing is.

And nothing he's saying here contradicts our observations of AI up to this point. AI artwork has gotten good at copying the styles of humans, but it hasn't created any new styles that are at all compelling. So leave that to the humans. The same with writing; AI does a good job at mimicking existing writing styles, but has yet to demonstrate the ability to write anything that dazzles us with its originality. So his prediction is exactly right: AI does work that is really an insult to the complex human brain.

triceratops•10mo ago
> Asimov was a fantasy writer

Asimov was mostly not a fantasy writer. He was a science writer and professor of biochemistry. He published over 500 books. I didn't feel like counting but half or more of them are about science. Maybe 20% are science fiction and fantasy.

https://en.wikipedia.org/wiki/Isaac_Asimov_bibliography_(cat...

staticman2•10mo ago
Asimov was not savvy with computers and found it difficult to learn to use a word processor.
calmbell•10mo ago
A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.
palmotea•10mo ago
> A fantasy writer telling fictional stories about the future is more credible than many so-called serious people (think Marc Andreessen) who promote any technology as the bee's knees if it can make them money.

But that's more a knock on people like Marc Andreessen than a reason you should put stock in Asimov.

alganet•10mo ago
'92, huh? That is an opinion from a long time ago.

The question I have is why AI technology is being so aggressively advertised nowadays, and why none of it seems to be liberating in any way.

Once, the plow liberated humans from some kinds of work. Some time later, it was just a tool that slaves, very much not liberated, used to tend rich people's farms.

Technology is tricky. I don't trust who is developing AI to be liberating.

The article also plays on the "favorite author" thing. It knows many young folk see Asimov as a role model, so it is leveraging that emotional connection to gather conversation around a topic that is not what it seems to be. I consider it a dirty trick. It is disgraceful given the current world situation (AI being used for war, surveillance, brainwashing).

We are better than this.

tim333•10mo ago
>why AI technology is being so aggressively advertised nowadays[?]

I'm not sure I've actually seen an advertisement for AI. It's being endlessly discussed though on HN and other places, probably because it's at an interesting point at the moment making rapid progress. And also shoved into a lot of products and services of course.

alganet•10mo ago
The definition of advertisement is the least important part of my comment.

Focus on what matters for humans.

ElijahLynn•10mo ago
Reminds me of Jacque Fresco (Venus Project)!
kreyenborgi•10mo ago
The final part of this article is the main point, not the headline
zifpanachr23•10mo ago
Asimov is probably my least favorite major science fiction author (that I've read a significant number of works from).

Something about his worldview always seemed off to me, although I didn't know he actually seriously held such utopian convictions about AI. It explains an awful lot of the way his stories are.

lucraft•10mo ago
He also wrote a story about how AI will create a non-literate society, because we'll all just talk to the computers whenever we need anything.
eliaspro•10mo ago
Back then we also believed that access to every imaginable piece of information through the internet, and the ability to communicate across the globe, would lead to universal wisdom, world peace, and an unimaginable utopia where common sense, based on science and knowledge, prevails.

Oh boy, how foolish we've been!

bdhcuidbebe•9mo ago
What Asimov calls AI is not the same as what Sam Altman and the other charlatans call AI.

It's usually called AGI these days.