And if you just keep scrolling, you can make it to the story eventually.
Stationary gasoline engines were already changing the farm and reducing the head of horses necessary to feed a nation. They, too, were a faster horse for farmers.
Anyway, it took the Detroit police to eventually deploy the first automatic stoplight. The real innovations often seem to be found downstream of simple increases in capacity.
That all being said, it seems to me the current crop of LLMs haven't done this: their power and training budgets do not seem to be scaling favorably against adoption rates and profit margins. Absent a significant change in algorithms or computing substrate, I don't think this strategy is the leap everyone hopes it will be.
I'm much more curious about the results of 80k people who don't use AI regularly.
What consumer benefits is AI driving? At least with industrial automation, consumers benefited from new technologies, cheaper goods, and new job categories.
It would be great if there was some internal “make this benefit Main Street and knowledge workers” department, helping find ways for workers or creators to capture the value of some of the increased productivity.
I have no clue what this would look like other than maybe an investment fund for people creating apps/businesses based on Claude tools.
I can at least “imagine” a model that tries to crack this nut.
You'll either need to freelance, or start a company (or maybe a co-op) to capture the new value created by your ability to leverage AI.
It won't be much different to when a company buys more CNC machines and the employees don't get any more money despite producing way more parts.
What I need instead is something that takes the burden off my entire society and gives them a breather. Universal health care to start. They could also use a higher minimum wage, and lower housing costs.
All Perl programmers should be wishing for ponies; that's definitely less narrow-minded.
That's just the system we have, but slightly better and completely achievable.
This is quite easy. Just optimize the models to do reviews and bug finding. This would make developers (who normally hate reviews) quite happy and let them do more coding, thus delivering more value and possibly earning more...
If they wanted to do this, they could put their models in a public trust for the public's access and benefit in research, education, etc. Then it could be licensed, pay a dividend like a sovereign wealth fund, etc.
Considering that they copy and train on the sum total of all human creativity, a public trust would be in line with both the spirit of the fair use doctrine and its first and fourth factors:
1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.
That way everyone is rewarded with the benefits of running a model that was trained on everyone's creations.

The intrinsic satisfaction of increasing the wealth of shareholders. We should all be happy to devote ourselves to getting them more; nothing is more important than that.
Basically, consumers don't really pay for software in the first place, and the leverage over labour that companies get through software is already through the roof even before AI. Will much change for consumers of software?
So... not much benefit either.
My kids like to use AI to discuss things they learned in school in greater depth, and from different angles than they learned in the textbook. They can also ask "What if" and "Why not" questions from this infinitely patient teacher.
That might not apply to the kinds of parents who hang out here, though.
01. Professional excellence 18.8%
02. Personal transformation 13.7%
03. Life management 13.5%
04. Time freedom 11.1%
05. Financial independence 9.7%
06. Societal transformation 9.4%
07. Entrepreneurship 8.7%
08. Learning & growth 8.4%
09. Creative expression 5.6%
I find this highly suspicious. I'm sure there would be at least 10% who respond "I want it to go away".
> These are active Claude users who'd already found enough value to keep using AI, and our interview asked first for positive visions for AI and then for concerns that would counter their vision.
Some quotes that stuck out to me:
"I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
"I live in a war zone... AI can not only give practical advice, but also emotionally calm me down during panic attacks. It can calm someone during a missile attack in one chat, and laugh with me about something silly in another. That’s what makes it not fragmented into a therapist/teacher/friend, but something whole." Ukraine
"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
"The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than it's supposed to be."
The number one ask from the interviewed cohort is "professional excellence". That says a lot about what we prioritize in our society.
I am usually an optimistic person, but I struggle to see how this does not end up with more misery and a worse lifestyle all around.
> AI should learn to say two things: ‘I don’t know’ and ‘you’re wrong.’
My guess is that the next evolutionary step for LLMs will be yet another layer on top of reasoning: some form of self-awareness and theory of mind. The reasoning layer already shows glimpses of these things ("The user wants ...") but apparently not enough to suppress generation and say "I don't know".
It's like those recipe sites that have 5 pages of nice photos and background story and side tracks and whatnot as the author waxes verbose, so they need to put a 'Jump to recipe' button in so people don't just click 'Back' immediately.
Except this time for an article.
I can't tell if 'skip the junk' is good (junk can be skipped!) or bad (maybe this means there's too much junk on the page?)