Separately, I have a local camera repair shop, and my friend told me it's a two-month backlog to get your film-based camera worked on.
Ultimately, if the deal we get online is infinite tracking, infinite scrolling and infinite enshittification, real life starts to sound a whole lot better.
I gladly pay the (modest/token) late fees to help keep them open at this point. If someone set up a local arcade man…I’d be in heaven ha
Keeping movies longer and paying late fees may be hurting them more than helping them. It's entirely possible that the late fees are underpriced to avoid scaring away customers. New customers going away disappointed that the movie they want wasn't returned on time hurts them more than your late fees help.
Additionally, the odds that my kids are holding on to exactly what somebody else wants in that timeframe are very small. It's a small shop within a larger co-op situation with a modest following and pretty substantial stock. I know, for instance, we've never had an issue of wanting something that was already rented out.
Has it happened? Maybe. But the fees I’ve paid probably net positive against that rare instance. They aren’t open half the week so I can’t return them once Monday passes for several days anyway. Owner certainly hasn’t expressed concern and has even waived the fee before because clearly it’s of little consequence.
In the end, it'll probably require something like the model-based RL Yann LeCun talks about, and that's totally different from LLMs.
Sadly, I think there's a risk we might also be heading towards a dark age with few advances. Fundamental research has been squeezed away for being unprofitable, or hobbled by an industrialized publishing/review system, for a while now, and we've been coasting along on profitable applications rather than (expensive) breakthroughs in basics.
Robotics? Lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".
mRNA vaccines? Sure, they're a huge medical advance. With great potential, in that area. But it's just an area.
Space? Maybe, if we get past LEO, find something useful to do there, and don't succumb to Kessler syndrome.
> People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
That could do with a solid citation, tbh. The anti-AI people are really vocal on social media, but personally I like having the AI results, given how awful navigating the modern internet has become with all the cookie banners, anti-ad-blocker popups, etc.
Honestly, the LLMs seem like the most transformative technology we've had since the release of the iPhone.
Now, with the advent of LLMs I've had to pull out my old textbooks from storage.
Not unrelated: https://blog.gardeviance.org/2015/03/on-pioneers-settlers-to...
AI so far has really only shown massive utility for programming. It has broad potential across almost all knowledge work, but it’s unclear how much of that can be fulfilled in practice. There are huge technical, UX and social hurdles. Integrating middle brow chatbots everywhere is not the end game.
I'm wondering if this is something that hits new developers faster than more experienced ones?
Almost certainly, at least according to Ebbinghaus' forgetting curve.
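For reference, the classic curve models retention as exponential decay over time. Here's a minimal sketch; the decay constant S ("memory strength") is an illustrative stand-in, not a measured value:

```python
import math

def retention(t_days: float, strength: float) -> float:
    """Ebbinghaus-style forgetting curve: R = e^(-t/S).

    t_days:   time elapsed since learning
    strength: S, how durable the memory is (bigger = slower decay)
    """
    return math.exp(-t_days / strength)

# Practiced material (large S) decays far slower than unpracticed (small S).
print(round(retention(7, strength=20), 2))  # 0.7
print(round(retention(7, strength=2), 2))   # 0.03
```

The intuition for the thread: skills you exercise daily effectively have a large S, while ones you've offloaded to a tool decay on the small-S curve.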
People using AI for tasks (essay writing in the MIT study linked below) showed lower ownership, brain connectivity, and ability to quote their work accurately.
> https://arxiv.org/abs/2506.08872
There was an MSFT and Carnegie Mellon study that saw a link between AI use, confidence in one's skills, confidence in AI, and critical thinking. The takeaway for me is that people are getting into “AI take the wheel” scenarios when using GenAI and not thinking about the task. This affects novices more than experts.
If you managed to do the critical thinking, and had committed sufficient code to muscle memory, perhaps you aren't as impacted.
My theory is that if you're not full-time coding, it's harder to remember the boilerplate and obligatory code entailed by different SDKs for different modules. That's where the documentation-reading time goes, and what slows down debugging. That's where agent-assisted coding helps me the most.
I guess that's an advantage? People shouldn't have to burden their memory with boilerplate and CRUD code.
I don't think it is entirely a case of voluntary outsourcing of critical thinking. I think it's a problem of 1) total time devoted to the task decreasing, and 2) it's like trying to teach yourself puzzle-solving skills when the puzzles are all solved for you quickly. You can stare at the answer and try to think about how you would have arrived at it, and maybe you convince yourself you would have, but it should be common sense that the learning value of a puzzle evaporates if you are given the answer.
Maybe the difference between actually knowing stuff vs surface level? I know a lot of devs just know how to glue stuff together, not really how to make anything, so I'd imagine those devs lose their skills much faster.
Played with these coding agents for the last couple of weeks and instantly noticed the brainrot when I was staring at an empty vim screen trying to type a skeleton hello world in C.
Luckily the right idioms came back after a couple of hours, but the experience gave me a big scare.
> I cannot remember basic boilerplate stuff.
I don't know exactly what you mean by boilerplate stuff, but honestly, that's stuff we should have automated away prior to AI. We should not be writing boilerplate.
I'd highly encourage you to take the time to automate this stuff away. Not even with AI, but with scripts you can run to automate boilerplate generation. (Assuming you can't move it to a library/framework).
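As one concrete (hypothetical) shape of that non-AI route: a tiny generator script that stamps out a module skeleton from a template, so the boilerplate never has to live in your head. The template and file layout here are made up for illustration; adapt them to whatever your own SDK or framework demands:

```python
#!/usr/bin/env python3
"""Minimal boilerplate generator: writes a Python module skeleton to disk."""
from pathlib import Path
from string import Template

# Illustrative skeleton; swap in your real boilerplate.
MODULE_TEMPLATE = Template('''\
"""$name: $description"""
import logging

logger = logging.getLogger("$name")


def main() -> None:
    logger.info("$name starting")


if __name__ == "__main__":
    main()
''')


def generate_module(name: str, description: str, out_dir: str = ".") -> Path:
    """Render the template and write <out_dir>/<name>.py, returning its path."""
    path = Path(out_dir) / f"{name}.py"
    path.write_text(MODULE_TEMPLATE.substitute(name=name, description=description))
    return path


if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as tmp:
        print(generate_module("ingest", "pulls raw events from the queue",
                              out_dir=tmp).read_text())
```

Even a script this small pays off if the boilerplate changes: you update one template instead of every copy.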
Use it or lose it, as it were.
1) as in the article, it's a contraction of work: industrialization getting rid of hand-made work, or the contraction of all things horse-related when the internal combustion engine came around
but it will also be
2) new technologies and ideas enabled by a completely new set of capabilities
The real question is if the economic boost from the latter outpaces the losses of the former. History says these transitions aren't easy on society.
But also, the AI pessimism is hard to understand in this context- do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?
Cost cutting has less uncertainty than making something new, so they do that first. If something else comes along, then great.
This is also why people should make the transition as difficult as possible for companies doing layoffs, given that those companies pay proportionally very little in taxes compared to the people they lay off.
Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist.
Give your best example of something that is novel, i.e. something that isn't just replacing existing processes at scale.
It's been 3 and a half years now since the initial hype wave. Maybe I genuinely missed the novel trillion dollar use case that isn't just labor disruption.
Wouldn't that apply to most technological advances? Cars, computers, cell phones.
There are a lot of really useful things that were impossible before. But none of these use cases are "easy," and they all take years of engineering to implement. So, all we see right now are trashy, vibe-code style "startups" rather than the actual useful stuff that will come over the years from experienced architects and engineers who can properly utilize this technology to build real products.
I'm someone who feels very frustrated with most of the chatter around AI - especially the CEOs desperate to devalue human labor and replace it - but I am personally building something utilizing AI that would have been impossible without it. But yeah, it's no walk in the park, and I've been working on it for three years and will likely be working on it for another year before it's remotely ready for the public.
When I started, the inference was too slow, the costs were too high, and the thinking-power was too poor to actually pull it off. I just hypothesized that it would all be ready by the time I launch the product. Which it finally is, as of a few months ago.
But also you don’t need SOTA frontier models for that!
It's always win some, lose some with the economy, but technology itself opens previously impossible capabilities.
If 3D printer makers could've given usage away for years, directly in our homes, then I bet we would've seen wider adoption there too.
> Then came AI, revealing new dynamics. ChatGPT’s breakthrough didn’t come from a garage startup but from OpenAI,
I thought the transformer and large language models came from Google Research.
> There’s also social pushback—in the UK the campaigns against big ringroad schemes started in the late 1960s and early 1970s. And perhaps we’re seeing some of that about AI. The U.S. map of local pushback against data centres from Data Center Watch covers the whole of the country, in red states and blue. People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
the US had the highway revolts. in most cities where the revolts succeeded, they are widely heralded today as a success.
the data center hate is interesting. i think many people are just learning what data centers are. but that said, they've come to represent something different in recent years. previously they were part of the infrastructure that made industry hum; now public messaging from tech leaders and academics is along the lines of "this is how your livelihood is going to be replaced", while the institutions that are supposed to provide any sort of backstop are being dismantled or slashed to pieces by crazypants trumpist politics. i think focusing the energy on something tangible, like mundane buildings, is interesting, but the hate makes a lot of sense.
addressing the core thesis, i'd argue that ai is not the next step in the 70s digital technological wave (especially considering the future of ai compute is probably hybrid digital-analog systems), but rather is something fundamentally new that also changes how technology interacts with society and how economics itself will function.
previous systems helped, these systems can do. that's a fundamental change and one that may not be compatible with our existing economic systems of social sorting and mobility. the big question in my mind is: if it succeeds, will we desperately try to hold onto the old system (which essentially would be a disaster that freezes everyone in place and creates a permanent underclass) or will we evolve to a new, yet to be defined, system? and if so, how will the transition look?
I don't think it's intrinsically wrong, we are in a late stage of a transformation. Software is eating the world and AI is (so far) most profitably an automation of software.
There is plenty of money to be made along the way. I don't really buy the article's seeming confusion about where the money is going to come from. Anthropic is making billions and signing up prodigious amounts of recurring revenue every month.
All hypothetical, but if compute + AI research continues at pace, in 5 years we should see extremely good local models.
They will keep bleeding money by the way.
There are some very interesting information network theories that present information growth as a continually evolving and expanding graph, something like a virus inherent to the universe’s structure, as a natural counterpoint to entropy. And in that view, atomic bonds and cells and towns and railroads and network connections and model weights are all the same sort of thing, the same phenomenon, manifesting in different substrates at different levels of the shared graph.
To me, that’s a much better and deeper explanation that connects the dots, and offers more predictive power about what’s next.
Highly recommend the book Why Information Grows to anyone whose interest is piqued by this.
it may be the beginning of vast and infinite potential spreading out beyond us
I have ZERO doubt that if you put people who haven't used a computer in front of one, with Copilot everywhere (and I mean not the way it is now, but a chatbox in the middle of the screen where you just ask the computer for what you want), almost everyone would prefer that chatbox to figuring out how to use a computer. That's why I'm not quick to discredit "microslop": they're most likely pivoting Windows to how it will look in the future.
Obviously, the strongest argument here is that it should have been an entirely different product, such as "Windows AI", where the entire system is designed around it. But if you look at their current implementation, it's more of a copilot which is just there, letting you know it exists. Obviously not all of these features were thought through, such as Recall; that should have been dead and buried, since it doesn't offer that much real value compared to a magical box that takes in English sentences and does roughly what you want.
At the end of the day, it's a question of whether AI will do (or is doing) more harm than good. AI has really only existed in this form for a little more than 3 years and really started shining with the advent of Opus 4.5. We went from models producing more security vulnerabilities than one can count to models fixing obscure human-made ones, and the capabilities will keep increasing (if Anthropic is to be believed). We will enter an era where it has 95%+ accuracy in doing what a typical computer user would want from AI, and there's really nothing anyone can do to stop it.
So my opinion is that AI will be the next big thing and it might spread way beyond what we can even imagine.
I think we will have non-technical people who just talk on the phone with an AI agent to get a website done: register a domain and have the site built within a one-hour phone call, all for pennies, while the AI has access to their financials, mail and other things. All of that is relatively possible today, with the simple caveat of security, and I do believe we have enough smart people in the world to figure out how to make AI better at rejecting social engineering than 99% of humans.
I don't know if this is the effect of relying on AI too much in my day-to-day work or leading a more monotonous life as of late, but I'm sure I'm not the only one. Lots of ideas that I could have built before LLMs took over now seem trivial to build with Claude & friends.