Bahhh, boring!!!
0 - https://www.tumblr.com/elodieunderglass/186312312148/luritto...
Now you would really be a weirdo not to have one, since enough people gave in for a small convenience to make it basically mandatory.
E.g. if I search for a site, it can link that to what I was working on at the time, the GitHub branch I was on, the areas of files I was working on, etc. etc.
Sounds sexy to me, but obviously it's such a massive breach of trust/security that it would require fully local execution. Hell, it's such a security risk that I debate whether it's even worth it at all, since if you store this you now have a honeypot which tracks everything you do, say, search for, etc.
With great power.. i guess.
Very rich people buy life from other people: they pay them to manage their information so they have more of their own life for other things. Not-so-rich people can now increasingly employ AI for next to nothing to lengthen their net life, and that's actually amazing.
What makes me suspicious: Is there a certain number on their balance sheets at which the system turns sour, when the trap snaps? Because the numbers all seem absurdly large already and keep increasing. Am I to believe that it will all come down after the next trillion or 10? I mean, it's not unthinkable of course, I just don't know why. Even from the most cynical view: Why would they want to crash a system where everything is going great for them?
So I do wonder: Are large amounts of wealth in the hands of a few per se a real world problem for us or is our notion of what the number means, what it does to or for us, simply off?
the problem is that western civilisation is so far up its own ass with capitalism that people think some corporation cares about them (users).
What I am asking, to interrogate this viewpoint, is: Why will it be a problem when a company reaches a certain size? If they have no goal other than making money, are the biggest assholes of all time, and make that money through customers, then unless they can actually simply extort customers (monopolies), they will continue to want to do stuff that makes customers want to give them more money. Why would that fail as soon as the company goes from 1 trillion to 2 trillion?
I completely agree: The amount of money that corps wield feels obscene. I am just looking for a clear explanation of what the necessary failure mode is at a certain size, because that is something we generally just assume: unequal distribution is the problem and it's always doom. But that clashes with ever-improving living standards on any metric I think is interesting.
So I think it's a fair question how and when the collapse is going to happen, to understand if that was even a reasonable assumption to begin with. I have my doubts.
How can the world continue to function this way if so few of us have so much wealth that the rest of us effectively have no say in how the world works?
Our society is so cooked, man. We don’t even realize how over it is, and even people genuinely trying to understand are not able to.
Non data driven living is 1x
Therefore data driven beings will outcompete
Same reasoning shows that 3.10 is better than 3.1
By their own definition, it's a feature nobody asked for.
Also, this needs a cute/mocking name. How about "vibe living"?
Edit: Downvote all you want, as usual. Then wait 6 months to be proven wrong. Every. Single. Time.
> Downvote all you want
“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”
- People who treat ChatGPT as a romantic interest will be far more hooked as it "initiates" conversations instead of just responding. It's not healthy to relate personally to a thing that has no real feelings or thoughts of its own. Mental health directly correlates to living in truth - that's the base axiom behind cognitive behavioral therapy.
- ChatGPT in general is addicting enough when it does nothing until you prompt it. But adding "ChatGPT found something interesting!" to phone notifications will make it unnecessarily consume far more attention.
- When it initiates conversations or brings things up without being prompted, people will all the more be tempted to falsely infer a person-like entity on the other end. Plausible-sounding conversations are already deceptive enough and prompt people to trust what it says far too much.
For most people, it's hard to remember that LLMs carry no personal responsibility or accountability for what they say, not even an emotional desire to appear a certain way to anyone. It's far too easy to ascribe all these traits to something that says stuff and grant it at least some trust accordingly. Humans are wired to relate through words, so LLMs are a significant vector for causing humans to respond relationally to a machine.
The more I use these tools, the more I think we should consciously value the output on its own merits (context-free), and no further. Data returned may be useful at times, but it carries zero authority (not even "a person said this", which normally is at least non-zero), until a person has personally verified it, including verifying sources, if needed (machine-driven validation also can count -- running a test suite, etc., depending on how good it is). That can be hard when our brains naturally value stuff more or less based on context (what or who created it, etc.), and when it's presented to us by what sounds like a person, and with their comments. "Build an HTML invoice for this list of services provided" is peak usefulness. But while queries like "I need some advice for this relationship" might surface some helpful starting points for further research, trusting what it says enough to do what it suggests can be incredibly harmful. Other people can understand your problems, and challenge you helpfully, in ways LLMs never will be able to.
Maybe we should lobby legislators to require AI vendors to say something like "Output carries zero authority and should not be trusted at all or acted upon without verification by qualified professionals or automated tests. You assume the full risk for any actions you take based on the output. [LLM name] is not a person and has no thoughts or feelings. Do not relate to it." The little "may make mistakes" disclaimer doesn't communicate the full gravity of the issue.
They did handle the growth from search to email to integrated suite fantastically. And the lack of a broadly adopted ecosystem to integrate into seems to be the major stopping point for emergent challengers, e.g. Zoom.
Maybe the new paradigm is that you have your flashy product, and it goes without saying that it's stapled on to a tightly integrated suite of email, calendar, drive, chat etc. It may be more plausible for OpenAI to do its version of that than to integrate into other ecosystems on terms set by their counterparts.
However, I take your point - OpenAI has an interest in some other party paying them a fuckton of money for those tokens and then publicly crediting OpenAI and asserting the tokens would have been worth it at ten fucktons of money. And also, of course, in having that other party take on the risk that infinity fucktons of money worth of OpenAI tokens is not enough to make a gmail.
So they would really need to believe in the strategic necessity (and feasibility) of making their own gmail to go ahead with it.
In that case, there's some ancillary value in being able to claim "look, we needed a gmail and ChatGPT made one for us - what do YOU need that ChatGPT can make for YOU?"
Even at our small scale I wouldn’t want to be locked out of something.
Then again there’s also the sign in with google type stuff that keeps us further locked in.
The challenge in migrating email isn't that you have to move the existing email messages; any standard email client will download them all for you. The challenge is that there are thousands of external people and systems pointing to your email address.
Your LLM's memory is roughly analogous to the existing email messages. It's not stored in the contacts of hundreds of friends and acquaintances, or used to log in to each of a thousand different services. It's all contained in a single system, just like your email messages are.
Hide the notifications from uber which are just adverts and leave the one from your friend sending you a message on the lock screen.
Gmail already does filter the noise through "Categories" (Social, Updates, Forums, Promotions). I've turned them off as I'm pretty good about unsubscribing from junk and don't get a ton of email. However, they could place an alert at the top of your inbox to your "daily report" or whatever. Just as they have started to put an alert on incoming deliveries (ex. Amazon orders). You can then just dismiss it, so perhaps it's not an email so much as a "message" or something.
Or more likely: `[object Object]`
But a device that reaches out to you reminds you to hook back in.
This reads like the first step to "infinite scroll" AI echo chambers and next level surveillance capitalism.
On one hand this can be exciting. Following up with information from my recent deep dive would be cool.
On the other hand, I don't want it to keep engaging with my most recent conspiracy theory/fringe deep dives.
I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.
I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip mania VCs and tech leaders have pushed.
I’ve long contended that tech has lost any soulful vision of the future, it’s just tactical money making all the way down.
It's nice to know my feelings are shared; I remain relatively convinced that there are financial incentives driving most of the rabid support of this technology.
But again, how does this work? After twirling my moustache that I wax with Evil Villain brand moustache wax, I just go on HN and make up shit to post about companies that aren't even public but are in the same industry, and that'll drive the stock price up... somehow? Someone's going to read a comment from me saying "I use plan mode in Claude Code to make a Todo.md, and then have it generate code", and based on that, that's the straw that breaks the camel's back, and they rush out to buy stock in AI companies because they'd never heard of the stock market before I mentioned Claude Code.
Then, based on randos reading a comment from me about Claude Code, the share price goes up by a couple of cents, but I can't sell the handful of shares I have because of blackout windows anyway, but okay so eventually those shares do sell, and I go on a lavishly expensive vacation in Europe all because I made a couple of positive comments on HN about AI that were total lies.
Yeah, that's totally how that works. I also get paid to go out and protest on weekends to supplement the European vacation money. Just three more shitposts about Tesla and I get to go to South East Asia as well!
The LLM does not have wants. It does not have preferences, and as such cannot "pick". Expecting it to have wants and preferences is "holding it wrong".
The architectural limits will always be there, regardless of training.
CEOs are gonna CEO; it seems their job has morphed into creative writing to maximize funding.
IMO we're clearly there; GPT-5 would easily have been considered AGI years ago. I don't think most people really get how non-general the things were that are now handled by the new systems.
Now AGI seems to be closer to what others call ASI. I think the goalposts will keep moving.
The GPT model alone does not offer autonomy. It only acts in response to explicit input. That's not to say that you couldn't build autonomy on top of GPT, though. In fact, that appears to be exactly what Pulse is trying to accomplish.
But Microsoft and OpenAI's contractual agreements state that the autonomy must also be economically useful to the tune of hundreds of billions of dollars in autonomously-created economic activity, so OpenAI will not call it as such until that time.
Every human every day has the choice to not go to work, has the choice not to follow the law, has a choice to... These AIs don't have nearly as much autonomy as that.
> The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved
Edit -
> That is ultimately what sets AGI apart from AI.
No! The key thing was that it was general intelligence rather than things like “bird classifier” or “chess bot”.
It says that one guy who came up with his own AGI classification system says it might not be required. And despite it being his own system, he still was only able to land on "might not", meaning that he doesn't even understand his own system. He can safely be ignored. Outliers are always implied, of course.
> No! The key thing was that it was general intelligence rather than things like “bird classifier” or “chess bot”.
I suppose if you don't consider the wide range of human intelligence as the marker of general intelligence, then a "bird classifier" plus a "chess bot" gives you general intelligence. We had that nearly a millennium ago!
But usually general intelligence expects human-like intelligence, which would necessitate autonomy — the most notable feature of human intelligence. Humans would not be able to exist without the intelligence to perform autonomously.
But, regardless, you make a good point: A "language classifier" can be no more AGI than a "bird classifier". These are narrow systems, focused on a single task. A "bird classifier" doesn't become a general intelligence when it crosses some threshold of being able to classify n number of birds just as a "language classifier" wouldn't become a general intelligence when it is able to classify n number of language features, no matter how large n becomes.
Conceivably these classifiers could be used as part of a larger system to achieve general intelligence, but on their own, impossible.
They have to do this manually for every single particular bias that the models generate that is noticed by the public.
I'm sure there are many such biases that aren't important to train out of responses, but exist in latent space.
What do you think humans have?
LLMs need a retrain for that.
Outside that? If left to their own devices, the same LLM checkpoints will end up in very same-y places, unsurprisingly. They have some fairly consistent preferences - for example, in conversation topics they tend to gravitate towards.
Whenever you message an LLM it could respond in practically unlimited ways, yet it responds in one specific way. That itself is a preference honed through the training process.
Obviously you can get probability distributions and in an economics sense of revealed preference say that because the model says that the next token it picks is .70 most likely...
If a model has a statistical tendency to recommend python scripts over bash, is that a PREFERENCE? Argue it’s not alive and doesn’t have feelings all you want. But putting that aside, it prefers python. Saying the word preference is meaningless is just pedantic and annoying.
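A minimal sketch of what that looks like mechanically, using made-up numbers rather than any real model's output: the distribution over next tokens is the tendency, and sampling from it is what shows up as an apparent preference.

    import math
    import random

    # Hypothetical logits a model might assign to two continuations.
    # These values are invented for illustration, not taken from a real model.
    logits = {"python": 2.1, "bash": 1.2}

    # Softmax turns logits into a probability distribution over next tokens.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}
    print(probs)  # roughly {'python': 0.71, 'bash': 0.29}

    # Sampling many times makes the tendency visible as a frequency.
    picks = random.choices(list(probs), weights=list(probs.values()), k=1000)
    print(picks.count("python") / len(picks))  # roughly 0.7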
Perhaps instead of "preference", "propensity" would be a more broadly applicable term?
Try explaining ionic bonds to a high schooler without anthropomorphising atoms and their desires for electrons. And then ask yourself why you’re doing that? It’s easier to say and understand with the analogy.
It doesn't feel like blockchain at all. Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).
AI is a powerful tool for those who are willing to put in the work. People who have the time, knowledge and critical thinking skills to verify its outputs and steer it toward better answers. My personal productivity has skyrocketed in the last 12 months. The real problem isn’t AI itself; it’s the overblown promise that it would magically turn anyone into a programmer, architect, or lawyer without effort, expertise or even active engagement. That promise is pretty much dead at this point.
Has your productivity objectively, measurably improved or does it just feel like it has improved? Recall the METR study which caught programmers self-reporting they were 20% faster with AI when they were actually 20% slower.
I am seeing a pattern here. It appears that AI isn't for everyone. Not everyone's personality may be a good fit for using AI. Just like not everybody is a good candidate for being a software dev, or police officer etc.
I used to think that it is a tool. Like a car is. Everybody would want one. But that appears not to be the case.
For me, I use AI every day as a tool, for work and home tasks. It is a massive help for me.
It's hard for me to imagine many. It's not doing the dishes or watering the plants.
If I wanted to rearrange the room I could have it mock up some images, I guess...
How can you verify the recommendations are sound, valid, safe, complete, etc., without trying them out? And trying out unsound, invalid, unsafe, incomplete, etc., recommendations might result in dead plants in a couple of weeks.
I've found it immensely helpful for giving real world recommendations about things like this, that I know how to find on my own but don't have the time to do all the reading and synthesizing.
Such an odd complaint about LLMs. Did people just blindly trust Google searches beforehand?
If it's something important, you verify it the same way you did anything else. Check the sources and use more than a single query. I have found the various LLMs to very useful in these cases, especially when I'm coming at something brand new and have no idea what to even search for.
Use only actionable prompts; negations don't work on AI and they don't work on people.
Ok, so subjective
Task: Walk to the shops & buy some milk.
Deliverables: 1. Video of walking to the shops (including capturing the newspaper for that day at the local shop) 2. Receipt from local store for milk. 3. Physical bottle of milk.
milk (noun):
1. A whitish liquid containing proteins, fats, lactose, and various vitamins and minerals that is produced by the mammary glands of all mature female mammals after they have given birth and serves as nourishment for their young.
2. The milk of cows, goats, or other animals, used as food by humans.
3. Any of various potable liquids resembling milk, such as coconut milk or soymilk.
You get what you asked for, or you didn't sufficiently define it.
There's nothing worse than a task where you can deliver one item and then have to rely on someone else to be able to deliver a second. I was once in a role where performance was judged on closing tasks: getting the burn-down chart to 0, and also having it nicely stepped. I was given a good tip to make sure each task had one deliverable and, where possible, could be completed independently of any other task.
Why would you write down "Buy Milk", then go buy whatever thing you call milk, then come back home and be confused about it?
Only an imbecile would get stuck in such a thing.
If I could have something that said, "Here are some things that it looks like you're procrastinating on -- do you want me to get started on them for you?" -- that would probably be crazy useful.
https://archive.ph/20250924025805/https://www.nytimes.com/20...
The privacy implications are horrifying. But if done right, you’re talking about a kind of digital ‘executive function’ that could help a lot of kids that struggle with things like prioritization and time blindness.
> If you were expecting iOS 19 after iOS 18, you might be a little surprised to see Apple jump to iOS 26, but the new number reflects the 2025-2026 release season for the software update.
As someone with ADHD, I say: Please don't build this.
Open source transcription models are already good enough to do this, and with good context engineering, the base models might be good enough, too.
It wouldn't be trivial to implement, but I think it's possible already.
I was diagnosed with ADHD, and my interpretation of that diagnosis was not "I need something to take over this functionality for me," but "I need to develop this functionality so that I can function as a better version of myself, or to fight against a system which is not oriented towards human dignity but some other end."
I guess I am reluctant to replace the unique faculties of individual children with a generic faculty approved by and concordant with the requirements of the larger society. How dismal to replace the unique aspects of children's minds with a cookie cutter prosthetic meant to integrate nicely into our bullshit hell world. Very dismal.
For many people, it's easier to improve a bad first version of a piece of writing than to start from scratch. Even current mediocre LLMs are great at writing bad first drafts.
Anyone is great at creating a bad first draft. You don’t need help to create something bad, that’s why that’s a common tip. Dan Harmon is constantly hammering on that advice for writer’s block: “prove you’re a bad writer”.
https://www.youtube.com/watch?v=BVqYUaUO1cQ
If you get an LLM to write a first draft for you, it’ll be full of ideas which aren’t yours which will condition your writing.
Famously not so! Writer's block is real!
Getting an LLM to produce a not-so-bad first draft is just another technique.
Could you give some examples, and an indication of your level of experience in the domains?
The statement has a much different meaning if you were a junior developer 2 years ago versus a staff engineer.
I've been adding small features, in a language I don't program in, using libraries I'm not familiar with, that meet my modest functional requirements, in a couple minutes each. I work with an LLM to refine my prompt, put it into Cursor, run the app locally, look at the diffs, commit, push, and I'm live on Vercel within a minute or two.
I don't have any good metrics for productivity, so I'm 100% subjective but I can say that even if I'd been building in Rails (it's been ~4 years but I coded in it for a decade) it would have taken me at least 8 hours to have an app where I was happy with both the functionality and the look and feel so a 10x improvement in productivity for that task feels about right.
And having a "buddy" I can discuss a project with makes activation energy lower allowing me to complete more.
Also, for YC videos I don't have the time to watch, I get a transcript, feed it into ChatGPT, and ask for the key takeaways I could apply to my business (it's in a project where it has context on stage, industry, maturity, business goals, key challenges, etc.), so I get the benefits of 90 minutes of listening plus maybe 15 minutes of summarizing, reviewing and synthesis in typically 5-6 minutes - and it'd be quicker if I built a pipeline (something I'm vibe coding next month).
Wouldn't want to do business without it.
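For what it's worth, the transcript-to-takeaways pipeline described above is small enough to sketch. This assumes the official openai Python client, an OPENAI_API_KEY in the environment, and a transcript you've already exported; the model name, business-context string, and file path are placeholders, not recommendations.

    # Rough sketch of a "transcript -> key takeaways" pipeline (assumptions above).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    BUSINESS_CONTEXT = "Placeholder: stage, industry, maturity, goals, key challenges."

    def key_takeaways(transcript: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": f"Business context: {BUSINESS_CONTEXT}"},
                {
                    "role": "user",
                    "content": "Here is a talk transcript. List the key takeaways "
                               "I could apply to my business as short bullets:\n\n"
                               + transcript,
                },
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        with open("talk_transcript.txt") as f:  # however you export the transcript
            print(key_takeaways(f.read()))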
And have another AI review your unit tests and code. It's pretty amazing how much nuance they pick up. And just rinse and repeat until the AI can't find anything anymore (or you notice it going in circles with suggestions)
Are the frontend folks having such great results from LLMs that they're OK with "just let the LLM check for security too" for non-frontend-engineer created projects that get hosted publicly?
Hell, in the past few days I started making something to help me write documents for work (https://www.writelucid.cc) and a viewer for all my blood tests (https://github.com/skorokithakis/bt-viewer), and I don't think I would have made either without an LLM.
Would have never done that without LLMs.
Personally, I think it really shines at doing the boring maintenance and tech debt work. None of these are hard or complex tasks but they all take up time and for a buck or two in tokens I can have it doing simple but tedious things while I'm working on something else.
It shines at doing the boring maintenance and tech debt work for web. My experiences with it, as a firmware dev, have been the diametric opposite of yours. The only model I've had any luck with as an agent is Sonnet 4 in reasoning mode. At an absolutely glacial pace, it will sometimes write some almost-correct unit tests. This is only valuable because I can have it to do that while I'm in a meeting or reading emails. The only reason I use it at all is because it's coming out of my company's pocket, not mine.
If you're doing JS/Python/Ruby/Java, it's probably the best at that. Even with our stack (Elixir), it's not as good as with, say, React/NextJS, but it's definitely good enough to implement tons of stuff for us.
And with a handful of good CLAUDE.md or rules files that guide it in the right direction, it's almost as good as React/NextJS for us.
Random Postgres stuff:
- Showed it a couple of Geo/PostGIS queries that were taking up more CPU according to our metrics, asked it to make them faster, and it rewrote them in a way that actually used the index (using the <-> operator, for example, for proximity; a rough sketch of that kind of rewrite is below). One-shotted. Whole effort was about 5 mins.
- Regularly asking for maintenance scripts (like give me a script that shows me the most fragmented tables, or highest storage, etc).
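The kind of rewrite meant in the first bullet, sketched with invented table and column names (the actual queries aren't shown here): the point is that ORDER BY with the <-> distance operator lets a GiST index drive a nearest-neighbour scan instead of computing a distance for every row.

    # Hypothetical example; assumes psycopg2, PostGIS, and a GiST index on places.geom.
    import psycopg2

    # Before: ordering by ST_Distance forces a distance computation per row.
    SLOW_QUERY = """
        SELECT id, name
        FROM places
        ORDER BY ST_Distance(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
        LIMIT 10;
    """

    # After: the <-> operator lets the GiST index order candidates by distance.
    FAST_QUERY = """
        SELECT id, name
        FROM places
        ORDER BY geom <-> ST_SetSRID(ST_MakePoint(%s, %s), 4326)
        LIMIT 10;
    """

    with psycopg2.connect("dbname=app user=app") as conn:  # placeholder DSN
        with conn.cursor() as cur:
            cur.execute(FAST_QUERY, (13.4050, 52.5200))  # lon, lat
            print(cur.fetchall())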
CSS:
Built a whole horizontal logo marquee with CSS animations, I didn't write a single line, then I asked for little things like "have the people's avatars gently pulsate" – all this was done in about 15 mins. I would've normally spent 8-16 hours on all that pixel pushing.
Elixir App:
- I asked it to look at my GitHub actions file and make it go faster. In about 2-3 iterations, it cut my build time from 6 minutes to 2 minutes. The effort was about an hour (most of it spent waiting for builds, or fiddling with some weird syntax errors or just combining a couple extra steps, but I didn't have to spend a second doing all the research, its suggestions were spot on)
- In our repo (900 files) we had created an umbrella app (a certain kind of elixir app). I wanted to make it a non-umbrella. This one did require more work and me pushing it, but I've been putting off this task for 3 YEARS since it just didn't feel like a priority to spend 2-3 days on. I got it done in about 2 hours.
- Built a whole discussion board in about 6 hours.
- There are probably 3-6 tickets per week where I just say "implement FND-234", and it one-shots a bugfix, or implementation, especially if it's a well defined smaller ticket. For example, make this list sortable. (it knows to reuse my sortablejs hook and look at how we implemented it elsewhere).
- With the Appsignal MCP, I've had it summarize the top 5 errors in production, and write a bug fix for one I picked (I only did this once, the MCP is new). That one was one-shotted.
- Rust library (It's just an elixir binding to a rust library, the actual rust is like 20 lines, so not at all complex)... I've never coded a day of rust in my life, but all my cargo updates and occasional syntax/API deprecations, I have claude do my upgrades and fixes. I still don't know how to write any Rust.
NextJS App:
- I haven't fixed a single typescript error in probably 5 months now, I can't be bothered, CC gets it right about 99% of the time.
- Pasted in a Figma file and asked it to implement. This rarely is one-shotted. But it's still about 10x faster than me developing it manually.
The best combination is if you have a robust component library and well documented patterns. Then stuff goes even faster.
All on the $100 plan in which I've hit the limit only twice in two months. I think if they raised the price to $500, it would still feel like a no-brainer.
I think Anthropic knows this. My guess is that they're going to get us hooked on the productivity gains, and we will happily pay 5x more if they raised the prices, since the gains are that big.
One thing I've noticed is that I don't have a circle of people I can discuss programming with, and having an LLM to answer questions and wireframe up code has been amazing.
My job doesn't require programming, but programming makes my job much easier, and the benefits have been great.
I'm tackling projects solo I never would have even attempted before but I could see people getting bad results and giving up.
IOW, can you redo it by yourself? If you can't then you did not learn it.
Knowing the abstract steps and tripwires, yes, but details will always have to be looked up, if only not to miss any new developments.
Well, yes it is; you can't very well claim to have learned something if you are unable to do it.
I get that point, but the original post I replied to didn't say "Hey, I now have $THING set up when I never had it before", he said "I learned to do $THING", which is a whole different assertion.
I'm not contending the assertion that he now has a thing he did not have before, I'm contending the assertion that he has learned something.
This is a truism and, I believe, is at the core of the disagreement on how useful AI tools are. Some people keep talking about outlier success. Other people are unimpressed with the performance in ordinary tasks, which seem to take longer because of back-and-forth prompting.
I want to second your experience as I’ve had the same as well. Tackling SO many more tasks than before and at such a crazy pace. I’ve started entire businesses I wouldn’t have just because of AI.
But at the same time, some people have weird blockers and just can’t use AI. I don’t know what it is about it - maybe it’s a mental block? Wrong frame of mind? It’s those same people who end up saying “I spend more time fighting the ai and refining prompts than I would on the end task”.
I’m very curious what it is that actually causes this divide.
I've been using it for almost a year now, and it's definitely improved my productivity. I've reduced work that normally takes a few hours to 20 minutes. Where I work, my manager was going to hire a junior developer and ended up getting a pro subscription to Claude instead.
I also think it will be a concern for that 50-something developer that gets laid off in the coming years, has no experience with AI, and then can't find a job because it's a requirement.
My cousin was a 53 year old developer and got laid off two years ago. He looked for a job for 6 months and then ended up becoming an auto mechanic at half the salary, when his unemployment ran out.
The problem is that he was the subject matter expert on old technology and virtually nobody uses it anymore.
This. 100x this.
https://www.fightforthehuman.com/are-developers-slowed-down-...
The study gets so much attention since it's one of the few studies on the topic with this level of rigor on real-world scenarios, and it explains why previous studies or anecdotes may have claimed perceived increases in productivity even if there weren't any actual increases. It clearly sets a standard that we can't just ask people if they felt more productive (or they need to feel massively more productive to clearly overcome this bias).
Yes, but most people don't seem aware of those caveats, and this is a good summary of them, and I think it does undercut the "level of rigour" of the study. Additionally, some of what the article points out is not explicitly acknowledged and connected by the study itself.
For instance, if you actually split up the tasks by type, some tasks show a speed up and some show a slowdown, and the qualitative comments by developers about where they thought AI was good/bad aligned very well with which saw what results.
Or (iirc) the fact that the task timing was per task, but developers' post hoc assessments were a prediction of how much they thought they were sped up on average across all tasks, meaning it's not really comparing the same things when comparing how developers felt vs how things actually went.
Or the fact that developers were actually no less accurate in predicting times to task completion overall wrt AI vs non-AI.
> and it explains why previous studies or anecdotes may have claimed perceived increases in productivity even if there wasn't any actual increases.
Framing it that way assumes, as an already established fact that needs to be explained, that AI does not provide more productivity. Which actually demonstrates, inadvertently, why the study is so popular! People want it to be true, so even if the study is so chock-full of caveats that it can't really prove that fact, let alone explain it, people appeal to it anyway.
> It clearly sets a standard that we can't just ask people if they felt more productive
Like we do for literally every other technological tool we use in software?
> (or they need to feel massively more productive to clearly overcome this bias).
All of this assumes a definition of productivity that's based on time per work unit done, instead of perhaps the amount of effort required to get a unit of work done, or the extra time for testing, documentation, shoring up edge cases, polishing features, that better tools allow. Or the ability to overcome dread and procrastination that comes from dealing with rote, boilerplate tasks. AI makes me so much more productive that friends and my wife have commented on it explicitly without needing to be prompted, for a lot of reasons.
I find that people who dismiss LoC out of hand without supplying better metrics tend to be low performers trying to run for cover.
If nobody is watching loc, it’s generally a good metric. But as soon as people start valuing it, it becomes useless.
and, in the case of "Lines of code" as a metric: https://en.wikipedia.org/wiki/Cobra_effect
To clarify, people critical of the “productivity increase” argument question whether the productivity is of the useful kind or of the increased useless output kind.
There are none. All are various variants of bad. LoC is probably the worst metric of all, because it says nothing about quality, or features, or number of products shipped. It's also the easiest metric to game. Just write GoF-style Java, and you're off to the races. Don't forget to have a source code license at the beginning of every file. Boom. LoC.
The only metrics that barely work are:
- features delivered per unit of time. Requires an actual plan for the product, and an understanding that some features will inevitably take a long time
- number of bugs delivered per unit of time. This one is somewhat inversely correlated with LoC and features, by the way: the fewer lines of code and/or features, the fewer bugs
- number of bugs fixed per unit of time. The faster bugs are fixed the better
None of the other bullshit works.
Oh no, you've caught me.
On a serious note: LoC can be useful in certain cases (e.g. to estimate the complexity of a code base before you dive in, even though it's imperfect here, too). But, as others have said, it's not a good metric for the quality of software. If anything, I would say fewer LoC is a better indication of high-quality software (but again, not a very useful metric).
There is no simple way to just look at the code and draw conclusions about the quality or usefulness of a piece of software. It depends on sooo many factors. Anybody who tells you otherwise is either naive or lying.
I'll pass on this data point.
From the one random file I opened:
/// Real LSP server implementation for Lens
pub struct LensLspServer
/// Configuration for the LSP server
pub struct LspServerConfig
/// Convert search results to LSP locations
async fn search_results_to_locations()
/// Perform search based on workspace symbol request
async fn search_workspace_symbols()
/// Search for text in workspace
async fn search_text_in_workspace()
etc, etc, etc, x1000.
I don't see a single piece of logic actually documented with why it's doing what it's doing, or how it works, or why values are what they are, nearly 100% of the comments are just:
function-do-x() // Function that does x
Second, as you seem to be an entrepreneur, I would suggest you consider adopting the belief that you've not been productive until the thing's shipped into prod and available for purchase. Until then you've just been active.
But I’m not even going to argue about that. I want to raise something no one else seems to mention about AI in coding work. I do a lot of work now with AI that I used to code by hand, and if you told me I was 20% slower on average, I would say “that’s totally fine it’s still worth it” because the EFFORT level from my end feels so much less.
It’s like, a robot vacuum might take way longer to clean the house than if I did it by hand sure. But I don’t regret the purchase, because I have to do so much less _work_.
Coding work that I used to procrastinate about because it was tedious or painful I just breeze through now. I’m so much less burnt out week to week.
I couldn’t care less if I’m slower at a specific task, my LIFE is way better now I have AI to assist me with my coding work, and that’s super valuable no matter what the study says.
(Though I will say, I believe I have extremely good evidence that in my case I’m also more productive, averages are averages and I suspect many people are bad at using AI, but that’s an argument for another time).
In your particular case it sounds like you're rapidly losing your developer skills, and enjoy that you now have to put in less effort and think less.
Same with LLMs. I am better with it, because I know how to solve things without the help of it. I understand the problem space and the limitations. Also I understand how hype works and why they think they need it (investors money).
In other words, no, just using google maps or ChatGPT does not make me dumb. Only using it and blindly trusting it would.
Pretty much like muscles decay when we stop using them.
But if you were literally chained to a bike and could not move in any other way, then surely you would "forget"/atrophy in specific ways, such that you wouldn't be able to walk without relearning/practicing.
Take my personal experience for whatever it is worth, but my knees do not lie.
I believe the same holds true for cognitive tasks. If you enjoy going through weird build file errors, or it feels like it helps you understand the build system better, by all means, go ahead!
I just don't like the idea of somehow branding it as a moral failing to outsource these things to an LLM.
>> Pretty much like muscles decay when we stop using them.
> Sure, but sticking with that analogy, bicycles haven’t caused the muscles of people that used to go for walks and runs to atrophy either ...
This is an invalid continuation of the analogy, as bicycling involves the same muscles used for walking. A better analogy to describe the effect of no longer using learned skills could be:
Asking Amazon's Alexa to play videos of people bicycling the Tour de France[0] and then walking from the couch to your car every workday does not equate to being able to participate in the Tour de France[0], even if years ago you once did.
0 - https://www.letour.fr/en/
Then the citation served its purpose.
You're welcome.
A similar phenomenon occurs when people see or hear information, depending on whether they record it in writing or not. The act of writing the percepts, in and of itself, assists in short-term to long-term memory transference.
Applying all of this to LLMs has felt similar.
That stuff kills my motivation to solve actual problems like nothing else. Being able to send off an agent to e.g. fix some build script bug so that I can get to the actual problem is amazing even with only a 50% success rate.
I feel like the past few decades of framework churn has shown that we're really never going to agree on what this means
Otherwise, I’ll continue using what works for me now.
It’s the same story with UI/UX. Previously, I’d often have to skip little UI niceties because they take time and aren’t that important. Now even relatively minor user flows can be very well polished because there isn’t much cost to doing so.
Well, your perfectionism needs to be pointed towards this line. If you get truly large numbers of users, this will either slow down token checking directly or make your process for removing ancient expired tokens (I'm assuming there is such a process...) much slower and more problematic.
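To illustrate the concern (with an invented schema, not the project's actual one): index the expiry column and prune expired rows on a schedule, so neither the validity check nor the cleanup has to crawl an ever-growing table.

    # Illustrative sketch only; table and column names are hypothetical.
    import sqlite3
    from datetime import datetime, timezone

    conn = sqlite3.connect("app.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS tokens (
            token TEXT PRIMARY KEY,
            user_id INTEGER NOT NULL,
            expires_at TEXT NOT NULL
        )
    """)
    conn.execute("CREATE INDEX IF NOT EXISTS idx_tokens_expires ON tokens(expires_at)")

    def prune_expired() -> int:
        """Run from a scheduled job; keeps the table from growing without bound."""
        now = datetime.now(timezone.utc).isoformat()
        cur = conn.execute("DELETE FROM tokens WHERE expires_at < ?", (now,))
        conn.commit()
        return cur.rowcount

    def token_is_valid(token: str) -> bool:
        now = datetime.now(timezone.utc).isoformat()
        row = conn.execute(
            "SELECT 1 FROM tokens WHERE token = ? AND expires_at >= ?",
            (token, now),
        ).fetchone()
        return row is not None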
Consultancy A submits work, Consultancy B reviews/tests it. As A increases its use of AI, B will have to match with more staff or more AI. More staff for B means higher costs and a slower pace. More AI for B means a higher burden of proof; an A vs B race condition is likely.
Ultimately clients will suffer from AI fatigue and inadvertently incur more costs at a later stage (post-delivery).
Just the other day I was complaining that no one knows how to use a slide rule anymore...
Also, C++ compilers are producing machine code that's hot garbage. It's like no one understands assembly anymore...
Even simple tools are often misused (like hammering a screw). Sometimes they are extremely useful in right hands though. I think we'll discover that the actual writing of code isn't as meaningful as thinking about code.
The problem is, there are very few if any other studies.
All the hype around LLMs we are supposed to just believe. Any criticism is "this study has serious problems".
> It’s like, a robot vacuum might take way longer
> Coding work that I used to procrastinate
Note how your answer to "the study had serious problems" is totally problem-free analogies and personal anecdotes.
Not at all, the METR study just got a ton of attention. There are tons out there at much larger scales, almost all of them showing significant productivity boosts for various measures of "productivity".
If you stick to the standard of "Randomly controlled trials on real-world tasks" here are a few:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566 (4867 developers across 3 large companies including Microsoft, measuring closed PRs)
https://www.bis.org/publ/work1208.pdf (1219 programmers at a Chinese BigTech, measuring LoC)
https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford, not an RCT, but the largest scale with actual commits from 100K developers across 600+ companies, and tries to account for reworking AI output. Same guys behind the "ghost engineers" story.)
If you look beyond real-world tasks and consider things like standardized tasks, there are a few more:
https://ieeexplore.ieee.org/abstract/document/11121676 (96 Google engineers, but same "enterprise grade" task rather than different tasks.)
https://aaltodoc.aalto.fi/server/api/core/bitstreams/dfab4e9... (25 professional developers across 7 tasks at a Finnish technology consultancy.)
They all find productivity boosts in the 15 - 30% range -- with a ton of nuance, of course. If you look beyond these at things like open source commits, code reviews, developer surveys etc. you'll find even more evidence of positive impacts from AI.
With AI code you have more LoC and NEED more PRs to fix all its slop.
In the end you have increased numbers with a net negative effect.
> https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford, not an RCT, but the largest scale with actual commits from 100K developers across 600+ companies, and tries to account for reworking AI output. Same guys behind the "ghost engineers" story.)
Emphasis added. They modeled a way to detect when AI output is being reworked, and still find a 15-20% increase in throughput. Specific timestamp: https://youtu.be/tbDDYKRFjhk?t=590&si=63qBzP6jc7OLtGyk
> https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford, not an RCT, but the largest scale with actual commits from 100K developers across 600+ companies, and tries to account for reworking AI output. Same guys behind the "ghost engineers" story.)
I like this one a lot, though I just skimmed through it. At 11:58 they talk about what many find correlates with their personal experience. It talks about easy vs complex in greenfield vs brownfield.
> They all find productivity boosts in the 15 - 30% range -- with a ton of nuance, of course.
Or 5-30%, with "AI is likely to reduce productivity in high complexity tasks" ;) But yeah, a ton of nuance is needed.
However even these levels are surprising to me. One of my common refrains is that harnessing AI effectively has a deceptively steep learning curve, and often individuals need to figure out for themselves what works best for them and their current project. Took me many months, personally.
Yet many of these studies show immediate boosts in productivity, hinting that even novice AI users are seeing significant improvements. Many of the engineers involved didn't even get additional training, so it's likely a lot of them simply used the autocompletion features and never even touched the powerful chat-based features. Furthermore, current workflows, codebases and tools are not suited for this new modality.
As things are figured out and adopted, I expect we'll see even more gains.
I did say I wasn’t going to argue the point that study made, and I didn’t.
I completely get this and I often have an LLM do boring stupid crap that I just don't wanna do. I frequently find myself thinking "wow I could've done it by hand faster." But I would've burned some energy that could be better put towards other stuff.
I don't know if that's a net positive, though.
On one hand, my being lazy may be less of a hindrance compared to someone willing to grind more boring crap for longer.
On the other hand, will it lessen my edge in more complicated or intricate stuff that keeps the boring-crap-grinders from being able to take my job?
Instead of getting overwhelmed doing too many things, I can offload a lot of menial and time-driven tasks.
Reviews are absolutely necessary but take less time than creation
The right AI, good patterns in the codebase and 20 years of experience and it is wild how productive I can be.
Compare that to a few years ago, when at the end of the week, it was the opposite.
I know it's not some amazing GDP-improving miracle, but in my personal life it's been incredibly rewarding.
I had a dozen domains and projects on the shelf for years and now 8 of them have significant active development. I've already deployed 2 sites to production. My github activity is lighting up like a Christmas tree.
On the other hand, I believe my coworker may have taken it too far. It seems like his productivity has significantly slipped. In my perception the approaches he's using are convoluted and have no useful outcome. I'm almost worried about him, because his descriptions of what he's doing make no sense to me or my teammates. He's spending a lot of time on it. I'm considering telling him to chill out, but who knows, maybe I'm just not as advanced a user as him? Anyone have experience with this?
It started as an approach to a mass legacy code migration. A sound idea with potential to save time. I followed along and understood his markdown and agent stuff for analyzing and porting legacy code.
I reviewed results which apply to my projects. Results were a mixed bag, but I think it saved some time overall. But now I don't get where he's going with his AI aspirations.
My best attempt to understand is that he wants to work entirely through chats, no writing code, and he's doing so by improving agents through chats. He's really swept up in the entire concept. I consider myself optimistic about AI, but his enthusiasm feels misplaced.
It's to the point where his work is slipping and management is asking him where his results are. We're a small team and management isn't savvy enough to see he's getting NOTHING done, and I won't sell him out. However, if this is a known delusional pattern I'd like to address it and point to a definition and/or past cases so he can recognize the pattern and avoid trouble.
But I do recall seeing some Amazon engineer who worked on Amazon Q, and his repos were... something.
like making PRs that were him telling the ai that "we are going to utilize the x principle by z for this" and like 100s of lines of "principles" and stuff that obviously would just pollute the context and etc.
like huge amounts of commits but it was just all this and him trying to basically get magic working or something.
and to someone like me it was obvious that this was a futile effort but clearly he didn't seem to quite get it.
I think the problem is that people don't understand transformers, that they're basically huge datasets in a model form where it'll auto-generate based on queries from the context (your prompts and the model's responses)
so you basically are just getting mimicked responses
which can be helpful, but I have this feeling that there's a fundamental limit, like a mathematical one, where you can't really get it to do stuff unless you provide the solution itself in your prompt, one that covers everything, because otherwise it'd have to be in its training data (which it may have, for common stuff like boilerplate, hello world, etc.)
but maybe I'm just missing something. maybe I don't get it
but I guess if you really wanna help him, I'd maybe play around with claude/gpt and see how it just plays along even if you pretend, like you're going along with a really stupid plan or something and how it'll just string you along
and then you could show him.
Orr.... you could ask management to buy more AI tools and make him head of AI and transition to being an AI-native company..
You put it nicely when you mention a fundamental limit, and I will borrow that if I think he's wasting a risky amount of time.
I really like the sibling idea of having him try to explain again, then using Claude to explain if he can't.
Genuine thanks to you and the sibling commenter for offering advice.
I've seen a lot of people who previously touted that it doesn't work at all use that study as a way to move the goalpost and pretend they've been right all along.
I just recently had to rate whether I felt like I got more done by leaning more on Claude Code for a week to do a toy project and while I _feel_ like I was more productive, I was already biased to think so, and so I was a lot more careful with my answer, especially as I had to spend a considerable amount of time either reworking the generated code or throwing away several hours of work because it simply made things up.
There's a reason self-reported measures are questioned: they have been wildly off in different domains. Objectively verifying that a car is faster than walking is easy. When it's not easy to objectively prove something, then there are a lot that could go wrong, including the disagreements on the definition of what's being measured.
Again, people who were already highly productive without AI won't understand how profound the increase is.
If I showed them time gains, they’d just say “well you don’t know how much tech debt you’re creating”, they’d find a weasel way to ignore any methodology we used.
If they didn’t, they wouldn’t be conveniently ignoring all but that one study that is skeptical of productivity gains.
So - this thing would never be in existence and work without a 20 USD ClaudeAI subscription :)
I would ask, then, if you're qualified to evaluate that what 'you' are doing now is what you think it is? Writing off 'does it lead to other problems' with 'no doubt, but' feels like something to watch closely.
I imagine a would-be novelist who can't write a line. They've got some general notions they want to be brilliant at, but they're nowhere. Apply AI, and now there's ten million words, a series, after their continual prompts. Are they a novelist, or have they wasted a lot of time and energy cosplaying a novelist? Is their work a communication, or is it more like an inbox full of spam into which they're reading great significance because they want to believe?
You can currently go to websites and use character generators and plot idea generators to get unstuck from writers block or provide inspiration and professional writers already do this _all the time_.
I truly don’t know how to account for the discrepancy, I can imagine many possible explanations.
But what really gets my goat is how political this debate is becoming. To the point that the productivity-camp, of which I’m a part, is being accused of deluding themselves.
I get that OpenAI has big ethical issues. And that there’s a bubble. And that ai is damaging education. And that it may cause all sorts of economic dislocation. (I emphatically Do Not get the doomers, give me a break).
But all those things don’t negate the simple fact that for many of us, LLMs are an amazing programming tool, and we’ve been around long enough to distinguish substance from illusion. I don’t need a study to confirm what’s right in front of me.
I work with many developers of varying skill levels, all of whom use AI. The only ones who have attempted to turn in slop are the ones who, it turned out, basically can't code at all and didn't keep their job long. Those who know what they're doing use it as a TOOL. They carefully build, modify, review and test everything, usually write about half of it themselves, and it meets our strict standards.
Which you would know if you’d listened to what we’ve been telling you in good faith.
I have always been a careful tester, so my UAT hasn't blown up out of proportion.
The big issue I see is with Rust: it generates code using conventions that were current around 2023, though I understand there is some improvement in that direction.
Our hiring pipeline is changing dramatically as well, since the normal things a junior needs to know (code, syntax) are no longer as expensive. Joel Spolsky's mantra to hire curious people who get things done captures well the folks I find are growing well as juniors.
AI has not made me much more productive at work.
I can only work on my hobby project when I’m tired after the kids go to bed. AI has made me 3x productive there because reviewing code is easier than architecting. I can sense if it’s bad, I have good tests, the requests are pretty manageable (make a new crud page for this DTO using app conventions).
But at work where I’m fresh and tackling hard problems that are 50% business political will? If anything it slows me down.
Interesting to consider that if our first vibecode prompt isn't what we actually want, it can train on how we direct it further.
Offloading human intelligence is useful but... we're losing something.
As with many other technologies, AI can be an enabler of this, or it can be used as a tool to empower and enhance learning and personal growth. That ultimately depends on the human to decide. One can dramatically accelerate personal and professional growth using these tools.
Admittedly the degree to which one can offload tasks is greatly increased with this iteration, to the extent that at times it can almost seem like offloading your own autonomy. But many people already exist in this state, exclusively parroting other people's opinions without examining them, etc.
The performance gains come from being able to ask specific questions about problems I deal with and (basically) have a staff engineer that I can bounce ideas off of.
I am way faster at writing tasks on problems I am familiar with vs an AI.
But when I'm trying to figure out which database I should look at deeply for my use case, or debugging Android code when I don't know Kotlin, it has saved me 5000x the time.
Actually, AI may be more like blockchain than you give it credit for. Blockchain feels useless to you because you either don't care about or don't value the use cases it's good for. For those that do, it opens a whole new world they eagerly look forward to. As a coder, it's magical to describe a world and then see AI build it. As a copyeditor, it may be scary to see AI take my job. Maybe you've seen it hallucinate a few times, and you just don't trust it.
I like the idea of interoperable money legos. If you hate that, and you live in a place where the banking system is protected and reliable, you may not understand blockchain. It may feel useless or scary. I think AI is the same. To some it's very useful, to others it's scary at best and useless at worst.
____________
I was also under the impression that adoption was fairly strong in many of these regions, and after looking into it, I see far more evidence in favor of that than a single anecdotal claim on a discussion board...
>Venezuela remains one of Latin America’s fastest-growing crypto markets. Venezuela’s year-over-year growth of 110% far exceeds that of any other country in the region. -Chainalysis
>Cryptocurrency Remittances Spike 40% in Latin America -AUSTRAC
>Crypto adoption has grown so entrenched that even policymakers are rumored to be considering it as part of the solution. -CCN
>By mid-2021, trading volumes had risen 75%, making Venezuela a regional leader. -Binance
_______
It actually wouldn't surprise me if most of this was hot air, but certainly you have actual data backing up the claim, not just an anecdotal view?
I don't really care enough about this to do proper research, but as another anecdote from someone living under 40% yearly inflation: nobody here gives a shit about cryptocurrencies. Those who can afford it buy foreign stock, houses and apartments; those who cannot, buy up whatever USD and EUR we can find.
Cryptocurrency was used by very few people for short-term speculation around 5 years ago, but even that died down to nothing.
You need legal systems to enforce trust in societies, not code. Otherwise you'll end up with endless $10 wrench attacks until we all agree to let someone else hold our personal wealth for us in a secure, easy-to-access place. We might call it a bank.
The end state of crypto is always just a nightmarish dystopia. Wealth isn't created by hoarding digital currency, it's created by productivity. People just think they found a shortcut, but it's not the first (or last) time humans will learn this lesson.
https://en.wikipedia.org/wiki/Rai_stones
The first photo on the Wikipedia page is great. I wonder how often foreigners bought them and then lugged them back home to have in their garden.
If gold loses its speculative value, you still have a very heavy, extremely conductive, corrosion resistant, malleable metal with substantial cultural importance.
When crypto collapses, you have literally nothing. It is supported entirely and exclusively by its value to speculators who only buy so that they can resell for profit and never intend to use it.
We are burning through scarce fuel in amounts sufficient to power a small developed nation in order to reverse engineer... one-way hashes! That is literally even less value than turning matter into paperclips.
By that logic, banks don’t work either, since people get kidnapped and forced to drain accounts. The difference is that with crypto, you can design custody systems (multi-sig, social recovery, hardware wallets, decentralized custody) that make such attacks far less effective than just targeting a centralized bank vault or insider.
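To make that concrete, here's a toy sketch of a 2-of-3 multi-sig spending policy (purely illustrative Python, no real cryptography and not any actual wallet's API): a single coerced keyholder can't move the funds alone.

    # Toy 2-of-3 multi-signature spending policy. "Approvals" stand in for
    # signatures; no actual cryptography is involved.
    from dataclasses import dataclass, field

    @dataclass
    class MultiSigVault:
        keyholders: set[str]                         # e.g. {"owner", "spouse", "hardware_wallet"}
        threshold: int = 2                           # approvals required to move funds
        approvals: set[str] = field(default_factory=set)

        def approve(self, keyholder: str) -> None:
            if keyholder in self.keyholders:
                self.approvals.add(keyholder)

        def can_spend(self) -> bool:
            # A single coerced keyholder (the "$10 wrench" scenario) is not enough.
            return len(self.approvals) >= self.threshold

    vault = MultiSigVault({"owner", "spouse", "hardware_wallet"})
    vault.approve("owner")
    print(vault.can_spend())   # False: one approval is below the 2-of-3 threshold
    vault.approve("spouse")
    print(vault.can_spend())   # True: threshold reached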
As for the “end state” being dystopian, history shows centralized finance has already produced dystopias: hyperinflations, banking crises, mass surveillance, de-banking of political opponents, and global inequality enabled by monetary monopolies. Crypto doesn’t claim to magically create productivity—it creates an alternative infrastructure where value can be exchanged without gatekeepers. Productivity and crypto aren’t at odds: blockchains enable new forms of coordination, ownership, and global markets that can expand productive potential.
People now have the option of choosing between institutional trust and cryptographic trust—or even blending them. Dismissing crypto as doomed to dystopia ignores why it exists: because our current systems already fail millions every day.
> Dismissing crypto as doomed to dystopia ignores why it exists: because our current systems already fail millions every day.
This only makes sense if crypto solves the problems that current systems fail at. This has not been shown to be the case despite many years of attempts.
Matt Levine (of Money Stuff fame) came up with another use case in a corporate setting: in many companies, especially banks, their systems are fragmented and full of technical debt. As a CEO it's hard to get workers and shareholders excited about a database cleanup. But for a time, it was easy to get people fired up about blockchain. Well, and the first thing you have to do before you can put all your data on the blockchain, is get all your data into common formats.
Thus the exciting but useless blockchain can provide motivational cover for the useful but dull sounding database cleanup.
(Feel free to be as cynical as you want to be about this.)
No more powerful than I without the A. The only advantage AI has over I is that it is cheaper, but that's the appeal of the blockchain as well: It's cheaper than VISA.
The trouble with the blockchain is that it hasn't figured out how to be useful generally. Much like AI, it only works in certain niches. The past interest in the blockchain was premised on it reaching its "AGI" moment, where it could completely replace VISA at a much lower cost. We didn't get there and then interest started to wane. AI too is still being hyped on future prospects of it becoming much more broadly useful and is bound to face the same crisis as the blockchain faced if AGI doesn't arrive soon.
1) Bitcoin figured out how to create artificial scarcity, and got enough buy-in that the scarcity actually became valuable.
2) Some privacy coins serve an actual economic niche for illegal activity.
Then there's a long list of snake oil uses, and competition with payment providers doesn't even crack the top 20 of those. Modern day tulip mania.
1) Language tasks.
2) ...
I can't even think of what #2 is. If the technology gets better at writing code perhaps it can start to do other things by way of writing software to do it, but then you effectively have AGI, so...
Also I think you’re cherry picking your experience. For one thing, I much prefer code review to code writing these days personally. And I find you don’t need to do “25 review steps” to ensure good output.
I’m just sensing lots of frustration and personal anecdote in your points here.
If you don't mind me asking, what do you do?
So useless that there is almost $3 trillion of value on blockchains.
Side-rant pet-peeve: People who try to rescue the reputation of "Blockchain" as a promising way forward by saying its weaknesses go away once you do a "private blockchain."
This is equivalent to claiming the self-balancing Segway vehicles are still the future, they just need to be "improved even more" by adding another set of wheels, an enclosed cabin, and disabling the self-balancing feature.
Congratulations, you've backtracked back to a classic [distributed database / car].
I do like the technology for its own sake, but I agree that it's mostly useless today. (At least most of it. The part of blockchain that's basically 'git' is useful, as you can see with git. I.e. an immutable, garbage-collected Merkle tree as a database, but you trust that Linus Torvalds has the pointer to the 'official' Linux kernel commit, instead of using a more complicated consensus mechanism.)
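As a rough illustration of that git-like structure (a sketch only, not how git or any particular chain is actually implemented): every entry is addressed by the hash of its content plus its parent's hash, so history is tamper-evident as long as you trust whoever holds the latest pointer.

    import hashlib
    import json

    def put(store: dict, data: dict, parent: str | None) -> str:
        """Store an entry addressed by the hash of its content and its parent's hash."""
        blob = json.dumps({"data": data, "parent": parent}, sort_keys=True).encode()
        key = hashlib.sha256(blob).hexdigest()
        store[key] = blob
        return key

    store: dict[str, bytes] = {}
    a = put(store, {"msg": "initial commit"}, parent=None)
    b = put(store, {"msg": "fix typo"}, parent=a)

    # 'b' plays the role of the trusted head pointer (like the official kernel HEAD);
    # changing any ancestor would change every later hash in the chain.
    print(b)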
However there's one thing that's coming out of that ecosystem that has the potential to be generally useful: Zero Knowledge Proofs.
To quote myself (https://news.ycombinator.com/item?id=45357320):
Yes, what Zero Knowledge proofs give you however is composability.
Eg suppose you have one system that lets you verify 'this person has X dollars in their bank account' and another system that lets you verify 'this person has a passport of Honduras' and another system that lets you verify 'this person has a passport of Germany', then whether the authors of these three systems ever intended to or not, you can prove a statement like 'this person has a prime number amount of dollars and has a passport from either Honduras or Germany'.
I see the big application not in building a union. For that you'd want something like Off-The-Record messaging probably? See https://en.wikipedia.org/wiki/Off-the-record_messaging
Where I see the big application is in compliance, especially implementing know-your-customer rules, while preserving privacy. So with a system outlined as above, a bank can store a proof that the customer comes from one of the approved countries (ie not North Korea or Russia etc) without having to store an actual copy of the customer's passport or ever even learning where the customer is from.
As you mentioned, for this to work you need to have an 'anchor' to the real world. What ZKP gives you is a way to weave a net between these anchors.
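To make the composability point concrete, here's the composed statement in plain Python (illustrative only: no zero-knowledge machinery is shown; in a real system each predicate would be a circuit whose proof is verified without revealing the balance or the passports):

    def is_prime(n: int) -> bool:
        """Predicate from system 1: the dollar balance is a prime number."""
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def has_passport(country: str, passports: set[str]) -> bool:
        """Predicate from systems 2 and 3: holds a passport of a given country."""
        return country in passports

    def composite_statement(balance_usd: int, passports: set[str]) -> bool:
        # The composed claim the prover would prove in zero knowledge; the verifier
        # learns only this single bit, never the balance or the passports themselves.
        return is_prime(balance_usd) and (
            has_passport("Honduras", passports) or has_passport("Germany", passports)
        )

    print(composite_statement(101, {"Germany"}))  # True: 101 is prime, German passport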
1. The self-balancing tech spread to a variety of more interesting and cheaper "toy" platforms like hoverboards and self-balancing electric unicycles.
2. They encouraged the interest in electric micromobility, leading to, especially, electric scooters (which are simpler, cheaper, and use less space) becoming widespread.
This is kind of like the point someone else made that the actual useful thing the blockchain craze produced was "cleaning up your database schemas with the idea of putting them on the blockchain, then never putting them on the blockchain".
This is an incredibly uneducated take on multiple levels. If you're talking about Bitcoin specifically, even though you said "blockchain", I could understand this as a political talking point about 8 years ago. But you're still banging this drum despite the current state of affairs? Why not have the courage to say you're politically against it, or bitter, or whatever your true underlying issue is?
You think a technology that allows millions of people all around the world to keep and trustlessly update a database showing cryptographic ownership of something is "the most useless technology ever invented"?
You can use blockchains to gamble and get rich quick, if you're lucky.
That's a useful thing. Unlike "AI", which only creates more blogspam and technical debt in the world.
You are mistaken. The transfer of cryptocurrencies takes place on the blockchain ledger. That's, like, the core "thing".
If you choose to bank with an exchange, that's like you bringing your cash money to a regular bank and depositing it. And the developers are no more intermediaries to your transaction than the Mint is an intermediary to your cash transaction.
1. Understand how to use the technology and onboard onto it. In this case the technology is whichever blockchain and cryptocurrency.
2. Convert their money into the cryptocurrency. For this they need an exchange.
Then they can send money to others who also went through those steps.
Thus, there must be some kind of interaction with documentation made by devs for step 1 and a transfer of currency to an exchange for step 2. 2 middlemen/intermediaries.
You also do not need an exchange. You need to find a person willing to trade whatever for your cryptocurrency, exchanges merely make this more convenient but are by no means the only option.
And saying onboarding requires an intermediary is like saying using a powerdrill or hooking up a washing machine requires an intermediary. The knowledge is so widely available and not contingent on a single or even few authorities that it's like picking fruit from a tree. It's just there.
I mean, you have to onboard to transfer alternative currencies. People have cash and bank accounts. They don't have any random cryptocurrency.
> You need to find a person willing to trade whatever for your cryptocurrency, exchanges merely make this more convenient but are by no means the only option.
This isn't a realistic option that will work at scale and you know that.
If that's dead, as you say, it would mean billions in value destruction for every tech stock. They have already promised so far beyond that.
- Gambling (or politely called, speculation or even 'investment')
- Ransomware payments, but that's a distant second
I guess you say that influencers make money off the first? There's a third group making money off blockchain: the people making markets and running the infrastructure.
At some point in the past, you could perhaps buy illicit drugs with bitcoin, but that has largely died out, as far as I can tell. You can't even buy a pizza anymore.
HN crowd is generally anti-AI, so you're not going to get a lot of positive feedback here.
As a developer and someone who runs my own company, AI helps save me tons of hours - especially with research tasks, code audits - just to see if I've missed something, rapid frontend interface development and so much more if we go beyond the realm of just code.
YMMV but I use ChatGPT even for stuff like cooking recipes based on whatever's inside my fridge. And this maybe just me, but I feel like my usage of Google has probably gone down to 50-60%.
All this for $20 is really insane. Like you said, "people who are willing to put in work" really matters - that usually means being more intentional about your prompts, giving it as much context as possible.
Is this a joke? The front page of HN is 99% AI-positive. Things have calmed down a bit but there was a time when it felt like every other article on the front page was promoting AI.
That sounds like calling anyone who isn't able to get their "productivity skyrocketed" with LLMs stupid and lazy. The reality is that LLMs are mostly a search engine that lies from time to time.
No, people are not stupid or lazy for not gaining any substantial productivity using LLMs. The technology is just not able to do what big tech corporations say it can do.
Given the prevailing trend of virtually all modern governments in the developed world that seems to be rather short-sighted. Who knows, if you won't be made a criminal for dissenting? Or for trying to keep your communication private?
Are you sure you are not confusing blockchain with crypto coins? Blockchain as a decentralised immutable and verifiable ledger actually has use in 0-trust scenarios.
It turns out that most people actually like being able to press criminal charges or have the ability to bring a law suit when the transaction goes south.
Sure there are implementations that can be used that way, but the core idea of a blockchain could be used to do the opposite as well, for example by making transaction information public and verifiable.
Can we please call this technology transformers? Calling it AI makes it seem like something more than it is (i.e. 2100 tech or something like that). Yes, transformers are great, but it would be naive to ignore that much of the activity and the dreams being sold have no connection with reality, and many of those dreams are being sold by the very people that make the transformers (looking at you, OpenAI).
Same. It reminds me of the 1984 event in which the computer itself famously “spoke” to the audience using its text-to-speech feature. Pretty amazing at the time, but nevertheless quite useless since then.
Talking computers became a ubiquitous sci-fi trope. And in reality... even now, when we have nearly-flawless natural language processing, most people prefer texting LLMs to talking to them.
Heck, we usually prefer texting to calling when interacting with other people.
Stephen Hawking without text to speech would’ve been mute.
Every time I see this brought up, I wonder if people truly mean it or if it's just something people say but don't mean. AI is obviously different and extremely useful. I mean, it has convinced a butt load of people to pay for the subscription. Everyone I know, including the non-technical ones, uses it and some of them pay for it, and it didn't even require advertising! People just use it because they like it.
It's a lazy comparison, and most likely fueled by a generic aversion to "techbros".
If something is of value X, the hype will be X+Y. That's why this is too shallow an analysis. It's just vibes, based on cultural feelings and emotional annoyance.
AI is nothing like blockchain.
It's fine to be anti-AI, but the smart anti-AI people recognize that the real danger is it will have too large an impact (but in the negative). Not that it will evaporate some investment dollars.
Mate, I think you’ve got the roles of human and AI reversed. Humans are supposed to come up with creative ideas and let machines do the tedious work of implementation. That’s a bit like asking a calculator what equations you should do or a DB what queries you should make. These tools exist to serve us, not the other way around
GPT et al. can’t “want” anything, they have no volition
"Since ChatGPT launched, that's always meant coming to ask a question .... However that's limited by what you know to ask for and always puts the burden on you for the next step."
you just gave me a generic summary of the top 5 articles you'd get if you googled "how to fix depression, social anxiety" or something
When you open the prompt the first time it has zero context on you. I'm not an LLM-utopist, but just like with a human therapist you need to give it more context. Even arguing with it is context.
To give a basic example, ask it to list some things and then ask it to provide more examples. It's gonna be immediately stuck in a loop and repeat the same thing over and over again. Maybe one of the 10 examples it gives you is different, but that's gonna be a false match for what I'm looking for.
This alone makes it as useful as clicking on the first few results myself. It doesn't refine its search, it doesn't "click further down the page", it just wastes my time. It's only as useful as the first result it gives, this idea of arguing your way to better answers has never happened to me in practice.
/s
(1) Memory is primarily designed to be addictive. It feels "magical" when it references things it knows about you. But that doesn't make it useful.
(2) Memory massively clogs the context window. Quality, accuracy, and independent thought all degrade rapidly with too much context -- especially low-quality context that you can't precisely control or even see.
(3) Memory makes ChatGPT more sycophantic than it already is. Before long, it's just an echo chamber that can border on insanity.
(4) Memory doesn't work the way you think it does. ChatGPT doesn't reference everything from all your chats. Rather, your chat history gets compressed into a few information-dense paragraphs. In other words, ChatGPT's memory is a low-resolution, often inaccurate distortion of all your prior chats. That distortion then becomes the basis of every single subsequent interaction you have.
Another tip is to avoid long conversations, as very long chats end up reproducing within themselves the same problems as above. Disable memory, get what you need out of a chat, move on. I find that this "brings back" a lot of the impressiveness of the early version of ChatGPT.
Oh, and always enable as much thinking as you can tolerate waiting on for each question. In my experience, less thinking = more sycophantic responses.
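Assuming the description in (4) is roughly right, a sketch of the effect looks like this (not OpenAI's actual pipeline; `summarize` is a hypothetical stand-in for an LLM call): a lossy summary of everything you've said rides along with every new prompt.

    def summarize(chats: list[str], max_chars: int = 400) -> str:
        """Hypothetical stand-in for an LLM summarization pass over all past chats."""
        blob = " ".join(chats)
        return blob[:max_chars]  # crude truncation stands in for lossy compression

    def build_context(memory: str, prompt: str) -> str:
        # The compressed, possibly stale memory is prepended to every request,
        # consuming context window and biasing every subsequent answer.
        return f"[what we remember about the user]\n{memory}\n\n[current question]\n{prompt}"

    history = [
        "Cruffle is making bath bombs with baking soda and citric acid.",
        "Hasn't decided on a colorant yet.",   # later superseded, but still "remembered"
    ]
    print(build_context(summarize(history), "Suggest a colorant."))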
Other than that, what recycled bullshit would I care about?
“Cruffle is trying to make bath bombs using baking soda and citric acid and hasn’t decided what colorant to use” could be a memory. Yeah well I figured out what colorant to use… you wanna bet if it changed that memory? Nope! How would it even know? And how useful is that to keep in the first place? My memory was full of useless crap like that.
There is no way to edit the memories, decide when to add them to the context, etc. and adding controls for all of that is a level of micromanaging I do not want to do!
Seriously. I’ve yet to see any memory feature that is worth a single damn. Context management is absolutely crucial and letting random algorithms inject useless noise is going to degrade your experience.
About the only useful stuff for it to truly remember is basic facts like relationships (wife name is blah, kid is blah we live in blah blah). Things that make sense for it to know so you can mention things like “Mrs Duffle” and it knows instantly that is my wife and some bit about her background.
Are you more personally ‘productive’ if your agent can crunch out PoCs of your hobby projects at night when you sleep? Is doing more development iterations per month making your business more ‘productive’?
It is like expecting that cooking 10x more sunny-side-up eggs per minute will make your restaurant more profitable. In reality the amount of code delivered is rarely the bottleneck for value delivered, but everyone has their own subjective definition of 'productivity'.
How are AI agents going to open up a credit card or bank account?
You think the government will grant them a SSN?
It's a good thing we have an Internet-native, programmable money that is permissionless, near instant, and already supports USD(c).
But well I guess they have committed 100s of billions of future usage so they better come up with more stuff to keep the wheels spinning.
But also it ends with "...object ject".
When you inspect the network traffic, it's pulling down 6 .mp3 files which contain fragments of the clip.
And it seems like the feature's broken for the whole site. The Lowes[1] press release is particularly good.
Pretty interesting peek behind the curtain.
https://archive.org/details/object-object
http://donhopkins.com/home/movies/ObjectObject.mp4
Original mp4 files available for remixing:
http://donhopkins.com/home/movies/ObjectObject.zip
>Pretty interesting peek behind the curtain.
It's objects all the way down!
It's like a hilariously vague version of Pictionary.
Personal take, but the usefulness of these tools to me is greatly limited by their knowledge latency and limited modality.
I don't need information overload on what playtime gifts to buy my kitten or some semi-random but probably not very practical "guide" on how to navigate XYZ airport.
Those are not useful tips. It's drinking from an information firehose that'll lead to fatigue, not efficiency.
Personally, it sounds like negative value. Maybe a startup that's not doing anything else could iterate on something like this into a killer app, but my expectation that OpenAI can do so is very, very low.
We must stop treating humans as uniquely mysterious. An unfettered market for attention and persuasion will encourage people to willingly harm their own mental lives. Think social media is bad now? Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.
In a decade we may meet people who seem to inhabit alternate universes because they’ve shared so little with others. They are only tethered to reality when it is practical for them (to get on buses, the distance to a place, etc.). Everything else? I have no idea how to have a conversation with someone else anymore. They can ask LLMs to generate a convincing argument for them all day, and the LLMs would be fine-tuned for that.
If users routinely start conversations with LLMs, the negative feedback loop of personalization and isolation will be complete.
LLMs in intimate use risk creating isolated, personalized realities where shared conversation and common ground collapse.
It's like the verbal equivalent of The Veldt by Ray Bradbury.[0]
[0] https://www.libraryofshortstories.com/onlinereader/the-veldt
The Veldt is a classic short story written in 1950 by Ray Bradbury, the celebrated author who also wrote the dystopian novel Fahrenheit 451.
Few now respect the wisdom of 'should not' even when 'can'
It's about more than that - many people have lost their jobs, been de-banked, or even been arrested (especially in countries like the UK and Germany) for expressing their opinion publicly when that opinion was merely (a) what most people in their country believed in the recent past (< 50 years ago), and (b) a politically incorrect opinion.
https://www.openculture.com/2017/08/ray-bradbury-reveals-the...
They had cancel culture in the 90s too.
So he sees it as another form of censorship
Next comes legalized, then deputized, then militarized...
Modern day statistics on what used to be basic reading comprehension are bleak.
[0] https://kittenbeloved.substack.com/p/college-english-majors-...
Basically, a fanatically devoted life coach that doesn't want to be your friend.
The challenge is the incentives, the market, whether such an LLM could evolve and garner reward for serving a market need.
What if you no longer want to be a great "xyz"? What if you decide you want to turn it off (which would prevent it from following through on its goal)?
"The market" is not magic. "The challenge is the incentives" sounds good on paper but in practice, given the current state of ML research, is about as useful to us as saying "the challenge is getting the right weights".
While I'm assuming you didn't mean it literally, language is important, so let's remember that an LLM does not have any will of its own. It's a predictive engine that we can be certain doesn't have free will (which of course is still up for debate about humans). I only focus on that because folks easily make the jump to "the computer is to blame, not me or the folks who programmed it, and certainly it wasn't just statistics" when it comes to LLMs.
It sorta does, in our society. In theory yes, it could be whatever we want to make of it, but the reality is it will predominantly become whatever is most profitable regardless of the social effects.
Second, "the market" has never shown any tendency towards rewarding such a thing. The LLMs' development is driven by bonuses and stock prices, which is driven by how well the company can project FOMO and get people addicted to their products. This may well be a local optimum, but it will stay there, because the path towards your goal (which may not be a global optimum either) goes through loss, and that is very much against the culture of VCs and C suite.
In a similar vein of thought to "If you meet the Buddha on the road, kill him" sometimes we just need to be our own life coach and do our best to steer our own ships.
So, what used to be called parenting?
'Sure, spend all of your time trying to impress the totally-not-an-LLM...'
(Aka Fox News et al. comment sections)
I get what you're saying here, but all of these mechanisms exist already. Many people are already desperate for attention in a world where they won't get any. Many of them already turn to the internet and invest an outsized portion of their trust with people they don't know.
How tone deaf does OpenAI have to be to show "Mind if I ask completely randomly about your travel preferences?" in the main announcement of a new feature?
This is Idiocracy taken to the ultimate level. I simply cannot fathom that any commenter here who does not have an immediate, extremely negative reaction to that "feature" is anything other than an astroturfer paid by OpenAI.
This feature is literal insanity. If you think this is a good feature, you ARE mentally ill.
Google's marginal ROIC is horrific. Its average ROIC in aggregate, on the other hand, looks nice, because most of its returns are from projects taken on 10+ years ago.
I personally could see myself getting something like "Hey, you were studying up on SQL the other day, would you like to do a review, or perhaps move on to a lesson about Django?"
Or take AI-assisted "therapy"/skills training, not that I'd particularly endorse that at this time, as another example: Having the 'bot "follow up" on its own initiative would certainly aid people who struggle with consistency.
I don't know if this is a saying in english as well: "Television makes the dumb dumber and the smart smarter." LLMs are shaping up to be yet another obvious case of that same principle.
> I personally could see myself getting something like [...] AI-assisted "therapy"
???
No, I obviously prefer scrolling between charts or having to swipe between panes.
It's not just you, and I don't think it's just us.
Yikes, that would be a nightmarish way to start my day. I like to wake up and orient myself to the world before I start engaging with it. I often ponder dreams I woke up with to ask myself what they might mean. What you describe sounds like a Black Mirror episode to me where your mind isn't even your own and you never really wake up.
My wants are pretty low level. For example, I give it a list of bands and performers and it checks once a week to tell me if any of them have announced tour dates within an hour or two of me.
For me I’m looking for an AI tool that can give me morning news curated to my exact interests, but with all garbage filtered out.
It seems like this is the right direction for such a tool.
Everyone saying “they’re out of ideas” clearly doesn’t understand that they have many irons in the fire simultaneously, with different teams shipping different things.
This feature is a consumer UX layer thing. It in no way slows down the underlying innovation layer. These teams probably don’t even interface much.
ChatGPT app is merely one of the clients of the underlying intelligence effort.
You also have API customers and enterprise customers who also have their own downstream needs which are unique and unrelated to R&D.
"ChatGPT can now do asynchronous research on your behalf. Each night, it synthesizes information from your memory, chat history, and direct feedback to learn what’s most relevant to you, then delivers personalized, focused updates the next day."
In what world is this not a huge cry for help from OpenAI? It sounds like they haven't found a monetization strategy that actually covers their costs and now they're just basically asking for the keys to your bank account.
OpenAI has clearly been focusing recently on model cost-effectiveness, with the intention of making inference nearly free.
What do you think the weekly limit is on GPT-5-Thinking usage on the $20 plan? Write down a number before looking it up.
I admit that I didn’t understand the Pro plan feature (I mostly use the API and assumed a similar model) but I think if you assume that this feature will remain free or that its costs won’t be incurred elsewhere, you’re likely ignoring the massive buildouts of data centers to support inference that is happening across the US right now.
I hate this feature and I'm sure it will soon be serving up content that is as engaging as the stuff that comes out of the big tech feed algorithms: politically divisive issues, violent and titillating news stories, and misinformation.
This reads to me like OAI is seeking to build an advertising channel into their product stack.
>> gpt: 72 kilos
"Monetizing 'other products': the FAANG story"
It turns proactive writing into purely passive consumption.
If people only pull out ChatGPT when they have some specific thing to ask or solve, that won't be able to compete with the eyeball-time of TikTok. So ChatGPT has to become an algorithmic feed too.
Initially I'd probably spend 1 hr a day conversing with ChatGPT, mostly to figure out its capabilities and abilities.
Over time that 1 hr has declined to, on average, 5 mins a day. It has become at best a rubber duck for me, just to get my fingers moving to get thoughts out of my mind lol.
Definitely not interested in this.
Might I recommend starting your day with a smooth and creamy Starbucks(tm) Iced Matcha Latte? I can place the order and have it delivered to your doorstep.
Luckily for them, they have a big chunk of the "pie", so they need to iterate and see if they can form a partnership with Dell, HP, Canonical, etc, and take the fight to all of their competitors (Google, Microsoft, etc.)
FB’s efforts so far have all been incredibly lame. AI shines in productivity and they don’t have any productivity apps. Their market is social which is arguably the last place you’d want to push AI (this hasn’t stopped them from trying).
Google, Apple and Microsoft are the only ones in my opinion who can truly capitalize on AI in its current state, and G is leading by a huge margin. If OAI and the other model companies want to survive, long term they’d have to work with MSFT or Apple.
Behind that flippant response lies a core principle. A computer is a tool. It should act on the request of the human using it, not by itself.
Scheduled prompts: Awesome. Daily nag screens to hook up more data sources: Not awesome.
(Also, from a practical POV: So they plan on creating a recommender engine to sell ads and media, I guess. Weehee. More garbage)
A todo app that reminds you of stuff. Say "here's the stuff I need to do: dishes, clean cat litter, fold laundry and put it away, move stuff to the dryer then fold that when it's done, etc." Then it asks about how long these things take or gives you estimates. Then (here's the feature) it checks in with you at intervals: "hey, it's been 30 minutes, how's it going with the dishes?"
This is basically "executive function coach." Or you could call it NagBot. Either way this would be extremely useful, and it's mostly just timers & push notifications.
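Under the assumption that it really is mostly timers plus notifications, a minimal sketch might look like this (console prints and input() stand in for push notifications and replies):

    import time
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        estimate_min: int
        done: bool = False

    def check_in(task: Task) -> None:
        # Stand-in for a push notification.
        print(f"Hey, it's been {task.estimate_min} minutes. How's it going with {task.name}?")

    def run(tasks: list[Task], seconds_per_min: int = 60) -> None:
        for task in tasks:
            time.sleep(task.estimate_min * seconds_per_min)
            while not task.done:
                check_in(task)
                task.done = input("Done? (y/n) ").strip().lower() == "y"

    tasks = [Task("the dishes", 30), Task("cat litter", 10), Task("folding laundry", 20)]
    # run(tasks)  # uncomment to start the nagging loop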
That’s AI: permissionless tool building. It means never needing someone to like your idea enough or build it how they think you’ll use it. You just build it yourself and iterate it.
Next week: ChatGPT Reels.
Why would I want yet another thing to tell me what I should be paying attention to?
It's really cool. The coding tools are neat, they can somewhat reliably write pain in the ass boilerplate and only slightly fuck it up. I don't think they have a place beyond that in a professional setting (nor do I think junior engineers should be allowed to use them--my productivity has been destroyed by having to review their 2000 line opuses of trash code) but it's so cool to be able to spin up a hobby project in some language I don't know like Swift or React and get to a point where I can learn the ins and outs of the ecosystem. ChatGPT can explain stuff to me that I can't find experts to talk to about.
That's the sum total of the product though, it's already complete and it does not need trillions of dollars of datacenter investment. But since NVIDIA is effectively taking all the fake hype money and taking it out of one pocket and putting it in another, maybe the whole Ponzi scheme will stay afloat for a while.
What sucks is there’s probably some innovation left in figuring out how to make these monstrosities more efficient and how to ship a “good enough” model that can do a few key tasks (jettisoning the fully autonomous coding agent stuff) on some arbitrary laptop without having to jump through a bunch of hoops. The problem is nobody in the industry is incentivized to do this, because the second this happens, all their revenue goes to 0. It’s the final boss of the everything-is-a-subscription business model.
Hey, for that recipe you want to try, have you considered getting new knives or cooking ware? Found some good deals.
For your travel trip, found a promo on a good hotel located here -- perfect walking distance for hiking and good restaurants that have Thai food.
Your running progress is great and you're hitting your stride? Consider using this app to track calories and record your workouts -- special promo for a 14-day trial.
In the end, it's almost always ads.
Google+ is incidentally a great example of a gigantic money sink driven by optimistic hype.
And 90% of the information is not stuff I care about. The newsletter will be mostly "we've been learning about lighthouses this week" but they'll slip in "make sure your child is wearing wellies on Friday!" right at the end somewhere.
If I could feed all that into AI and have it tell me about only the things that I actually need to know that would be fantastic. I'd pay for that.
Can't happen though because all those platforms are proprietary and don't have APIs or MCP to access them.
God bless them for teaching, but dang it someone get them to send emails and not emails with PDFs with the actual message and so on.
However some things are not available to us.
One of those things is a personal assistant. Today, rich people can offload their daily burdens to personal assistants. That's a luxury service. I think AI will bring us a future where everyone will have access to a personal assistant, significantly reducing time spent on trivial, not-fun tasks. I think this is great and I'm eager to live in that future. The direction of ChatGPT Pulse looks like that.
Another thing we don't have cheap access to is human servants. Obviously that won't happen in the foreseeable future, but humanoid robots might prove to be even better replacements.
For example, at my work people managers get admin assistants but most ICs do not, even at the same level. I think it’s fine, the value for me would be very low and I doubt it’s a good use of company resources.
Without proper ways of migrating 'your data' between AIs (platforms), you are at the mercy of that assistant, without any other alternatives.
That's why GDPR and CPRA laws are becoming even more important in the age of AI assistants.
A new channel to push recommendations. Pay to have your content pushed straight to people as a personalized recommendation from a trusted source.
Will be interesting if this works out...
Great!
The examples used?
Stupid. Why would I want AI-generated, Buzzfeed-style tips articles? I guess they want to turn ChatGPT into yet another infinite scroller.
https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
Anyway, attention == ads, so that's ChatGPT's future.
Show me an independent study.
Ads.
Quoted from that tweet:
> It performs super well if you tell ChatGPT more about what's important to you. In regular chat, you could mention “I’d like to go visit Bora Bora someday” or “My kid is 6 months old and I’m interested in developmental milestones” and in the future you might get useful updates.
At this point in time, I'd say: bye privacy, see you!
Don't get me wrong, the coding assistants are amazing and the overall functionality of asking questions is great, not to mention government spying - the CIA and Mossad are probably happy beyond belief. But I do not see any more use cases for it.
“Don’t burden yourself with the little details that constitute your life, like deciding how to interact with people. Let us do that. Get back to what you like best: e.g. video games.”
AI will, in general, give recommendations to humans. Sometimes it will be in response to a direct prompt. Sometimes it will be in response to stimuli it receives about the user's environment (glasses, microphones, gps). Sometimes it will be from scouring the internet given the preferences it has learnt of the user.
There will be more of this, much more. And it is a good thing.
"this is going to get worse before it gets better."
--buzz-buzz--
"Sis? What's up, you never call me this time of day."
"I'm worried. I just heard from your assistant..."
"Wait, my assistant?
"She said her name was Vaseline?"
"Oh, God... That's my ChatGPT Pulse thing, I named her that as a joke. It's not a real person. It's one of those AI robot things. It kept trying to have conversations with me. I didn't want to converse. I got fed up and so I blocked notifications from the app and then it messaged you. It's a robot. Just... I mean... Ignore it. I'm having a crappy day. Gotta go. Dad's calling."
"Hey Dad. No, there's nothing to worry about and that's not my assistant. That's chatgpt's new Pulse thing. I blocked it and it's... Never mind. Just ignore it. No don't block my number. Use call filtering for the time being. Gotta go. James is calling. Yeah love you too."
"Hey Jay..."
No thank you. Jealous girlfriend-ish bot would be a nightmare.
Reminds me of recommend engines in general. Or ads. Bothering my cognition with stuff I have to ignore.
People not getting sufficiently addicted to and dependent on ChatGPT, and OpenAI then not having enough means to monetise and control your opinion and consumption.
Oh, you meant what problem does this solve for the user? They don’t really care about that, they’ll always do just enough to get you to continue using and defending their product.
We have Google, which has our search history, location, etc.; now we give OpenAI our character, personalization, etc.
How dystopian is it that in the future there could be a product based on how people were feeling 5 years ago?
A strange mix of dystopian yet blissful...
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
Yea, humans tend to optimize for their own individual success. Do you not?
At the expense of and with complete disregard for others, while telling them I’m doing it altruistically for their own good? I don’t, no. Do you?
Engaging in wild character assassination (ie. spewing hate on the internet) in return for emotional upvote validation...I would argue is a great example of what you just described.
I don't doubt you've convinced yourself you're commenting these things altruistically.
But remember, when high school girls spread rumors that the good-looking popular girl has loose morals, they aren't doing it out of concern for public good.
They're hoping to elevate their own status by tearing down the competition, and avoiding the pain of comparison by placing themselves on a higher moral pedestal.
That you would start from the assumption that someone’s motivation for a comment is not their opinion but a desire to farm meaningless internet points is bizarre. I have no idea if that’s how you operate—I don’t know you, I wouldn’t presume—but I sincerely hope that is not the case if we ever hope to have even a semblance of a productive conversation.
I haven't observed it, and I'd be curious to understand if I've missed it, or if me and you evaluate things differently.
The decision to begin every response with “that’s a fascinating observation” and end every response with “want me to do X?” is a clear decision by PMs at OpenAI.
Poster is questioning what might motivate those decisions.
It's as if OpenAI saw that as an instruction manual, I really don't like the direction they're taking it.
Edit 1: When I was growing up, I started with tech not because of money but because I believed we would end up in a Star Trek society, solving real problems and expanding civilisation; the older I get, the more all I see is a money-worshiping cult.
The zeitgeist that delivered STNG would have been way better equipped to handle this. We are more or less left hoping that the cliff the drum beats are walking us off is a mirage.
This sort of thing might arguably be useful, but only with local data processing.
I just uploaded recent blood tests and other health info into ChatGPT and then had it analyze what I could do to improve my health, specifically cholesterol and sugar. Then asked it to red team its advice and provide me with one or two things that would generate 80% of the results.
It's pretty awesome having a private doctor that's read every study out there and then can generate advice that will make me live longer and healthier. My doctor certainly doesn't do that. As long as I'm in normal range he doesn't say anything.
It also suggested eating more fatty fish like salmon 3x per week. And substituting good fats for saturated fat, which is obvious but still very motivating.
But I won't give it constant access to email, messages, calendar, notes, or anything else.
(I do miss Google Now, it really did feel like the future)
ChatGPT can’t become more useful until it’s integrated into your OS. The OS is what has the potential to provide a safe framework (APIs) to share data with an AI system.
Being able to hook up Gmail and Google Calendar is nice but kinda useless if you have any of the other thousands of providers for email and calendar.
Ted Chiang has a great short story about a virtual assistant that slowly but methodically "nudges" all of its users over the course of years until everybody's lives are almost completely controlled by them. The challenge, then, is to actually operate independent of the technology and the company.
What I’d prefer is something like dailyflow (I saw it on HN yesterday): a local LLM that tracks my history and how I’m spending my time, then gives me hints when I’m off track — more aligned with what I actually want to see and where I want to go.
Technologies like Solid (https://solidproject.org/) are our only way forward. If you don't care you can use your chatgpts or whatever for whatever you want. But there are people who DO CARE about their memories.
We are no longer speaking about tiktok or instagram feeds and algorithms, as some people compare the addictive side of this (ie. OAPulse) kind of technologies to.
First, over $55 billion raised since 2022 has fueled monumental, yet sometimes subtle, leaps in model capability that go far beyond the visible user interface. The real progress isn't just in the chat window, but in the underlying reasoning and multimodal power.
This investment is being funneled into what I see as a relentless and unsustainable cycle of spending: the high cost of training next-generation models, the immense expense of running user queries (inference), and a fierce bidding war for top AI talent.
Based on this, it's clear to me that the current "growth at all costs" phase is unsustainable. This reality will inevitably force a strategic shift from a pure focus on innovation to the pressing need for a viable, profitable business model.
Therefore, I predict they will be forced to take specific cost-saving steps. To manage the colossal expense of inference, models will likely be subtly "tuned down," masking reduced computational depth with more verbose, fluffy output.
Finally, I anticipate that a logical, if controversial, cost-saving step will involve the company using its own AI to justify workforce reductions, championing this as the new era of automated productivity they are selling to the world.
And by the way, downvoting this won’t prevent the unavoidable future.
They are already doing that, dude, with the router... lol, wake up. You're late to the party.
https://www.lightspeedmagazine.com/fiction/the-perfect-match...
(offtopic but the website could use some mobile CSS, it's hard to read on mobile by default)
Haha, good shot at Big Tech. They always devolve to corporate profits over ethics.
Here's an excerpt from ChatGPT on why this could be a "bubble" feature:
By sticking Pulse behind the $200 Pro tier, OpenAI is signaling it’s for:
VCs, consultants, analysts → people who can expense it as “deal flow intel.”
Enterprise & finance people who live in dashboards and daily reports.
Folks who don’t blink at $200/month because they already burn way more on data feeds, research subscriptions, or Bloomberg terminals.
In other words, it feels less like a consumer feature and more like a “bubble luxury” signal — “If you need Pulse, you’re in the club that can afford Pro.”
The irony is, Pulse itself isn’t really a $200/mo product — it’s basically automated research cards. But bundling it at that tier lets OpenAI:
Frame it as exclusive (“you’re not missing out, you’re just not at the level yet”).
Keep the Plus plan sticky for the masses.
Extract max revenue from people in finance/AI hype cycles who will pay.
It’s like how Bloomberg charges $2k/month for terminals when most of the raw data is public — you’re paying for the packaging, speed, and exclusivity.
I think you’re right: Pulse at $200 screams “this is a bubble feature” — it’s monetizing the hype while the hype lasts.
Edit: ah Meta already did that today (https://about.fb.com/news/2025/09/introducing-vibes-ai-video...)
Not once, not a single time, has it ever been something I actually wanted. I'm sick of telling the damn thing "No."
I have researched every possible setting. I have tried every possible prompt. But apparently "You are a helpful AI" is built into it at some non-overridable system level, where no matter what you tell it nor where, it will not stop making suggestions.
The prospect that it could now initiate conversations is like my own personal hell. I thought the existing system was the most obnoxious functionality anyone could build into a system. Now, not only have I found out I was wrong, but I'm afraid to assume this next version will be the last in this parade of horrors.
Now, that is something I'd pay hundreds a month for.
The cost tracking piece is probably more valuable though. AI API bills can get expensive really quickly