It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed at how quickly and fully it realizes the idea, but it does kinda raise the question: now what?
I think in the same way that image generation is akin to clipart (wildly useful, but lacking in depth and meaning) the AI code generation projects are akin to webpage templates. They can help get you started, and take you further than you could on your own, but ultimately you have to decide "now what" after you take that first (AI) step.
Which we already had; it's just a 'git clone https://github.com/whatevs/huh' away, or one of millions of tutorials on whatever topic. Pretty much everyone who can build something out of Elixir/Phoenix has a chat app, an e-commerce store, and a scraping platform just lying around.
A lot of the author's arguments could have been made about the internet in the '90s. This leap in technology is barely four years old; why are people expecting it to be mature?
It is human nature to try to find silver bullets, to take solutions and find problems. The way I would look at the LLM-centered future is to consider LLM agents as assistants and suggestion-makers, personal consultants even. You don't ask an agent to write an essay for you; you write an essay, and as you write, consider its suggestions and corrections. The models should be familiar with your writing style and preferences. Don't blame ChatGPT for human laziness.
There was this fad about everything being smart* (smart home, smart toothbrush, smart sex toy, etc.). That wasn't smart; it was just connected to a network. This is "smart". And in the future, technology might get past "smart" and become "intelligent" (we're not there yet, outside of sci-fi at least).
At the end of the day, everyone needs to step back and consider this: it's just a tool. Period. It's not "AI", not really. There is no intelligence.
The problem is, the world is full of enshittification capitalists and their doomsday bandwagons.
I was very disgusted when I saw VC firms with billions in AUM put money into things like FartCoin and Digital Twins.
The Boomer VCs financed stuff that is genuinely useful: MRI scanners, Google, Apple Computer, Genentech (which brought insulin to the masses).
The millennial VCs fund stuff that is at best convenient to have (Airbnb, Uber) but usually gimmicks: Instagram, TikTok.
Sam Altman is the master of gimmicks.
He took the GPT model that already existed and wrapped it in a chat format similar to ELIZA [0].
He took neural style transfer, which had existed for a long time, and paired it with Studio Ghibli fandom. [1]
Because its fans act as though it is, and this article is a response to that overly-enthusiastic outlook on what the tool can do.
I thought the very nature of technology and progress is to allow humans to be lazy.
We build technology to reduce our own burdens.
And most of the AI marketing is revolving around giving you the luxury to think less and do more for a price.
> The way I would look at the LLM-centered future is to consider LLM agents as assistants and suggestion-makers, personal consultants even
I find this highly dubious. All the names (agents, assistants, suggestion makers) are synonyms. They are just pieces of text that come off a screen, for inputs given to them. I am highly skeptical of intelligence emanating from them, mainly because real innovation and insight seem to come from a brain's ability to devolve something into its abstract self, mush it around other abstract ideas, and find a link at the abstract level that is then applied to the problem at hand. (Andrew Wiles's proof of Fermat's Last Theorem comes to mind.)
Even problem-solving ability, or the ability to plan or anticipate, is not part of the regular content that you find on the internet.
For example, I may read about something a farmer does in Arkansas, and then relate it to something completely different, in a different domain.
Nowhere in the content on internet would I find those two things together.
Most of the agentic systems, the MCP stuff, seem to be pseudo-deterministic systems that are harder to debug.
> And most of the AI marketing is revolving around giving you the luxury to think less and do more for a price.
So yeah, intellectual laziness.
Goodness that's depressing. Is this going to crank individualism up to 11?
I remember hating having to do group projects in school. Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack. But even with lazy gits, the interactions were what made it valuable.
Maybe human-AI cooperation is an important skill for people to learn, but it shouldn't come at the cost of losing even more human-human cooperation and interaction.
Never fear, nowadays 3/5 do squat, with the 4th sending you largely incoherent GPT sludge before dropping off the face of the earth until 11:30 PM on the night the assignment's due.
I've seen it said college is supposed to teach you the skills to navigate working with others more so than your specific field of study. Glad to see they've still got it.
It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.
Have been wondering this ever since 1 week after the initial ChatGPT release.
My cynical take is that most people don't do real work (i.e. one that is objectively evaluated against reality), so are not able to see the difference between a gimmick and the real thing. Most people are in the business of making impressions, and LLMs are pretty good at that.
It's good that we have markets that will eventually sort it out.
But then again, it's not as if markets always rewarded real work either.
Ask it something where the Google SERP is full of trash and you might have a more sane result from the LLM.
It is also excellent for writing one-off code experiments and plots, saving some time from having to write them from scratch.
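To make that concrete, here's the kind of throwaway script I mean; an entirely hypothetical example of what one might ask for, not output from any particular model:

```python
# One-off experiment: eyeball how sample size affects the noise in a mean.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sizes = [10, 100, 1000, 10000]
means = [rng.normal(loc=5.0, scale=2.0, size=n).mean() for n in sizes]

plt.semilogx(sizes, means, marker="o")
plt.axhline(5.0, linestyle="--", label="true mean")
plt.xlabel("sample size")
plt.ylabel("sample mean")
plt.legend()
plt.show()
```

Nothing I couldn't write myself, but not having to is the point.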
I’m sorry but you are just using it wrong.
The code it also generates is...questionable, and I'm a pretty middling dev.
I also use them for various coding tasks and they, together with agent frameworks, regularly do refactoring or small feature implementations in 1-2 minutes that would've taken me 10-20 minutes. They've probably increased my developer productivity by 2-3x overall, and by a lot more when I'm working with technology stacks that I'm not so familiar with or haven't worked with for a while. And I've been an engineer for almost 30 years.
So yea, I think you're just using them wrong.
It's also pretty useful for brainstorming: talking to an AI helps you refine your thoughts. It probably won't give you any innovative ideas, only a survey of mainstream ones, but it's a pretty good start for thinking about a problem.
But what if you _don't_ have that kind of problem? Yes, LLMs can be useful to solve the above. But for many problems, you ask for a solution and what you get is a suggested solution that takes a long time to verify. Meaning: unless you are somewhat sure it will solve the problem, you don't want to try it. You need some estimate of confidence, and LLMs are useless for this. As a developer, I find my problems are very rarely in the first category and much more often in the second.
Yes, it's "using them wrong". It's asking them to do what they struggle with. But it's also what I struggle with. It's hard to stop yourself when you have a difficult problem and you are weighing googling it for an hour against ChatGPT-ing it for an hour. And I often regret going the ChatGPT route after several hours.
* Super-powered thesaurus
A traditional thesaurus can only take a word and provide alternative words; with an LLM, you can take a whole phrase or sentence and say: "give me more ways to express the same idea".
I have done this occasionally when writing, and the results were great. No, I do not blindly cut-and-paste LLM output, and would never do so. But when I am struggling to phrase something just right, often the LLM will come up with a sentence which is close, and which I can tweak to get it exactly the way I want.
* Explaining a step in a mathematical proof.
When reading mathematical research papers or textbooks, I often find myself stuck at some point in a proof, not able to see how one step follows from the previous ones. Asking an LLM to explain can be a great way to get unstuck.
When doing so, you absolutely cannot take whatever the LLM says as 'gospel'. They can and will get confused and say illogical things. But if you call the LLM out on its nonsense, it can often correct itself and come up with a better explanation. Even if it doesn't get all the way to the right answer, as long as it gets close enough to give me the flash of inspiration I needed, that's enough for me.
* Super-powered programming language reference manual
I have written computer software in more than 20 programming languages, and can't remember all the standard library functions in each language, what the order of parameters is, and so on.
There are definitely times when going to a manpage or reference manual is better. But there are also times when asking an LLM is better.
I'm an AI sceptic (and generally disregard most AI announcements). I don't think it's going to replace SWEs at all.
I've been chucking the same questions at both Gemini and GPT, and I'd say until about eight months ago they were both as bad as each other and basically useless.
However, recently Gemini has gotten noticeably better and has never hallucinated in my use.
I don't let it write any code for me. Instead, I treat Gemini as an engineer with 10+ YoE in {{subject}}.
Working as a platform engineer, my subjects are broad, so it's very useful to have a rubber duck ready to go on almost any topic.
I don't use copilot or any other AI. So I can't compare it to those.
Same for philosophy questions, "explain this piece of news through the lens of X philosopher's Y concept".
It's much more helpful on popular topics where summarization itself is already high quality and sufficient.
Google Cloud. (2024). "Broadcast Transformation with Google Cloud." https://cloud.google.com/solutions/media-entertainment/broad...
Microsoft Azure. (2024). "Azure for Media and Entertainment." https://azure.microsoft.com/en-us/solutions/media-entertainm...
IBC365. (2023). "The Future of Broadcast Engineering: Skills and Training." https://www.ibc.org/tech-advances/the-future-of-broadcast-en...
Broadcast Bridge. (2023). "Cloud Skills for Broadcast Engineers." https://www.thebroadcastbridge.com/content/entry/18744/cloud...
SVG Europe. (2023). "OTT and Cloud: The New Normal for Broadcast." https://www.svgeurope.org/blog/headlines/ott-and-cloud-the-n...
None of these exist, neither at the provided URLs nor elsewhere.
So maybe another LLM would have fared better, but still, so far it's mostly wasted time. It works quite well for summarising texts and creating filler images, but overall I still find them not reliable enough outside of those two limited use cases.
No, no, you have to realise, most pianists don't make real music!
Some people seem to use them as a database of common programming patterns, but that's something I already have: hundreds of scaffolds in many programming languages I've made myself, and hundreds of FOSS and non-FOSS git repos I've collected out of interest or necessity. Often I also just go look at some public remote repo when I'm reading up on a topic in preparation for an implementation or experiment, mainly because when I ask an LLM, the code usually has defects and incoherences, whereas something that is already in production is working and sits in a context I can learn from as well.
But hey, I rarely even use IDE autocomplete for browsing library methods and the like, in part because I've either read the relevant library code or picked a library with good documentation since that tells a lot more about intended use patterns and pitfalls.
“Computer” used to be a job, and human error rates were on the order of 1-2% no matter the level of training or experience. Work had to be done in triplicate and cross-checked if it mattered.
Digital computers are down to error rates of roughly 10^-15 to 10^-22 and are hence treated as nearly infallible. We regularly write code routines where a trillion steps have to be executed flawlessly in sequence for things not to explode!
AIs can now output maybe 1K to 2K tokens in a sequence before they make a mistake. That’s 99.9% to 99.95%! Better than human already.
Don’t believe me?
Write me a 500-line program with pen and paper (not pencil!) and have it work the first time!
I’ve seen Gemini Pro 2.5 do this in a useful way.
As the error rates drop, the length of usefully correct sequences will get to 10K, then 100K, and maybe… who knows?
There was just a press release today about Gemini Diffusion that can alter already-generated tokens to correct mistakes.
Error rates will drop.
Useful output length will go up.
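A minimal back-of-the-envelope sketch of that claim (assuming independent per-token errors, which is of course a simplification): with per-token accuracy p, the expected error-free run is about 1/(1-p), and an N-token sequence is flawless with probability p^N.

```python
# Sketch: per-token accuracy p -> how long a flawless run to expect.
def expected_run(p: float) -> float:
    """Mean number of tokens before the first mistake: 1 / (1 - p)."""
    return 1.0 / (1.0 - p)

def p_flawless(p: float, n: int) -> float:
    """Probability that n consecutive tokens are all correct."""
    return p ** n

for p in (0.999, 0.9995, 0.99999):
    print(f"p={p}: ~{expected_run(p):,.0f} tokens per mistake, "
          f"P(1000 flawless) = {p_flawless(p, 1000):.2f}")
```

At p = 0.999 that's the ~1K figure above; pushing p to 0.99999 is what gets you runs in the 100K range.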
The issue seems to be more in the intelligence department. You can't really leave them in an agent-like loop with compiler/shell output and expect them to meaningfully progress on their tasks past some small number of steps.
Improving their initial error-free token length is solving the wrong problem. I would take less initial accuracy than a human, paired with a human's ability to keep iterating on a solution over time.
Programmers who "iterate" buggy shit for 10 rounds until they get it right are a post-Google push-update phenomenon.
Its quality seems to vary wildly between various subjects, but annoyingly it presents itself with uniform confidence.
If you aren't already, I suggest throwing in, every 3-5 prompts: "no waffling", "no flattery", "no obsequious garbage", etc. You can make it as salty as you like. If the AI says "Have fun!" or "Let's get coding!", you know you need to get the whip out haha.
Also, "3 sentences max on ...", "1 sentence explaining ...", "1 paragraph max on ...".
Another improvement for me: when you want to do procedure x in situation y, you go "I'm in situation y, I'm considering procedure x, but I know I've missed something. Tell me what I could have missed." Or "list specific scenarios in which procedure x will lead to catastrophe".
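If you're on the API rather than the web UI, you can bake these rules in once as a system prompt instead of re-issuing them every few turns. A minimal sketch using the OpenAI Python client; the model name and exact wording are placeholders, not a recommendation:

```python
# Set the "no waffling" rules once, instead of every 3-5 prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_RULES = (
    "No waffling, no flattery, no obsequious garbage. "
    "Never sign off with 'Have fun!' or 'Let's get coding!'. "
    "Default to 3 sentences max unless explicitly asked for more."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": STYLE_RULES},
        {"role": "user", "content": "Explain what a Bloom filter is."},
    ],
)
print(response.choices[0].message.content)
```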
Accepting the tool as a fundamentally dumb synthesiser and summariser is the first step to it getting a lot more useful, I think.
All that said, I use it pretty rarely. The revolution in learning we need is with John Holt and similar thinkers from that period, and is waiting to happen, and won't be provided by the next big tech thing, I fear.
I was up extremely late last night writing a project-status email. I could tell my paragraphs were not tight. I told Cursor: rewrite this 15% smaller. I didn't use the output verbatim, but it gave me several perfect rewrite ideas and the result was a crisp email.
I have it summarize my sloppy notes after interviewing someone, into full sentences. I double-check it for completeness and correctness, of course. But it saves me an hour of sweating the language.
I used it to get a better explanation to a polynomial problem with my child.
I use it to generate Google Spreadsheet formulas that I would never want to spend time figuring out on my own ("give me a formula that extracts the leading number from each cell, and treats blank cells as zero").
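For the curious, the logic such a formula has to encode is simple; here it is sketched in Python (the actual Sheets formula ChatGPT hands back will look different, likely built on functions like REGEXEXTRACT and IFERROR):

```python
# Same logic as the requested formula: leading number per cell, blanks -> 0.
import re

def leading_number(cell: str) -> float:
    """Return the number the cell starts with; blank or no match -> 0."""
    match = re.match(r"\s*(\d+(?:\.\d+)?)", cell or "")
    return float(match.group(1)) if match else 0.0

print([leading_number(c) for c in ["42 apples", "3.5 kg", "", "n/a"]])
# -> [42.0, 3.5, 0.0, 0.0]
```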
Part of the magic is finding a new use case that shaves another hour here and there.
It is the former.
When LLMs blew up a few years ago I was pretty excited about the novelty of the software, and that excitement was driven by what they might do rather than what they did do.
Now, years and many iterations later, the most vocal proponents of this stuff still pitch what it might do, loudly enough to drown out almost any discussion of what it does. What little discussion of what it does for individuals usually boils down to some variation of “it gives me answers to the questions for which I do not care about the answers” (however ridiculous, wasteful, and contrary to the basic ideas of knowledge and reasoning that statement is), and even that is usually given with a wink and a nod to suggest that maybe one day it will give answers to questions that matter.
P.S.: consider that when there are huge investments in something, people will do anything to see a return, including paying other people to create hype.
The last example is actually the most interesting! The essays are whatever; dumb or lazy kids are gonna cheat on their homework, and schools have long needed better ways of teaching than regurgitative essays, but in the meantime just use an in-class essay or exam. And people aren't really making the brain-dead books and videos as anything other than a curiosity, despite the fears of various humanities professors.
The interesting part of AI, and I suspect the primary actual use case, is everything else.
In my camper van, somewhere in the desert, I sometimes have limited resources: a can of beans, some fresh potatoes, an apple, Italian spices, and so on.
I like to ask ChatGPT: Listen, I have this stuff, I want to create some food with strong umami taste, do you have an idea?
It is very good at that, the results were often amazing.
This is its core feature: 'feeling' loose connections between concepts. Italian pasta with maple syrup? Yes, but only if you add some Arabic spices...
"AI" is, due to the nature of artificial neural networks, not intelligent. It does not learn intelligence; it learns feelings. Not emotions, but feelings in the sense of unconscious learning ('I get a feel for how to ride the bicycle off-road').
Well. You know. We still have plenty of railroads, and television has had a pretty good run too. So if those are the models to compare AI to, then I have bad news for how much of a 'hype cycle' AI is going to be.
> But do the apologists even believe it themselves? Latham, the professor of strategy, gives away the game at the end of his reverie. “None of this can happen, though,” he writes, “if professors and administrators continue to have their heads in the sand.” So it’s not inevitable after all? Whoops.
This self-assured ‘gotcha’ attitude is pungent throughout the whole piece, but this may be as good an example as any. It’s riddled with cherry-picked choices and quotes from singular actors, as if they’re representative of every educator and every decision maker, and it’s such a bad look from someone who clearly knows better. I don’t expect the author to take the most charitable position, but one of intellectual honesty would be nice. To pretend there aren’t, or perhaps to ignore, those out there applying technological advancement, including current AI, in education in thoughtful, meaningful, and beneficial (even if challenging to quantify) ways, is obtuse. To decide there isn’t the possibility of those things being true, given their exclusion, is to do the same head-burying he ridicules others for.
…
> After I got her feedback, I finally asked ChatGPT if generative AI could be considered a gimmick in Ngai’s sense. I did not read its answer carefully. Whenever I see the words cascade down my computer screen, I get a sinking feeling. Do I really have to read this? I know I am unlikely to find anything truly interesting or surprising, and the ease with which the words appear really does cheapen them.
It may well have been the author’s point, but the disdain for the technology that drips from sentences like these, which are rife throughout, taints any appreciation for the argument they’re trying to make, and I’m really trying to take it in good faith. Knowing they come in with such strongly held preconceived notions makes me reflexively question their own introspection before putting pen to paper.
Ultimately, are you writing to convince me, or yourself, of your point?
> Ultimately, are you writing to convince me, or yourself, of your point?
I like that you point out here that the author clearly has a strong opinion, and then immediately say that the act of expressing that opinion may suggest that they do not hold that opinion at all.
By this logic, are you trying to convince us that you don’t love the way this article is written, or are you trying to convince yourself of that?
Rather, what I hoped to articulate was a sense that being able to viscerally feel that an author holds a very obvious position from the outset of an article, and then not seeing them make even the faintest attempt to proactively argue their point against the most obvious, the easiest, criticisms, comes across as lazy.
I expect arguments made in good faith, and this wasn’t that.
Anything else is just aesthetics and personal preference.
I ask genuinely. I want to understand your position better here.
Also my original gripe was very clear. “Are you trying to convince yourself?” indicates that the author didn’t believe what they wrote. And your reasoning here for mentioning that is that they wrote it. It is a no-win scenario in which another person literally couldn’t hold an opinion that doesn’t conform to your aesthetic. That is insane!
That said, I disagree with the idea that it’s merely about aesthetics. (Hegel’s dialectic, for example, isn’t just a stylistic choice; its structure actively shapes meaning and allows for a better synthesis.)
I don't think the author wants to engage and have meaningful conversations; his position is clear.
A meaningful conversation, at least as I see it, involves acknowledging both the pros and cons of any position. Even if you believe the pros outweigh the cons (which is a subjective judgment), you should still be able to clearly enumerate the cons. That is an analytical approach.
The FOMO tech people are having with AI is out of control - everyone assumes that everyone else is having way more success with it than they are.
The difference, it seems, is that I’ve been looking at these tools and thinking how I can use them in creative ways to accomplish a goal - and not just treating it like a magic button that solves all problems without fine-tuning.
To give you a few examples:
- There is something called the Picture Superiority Effect, which states that humans remember images better than merely words. I have been interested in applying this to language learning – imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now with Midjourney or other image creators, I can functionally make unlimited unique images for $30 a month. This is a massive new development that wasn’t possible before.
- I have been working on a list of AI tools that would be useful for “thinking” or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check if this idea is original or not; rephrase this argument as a series of Socratic dialogues. And so on. This kind of thing has been immensely helpful in evaluating my own personal essays and ideas, and prior to AI tools it, again, was not really possible unless I hired someone to critique my work.
The key for both of these example use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to do all the work for you.
The one area where I would agree AI and ML tools have been surprisingly good is art generation.
But then, I see the flood of AI-generated pictures and feel, overall, that it has made an already troublesome world even more troublesome. I am starting to see "the picture is AI-made, or AI-modified" excuses coming into the mainstream.
A picture now has lost all meaning.
> be useful for “thinking” or analyzing a piece of writing
This, I am highly skeptical of. If you train an LLM on text that says "trains can fly", then it spits that out. They may be good as summarizing or search tools, but claiming they are "thinking" and "analyzing"? Nah.
And I meant myself thinking about and analyzing a piece of writing with the help of ChatGPT, not ChatGPT itself “thinking.” (Although I frankly think whether the machine is thinking is somewhat of an irrelevant point.) Because I have absolutely gained tons of new insights and knowledge by asking ChatGPT to analyze an idea and suggest similar concepts.
Are you going to test them by building something or using these concepts in conversation with specialists?
And likewise, using AI to critique a piece of writing is already “testing it,” as it definitely makes useful suggestions.
I don't think that's an accurate summary of this article. Are you basing that just on the title, or do you fundamentally disagree with the author here?
> We call something a gimmick, the literary scholar Sianne Ngai points out, when it seems to be simultaneously working too hard and not hard enough. It appears both to save labor and to inflate it, like a fanciful Rube Goldberg device that allows you to sharpen a pencil merely by raising the sash on a window, which only initiates a chain of causation involving strings, pulleys, weights, levers, fire, flora, and fauna, including an opossum. The apparatus of a large language model really is remarkable. It takes in billions of pages of writing and figures out the configuration of words that will delight me just enough to feed it another prompt. There’s nothing else like it.
In my own experience, that is absolute nonsense, and I have gotten immense amounts of value from it. Most of the critical arguments (like the link) are almost always from people that use them as basic chatbots without any sort of deeper understanding or exploration of the tools.
Please consider that there are some very clever people out there. I can respond to your point about languages personally - I speak three, and have lived and operated for extended periods in two others which I wouldn't call myself "fluent" in as it's been a number of years. I would not use an LLM to generate images for each word, as I have methods that I like already that work for me, and I would consider that a wasteful use of resources. I am into permacomputing, minimising resources, etc.
When I see you put the idea forward, I think, oh, neat, but surely it'd be much more effective if you did a 30s sketch for each word, and improved your drawing as you went.
In summary - do read the article, it's very good! You're responding to an imagined argument based on a headline, ignoring a nuanced and serious argument, by saying: "yeah, but I use it well, so?! It's not a gimmick then, for me!"
30 second sketches also are not nearly as effective as detailed images and would likely have dubious value in implementing the Picture Superiority Effect.
Nowhere did I say that people who write essays about AI being useless are idiots. That's your terminology, not mine. Merely that they lack imagination and creativity when it comes to exploring the potential of a new tool and instead just make weak criticisms.
1. In a couple of contexts, as a non-expert, I'm getting excellent use out of these LLM tools, because I'm imaginative and creative in my use of them.
2. I get such great use out of them, as a non-expert, in these areas, that any expert claiming they are gimmicks, is simply wrong. They just need to get more imaginative and creative, like me.
Am I misunderstanding you here? Is this really what you're saying?
The holes in the thinking seem obvious, if I may be blunt. I would suggest you ask an LLM to help you analyse it, but I think they're quite bad at that, as they are programmed to reflect your biases back at you in a positive way. The largest epistemic issue they have is probably that - it is only possible to overcome this tendency to placate the user if the user has great knowledge of their biases, an issue even the best experts face!
This isn’t that complicated. Someone wrote an article saying X is a gimmick and made a weak argument. I said no, in my experience that isn’t the case, and here are a few examples.
Your patronizing tone is pretty irritating and distracts from whatever point you’re trying to make. But I’m not sure you’re actually engaging in good faith here, so I think that’s the end of this conversation.
I just want to point out that this is precisely how they described your perspective. It’s hard to see how you find their tone patronizing given they’re just explaining their point of view. It’s worth noting that others may find your words to be patronizing:
> These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination.
> Most of the critical arguments (like the link) are almost always from people that use them as basic chatbots without any sort of deeper understanding or exploration of the tools.
> I said that people making blanket statements about LLMs being gimmicks need to be more creative.
Or, you know, just imagine something. Which is what I have done for learning to speak 3 languages fluently other than my mother tongue.
Either that or different people have different views on life, tech, &c. If you're not going through life as some sort of minmax RPG, not using an LLM to "optimise" every single aspect of your life is perfectly fine. I don't need an LLM to summarise an article; I want to read it during my 15 min coffee time in the morning. I don't need an LLM to tell me how my text should be rewritten to look like the statistical average of a good text...
If you're not part of a very small subset of tech enthusiasts or companies directly profiting from it, it really isn't that big of a deal.
It was written after the author attended a workshop where the presenter tried and seemingly failed to show how AI was able to write essays when prompted with the word "innovative" or produce a podcast on a book. The author also mentions an article by a university lecturer who claims that "Human interaction is not as important to today’s students" and that AI will basically replace it.
The subtitle of the article is "AI cannot save us from the effort of learning to live and die."
In other words, the article is about a specific trend in higher education to present AI as some sort of revolutionary tool that will completely change the way students learn.
The author disagrees and contends that pretending to replace most human interactions with genAI is a gimmick, and pretending that AI can make learning effortless is lying to students.
The way you use AI for learning language is certainly imaginative but you are not claiming that it replaces the quality of interacting with native speakers or possibly immersion in the culture. Your tool may be useful and clever but claiming it makes learning language effortless (as some AI apologists in education might) would make it a gimmick.
This Swiss Army knife is totally useless!
You didn't need AI for the things you list, and using AI has lowered the credibility and quality of your work.
I don't use any AI in my work. Which makes my work worth scanning by AI, but not yours.
[1] https://hedgehogreview.com/issues/markets-and-the-good/artic...
For simple coding questions it is also very good because it takes your current context into account. It is basically a smarter "copy-paste from Stack Overflow".
At least for now, LLMs do not replace any meaningful work for me, but they replace Google more and more.
Does the author know he can include "be concise" in the prompt if that's what he wants?
I do agree with the author that this whole thing is challenging. Frankly, I wouldn't like to be a youngster nowadays: so much information, so many options, such a flood of success tips that make you feel like shit, so easy not to learn anything, so much feeling that discipline and hard work are "pointless", such a wide distance to excelling at something; just summarizing the available avenues is a project on its own.
Anyway, there is no turning back. What we see now as the best models will get replaced quickly by better ones, and that change will only accelerate with time. I'm still positive: I think we'll find a way to be happy in this completely new reality.
I like it at work. The time from business idea to PoC has shrunk so much that it's easier than ever to win business (not sure for how long, but that's today), and agentic coding helps a lot with documentation, tests, and finding medium-obvious mistakes that sit above the linter/typechecker; that part is amazing as well. We'll continue focusing on the low effort/high value tasks it currently excels at and keep expanding from there.
At the same time we all know where it's going and it makes me uneasy as well.
I don't think I have anything substantial to add, just advice: try to enjoy the ride, take it easy, and keep in mind the well-being of your colleagues. There is a sweet spot for using it. Don't overuse it (don't fight with it where it struggles), don't under-use it either (don't say all of it is shit and you won't touch it ever), and don't abuse it (do not drop LLM output on others to review without knowing what you're pushing).
Seriously, I understand saying something like this about crypto or whatever meme of the day, but even current LLMs are literal magic. Instead of reading 10 pages of filler and wasting my time, ChatGPT can summarize them as:
> Malesic argues that AI hype—especially in education—is a shallow gimmick: it overpromises revolutionary change but delivers banal, low-value outputs. True teaching thrives on slow, sacrificial human labor and deep discussion, which no AI shortcut can replicate.
Hardly any revolutionary thought.
Try again with any SOTA reasoning model (GPT-o3, Gemini 2.5 Pro, Grok 3).
Out of curiosity, I used ChatGPT to make a summary of “FreeBSD vs Linux comparison”, and it came out as extremely fair and to the point, in my opinion.
https://www.today.com/money/are-smartphones-making-us-lazy-t...
Etc., etc.
If LLMs were even 50% as good as they're made out to be, we'd see huge productivity increases across the board. We simply don't, and it's been almost three years since ChatGPT was released. Where is the productivity increase? Where is the extra wealth generated?
Definitely worth investing billions and wasting insane amounts of energy... idk how people hold both "this is a revolution!" and "it kinda summed up a 10-page PDF that I couldn't be bothered to read in the first place" without noticing the insane amount of mental gymnastics you have to go through to reconcile these two ideas.
Not even mentioning the millions of new LLM-generated pages that are now polluting the web.
An educator's job (like an actual teacher's) should be to help people (key word) progress and become smarter humans.
Deal with progress.
ChatGPT has allowed me to write 50%+ faster with 50%+ better quality. It’s been one of the largest productivity boosts in the last 10+ years.
I think most people in here know at least a few ways they can use AI that is genuinely useful to them. I suppose if you're _very_ positive about AI, then it's good to have a polarized negative article to make us remember all the ways AI is being overpromised. I'm definitely very excited about finding new ways to apply AI, and that explorative phase can come off as trying to sell snake oil. We have to be realistic and acknowledge this is a technology that can produce content faster than we can consume it. Content that takes effort to distinguish useful vs. not.
All that said I disagree with the idea that the only way "to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds" is via teaching and not via technologies such as AI. The education system certainly failed me and I found a lot of joy in technology instead. For me it was the start of the internet, but I can only imagine for many today it will be the start of AI.
The only thing that really comes to mind is making something in a domain where I have almost no prior expertise.
But then ChatGPT is so frequently wrong, and so frequently repeatedly wrong when it tries to "correct" problems when pointed out, that even then I always have to go and read relevant documentation and re-write the thing regardless. Maybe there's some slight usefulness here in giving me a starting point, but it's marginal.
- Turning a lot of data into a small amount of data: extracting facts from a text, translating and querying a PDF, cleaning up a data dump such as getting a clean Markdown table from the copy/pasted HTML source of a web page, etc. (IMO it often goes wrong when you go the other way and try to turn a small prompt into a lot of data; see the sketch after this list)
- Creating illustrations representing ephemeral data (eg my daily weather report illustration which I enjoy looking at every day even if the data it produces is not super useful: https://github.com/blixt/sol-mate-eink)
- Using Cursor to perform coding tasks that are tedious but I know what the end result should look like (so I can spend low effort verifying it) -- it has an 80% success rate and I deem it to save time but it's not perfect
- Exploration of a topic I'm not familiar with (I've used o3 extensively while double checking facts, learning about laws, answering random questions that would be too difficult to Google, etc etc) -- o3 is good at giving sources so I can double check important things
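Here's a sketch of how I frame that first pattern, with a trivial shape-check on the output so verification stays cheap; the prompt wording and helper names are my own, hypothetical:

```python
# "Lots of data in, little data out": ask for a Markdown table, then
# sanity-check its shape before trusting it.
def table_prompt(raw_html: str) -> str:
    return ("Extract the tabular data from this HTML as a clean Markdown "
            "table. Output only the table, no commentary.\n\n" + raw_html)

def looks_like_markdown_table(text: str) -> bool:
    rows = [ln for ln in text.strip().splitlines() if ln.startswith("|")]
    # Header, separator, at least one data row, all with equal column counts.
    return len(rows) >= 3 and len({ln.count("|") for ln in rows}) == 1
```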
Beyond this, AI is also a form of entertainment for me, like using realtime voice chat, or video/image generation to explore random ideas and seeing what comes out. Or turning my ugly sketches into nicer drawings, and so forth.
It’s as if the author himself didn’t have his own thoughts, and borrowed some sentences others made to write this piece.
I don’t know what kind of writing style this is.
If you are writing an opinion, why not devote some effort to articulating your own thoughts, or at the very least, provide reasons why the other people the author relies on to make his point are correct?
It's a lot more balanced compared to the doomy attitude in the primary post.
But, and I think this is why some of us feel ChatGPT is poor: asking in the way that guides a human or a search engine makes ChatGPT produce worse answers(!).
If you say "What can be wrong with X? I'm pretty sure it's not Y or Z, which I ruled out; could it be Q or perhaps W?", then ChatGPT and other language models quickly reinforce your beliefs instead of challenging them. They would rather give you an incorrect reason why you are right than point out an additional problem or challenge your assumptions. If LLMs could get over the bullshit problem, they would be so much better. Having confidence and being able to express it is invaluable. But somehow I doubt it's possible; if it were, they would be doing it already, as it's a killer feature. So I fear that it's somehow not achievable with LLMs? In which case the title is correct.
I find AI extremely useful; it was an easy sell for me to spend $20/month even without using it professionally for coding, and I'm the kind of person who avoids any type of subscription like the plague.
Even in the educational setting this article mostly focuses on, it can be super useful. Not everyone has access to mentors and scholars. I've saved a lot of time on typical family tech questions and troubleshooting by teaching them how to use it to try to solve their tech problems themselves.
In any case, even contemporary LLMs (as primitive as they will look in even a few months' time) are already pretty useful as assistants when e.g. writing software programmes. They ain't gimmicks. They are also useful as a more interactive addition to an encyclopedia, amongst other uses.
The article also conflates AI in general with LLMs. It's a common enough mistake to make these days, so I won't ding the author for that.
Summary of the article: contemporary LLMs aren't very useful for highfalutin liberal arts people (yet). (However they can already churn out the kind of essays and corporate writing that people do in practice.)