My own personal experience is that Gen AI is an amazing tool to support learning, when used properly.
Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.
I'm not holding my breath on this.
* 1990s: Internet access was rare. By 1995, only 14% of Americans were online.
* 2000: Approximately 43% of U.S. households had internet access.
* 2005: The number increased to 68%.
* 2010: Around 72% of households were connected.
* 2015: The figure rose to 75%.
* 2020: Approximately 93% of U.S. adults used the internet, indicating widespread household access.
1995 was when Windows 95 launched and, with its built-in dial-up networking support, allowed a "normal" person to easily get online. 1995 was the Netscape IPO, which kicked off the dot-com bubble. 1995 was when Amazon first launched their site.
I too am finding AI incredibly useful for learning. I use it for high-level overviews and to help guide me to resources (online formats and books) for deeper dives. Claude has so far proven to be an excellent learning partner; no doubt other models are similarly good.
But that doesn't mean I think my kids should primarily get K-12 and college education this way.
What is the purpose of education? Is it to learn, or to gain credentials that you have learned? Too much of education has become the latter, to the point we have sacrificed the former. Eventually this brings down both, as a degree gains a reputation of no longer signifying the former ever happened.
Or: the existing systems that check for learning before granting the degree (the credential meant to show an individual actually learned) were largely not ready for the impact of genAI, and teachers and professors have adapted poorly. Sometimes due to a lack of understanding of the technology, often due to their hands being tied.
GenAI used to cheat is a great detriment to education, but a student using genAI to learn can benefit greatly, as long as they have matured enough in their education to have the critical thinking needed to handle mishaps by the AI and to properly differentiate between when they are learning and when they are having the AI do the work for them (I don't say cheat here because some students will accidentally cross the line, and 'cheat' often carries a hint of mens rea). To a student mature enough and interested in learning more, genAI is a worthwhile tool.
How do we handle those who use it to cheat? How do we handle students who are too immature in their education journey to use the tool effectively? Are we ready to have a discussion about those learners who only care for the degree, for whom the education to earn the degree is just a means to an end? How do teachers (and increasingly professors) fight back against the pressure of systems that optimize on granting credentials and just assume the education will be behind those credentials (Goodhart's Law, anyone)? Those questions don't exist because of genAI, but genAI has greatly increased our need to answer them.
Since we're using anecdotes, let me leave one as well--it's been my experience that humans choose the path of least resistance. In the context of education, I saw a large percentage of my peers during K-12 do the bare minimum to get by in the classes, and in college I saw many resorting to Chegg to cheat on their assignments/tests. In both cases I believe it was the same motivation--half-assing work/cheating takes less effort and time.
Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.
But wait, this isn't an anecdote, it's already happening! Here's an excellent article that details the damage these tools are already causing to our students https://www.404media.co/teachers-are-not-ok-ai-chatgpt/.
>[blank] is an amazing tool ... when used properly
You could say the same thing about a myriad of controversial things that currently exist. But we don't live in a perfect world--we live in a world where money is king, and oftentimes what makes money is in direct conflict with utilitarianism.
I think schools are going to have to very quickly re-evaluate their reliance on "having done homework" and using essays as evidence that a student has mastered a subject. If an LLM can easily do something, then that thing is no longer measuring anything meaningful.
A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work.
Okay, how do they go about this?
Schools are already understaffed as is, how are the teachers suddenly going to have time to revamp the entire educational blueprint? Where is the funding for this revolution in education going to come from when we've just slashed the Education fund?
Until they come up with a semblance of a plan, teachers will bear an undue burden: slogging through automated schoolwork assignments, dealing with cheating, and handling children who lack the critical faculties to be well-functioning members of society.
It's all very depressing.
An automobile can go quite far and fast but that doesn't mean the flabbiness and poor fitness of its occupants isn't a problem.
how can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI? The same way social media and mobile apps made the internet easy, mindless clicking, LLMs make school a mechanical task. It feels like your argument is similar to LLMs helping experienced, senior developers code more effectively, while eliminating many chances to grow the skills needed to join that group. Sounds like you already know how to learn and use AI to enhance that. My 12-yr-old is not there yet and may never get there.
Wouldn't classroom exams enforce that though? Like, imagine LLMs as an older sibling or parent who would help pupils cheat on essays.
For every person/child that just wants the answer there will be at least some that will want to know why. And these endlessly patient machines are very good at feeding that curiosity.
You're correct, but let's be honest here: the majority will use it as a means to get their homework over and done with so they can return to TikTok. Is that the society we want to cultivate?
>And these endlessly patient machines are very good at feeding that curiosity
They're also very good at feeding you factually incorrect information. In comparison, a textbook was crafted by experts in their field, and is often fact checked by many more experts before it becomes published.
So the key thing to get across to kids is that argument by authority is an untrustworthy heuristic at best. AI slop can even help with this.
Input stream = output from the perspective of the consumer. Things come out of this stream that I can programmatically react to. Output stream = input from the perspective of the producer. This is a stream you put stuff into.
…so when this article starts “My input stream is full of it…” the author is saying they’re seeing output of fear and angst in their feeds.
Am I alone in thinking this is a bit unintuitive?
Your input is ofc someone else's output, and vice versa, but you want to keep your description and thoughts to one perspective, and in a first-person blog that's clearly the author's POV, right?
Is there a glimpse of the next hype train we can prepare to board once AI gets dulled down? This has basically made the site unusable.
Just look at the GitHub product being transformed into absolute slop central, it's wild. GitHub Universe was exclusively focused on useless LLM additions.
Is it written in Rust?
I use ChatGPT as an RNG of math problems to work through with my kid sometimes.
Ptacek has spent the past week getting dunked on in public for that article. I don't think it lends you a lot of credence to align with it.
> If you’re interested in that thinking, here’s a sample; a slide deck by a Keith Riegert for the book-publishing business which, granted, is a bit stagnant and a whole lot overconcentrated these days. I suspect scrolling through it will produce a strong emotional reaction for quite a few readers here. It’s also useful in that it talks specifically about costs.
You're not wrong here. I read the deck and the word that comes to mind is "disgusting". Then again, the morally bankrupt have always done horrible things to make a quick buck — AI is no different.
It undermines the author's position of being "moderate" if they align with perhaps the most divisive and aggressively written pro-AI puff piece doing the rounds.
> Developers who don't embrace AI tools are going to get left behind.
I'm not sure how to respond to this. I am doubtful a comment on Hacker News will change your mind, but I'd ask you to think about two questions.
If AI is going to be as revolutionary in our industry as other changes of the past, like web or mobile, then how would a similar statement sound around those? Is saying "Developers who don't embrace mobile development are going to get left behind" a sensible statement? I don't think so, even with how huge mobile has been. Same with other big shifts. "Developers who don't embrace microservice architecture are going to get left behind"? Maybe more comparable, but equally silly. So, why would it be different than those? Do you think LLM tools are more impactful than any other change in history?
Second, if AI truly is as groundbreakingly revolutionary as you suggest, what happens to us? Maybe you'll call me a luddite, raging against the loss of jobs when confronted with automated looms, but you'll have to forgive me for not welcoming my own destruction with open arms.
A more apt comparison might be the arrival of IDEs and quality source control. Do you think developers (outside of niche cases) working out of text editors and rsyncing code to production are able to find jobs as easily as those who are well versed in using, e.g., modern language tooling + GitHub in a team environment? Because I've directly seen many such developers being turned down by screening and interviews; I've seen companies shed talent when they refused to embrace git while clinging to SVN and slow deployment processes; said talent would go on to join companies that were later IPOing in the same space for a billion+ while their former colleagues were laid off. To me it feels quite similar to those moments.
My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.
I'm happy to use Copilot to auto-complete, and ask a few questions of ChatGPT to solve a pointy TypeScript issue or debug something, but stepping back and letting Claude or something write whole modules for me just feels sloppy and unpleasant.
Same for me. But maybe that's ultimately a UX issue? And maybe things will straighten out once we figure out how to REALLY do AI-assisted software development.
As an analogy: most people wouldn't want to dig through machine code/compiler output. At least not without proper tooling.
So: Maybe once we have good tools to understand the output it might be fun again.
(I guess this would include advances in structuring/architecting the output)
As it is, giving high-level directives to an LLM and debugging the output seems like a waste of my time and a hindrance to my learning process. But that's how professional coding will be done in the near future. 100% human written code will become like hand-writing a business letter in cursive: something people used to be taught in school, but no one actually does in the real world because it's too time-consuming.
Ultimately, the business world only cares about productivity and what the stopwatch says is faster, not whether you enjoy or learn from the process.
The reality is that we've seen incremental and diminishing returns, and the promises haven't been met.
My analogy is GUI builders from the late 90s that let you drag elements around, then generated a pile of code. They worked sometimes, but God help you if you wanted to do something the builder couldn't do, and had to edit the generated code.
Looking at compiler output is actually more pleasant. You profile your code, find the hot spots, and see that something isn't getting inlined, vectorized, etc. At that point you can either convince the compiler to do what you want or rewrite it by hand, and the task is self-contained.
But that doesn't mean that it's not a gradient, and LLM output may be meaningfully harder to reason about than compiler output, and that may matter.
My experience has been that it’s difficult to mostly vibe with an agent, but still be an active participant in the codebase. That feels especially true when I’m using tools, frameworks, etc that I’m not already familiar with. The vibing part of the process simultaneously doesn’t provide me with any deeper understanding or experience to be able to help guide or troubleshoot. Same thing for maintaining existing skills.
Now, for all the executives who are trying to force-feed their engineering team to use AI for everything, this is the result. Your engineering staff becomes equivalent to a mathematician who has never actually done a math problem, just read a bunch of books and trusted what was there. Or a math tutor for your kid who "teaches" by doing your kid's homework for them. When things break and the shit hits the fan, is that the engineering department you want to have?
Unless I'm stuck while experimenting with a new language or finding something in a library's documentation, I don't use AI at all. I just don't feel the need for it in my primary skill set because I've been doing it so long that it would take me longer to get AI to an acceptable answer than doing it myself.
The idea seemed rather offensive to him, and I'm quite glad I didn't go to work there, or anywhere that using AI is an expectation rather than an option.
I definitely don't see a team that relies on it heavily having fun in the long run. Everyone has time for new features, but nobody wants to dedicate time to rewriting old ones that are an unholy mess of bad assumptions and poorly understood.
Even though there are still private whispers of "just keep doing what you're doing no one is going to be fired for not using AI", just the existence of the top down mandate has made me want to give up and leave
My fear is that this is every company right now, and I'm basically no longer a fit for this industry at all
Edit: I'm a long way from retirement unfortunately so I'm really stuck. Not sure what my path forward is. Seems like a waste to turn away from my career that I have years of experience doing, but I struggle like crazy to use AI tools. I can't get into any kind of flow with them. I'm constantly frustrated by how aggressively they try to jump in front of my thought process. I feel like my job changed from "builder" to "reviewer" overnight and reviewing is one of the least enjoyable parts of the job for me
I remember an anecdote about Ian McKellen crying on a green screen set when filming The Hobbit, because talking to a tennis ball on a stick wasn't what he loved about acting.
I feel similarly with AI coding I think
Like, say there's a catalog of 1000 of the most common enterprise (or embedded, or UI, or whatever) design patterns, and AI is good at taking your existing system, your new requirements, identifying the best couple design patterns that fit, give you a chart with the various tradeoffs, and once you select one, are able to add that pattern to your existing system, with the details that match your requirements.
Maybe that'd be cool? The system/AI would then be able to represent the full codebase as an integration of various patterns, and an engineer, or even a technical PM, could understand it without needing to dive into the codebase itself. And hopefully since everything is managed by a single AI, the patterns are fairly consistent across the entire system, and not an amalgamation of hundreds of different individuals' different opinions and ideals.
Another nice thing would be that huge migrations could be done mostly atomically. Currently, things like, say, adding support in your enterprise for, say, dynamic authorization policies takes years to get every team to update their service's code to handle the new authz policy in their domain, and so the authz team has to support the old way and the new way, and a way to sync between them, roughly forever. With AI, maybe all this could just be done in a single shot, or over the course of a week, with automated deployments, backfill, testing, and cleanup of the old system. And so the authz team doesn't have to deal with all the "bugging other teams" or anything else, and the other teams also don't have to deal with getting bugged or trying to fit the migration into their schedules. To them it's an opaque thing that just happened, no different from a library version update.
With that, there's fewer things in flight at any one time, so it allows engineers and PMs to focus on their one deliverable without worrying how it's affecting everyone else's schedules etc. Greater speed begets greater serializability begets better architecture begets greater speed.
So, IDK, maybe the end game of AI will make the job more interesting rather than less. We'll see.
There is nothing a VC loves more than the idea of extracting more value from people without investing more into them
I think it's way more basic. Much like recruiters calling me up and asking about 'kubernetes', they are just trying to get a handle on something they don't really understand. And right now all signs point to 'AI' as the handle that people should pull on to get traction in software.
It is incredibly saddening to me that people do pattern matching and memorize vocabulary instead of trying to understand things even at a basic level so they can reason about it. But a big part of growing up was realizing that most people don't really understand or care to understand things.
Cursor is great for fuzzy search across a legacy project. Requests like "how do you do X here" can help a lot while fixing an old bug.
Or adding documentation. Commit descriptions generated from the diff. Or adding Javadoc to your methods.
Whatever step in your workflow consists of rewriting existing text rather than creating anything new - use Cursor or a similar AI tool.
I've probably written 50 over the last two years for relatively routine stuff that I'd either not do (wasn't that important) or have done via other means (schlepping through aws cli docs comes to mind) at 2x the time. I get little things done that I'd otherwise have put off. Same goes for IaC stuff for cloud resources. If I never have to write Terraform or CloudFormation again, I'd be fine with that.
Autocomplete is hit or miss for me--VS Code is pretty good with Copilot, JetBrains IDEs are absolutely laughably bad with Copilot (typically making obvious syntax errors on any completion for a function signature, constructor, etc.) to the point that I disabled it.
I've no interest in any "agent" thingys for the time being. Just doesn't interest me, even if it's "far better than everyone" or whatever.
I am the opposite. After a few decades of writing code, it wasn't "fun" to write yet another file parser or hook widget A to API B -- which is >99% of coding today. I moved into product management because while I still enjoy building things, it's much more satisfying/challenging to focus on the higher-level issues of making a product that solves a need. My professional life became writing specs, and reviewing code. It's therefore actually kind of fun to work with AI, because I can think technically, but I don't have to do the tedious parts that make me want to descend into a coma.
I couldn't care less if I'm writing a spec for a robot or writing a spec for a junior front-end engineer. They're both going to screw up, and I'm going to have to spend time explaining the problem again and again...at least the robot never complains and tries really hard to do exactly what I ask, instead of slacking off, doing something more intellectually appealing, getting mired in technical complexity, etc.
If this is your experience of programming, then I feel for you, my dude, because that sucks. But it is definitely not my experience of programming. And so I absolutely reject your claim that this experience represents "99% of programming" -- that stuff is rote and annoying and automate-able and all that, no argument, but it's not what any senior-level engineer worth their salt is spending any of their time on!
Similar to the differences between an art collector and a painter. One wants the ends, the other desires the means.
I enjoy writing code. I just don't enjoy writing code that I've written a thousand times before. It's like saying that Picasso should have enjoyed painting houses for a living. They're both painting, right?
(to be painfully clear, I'm not comparing myself to Picasso; I'm extending on your metaphor.)
Well. Maybe we have to agree to disagree but I think it makes mistakes far more frequently than I do
Even if it makes mistakes exactly as often as I do, making 100x as many mistakes in the same amount of time seems like it would be absolutely impossible to keep up with.
I also love programming behaviours and interactions, just not creating endless C# classes and looking at how to implement 3D math
After a long day at the CRUD factory, being able to vibe code as a hobby is fun. Not super productive, but it's better than the alternative (scrolling reels or playing games)
I know a lot of folks would say that's what search & replace is for, but it's far easier to ask the bot to do it, and then check the work.
Forgive me for being dense, but isn't it just clicking the "rename" button on your IDE, and letting it propagate the change to all definitions and uses? This already existed and worked fine well before LLMs were invented.
The far more common situation is that I'm refactoring something, and I realize that I want to make some change to the semantics or signature of a method (say, the return value), and now I can't just use search w/o also validating the context of every change. That's annoying, and today's bots do a great job of just handling it.
Another one, I just did a second ago: "I think this method X is now redundant, but there's a minor difference between it, and method Y. Can I remove it?"
Bot went out, did the obvious scan for all references to X, but then evaluated each call context to see if I could use Y instead.
(But even in the case of search & replace, I've had my butt saved a few times by agent when it caught something I wasn't considering....)
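For what it's worth, here's a minimal TypeScript sketch (all names invented for illustration) of the kind of signature change meant above, where a plain textual search for the method name finds every call site but can't tell you how each one needs to change:

    // Before: lookup() returned User | null; after: it returns a result object.
    // A text search for "lookup(" locates the callers, but each caller's
    // control flow has to be re-validated by hand (or by the bot).
    interface User { id: string; name: string }
    type LookupResult = { ok: true; user: User } | { ok: false; reason: string };

    class UserDirectory {
      private users = new Map<string, User>();

      // Old signature: lookup(id: string): User | null
      lookup(id: string): LookupResult {
        const user = this.users.get(id);
        return user ? { ok: true, user } : { ok: false, reason: "not found" };
      }
    }

    // A call site that used to read `const u = dir.lookup(id); if (u) ...`
    // now needs a different shape of check, not just different text:
    function greet(dir: UserDirectory, id: string): string {
      const result = dir.lookup(id);
      return result.ok ? `Hello, ${result.user.name}` : `Unknown user (${result.reason})`;
    }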
Regardless of what your definition of horrible and boring happens to be, just being able to tell the bot to do a horrible boring thing and having it done with like a junior level intelligence is so experience enhancing that it makes coding more fun.
People should try this kind of coding a couple times just because it's an interesting exercise in figuring out what parts of coding are important to you.
Yeah this is for sure true, but it's probably true in degrees.
I think there was even a study or something (from GitHub maybe) about the frequency of languages and how there were far more commits in say Rust on weekends than weekdays (don't quote me on this).
Plenty of people like programming but really don't find yet-another-enterprise-CRUD-with-React-front-end thing to be thrilling, so they will LLM-pasta it to completion but otherwise would have fun hacking away in langs/stuff they like.
I identify with that (hypothetical) crowd.
Do this a few times and you start to realize it is kind of worse than just being in the driver's seat for the coding right from the start. For one thing, when you jump in, you are working with code that is probably architected quite differently from the way you normally do it, and you have not developed the deep mental model that is needed to work with the code effectively.
Not to say the LLMs are not useful, especially in agent mode. But the temptation is always to trust and task them with more than they can handle. Maybe we need an agent that limits the scope of what you can ask it to do, to keep you involved at the necessary level.
People keep thinking we are at the level where we can forget about the nitty gritty of the code and rise up the abstraction level, when this is nothing close to the truth.
[1] Source: me last week trying really hard to work like you are talking about with Claude Code.
You're assuming that I haven't. Yes, sometimes you have to do it yourself, and the people who are claiming that you can replace experienced engineers with these are wrong (at least for now, and for non-trivial problems).
> Do this a few times and you start to realize it is kind of worse than just being in the driver's seat for the coding right from the start. For one thing, when you jump in, you are working with code that is probably architected quite differently from the way you normally do it, and you have not developed the deep mental model that is needed to work with the code effectively.
Disagree. There's not a single piece of code I've written using these that I haven't carefully curated myself. Usually the result (after rounds of prompting) is smaller, significantly better, and closer to my original intended design than what I got out of the machine on first prompt.
I still find them to be a significant net enhancement to my productivity. For me, it's very much like working with a tireless junior engineer who is available at all hours, willing to work through piles of thankless drudgery without complaint, and also codes about 100x faster than I do.
But again, I know what I'm doing. For an inexperienced coder, I'm more inclined to agree with your comment. The first drafts that these things emit are often pretty bad.
I think (at least by the original definition[0]) this is not vibe coding. You aren't supposed to be reviewing the code, just execute and pray.
[0]: https://xcancel.com/karpathy/status/1886192184808149383
1 - Using coding tools in a context/language/framework you're already familiar with.
This one I have been having a lot of fun with. I am in a good position to review the AI-generated code, and also examine its implementation plan to see if it's reasonable. I am also able to decompose tasks in a way that the AI is better at handling vs. giving it vague instructions that it then does poorly on.
I feel more in control, and it feels like the AI is stripping away drudgery. For example, for a side project I've been using Claude Code with an iOS app, a domain I've spent many years in. It's a treat - it's able to compose a lot of boilerplate and do light integrations that I can easily write myself, but find annoying.
2 - Using coding tools in a context/language/framework you don't actually know.
I know next to nothing about web frontend frameworks, but for various side projects wanted to stand up some simple web frontends, and this is where AI code tools have been a frustration.
I don't know what exactly I want from the AI, because I don't know these frameworks. I am poorly equipped to review the code that it writes. When it fails (and it fails a lot) I have trouble diagnosing the underlying issues and fixing it myself - so I have to re-prompt the LLM with symptoms, leading to frustrating loops that feel like two cave-dwellers trying to figure out a crashed spaceship.
I've been able to stand up a lot of stuff that I otherwise would never have been able to. But I'm 99% sure the code is utter shit, and I'm not in a position to really quantify or understand the shit in any way.
I suppose if I were properly "vibe coding" I shouldn't care about the fact that the AI produced a katamari ball of code held together by bubble gum. But I do care.
Anyway, for use case #1 I'm a big fan of these tools, but it's really not the "get out of learning your shit" card that it's sometimes hyped up to be.
Once I've done that and asked a few follow-up questions, I feel much better diving into the generated code.
I use Cursor / ChatGPT extensively and am ready to dip into more of an issue / PR flow but not sure what people are doing here exactly. Specifically for side projects, I tend to think through high level features, then break it down into sub-items much like a PM. But I can easily take it a step further and give each sub issue technical direction, e.g. "Allow font customization: Refactor tailwind font configuration to use CSS variables. Expose those CSS variables via settings module, and add a section to the Preferences UI to let the user pick fonts for Y categories via dropdown; default to X Y Z font for A B C types of text".
Usually I spend a few minutes discussing w/ ChatGPT first, e.g. "What are some typical idioms for font configuration in a typical web / desktop application". Once I get that idea solidified I'd normally start coding, but could just as easily hand this part off for simple-ish stuff and start ironing out the next feature. In the time I'd usually have planned the next 1-2 months of side project work (which happens, say, in 90 minute increments 2x a week), the Agent could knock out maybe half of them. For a project I'm familiar with, I expect I can comfortably review and comment on a PR with much less mental energy than it would take to re-open my code editor for my side project, after an entire day of coding for work + caring for my kids. Personally I'm pretty excited about this.
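To make the font-customization sub-issue above concrete, here's a rough sketch of the CSS-variable approach it describes, assuming a Tailwind 3-style config (the file names and settings function are hypothetical, not from the comment):

    // tailwind.config.ts -- font families resolved through CSS variables
    import type { Config } from "tailwindcss";

    export default {
      content: ["./src/**/*.{ts,tsx,html}"],
      theme: {
        extend: {
          fontFamily: {
            sans: ["var(--font-body)", "system-ui", "sans-serif"],
            heading: ["var(--font-heading)", "serif"],
          },
        },
      },
    } satisfies Config;

    // settings.ts -- what a Preferences UI dropdown would call to swap fonts
    // app-wide without touching any component styles.
    export function applyFontPreference(category: "body" | "heading", family: string): void {
      document.documentElement.style.setProperty(`--font-${category}`, family);
    }

The appeal of framing the sub-issue this way is that the agent's change stays local: components keep using the same utility classes, and only the variables move.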
I almost enjoy it. It's kind of nice getting to feel like management for a second. But the moment it hits a bug it can't fix and you have to figure out its horrible mess of code any enjoyment is gone. It's really nice for "dumb" changes like renumbering things or very basic refactors.
It went south immediately. It was confused about the differences between Tailwind 3 and 4, leading to a broken setup. It wasn’t able to diagnose the problem but just got more confused even with patient help from me in guiding it. Worse, it was unable to apply basic file diffs or deletes reliably. In trying to diagnose whether this is a known issue with Cursor, it decided to search for bug reports - great idea, except it tried to search the codebase for it, which, I remind you, only contained code that it had written itself over the past half hour or so.
What am I doing wrong? You read about people hyping up this technology - are they even using it?
EDIT: I want to add that I did not go into this antagonistically. On the contrary, I was excited to have a use case that I thought must be a really good fit.
I'm seeing that the people hyping this up aren't programmers. They believe the reason they can't create software is that they don't know the syntax. They whip up a clearly malfunctioning and incomplete app with these new tools and are amazed at what they've created. The deficiencies will sort themselves out soon, they believe. And then programmers won't be needed at all.
I have the same issue with Svelte 4 vs 5. Adding some notes to the prompt to be used for that project helps, sort of.
The first project was a simple touch based control panel that communicates via REST/Websocket and runs a background visual effect to prevent the screen burn-in. It took a couple of days to complete. There were often simple coding errors but trivial enough to fix.
The second is a 3D wireframe editor for distributed industrial equipment site installations. I started by just chatting with o3 and got the proverbial 80% within a day. It includes orbital controls, manipulation and highlighting of selected elements, property dialogs. Very soon it became too unwieldy for the laggard OpenAI chat UI so I switched to Codex to complete most of the remaining features.
My way with it is mostly:
- ask for no fancy frameworks: my projects are plain JavaScript, which I don't really know that well; it makes no sense to pile on React and TypeScript, which I'm even less familiar with
- explain what I want by defining data structures I believe are the best fit for internal representation
- change and test one thing at a time, implement a test for it
- split modules/refactor when a subsystem gets over a few hundred LOC, so that the reasoning can remain largely localized and hierarchical
- make o3 write an llm-friendly general design document and description of each module. Codex uses it to check the assumptions.
As mentioned elsewhere the code is mediocre at best and it feels a bit like when I've seen a C compiler output vs my manually written assembly back in the day. It works tho, and it doesn't look to be terribly inefficient.
I managed to get it to do one just now, but it struggled pretty hard, and still introduced some mistakes I had to fix.
First, you might've been using a model like Sonnet 3.7, whose knowledge cutoff doesn't include Tailwind 4.0. The model should know a lot about the tech stack you mentioned, but it might not know the latest major revisions if they were very recent. If that is the case (you used an older model), then you should have better luck with a model like Sonnet 4 / Opus 4 (or by providing the relevant updated docs in the chat).
Second, Cursor is arguably not the top-tier hotness anymore. Since it's flat-rate subscription based, the default mode of it will have to be pretty thrifty with the tokens it uses. I've heard (I don't use Cursor) that Cursor Max Mode[0] improves on that (where you pay based on tokens used), but I'd recommend just using something like Claude Code[1], ideally with its VS Code or IntelliJ integration.
But in general, new major versions of sdk's or libraries will cause you a worse experience. Stable software fares much better.
Overall, I find AI extremely useful, but it's hard to know which tools and even ways of using these tools are the current state-of-the-art without being immersed into the ecosystem. And those are changing pretty frequently. There's also a ton of over-the-top overhyped marketing of course.
Wrong tool.
[1] Antiqua et Nova p. 105, cf. Rev. 13:15
https://www.vatican.va/roman_curia/congregations/cfaith/docu...
> Moreover, AI may prove even more seductive than traditional idols for, unlike idols that “have mouths but do not speak; eyes, but do not see; ears, but do not hear” (Ps. 115:5-6), AI can “speak,” or at least gives the illusion of doing so (cf. Rev. 13:15).
It quotes Rev. 13:15 which says (RSVCE):
> and it was allowed to give breath to the image of the beast so that the image of the beast should even speak, and to cause those who would not worship the image of the beast to be slain.
I think the unfortunate reality of human innovation is that too many people consider technological progress to always be good for progress's sake. Too many people create new tools, tech, etc. without really stopping to take a moment and think or have a discussion on what the absolute worst-case applications of their creation will be and how difficult it'd be to curtail that kind of behavior. Instead, any potential (before creation) and actual (when it's released) human suffering is hand-waved away as growing pains necessary for science to progress. Like those websites that search for people's online profiles based on image inputs, sold by their creators as a way to find long-lost friends or relatives when everyone really knows they're going to be swamped by people using them to doxx or stalk their victims, or AI photo generation models for "personal use" being used to deepfake nudes to embarrass and put down others. In many such cases the creators sleep easy at night with the justification that it's not THEIR fault people are misusing their platforms; they provided a neutral tool and are absolved of all responsibility. All the while they are making money or raking in clout fed by the real pain of real people.
If everyone took the time to weigh the impact of what they're doing even half as diligently as that above article (doesn't even have to be from a religious perspective) the world would be a lot brighter for it.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." -Jeffrey L. Goldblum when ILM showed him an early screening of Jurassic Park.
> In many such cases the creators sleep easy at night with the justification that it's not THEIR fault people are misusing their platforms, they provided a neutral tool and are absolved of all responsibility.
The age old question of gun control.
Funny to note that at least one inventor who contributed greatly to modern warfare (the creator of the Gatling gun) did seem to reflect on his future impact, but figured it'd go in the opposite direction: that a weapon that could replace a hundred soldiers with one would make wars smaller and less devastating, not more!
Writing code is a really fun creative process:
1. Conceive an exciting and useful idea
2. Comprehend the idea fully from its top to its bottom
3. Translate the idea into specific instructions utilizing known mechanics
4. Find the beautiful middleground between instruction and abstraction
5. Write lots and lots of code!
6. Find where your conception was flawed and fix it as necessary.
7. Repeat steps 2-6 until the thing works just as you dreamed or you give up.
It's maybe the most fun and exciting mixture of art and technology ever.
Using AI is the same as code-review or being a PM:
1. Have an ideal abstraction
2. Reverse engineer an actual abstraction from code
3. Compare the two and see if they match up
4. If they don't, ask the author to change or fix it until it does
5. Repeat steps 2-4 until it does
This is incredibly not fun, because it's not a creative process.
You're essentially just an accountant or calculator at this point.
That's the bigger issue in the whole LLM hype that irks me. The tacit assumption that actually understanding things is now obsolete, as long as the LLM delivers results. And if it doesn't we can always do yet another finetuning or try yet another magic prompt incantation to try and get it back on track. And that this is somehow progress.
It feels like going back to pre-Enlightenment times and collecting half-rationalized magic spells instead of having a solid theoretical framework that lets you reason about your systems.
There is a magic in understanding.
There is a different magic in being able to use something that you don't understand. Libraries are an instance of this. (For that matter, so is driving a car.)
The problem with LLMs is that you don't understand, and the stuff that it gives you that you don't understand isn't solid. (Yeah, not all libraries are solid, either. LLMs give you stuff that is less solid than that.) So LLMs give you a taste of the magic, but not much of the substance.
The luddites were not against progress or the technology itself. They were opposed to how it was used, for whose benefit, and for whose loss [0].
The AI-Luddite position isn't anti-AI; it's (among other things) anti mass copyright theft from creators to train something with the explicit goal of putting them out of a job, without compensation. All while producing an objectively inferior product but passing it off as a higher-quality one.
[0]: https://www.hachettebookgroup.com/titles/brian-merchant/bloo...
We need a catchy name.
It's new tech. We're all "giraffes on roller skates" whenever we start something new. Find out where you can use in your life and use it. Where you can't or don't want to, don't. Try to not get deterred by analysis paralysis when there's something that doesn't make sense. In time, you'll get it.
It confounds me how these people would trust the same companies who fueled the decay of social discourse via the internet with the creation of AI models which aim to encroach on every aspect of our lives.
How was any of this inevitable? Point me to which law of physics demanded we reach this state of the universe. These companies actively choose to train these models, and by framing their development as "inevitable" you are helping absolve them of any of the negative shit they have/will cause.
>figuring out how society evolves from here instead of complaining and trying to legislate away math
Could you not apply this exact logic to the creation of nuclear weaponry--perhaps the greatest example of tragedy of the commons?
>prevent honest people from using these tools while criminals freely make use of them
What is your argument here? Should we suggest that everyone learn how to money launder to even the playing field against criminals?
Gestures vaguely around at everything
Intelligence is intelligence, and we are beginning to really get down to the fundamentals of self-organization and how order naturally emerges from chaos.
> Could you not apply this exact logic to the creation of nuclear weaponry--perhaps the greatest example of tragedy of the commons?
Yes, I can. Access to information is one thing (it must be carefully handled, but information wants to be free, and there should be no law determining what one person can say to another, barring NDAs and government classification of national secrets, which doesn't include math and physics), but we absolutely have international treaties to limit nuclear proliferation, and we also have countries who do not participate in these treaties, or violate them, which illustrates my point that criminals will do whatever they want.
> Should we suggest that everyone learn how to money launder to even the playing field against criminals?
I have no interest in entertaining your straw men. You're intelligent enough to understand context.
Certainly. We should also teach them how phishing scams work, and about confirmation bias, high pressure sales tactics, phantom limbs, vote splitting, inflation, optical illusions, demagoguery, peer pressure, lotteries, both insurance and insurance fraud, and lots of other things work.
I disagree that my comment was negative at all. Many of those same people (not all) spend a lot of time making negative comments towards my work in AI, and tossing around authoritarian ideas of restriction in domains they understand, like art and literature, while failing to properly engage with the real issues such as intelligent mass surveillance and increased access to harmful information. They would sooner take these new freedom weapons out of the hands of the people while companies like Palantir and NSO Group continue to use them at scale.
> super-concerned about both the real and potential negatives that the rabid boosters won't even acknowledge
So am I, the difference is I am having a rational and not an emotional response, and I have spent a lot of time deeply understanding machine learning for the last decade in order to be able to have a measured, informed response.
> You don't have to "play the game" to make an impact, it's valid to try and challenge the math and change the rules too
I firmly believe you cannot ethically outlaw math, and this is part of why I have trouble empathizing with those who feel otherwise. People are so quick to support authoritarian power structures the moment it supposedly benefits them or their world view. Meanwhile, the informed are doing what they can to prevent this stuff from being used to surveil and classify humanity, and to find a balance that allows humans to coexist with artificial intelligence.
We are not falling prey to reactionary politics and disinformation, and we are not willing to needlessly expand government overreach and legislate away critical individual freedom in order to achieve our goals.
That's like saying you can't outlaw selling bombs in a store because it's "chemistry".
Or even for usage- can we not outlaw shooting someone with a gun because it is "projectile physics"?
I'm glad you do oppose Palantir - we're on the same side and I support what you're doing! - but I also think you're leaving the most effective solution on the table by ignoring regulatory options.
But for nuclear - there are certainly good uses for nuclear power, but it's scary! and powers evil world-ending bombs! and if it goes wrong people end up secretly mutated and irradiated and it's all so awful and we should shut it down now !!
And to be honest I don't know my own feelings on nuclear power or "good" AI either, but I do get it when people want to Shut it All Down Right Now !! Even if there is a legitimate case for being genuinely useful to real people.
Nowadays it's been a long time since my brain totally checked out on spelling. Everything I write in every case has spell check, so why waste neurons on spelling?
I fear the same will happen on a much broader level with AI.
I don't know of any of the research, but I suspect that teaching reading via "sight reading" over phonics is heavily detrimental to developing an intrinsic automatic sense of spelling.
It’s not angst to see students throughout the entire spectrum end up using ChatGPT to write their papers, summarize 3 paragraphs, and use it to bypass any learning.
It’s not angst to see people ask a question to an LLM and take what it says as gospel.
It’s not angst to understand the environmental impact of all this stupid fucking shit.
It’s not angst to see the danger in generative AI not only just creating slop, but further blurring the lines of real and fake.
It’s not angst to see the vast amount of non-consensual porn being generated of people without their knowledge.
Feel like I’m going fucking crazy here, just day after day of people bowing down at the altar and legit not giving a single fuck about what happens after rofl
This is a really wild and unpredictable time, and it's ok to see the problems looming and feel unsettled at how easily people are ignoring the potential oncoming train
I would suggest taking some time for yourself to distance yourself from this as much as you can for your own mental health
Ride this out as best you can until things settle down a bit. You aren't alone
The "math and capex" are inextricably intertwined with "the carbon". If these tools have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem, and we'll all be better off. If the tools have no net value at a market-clearing price for energy (as purported), then it won't be a problem.
I mean, maybe the productive way to say this is that we should more formally link the environmental cost of energy production to the market cost of energy. But as phrased (and I suspect, implied), it sounds like "people who use LLMs are just profligate consumers who don't care about the environment the way that I do," and that any societal advancement that consumes energy (as most do) is subject to this kind of generalized luddite criticism.
I'm confused what you are saying, do you suggest "the market" will somehow do something to address climate change? By what mechanism? And what do LLMs have to do with that?
The problem with LLMs is that they require exorbitant amounts of energy and fresh water to operate, driving a global increase in ecological destruction and carbon emissions. [ https://www.greenmemag.com/science-technology/googles-contro... ]
That's not exactly a new thing, just making the problem worse. What is now different with LLMs as opposed to for example crypto mining?
No, I'm suggesting that the market will take care of the cost/benefit equation, and that the externalities are part of the costs. We could always do a better job of making sure that costs capture these externalities, but that's not the same thing as what the author seems to be saying.
(Also I'm saying that we need to get on with nuclear already, but that's a secondary point.)
> The problem with LLMs is that they require exorbitant amounts of energy and fresh water to operate, driving a global increase in ecological destruction and carbon emissions.
They no more "require" this than operating an electric car "requires" the same thing. While there may be environmental extremists who advocate for a wholesale elimination of cars, most sane people would be happy with the balance between cost and benefit represented by electric cars. Ergo, a similar balance must exist for LLMs.
You believe that climate change is an externality that the market is capable of factoring in the cost/benefit equation. Then I don't understand why you disagreed with the statement "the market will somehow do something to address climate change". There is a more fundamental disagreement here.
You said:
> If these tools [LLMs/ai] have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem
And again, why? By what mechanism? Let's say Microsoft 10xes its profit through AI; then it will "finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem". But why? Why would it? Why do you say "we" if we're talking about the market?
I think it’s simple: the reign of the essay is over. Educators must find a new way to judge a student’s understanding.
Presentations, artwork, in class writing, media, discussions and debates, skits, even good old fashioned quizzes all still work fine for getting students to demonstrate understanding.
As the son of two teachers I remember my parents spending hours in the evenings grading essays. While writing is a critical skill, and essays contain a good bit of information, I’m not sure education wasn’t overindexing on them already. They’re easy to assign and grade, but there’s so much toil on both ends unrelated to the core subject matter.
Skipping that entirely, or using an LLM to do most of it for you, skips something rather important.
I agree entirely with you except for the word "forces." Writing can cause synthesis. It should. It should be graded to encourage that...
...but all of that is a whole lot of work for everyone involved: student and teacher alike.
And that kind of synthesis is in no way unique to essays! All of the other mediums I mention can make synthesis more readily apparent than paragraphs of (often very low quality) prose can. A clever meme lampooning the "mere merchant" status of the Medici family could demonstrate a level of understanding that would take paragraphs of prose to convey.
It is very strange that no real open source project uses "AI" in any way. Perhaps these friends work on closed source and say what their manager wants them to say? Or they no longer care? Or they work in "AI" companies?
[1] He does mention return on investment doubts and waste of energy, but claims that the agent nonsense works (without public evidence).
To them, gen-ai is a savior - Earlier, they felt out of the game - now, they feel like they can compete. Earlier they were wannabe coders. Now they are legit.
But, this will last only until they accept a chunk of code put out by co-pilot and then spend the next 2 days wrangling with it. At that point, it dawns on them what these tools can actually do.
It is very strange that no real open source project uses "AI" in any way.
How do you know? Given the strong opposition that lots of people have I wouldn't expect its use to be actively publicized. But yes, I would expect that plenty of open source contributors are at the very least using Cursor-style tab completion or having AIs generate boilerplate code.
Perhaps these friends work on closed source and say what their manager wants them to say?
"Everyone who disagrees with me is paid to lie" is a really tiresome refrain.
Anecdotally check this out https://github.com/antiwork/gumroad/graphs/contributors
Devin is an AI agent
Using genAI is particularly hard on open source projects due to worries about licensing: if your project is under license X, you don't want to risk including any code with a license incompatible with X, or even under a license compatible with X but without the correct attribution.
It's still not settled whether genAI can really "launder" the license of the code in its training set, or whether legal theories like "subconscious copying" would apply. In the latter case, using genAI could be very risky.
I used AI a lot (vibe coding for spikes and throwaway tools, AI-assisted coding for prod code, chatgpt sessions to optimize db schema and queries, etc). I’d say some 80% or more of the code was written by Claude and reviewed by me.
It has not only sped up the development, but as a side project, I would never even have finished it (deployed to prod with enough features to be useful) without AI.
Now you can say that doesn’t count because it’s a side project, or because I’m bullish on AI (I am, without jumping on the hype train), or because it’s too small, or because I haven’t blogged about it, or because anecdotes are not data, and I will readily admit I’m not a true Scotsman.
My wife, a high school teacher, remarked to me the other day “you know, it’s sad that my new students aren’t going to be able to do any of the fun online exercises that I used to run.”
She’s all but entirely removed computers from her daily class workflow. Almost to a student, “research” has become “type it into Google and write down whatever the AI spits out at the top of the page” - no matter how much she admonishes them not to do it. We don’t even need to address what genAI does to their writing assignments. She says this is prevalent across the board, both in middle and high school. If educators don’t adapt rapidly, this is going to hit us hard and fast.
+1 to this. thank you `go fmt` for uniform code. (even culture of uniform test style!). thank you culture of minimal dependencies. and of course go standard library and static/runtime tooling. thank you simple code that is easy to write for humans..
and as it turns out for AIs too.
for me it works well for small scope, isolated sub system or trivial code. unit tests, "given this example: A -> B, complete C -> ?" style transformation of classes (e.g. repositories, caches, etc.)
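As a concrete (and entirely hypothetical, TypeScript) illustration of that "given this example: A -> B, complete C -> ?" style of task: you show the model one finished pair and ask it to complete the analogous class in the same style.

    // A -> B: an interface and its caching wrapper, shown to the model as the example.
    interface User { id: string; name: string }
    interface Order { id: string; total: number }

    interface UserRepository {
      findById(id: string): Promise<User | null>;
    }

    class CachedUserRepository implements UserRepository {
      constructor(private inner: UserRepository, private cache = new Map<string, User>()) {}

      async findById(id: string): Promise<User | null> {
        const hit = this.cache.get(id);
        if (hit) return hit;
        const user = await this.inner.findById(id);
        if (user) this.cache.set(id, user);
        return user;
      }
    }

    // C -> ?: "Write CachedOrderRepository for this interface, in the same style."
    interface OrderRepository {
      findById(id: string): Promise<Order | null>;
    }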
I agree with this. It's probably terrible for structured education for our children.
The one and only one caveat: Self-Driven language learning
The one and only actual use (outside of generating funny memes) I've had from any LLM so far is language learning. That I would pay for. Not $30/pcm mind you . . . but something. I ask the model to break down a target language sentence for me, explaining each and every grammar point, and it does so very well, sometimes even going on to explain the cultural relevance of certain phrases. This is great.
I've not found any other use for it yet though. As a game engine programmer (C++), the code I write nowadays is quite deliberate and relatively little compared to a web developer's (I used to be one, I'm not pooping on web devs). So if we're talking about the time/cost of having me as a developer work on the game engine, I'm not saving any time or money by first asking Claude to type what I was going to type anyway. And it's not advanced enough yet to hold the context of our entire codebase spanning multiple components.
Edit, Migaku [https://migaku.com/] is a great language learning application that uses this
As OP, I'm not sure it's worth all that CO2 we're pumping into our atmosphere.
Now, just a year later, DeepL is beaten by open models served by https://groq.com for most languages, and Claude 4 / GPT-4.1 / my hybrid LLM translator (https://nuenki.app/translator) produce practically perfect translations.
LLMs are also better at critiquing translations than producing them, but pre-thinking doesn't help at all, which is just fascinating. Anyway, it's a really cool topic that I'll happily talk at length about! They've made so much possible. There's a blog on the website, if anyone's curious.
Tbh I think we’re going to need a big breakthrough to fix that anyway. Like fusion etc.
A bit less proompting isn't going to save the day.
That’s not to say one shouldn’t be mindful. Just think it’s no longer enough
1) The emergence of LLMs and AIs that have turned the Turing test from science fiction into basically irrelevant. AI is improving at an absolutely mind boggling rate.
2) The transition from fossil fuel powered world to a world that will be net zero in few decades. The pace in the last five years has been amazing. China is basically rolling out amounts of solar and batteries that were unthinkable in even the most optimistic predictions a few years ago. The rest of the world is struggling to keep up and that's causing some issues with some countries running backward (mainly the US).
It's true that a lot of AI is powered by a mix of old coal plants, cheap Texan gas, and a few other things that aren't sustainable (or cheap, if you consider the cleanup cost). However, I live in the EU; we just got cut off from cheap Russian gas, are now running on expensive imported gas (e.g. from Texas), and have some pet peeves about data sovereignty that are causing companies like OpenAI, Meta, and Google to use local data centers to serve their European users. Which means that stuff is being powered with locally supplied electricity, a mix of old dirty legacy infrastructure and newer, more or less clean infrastructure. That mix is shifting rapidly towards renewables.
The thing is, that old dirty infrastructure has been on a downward trajectory for years. Not a lot of new gas plants are being built (LNG is not cheap), and coal plants are going extinct in a hurry because they are dirty and expensive to operate. The few gas plants that are still being built sit in standby mode much of the time and lose money, because renewables are cheaper. Power is expensive here but relatively clean. The way to get prices down is not to import more LNG and burn it, but to do the opposite.
What I like about things that increase demand for electricity is that they generate investment in clean-energy solutions and actually accelerate the transition. The big picture here is that the transition to net zero is going to vastly increase demands on power grids. If you add up everything needed for industry, transport, domestic and industrial heating, aviation, etc., it's a lot. But the payoffs are also huge. People think of this as a cost. That's short-term thinking. The big picture here is long term, and the payoff is net zero and cheap power, making energy-intensive things both affordable and sustainable. We're not there yet, but we're on a path towards that.
For AI that means, yes, we need terawatts of power, and some uses of AI seem frivolous and not that useful. But the big picture is that this is changing a lot of things as well. I see the power needs as a challenge rather than a problem or a reason to sit on our hands. It would be nice if that power were cheap; as it happens, the cheapest way to generate power right now is renewables. I don't think dirty power is long-term smart, profitable, or necessary, and we could definitely do more to speed up its demise. But at the same time, this increased pressure on our grids is driving the very changes we need to make that happen.
Hold on, it's very simple. Here's a one-liner even degrowthers would love: extra humans cost a lot more in money and carbon than it costs to have an LLM spin up and down to do work that would otherwise not get done.
I think a useful LLM for education would be one with heavy guardrails, which is "forced" to provide step-by-step, back-and-forth tutoring instead of just giving out answers.
Right now hallucinations would be problematic, but assuming it's in a domain like math (and maybe combined with something like Wolfram to verify outputs), I could see this theoretical tool being very helpful for learning mathematics, or even the other sciences.
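As a toy sketch of the verification half, here is the idea with SymPy standing in for Wolfram (the function name and the example equation are made up): the tutor's final answer gets checked by something that actually does math, rather than being trusted as generated.

```python
# Toy sketch: verify a tutoring LLM's final algebra answer with SymPy.
# Same idea as a Wolfram check: never trust the model's arithmetic unverified.
# The helper name, equation format, and example are illustrative only.
import sympy as sp

def check_solution(equation: str, variable: str, claimed: str) -> bool:
    """Return True if `claimed` really solves `equation` for `variable`."""
    x = sp.Symbol(variable)
    lhs, rhs = equation.split("=")
    eq = sp.Eq(sp.sympify(lhs), sp.sympify(rhs))
    solutions = sp.solve(eq, x)
    # The claimed answer must match one of the true solutions exactly.
    return any(sp.simplify(s - sp.sympify(claimed)) == 0 for s in solutions)

# e.g. the tutor walked the student to x = 3 for "2*x + 1 = 7"
print(check_solution("2*x + 1 = 7", "x", "3"))   # True
print(check_solution("2*x + 1 = 7", "x", "4"))   # False -> flag and re-teach
```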
For more open-ended subjects like English, history, etc., it may be less useful.
Perhaps only as a demonstration: maybe an LLM is prompted to pretend to be a peasant from medieval Europe, and with text-to-speech students could, as a group, interact with it and ask it questions. In this case, maybe the LLM is trained only on historical text from specific time periods, with settings tuned to be more deterministic and reduce hallucinations.
That said, and it's kind of hard to express this well: not only is the actual productivity still far from what the hype suggests, but I regard agentic coding as being like a bad, addictive drug right now. The promise of magic from the agent always seems just around the corner: just one more prompt to finally fix the rough edges of what it has spat out, just one more helpful hint to put it on the right path/approach, just one more reminder for it to actually apply everything in CLAUDE.md each time...
Believe it or not, I spent several days with it, crafting very clear and specific prompts, prodding it with all kinds of hints, even supplying it with legacy code that mostly works (although written in C#), and at the end it had produced a lot of code that almost works, except that a lot of simple things just wouldn't work, no matter how much time I spent with it.
In the end, after a couple of hours of writing the code myself, I had a high-quality type design and the basic logic, and a clear path to implementing all the basic features.
So, I don't know; for now even Claude seems mostly useful only as a sporadic helper within small contexts (drafting specific functions, reviewing moderate amounts of code, relatively simple refactoring, etc.). I believe knowing when AI will help you versus slow you down is becoming a key skill.
For this tech to improve, maybe a genetic/evolutionary approach is needed. Given a task, the agent would launch several models to work on the problem, with each model also trying several randomized approaches. The agent would then evaluate all the responses and pick the "best" one to return.
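A crude first step toward that, sample several candidates and let a judge model pick one, might look like the sketch below. The model names are placeholders, and a real evolutionary loop would also mutate and recombine candidates rather than just selecting.

```python
# Sketch of "sample several candidates, then have a judge pick one".
# Model names are placeholders; assumes the `openai` package and an API key.
from openai import OpenAI

client = OpenAI()

def generate(task: str, n: int = 4) -> list[str]:
    """Sample n candidate solutions at a high temperature for diversity."""
    out = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder worker model
            temperature=1.0,        # encourage varied approaches
            messages=[{"role": "user", "content": task}],
        )
        out.append(resp.choices[0].message.content)
    return out

def pick_best(task: str, candidates: list[str]) -> str:
    """Ask a judge model to choose the strongest candidate by index."""
    listing = "\n\n".join(f"[{i}]\n{c}" for i, c in enumerate(candidates))
    resp = client.chat.completions.create(
        model="gpt-4o",             # placeholder judge model
        messages=[{"role": "user", "content":
                   f"Task:\n{task}\n\nCandidates:\n{listing}\n\n"
                   "Reply with only the index of the best candidate."}],
    )
    # Assumes the judge follows instructions; a real system would parse defensively.
    return candidates[int(resp.choices[0].message.content.strip())]

# usage: best = pick_best(task, generate(task))
```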
Let’s see.
> But, while I have a lot of sympathy for the contras and am sickened by some of the promoters, at the moment I’m mostly in tune with Thomas Ptacek’s My AI Skeptic Friends Are All Nuts. It’s long and (fortunately) well-written and I (mostly) find it hard to disagree with.
So the Moderate is a Believer. But it’s offset by being concerned about The Climate and The Education and The Investments.
You can try to write a self-aware/moment-aware intro. It’s the same fodder for the front page.
[1] Which is an emergent phenomenon
AI is completely destroying the economics of putting out free information. LLMs still rely on human beings to experience and document the real world, but they strip those humans of the reward. Creators lose the income, credit, and community that come with having an audience. In the long term, I fear that a lot of quality information will disappear because it's no longer worth creating.
I wrote a bit about this earlier in a very relevant thread: https://news.ycombinator.com/item?id=44099570
These tools are two years old, and they're vastly superior to what they were two years ago. As people continue to use them and provide feedback, these tools will keep improving and get better and better at giving customers (non-programmers) access to features, tools, and technologies that they would otherwise have to rely on a team of developers for.
Personally, I cannot afford the thousands of dollars per hour required to retain a team of top-shelf developers for some crazy hare-brained Bluetooth automation for my house lighting scheme. I can, however, spend a weekend playing around with Claude (and ChatGPT and...). And I can get close enough. I don't need a production tool. I just need the software to do the little thing, the two seconds of work, that I don't want to do every single day.
Who's created a RAG pipeline? Not me! But I can walk through the BS necessary to get PostgreSQL, FastAPI, and Llama 3 set up so that I can start automating email management.
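For what it's worth, the rough shape of that glue might look something like the sketch below. The table and column names, the local Llama endpoint (an Ollama-style OpenAI-compatible server), and the naive keyword retrieval are all assumptions; a weekend sketch, not a real RAG pipeline.

```python
# FastAPI + PostgreSQL + a local Llama 3 behind an OpenAI-compatible endpoint.
# Everything here (DB name, table, endpoint URL, model name) is an assumption.
import psycopg2
from fastapi import FastAPI
from openai import OpenAI

app = FastAPI()
db = psycopg2.connect("dbname=mail user=me")  # assumed local mail archive
llm = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")  # assumed local server

@app.get("/summarize")
def summarize(query: str):
    # "Retrieval": grab a handful of matching emails. A proper pipeline would
    # use embeddings + pgvector; ILIKE keeps the sketch self-contained.
    with db.cursor() as cur:
        cur.execute(
            "SELECT subject, body FROM emails WHERE body ILIKE %s LIMIT 5",
            (f"%{query}%",),
        )
        rows = cur.fetchall()
    context = "\n\n".join(f"Subject: {s}\n{b}" for s, b in rows)

    resp = llm.chat.completions.create(
        model="llama3",  # placeholder local model name
        messages=[{"role": "user", "content":
                   f"Using only these emails:\n{context}\n\nAnswer: {query}"}],
    )
    return {"answer": resp.choices[0].message.content}
```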
And that's the beauty: I don't have to know everything anymore! Nor spend months trying to parse all the specialized language surrounding the tools I'll need to implement. I just need to ask the questions I don't have answers for, making sure I ask enough of them that the answers tie back into what I do know.
And LLMs and vibe coding do that just fine.
strict9•6mo ago
I use AI every day, I feel like it makes me more productive, and I'm generally supportive of it.
But the angst is something else. When nearly every tech-related startup seems to be about making FTEs redundant via AI, it leaves me with a bad feeling for the future. Same with the impact on students and learning.
Not sure where we go from here. But this feels spot on:
>I think that the best we can hope for is the eventual financial meltdown leaving a few useful islands of things that are actually useful at prices that make sense.
bob1029•6mo ago
How many civil engineering projects could we have completed ahead of schedule and under budget if we applied the same amount of wild-eyed VC money and genius-tier attention to the problems at hand?
fellowniusmonk•6mo ago
If our roles hadn't been specifically targeted by government policy for reduction, as a way to buoy government revenues and prop up the budgetary bottom line in the face of decreasing taxes for favored parties, we wouldn't be seeing this.
This is simply policy-induced multifactorial collapse.
And LLMs get to take the blame from engineers, because that is the excuse being used. Pretty much every old-school hacker who has played around with them recognizes that LLMs are impressive and sci-fi; it's like my childhood dream come true for interface design.
I cannot begin to say how fucking stupid the people in charge of these policies are. I'm an old head; I know exactly the type of '80s executive that actively likes to see the nerds suffer, because we're all irritating poindexters to them.
The pattern of actively attacking the freedoms and sabotaging the incomes of knowledge workers is not remotely rare, and it's often done this stupidly, at the expense of a country's economic footing and ability to innovate.