Anyway, I casually mentioned he did a lot of his thinking in an oven and her curiosity was really piqued by that idea. Which is funny, because every time I mention it to someone, that's the bit they find most interesting. I'm not convinced that an AI would necessarily pick up on that detail being of note as much as a human would.
Best of luck to your mother-in-law in finding a way to deal with her voices, though. <3
The other day, my wife needed to divide something, and rather than get up and walk to the next room to grab her phone, she did it longhand with pen and paper.
At first I was amazed that she bothered instead of grabbing her phone to do it.
Then it occurred to me that, while more people than I'd expect probably remember how to divide by hand correctly, I don't think I've actually seen someone do it in years, perhaps since my school days.
I do agree with the author that art is a human endeavor and mastery requires practice... But I'm less optimistic that mass adoption of the easy way will let masters stand out. More likely, they'll just be buried under the deluge of slop the public craves.
I feel like this will get missed by the general public. What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?
I could generate some weird 70s sci fi art, make an Instagram profile around that, barrage the algorithm with my posts and rack up likes. The likes will give that instant dopamine but it will never fill that need of accomplishing something.
I like using LLMs to help me reword something, since I struggle with that. But just like in programming, I focus them on a specific sentence or two. Otherwise, why am I doing it at all?
Getting promoted, getting a better job, generating sales leads, things of that nature. A depressing number of blogs or LinkedIn posts exist only because the author is under some vague belief that it’s part of what they’re supposed to be doing to get ahead in their career.
Did you see the NYC ball drop by any chance? It was plastered with ads. Ads on screen, ads on people, giant KIA ad below the ball that ruined the shot on purpose. Everything is a money grab now, because we are just eyeballs that see shit and buy it.
If you think it's just me remembering things differently than they were, here is 2002: https://www.youtube.com/watch?v=iB6OzLUQE3I
US podcasts have a total valuation of $8-9B and revenue of $1.9B; total K-12 spending is $950B a year, about 500x the revenue figure. Education receives nearly three orders of magnitude more money per year.
Most people sitting on a couch smoking weed on camera make little to nothing, while 3.8M teachers are paid an average of $65,000 per year.
You're comparing one-in-a-million outlier podcasts to the average-case teacher in order to invert the picture of how overwhelmingly much more we put into education, both in total and on average.
I watch a few DJs on Twitch. They seem like they are having fun but I'm pretty confident many of them would stop if there was no money in it. Maybe after they don't need the money they'd still do it. They need the money now and it's a fun way to get it.
Similarly, I watch Veritasium and Kurzgesagt. They both put lots of work in with teams of people. I think they both enjoy it, but some of that enjoyment comes from "making a living at it". If that disappeared, if they didn't need the living from it, I wonder if they'd continue.
And the industry is gone. No one could produce figurines like that at any worldly price, probably for the last 100 years. The world is less for it, but it doesn't matter; art follows different, more efficient technologies and methods.
I sympathize with these artisans of the written word. But they're all wrong; they're dinosaurs who don't know it. I myself was one, churning out high-value bespoke written work. The economic model is wrong, we're the expert 1850s figurine crafters; adapt or … burn out, I guess.
In Taiwan I've met indigenous woodworking artists. They sell stuff in markets all the time, plenty of it incredibly intricate. Incidentally, many temples here are also covered in beautifully layered granite carvings.
Art for its own sake. Say something. Experience having said something.
Economic value is the least of it. I get why economic value is the only thing that matters. We made the world this way. I get it.

But also: art for its own sake. Say something. Experience the saying of the something.
[1]: https://www.inprnt.com/gallery/canadianturtle/pacman-ukiyoe/
If you are talking about the background colour of a slide, that is not "art", it's a simple choice.
The portrait for your D&D character - if you used AI to generate it just because you needed any image and didn't care, you just needed a representation, then it may be difficult to classify that as "art". If you drew it, regardless of how bad it is, and you like and appreciate and connect with it, that is "art".

Of course, we may all have our own definitions of "art".
The existence of a sea of AI slop making it impossible to find or publicize writing is what will kill it.
It's purely a loss.
We've had LLMs for years. Image models and coding agents have gotten remarkably good, and their output is all over the place. So where is the AI writing? Outside of automated summaries, formulaic essays, and overly verbose LinkedIn posts, nowhere.
What will actually happen, likely, is a complete death of writing. Not just that the craft is gone, but that art is gone.
What is the point of creating anything if it has no meaning? And likewise, there is no economic value to it either.
So there will simply be no art, and paradoxically any true art will simply be so ridiculously expensive and unaffordable that nearly nobody will benefit from it any more...
Writing is also peculiar in that it is easily referenceable with a deep history, so it serves as a way to compare one's own ideas to others. Memes are similar in principle, but tend towards esotericism and ephemerality in a balkanized internet.
The value, I expect, for some people, is that if they can monetize it, then it's worthwhile to them, while letting them spend less time on it than if they had to do it themselves (or maybe they aren't artists and couldn't do it themselves, period).
I personally find this kinda dishonest, uncreative, and not something I'd care to look at, but that's just me.
As the internet fills with slop, it'll only get harder to find the people who actually care about what they're putting online, not just the views or the ad revenue. That's a shame, because those are the types of people who make the internet interesting.
That's how I feel with programming, and sometimes I feel like I'm taking crazy pills when I see so many of my colleagues using AI not only for their job, but even for their weekend programming projects. Don't they miss the feeling of... programming? Am I the weird one here?
And when I ask them about it, they answer something like "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.
Weird times.
It's either taking away the most important (or rewarding) thing I need to do (think) and just causing me more work, or it has replaced me.
AI. Is. Not. Useful.
Maybe a better analogy might be a car with an automatic transmission, although that doesn’t capture the pitfalls of AI very well. It could be argued that a good automatic transmission has none of the serious downsides that AI has.
Still, the general idea is that sometimes getting stuff faster, with less effort, more automatically, is more important than the "reward" of doing it yourself.
I use a GPS all the time, but only because it also shows me traffic, red light cameras, and potential hazards. I memorized the route after the first 2-3 drives, but I keep using the GPS for the amenities.
That said, I’m old enough to have used printed map directions and my time in Boy Scouts gave me the skills to read a paper map too.
AI is like delegating to a junior programmer that never learns or gets better.
We can agree all day long about the pitfalls of the technology, but you’ve never used it so you don’t know if it’s causing you more work or replacing you.
>AI. Is. Not. Useful.
Why waste time writing things like this? What's the point?
Take game programming: it takes an immense amount of work to produce a game, with problems at multiple levels of abstraction. Programming is only one aspect of it.
Even web apps are much, much more than the code backing them. UI/UX runs deep.
I'm having trouble understanding why you think programming is the entirety of the problem space when it comes to software. I largely agree with your colleagues; the fun part for me, at this point in my career, is the architecture, the interface, the thing that is getting solved for. It's nice for once to have line of sight on designs and be able to delegate that work instead of writing variations on functions I've written thousands if not tens of thousands of times. Often for projects that are fundamentally flawed or low impact in the grand scheme of things.
I don't know why people build houses with nail guns; I like my hammer... What's the point of building a house if you're not going to pound the nails in yourself?
AI tooling is great at getting all the boilerplate and bootstrapping out of the way... One still has to have a thoughtful design for a solution, to leave those gaps where you see things evolving rather than writing something so concrete that you're scrapping it to add new features.
This is the way with many labor-saving devices.
> This is the way with many labor-saving devices.
I think that's more the problem of people using only the extremes to build an argument.
If the boilerplate is that obvious why not just have a blueprint for that and copy and paste it over using a parrot?
Also, I don't have a nail gun subscription, and the nail gun vendor doesn't get to see what I am doing with it.
> Some people don't enjoy certain parts of the creative process,
Sure.

> and let an LLM handle them.
This is probably the disputed part. It is not a different way of doing development, and as such it should not be presented as one. In software, we can use ready-made components, choose between different strategies, build everything in a low-level language, etc. The trade-offs that come with each choice are in principle knowable; the developer is still in control.

LLMs are nothing like that. Using an LLM is more akin to managing outsourced software development. On the surface, it might look like you get ready-made components by outsourcing to them, but there is no contract about any standard, so you have to check everything.
Now, if people would present it as "I'd rather manage an outsourcing process than do the creative thing myself", we would have no discussion. But hammers and nails aren't the right analogies.
You're going to have to tell us your definition of 'Using a LLM' because it is not akin to outsourcing (As I use it).
When I use Claude, I tell it the architecture, the libraries, the data flows, everything. It just puts the code down, which is the boring part, and it happens fast.
The time is spent mostly on testing, finding edge cases. The exact same thing if I wrote it all myself.
I don't see how this is hard for people to grasp?
> 'Using a LLM' because it is not akin to outsourcing (As I use it).
The things you do with an LLM are precisely what many other IT firms do when outsourcing to India. Now you might say that this would be bonkers, but that is also why you hear so often that LLMs are the biggest threat to outsourcing rather than to software development in general. The feedback cycle with an LLM is much faster.

> I don't see how this is hard for people to grasp?

I think I understand you, and I think you have/had something else in mind when hearing the term outsourcing.

This is a straw man argument. You have described one potential way to use an LLM and presented it as the only possible way. Even people who use LLMs will agree with you that your weak argument is easy to cut down.
You’re comparing a deterministic method of quickly installing a fastener with something that nondeterministically designs and builds the whole building.
Yes. I think it depends on one's goals.
You can ask, in the same vein, why use Python instead of C? Isn't the real joy of programming in writing effective code with manual memory management and pointers? Isn't the real joy in exploring 10 different libraries for JSON parsing? Or in learning how to write a makefile? Or figuring out a mysterious failure of your algorithm due to an integer overflow?
TBH I am not sure AI is better either (see https://youtube.com/shorts/QZCHax14ImA), but it's probably gonna get figured out.
I recently wrote a 17x3 Reed-Solomon encoder which is substantially faster on my 10-year-old laptop than the latest and greatest solution from Backblaze on their fancy-schmancy servers. The fun parts for me were:
1. Finally learning how RS works
2. Diving in sufficiently far to figure out how to apply tricks like the AVX2 16-element LUT instruction
3. Having a working, provably better solution
The programming between (2) and (3) was ... fine ... but I have literally hundreds of other projects I've never shipped because the problem solving process is more enjoyable and/or more rewarding. If AI were good enough yet to write that code for me then I absolutely would have used it to have more time to focus on the fun bits.
It's not that I don't enjoy coding -- some of those other unshipped projects are compilers, tensor frameworks, and other things which exist purely for the benefit of programmer ergonomics. It's just that coding isn't the _only_ thing I enjoy, and it often takes a back seat.
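A side note for the curious: the 16-element LUT in (2) works because multiplication by a constant in GF(2^8) is linear over XOR, so the usual 256-entry multiplication table splits into two 16-entry tables, one per nibble, each small enough for a single AVX2 shuffle. Here's a toy pure-Python sketch of that identity (an illustration only, not the actual encoder):

```python
def gf_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Bitwise multiply in GF(2^8); 0x11D is a common Reed-Solomon polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def nibble_tables(c: int):
    lo = [gf_mul(c, i) for i in range(16)]        # products with low nibbles
    hi = [gf_mul(c, i << 4) for i in range(16)]   # products with high nibbles
    return lo, hi

def mul_via_luts(c: int, b: int) -> int:
    lo, hi = nibble_tables(c)
    return lo[b & 0x0F] ^ hi[b >> 4]              # XOR is addition in GF(2^8)

# Sanity check: the two-table lookup agrees with the direct multiply.
assert all(mul_via_luts(c, b) == gf_mul(c, b)
           for c in (2, 29, 143) for b in range(256))
```

In the SIMD version, each 16-entry table lives in a vector register and the nibble lookup is one shuffle instruction, which is where the speed comes from.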
I most often see people with (what I can read into) your perspective when they "think" by programming. They need to be able to probe the existing structure and inject their ideas into the solution space to come up with something satisfactory.
There's absolutely nothing wrong with that (apologies if I'm assuming too much about the way you work), but some people work differently.
I personally tend to prefer working through the hard problems in a notebook. By the time the problem is solved, its ideal form in code is obvious. An LLM capable of turning that obvious description into working code is a game changer (it still only works like 30% of the time, and even then only with a lot of heavy lifting from prompt/context/agent structure, so it's not quite a game changer yet, but it has potential).
https://www.usenix.org/system/files/fast19-zhou.pdf is a more modern paper that goes into some related problems of trying to reduce the number of XOR operations needed to encode data.
If it's a language I don't particularly enjoy, though, so much the better that the AI types more of it than me. Today I decided to fix a dumb YouTube behavior that has been bugging me for a while. I figured it would be a simple matter of making a Greasemonkey script that does a fetch() request formed from dynamic page data, grabs some text out of the response, and replaces some other text with it. After validating the fetch() part in the console, I told ChatGPT to code it up and also make sure to cache the results. Out came a nice little 80 lines or so of JS, similar to how I would have written it, setting up the MutationObserver and handling the cache map and a promises map. It worked except in one case where it just needed to wait longer before setting things up, so I had it write that setTimeout loop part too, another several lines, and now it's all working.

I still feel a little bit of accomplishment because my problem has been solved (until YouTube breaks things again, anyway), the core code flow idea I had in mind worked (no need for API shenanigans), and I didn't have to type much JavaScript. It's almost like using a much higher level language. Life is too short to write much code in x86 assembly, or JavaScript for that matter, and I've already written enough of the latter that I feel like I'm good.
"I love complicated mathematical questions, and love doing the basic multiplication and division calculations myself without a calculator. I don't understand why people would use a calculator for this."
"I love programming, and don't understand why people would use C++ instead of using machine lamguage. You get deep down close to the hardware, such a good feeling, people are missing out. Even assembly language is too much of a cheat."
On the other hand - people still knit, I assume for the enjoyment of it.
But again, my projects are more research than product, so maybe it's different.
I suspect you've found a new hobby, not improved the existing one.
AI tools allow me to do a lot of stuff within a short time, which is really motivating. They also automatically keep a log of what I was doing, so if I don't manage to work on something for weeks, I can quite easily get back in and read my previous thinking.
It can also get very demotivating to read 10 stackoverflow discussions from Google searches that don't solve my problem. This can knock me out of 'the zone' and make it extremely hard to continue. With AI tools, I can rephrase my question if the answer isn't exactly what I was looking for and steer towards a working solution. I can even easily get in-depth explanations of provided solutions to figure out why something doesn't work.
I also have random questions pop up in my brain throughout the day. These distract me from the task at hand. I can now pop the question into an AI tool and have it research the answer, instead of being distracted for an hour reading up on brake pads or cake recipes or the influence of nicotine on driving ability.
I've played with using LLMs for code generation in my own projects, and whilst it has sometimes been able to solve an issue - I've never felt like I've learned anything from it. I'm very reluctant to use them for programming more as I wouldn't want my own skills to stagnate.
This I think I can explain, because I'm one of these people.
I'm not a programmer professionally for the most part, but have been programming for decades.
AI coding allows me to build tools that solve real world problems for me much faster.
At the same time, I can still take pride and find intellectual challenges in producing a high quality design and in implementing interesting ideas that improve things in the real world.
As an example, I've been working on an app to rapidly create Anki flashcards from Kindle clippings.
I simply wouldn't have done this over the limited holiday time if not for AI tools, and I do feel that the high-level decisions about how this should work were intellectually interesting.
That said, I do feel for the people who really enjoyed the act of coding line by line. That's just not me.
This phrase betrays a profoundly different view of coding to that of most people I know who actively enjoy doing it. Even when it comes to the typing it's debatable whether I do that "line by line", but typing out the code is a very small part of the process. The majority of my programming work, even on small personal projects, is coming up with ideas and solving problems rather than writing lines of code. In my case, I prefer to do most of it away from the keyboard.
If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.
The joy of writing code is turning abstract ideas into solid, useful things. Whether you do most of it in your head or not, when you sit down to write you will find you know how you want to treat bills - is it an object under payroll or clients or employees or is it a separate system?
LLMs suck at conceptualizing schema (and so do pseudocoders and vibe coders). Our job is turning business models into schemata and then coding the fuck out of them into something original, beautiful, and useful.
Let them have their fun. They will tire of their plastic toy lawnmowers, and the tools they use won't replace actual thought. The sad thing is: They'll never learn how to think.
Drawing a sense of superiority out of personal choices or preferences is a really unfortunate human trait; particularly so in this case since it prevents you from seeing developments around you with clarity.
> If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.
... is exactly how this often works for me.
If you don't get any value out of this at all, and have worked with SOTA tools, we must simply be working in very different problem domains.
That said I have used this workflow successfully in many different problem domains, from simple CRUD style apps to advanced data processing.
Two recent examples to make it more concrete:
1) Write a function with parameter deckName that uses AnkiConnect to return a list of dataclasses with fields (...) representing all cards in the deck.
Here, it one-shots it perfectly and saves me a lot of time sifting through crufty, incomplete docs.
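For a concrete sense of the shape of code such a prompt produces, here's a rough sketch (an illustration, not the actual one-shot output; the Card fields stand in for the prompt's "(...)" placeholder, picked from keys that AnkiConnect's cardsInfo action returns):

```python
import json
import urllib.request
from dataclasses import dataclass

@dataclass
class Card:
    card_id: int
    deck_name: str
    question: str
    answer: str

def anki_request(action: str, **params):
    """POST to the local AnkiConnect HTTP endpoint (default port 8765)."""
    payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
    with urllib.request.urlopen("http://127.0.0.1:8765", payload) as resp:
        reply = json.load(resp)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

def cards_in_deck(deck_name: str) -> list[Card]:
    # findCards returns card IDs; cardsInfo expands them into full records
    card_ids = anki_request("findCards", query=f'deck:"{deck_name}"')
    infos = anki_request("cardsInfo", cards=card_ids)
    return [Card(i["cardId"], i["deckName"], i["question"], i["answer"])
            for i in infos]
```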
2) Implement a function that does resampling with trilinear interpolation on 3d instance segmentation. Input is a jnp array and resampling factor, output is another array. Write it in Jax. Ensure that no new instance IDs are created by resampling, i.e. the trilinear weights are used for weighted voting between instance IDs on each output voxel.
This one I actually worked out on paper first, but it was my first time using Jax and I didn't know the API and many of the parallelization tricks yet. The LLM output was close, but too complex.
I worked through it line by line to verify it, and ended up learning a lot about how to parallelize things like this on the GPU.
At the end of the day it came out better than I could have done it myself because of all the tricks it has memorized and because I didn't have to waste time looking up trivial details, which causes a lot of friction for me with this type of coding.
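For reference, a minimal sketch of the spec as described (a reconstruction, not the final code, and without the heavier parallelization tricks): each output voxel gathers its 8 trilinear neighbours, which vote for their instance IDs with their trilinear weights, so no new IDs can appear.

```python
import jax.numpy as jnp

def resample_instances(seg: jnp.ndarray, factor: float) -> jnp.ndarray:
    """Trilinear-weighted voting resampler for a 3D instance-ID volume."""
    out_shape = tuple(int(round(s * factor)) for s in seg.shape)
    # centers of output voxels, mapped back into input coordinates
    coords = [(jnp.arange(n) + 0.5) / factor - 0.5 for n in out_shape]
    grid = jnp.meshgrid(*coords, indexing="ij")
    base = [jnp.clip(jnp.floor(g).astype(jnp.int32), 0, s - 2)
            for g, s in zip(grid, seg.shape)]
    frac = [jnp.clip(g - b, 0.0, 1.0) for g, b in zip(grid, base)]

    ids, weights = [], []
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((frac[0] if dz else 1 - frac[0])
                     * (frac[1] if dy else 1 - frac[1])
                     * (frac[2] if dx else 1 - frac[2]))
                ids.append(seg[base[0] + dz, base[1] + dy, base[2] + dx])
                weights.append(w)
    ids, weights = jnp.stack(ids), jnp.stack(weights)  # (8, *out_shape)

    # pooled[k] = total trilinear weight behind corner k's instance ID,
    # so corners sharing an ID pool their votes
    pooled = (weights[None, :] * (ids[None, :] == ids[:, None])).sum(axis=1)
    winner = jnp.argmax(pooled, axis=0)
    return jnp.take_along_axis(ids, winner[None], axis=0)[0]
```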
So I take it you don't let coding agents write your boilerplate code? Do you instead spend any amount of time figuring out a nice way to reduce boilerplate so you have less to type? If that is the case, and as intellectually stimulating as that activity may be, it probably doesn't solve any business problems you have.
If there is one piece of wisdom I could impart, it's that you can continue enjoying the same problem solving you are already doing and have the machine automate the monotonous part. The trick is that the machine doesn't absorb abstract ideas by osmosis. You must be a clear communicator capable of articulating complex ideas.
Be the architect, let the construction workers do the building. (And don't get me started, I know some workers are just plain bad at their jobs. But bad workmanship is good enough for the buildings you work in, live in, and frequent in the real world. It's probably good enough for your programming projects.)
If you can't / won't / don't read and write the code yourself, can I ask how you know that the code written for you is working correctly?
BTW, if it doesn't take you hours to test the failure modes, you're not thinking of enough failure modes.
The time savings in writing it myself has a lot to do with this. Plus I get to understand exactly why each line was written, with comments I wrote, not having to read its comments and determine why it did something and whether changing that will have other ramifications.
If you're doing anything larger than a sample React site, it's worth taking the time to do it yourself.
The main key to steering Claude this month (YMMV) is basically giving it tasks that are localized, can be tested, and are not too general. Then you kinda connect the dots in your head. Not always, but you can kinda get the gist of what works and what doesn't.
Also, as I said, I've been coding for a long time. The ability to read the code relatively quickly is important, and this won't work for early novices.
The time saving comes almost entirely from having to type less, having to Google around for documentation or examples less, and not having to do long debugging sessions to find brainfart-type errors.
I could imagine that there's a subset of ultra experienced coders who have basically memorized nearly all relevant docs and who don't brainfart anymore... For them this would indeed be useless.
I have not memorized all the docs to JS, TS, PHP, Python, SCSS, C++, and flavors of SQL. I have an intuition about what question I need to ask, if I can't figure something out on my own, and occasionally an LLM will surface the answer to that faster than I can find it elsewhere... but they are nowhere near being able to write code that you could confidently deploy in a professional environment.
It was probably 2-3 hours of work screwing around figuring out issue fields, Python libraries, etc. that was low priority for my team but was causing issues on another team, who were struggling with some missing information. We never would have actually tasked this out, written a ticket for it, and prioritised it in normal development, but this way it just got done.
I’ve had this experience about 20 times this year for various “little” things that are attention sinks but not hard work - that’s actually quite valuable to us
const somecolor='#ff2222'; /* oh wait, the user asked for it to be yellow. Let's change the code below to increase the green and red */

/* hold on, I made somecolor a const. I should either rewrite it as a var or wait, even better maybe a scoped variable! */

Hah. Sorry, I'm just making this shit up, but okay. I don't hire coders because I just write it myself. If I did, I would assign them *all* kinds of annoying small projects. But how the fuck would I deal with it if they were this bad? If it did save me time, would I want that going into my codebase?
> If it did save me time, would I want that going into my codebase?
Depends - and that's the judgement call. I've managed outsourcers in the pre-LLM days who if you leave them unattended will spew out unimaginable amounts of pure and utter garbage that is just as bad as looping an AI agent with "that's great, please make it more verbose and add more design patterns". I don't use it for anything that I don't want to, but for so many things that just require you to write some code that is just getting in the way of solving the problem you want to solve it's been a boon for me.
How do you know AI did the right thing then? Why would this take you 2-3 hours? If you’re using AI to speed up your understanding that makes sense - I do that all the time and find it enormously useful.
But it sounds like you’re letting AI do the thinking and just checking the final result. This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.
Because I tested it, and I read the code. It was only like 40 lines of python.
> Why would this take you 2-3 hours?
It's multiple systems that I am a _user_ of, not a professional developer of. I know how to use Jira, I'm not able to offhand tell you how to update specific fields using python - and then repeat for Jenkins, perforce, slack. Getting credentials in (Claude saw how the credentials were being read in other scripts and mirrored that) is another thing.
> This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.
As I said above, it's 30 lines of code. I did put my name behind it; it's been running on our codebase on every single checkin for 6 months, and has failed 0 times in that time (we have a separate report that we check in a weekly meeting for issues that were being missed by this process). Again, this isn't some massive complicated system - it's just gluing together 3-4 APIs in a tiny script, in 1/10 of the time it would have taken me to do it. Worst case scenario is it does exactly what it did before - nothing.
If that's most of what you do, I can see how you'd not be that impressed.
I'd say though that even in such an environment, you'll probably still be able to extract tasks that are relatively self contained, to use the LLM as a search engine ("where is the code that does X") or to have it assist with writing tests and docs.
"Convert the comments in this DOCX file into a markdown table" was an example task that came up with a colleague of mine yesterday. And with that table as a baseline, they wrote a tool to automate the task. It's a perfect example of a tool that isn't fun to write and it isn't a fun problem to solve, but it has an important business function (in the domain of contract negotiation).
I am under the impression that the people you are arguing with see themselves as artisans who meticulously control every bit of minutiae for the good of the business. When a manager does that, it's pessimistically called micromanagement. But when a programmer does that, it's craftsmanship worthy of great praise.
Not sure how this is so hard to understand. If you have closed-source software, how do you know it's working?
But it can't actually generate working code.
I gave it a go over the Christmas holidays, using Copilot to try to write a simple program, and after four very frustrating hours I had six lines of code that didn't work.
The problem was very very simple - write a bit of code to listen for MIDI messages and convert sysex data to control changes, and it simply couldn't even get started.
I recently used Claude for a personal project and it was a fairly smooth process. Everyone I know who does a lot of programming with AI uses Claude mostly.
I've let Claude run around my code and asked it for help, etc. Once in a while it's able to diagnose some weird issues - like last month, it actually helped me figure out why PixiJS was creating undefined behavior after textures were destroyed on the GPU, in a very specific case. But the truth is, I wouldn't hire an intern or an employee to write my code because they won't be able to execute exactly what I have in mind.
Ironically, in my line of work, I spend 5x as many hours thinking about what to build and how to build it as I do coding it. The fun part is coding it. And, that's the only time I charge for. I may spend 10 hours thinking about how to do something, drawing diagrams, making phone calls to managers and CEOs, and I won't charge any of that time. When I'm ready to sit down and write the code:
I go to a bar.
I turn my phone off.
I work for 6 hours, have 4 drinks, and bill $300 per hour.
I don't suspect that the kind of coding I'm doing, which includes all the preparation and thought that went into it, and having considered all edge cases in advance, is going to be replaced by LLMs. Or by the children who use LLMs. They didn't have much of a purchase on taking my job before, anyway... but sadly the ones who are using this technology now have almost no hope of ever becoming proficient at their profession.
Coding is not making a thing that appears to work. It's craftsmanship. It's quite difficult to convince a client that something which appears to work as a demo is not yet suitable or ready for production. It may take 20 more hours before it's actually ready to fly. Managing their expectations on that score is a major part of the work as well.
However, these two things are different: the kind of work that feels fulfilling, meaningful and even beautiful, versus: delivering the needed/wanted product.
A vibe coded solution that basically works, for a quarter of the cost, has advantages.
There's a greater chance something will work if we take more swings.
Not to say it's useless garbage, there is some value for sure, but it's nowhere near as good as some people represent it to be. It's not an original observation, but people end up in a "folie à deux" with a chatbot and churn out a bunch of mediocre stuff while imagining they're breaking new ground and doing some amazing thing.
I feel kind of the same when I read about people wanting self-driving cars. What's the advantage of them? Why would it be helpful?
I like programming. Quite a bit. But the modern bureaucratic morass of web technologies is usually only inspiring in the small. I do not like the fact that I have to balance so many different languages and paradigms to get to my end result.
It would be a bit like a playwright aficionado saying “I really love telling stories through stage play” only to discover that all verbs used in dialogue had to be in Japanese, nouns are a mix of Portuguese and German, and connecting words in English. And talking to others to put your play on, all had to be communicated in Faroese and Quechua.
For recent ones, there is an interactive visualization of StarCraft 2 (https://github.com/stared/sc2-balance-timeline). Here I could have done it myself (and spent way more time than I want to admit on refactoring, so the code looks OK-ish), but it's unlikely I would have had enough time to do so. I had the very idea a few years ago, but it was just too much work for a side project. Now I did it - my focus was high-level, on WHAT I want to do, with constant feedback on how it looks, tweaking it a lot.
Another recent is "a project for one" of a Doom WAD launcher (https://github.com/stared/rusted-doom-launcher). Here I wouldn't be able to do it, as I am not nearly as proficient in Rust, Tauri, WADs, etc. But I wanted to create a tool that makes it easy to to launch custom Doom maps with ease of installing a game on Steam.
In both cases the pattern is the same - I care more about the result itself than its inner workings (OK, for the viz I DO care). Yes, it takes away a lot of the experience of coding oneself. But it is not something entirely different - people have had the same "why use a framework instead of writing it yourself", "why use Python when you could have used C++", "why visit StackOverflow when you could have spent 2 days finding the solution yourself".

With side projects, it is OUR focus on what we value. For one person, it is writing low-level machine code by hand, even if it won't be that useful. For another, making a cute visual. For someone else, having an MVP that "just works" to test a business idea.
Yes, balance updates make the game live.
For watching current games, I cannot recommend anyone better than Lowko (https://www.youtube.com/@LowkoTV) - he covers the main matches and makes commentary in a style I like.
Our company is "encouraging" use of LLMs through various carrots and sticks; mostly sticks. They put out a survey recently asking us how we used it, how it's helped, etc. I'll probably get fired for this (I'm already on the short list for RIFs due to being remote in a pathological RTO environment and being easily the eldest developer here, but...), but I wrote something like:
"Most of us coders, especially older ones, are coders because we like coding. The amount of time and money being put spent to make coders NOT CODE is incredible."
In a sense it's like SQL or MiniZinc: you define the goal, and the engine takes care of how to achieve it.
Or maybe it's like driving: we don't worry about spark advance, or often manual clutches, anymore, but LLMs are like Waymo where your hands aren't even on the steering wheel and all you do is specify the destination, not even the route to get there.
it's outsourcing to an unreliable body shop where they barely speak English and the weekly attrition rate is 300%
When writing code in exchange for money the goal is not to write code, it's to solve a problem. Care about the code if you want but care about solving the problem quickly and effectively more. If LLMs help with that you should be using them.
On personal projects it depends on your goal. I usually want the tool more than whatever I get from writing code. I always read whatever an LLM spits out to make sure I understand it and confirm it's correct but why wouldn't I accelerate my personal tool development as well?
The amount of people who apparently just want the end result and don't care about the process at all has really surprised me. And it makes me unfathomably sad, because (extremely long story short) a lot of my growth in life can be summed up as "learning to love the process" -- staying present, caring about the details, enjoying the journey, etc. I'm convinced that all that is essential to truly loving one's own life, and it hurts and scares me to both know just how common the opposite mindset is and to feel pressured to let go of such a huge part of my identity and dare-I-say soul just to remain "competitive."
> Am I the weird one here?
Yes. But good weird, not bad weird.

I mean, we're programmers. Even though it's much more popular these days, the very nature of what we do makes us "weird". At least compared to the average person. But weird isn't bad.
(Why are people doing it if they find it so boring? And why the side projects?! I know it pays well, but there are plenty of jobs that do. I mean, my cousin makes more as a salesman and spends his days at golf courses. He's very skilled, but his job is definitely easier.)
> "oh but programming is the boring part, now I can focus on the problem solving"
I also can't comprehend people when they say this.

For starters, it's like saying "I want to learn an instrument but skip practicing scales, that way I can focus on playing songs." The fun part can't really happen without the hard part.
Second, how the fuck do you do the actual engineering when you're not writing the code? I mean sure, I can do a lot at the high level but 90% of the thinking happens while writing. Hell, 90% of my debugging happens while writing. It feels like people are trying to tell me that LLMs are useful because "typing speed is the bottleneck". So I'm left thinking "how the fuck do you even program?" All the actual engineering work, discovering issues, refining the formulation, and all that happens because I'm in the weeds.
The boring stuff is where the best learning and great ideas come from. Isn't a good programmer a lazy one? I'd never have learned about things like functors and template metaprogramming if I hadn't done the boring stuff, like writing a bunch of repetitive functions and thinking "there's got to be a better way!" No way is an LLM going to do something like that, because it's a dumb solution until a critical mass is reached and it becomes a great solution. There's little pressure for that kind of progress when you can generate those functions so fast. (I say little because there's still pressure from an optimization standpoint, but who knows if an LLM will learn that unprompted.)
Honestly coding with LLMs feels like trying to learn math by solely watching 3Blue1Brown videos. Yeah, you'll learn something but you'll feel like you learned more than you actually did. The struggle is part of the learning process. Those types of videos can complement the hard work but they don't replace it.
$$$
It depends on what you're trying to do. I mean if the point of doing anything is a "feeling of accomplishment" why hire anyone to do anything you could do yourself. Why hire a builder to build your home? Why hire a mechanic to fix your car? Why pay a neighborhood kid to mow your lawn? Why hire a photographer for your wedding? Why hire a cook to make a meal? People hire others because even if they could do it themselves, they don't enjoy it but they need or still want the outcome for some reason or another.
Would you want to hire someone to write your blog for you? No, you probably wouldn't if it's a personal blog, so likewise you probably wouldn't want to use an AI for it either. But if it's a marketing blog, like almost every business seems to have on their website these days, full of listicles and vague "did you know" marketing? Sure; it's probably already outsourced anyway, so why not use an AI.
You probably don't want to be using an AI to generate artwork if you're aiming to make a painting that expresses your inner feelings. But if you're making a game and you suck at painting or drawing, you might hire it out, using an AI in that case isn't any different.
But precisely, "AI" is _NOT_ fixing my car or building my home or photographing my wedding!! It's writing a sludge of plausible-looking but empty slop that contaminates everything on the web, it's attempting to automate the visual arts, it's generating fake video that's getting harder and harder to distinguish from real one. It's automating things that SHOULD NOT be automated, and it's NOT automating things that should!
These are negative externalities, indeed, but the producer of the "goods" here does not feel those effects.
"photocamera gives no feelings of accomplishment of creating a picture"
and yet photography is an art of its own, and painting also has not disappeared
---
or heck, "taking digital photos gives zero feelings of accomplishment because you didn't do developing in a redroom"
The issue is the loss of control and intimate knowledge about my own work.
For me, the fun in programming also depends a lot on the task. Recently, I wanted to have Python configuration classes that can serialize to YAML, but I also wanted to automatically create an ArgumentParser that fills some of the fields. `hydra` from Meta does that, but I wanted something simpler. I asked an agent for a design, but I did not like the convoluted parsing logic it created. I finally designed something by hand by abusing the metadata fields of the dataclasses.field calls. It was deeply satisfying to get it to work the way I wanted.
But after that, do I really want to create every config class and fill every field by myself for the several scripts/classes that I planned to use? Once the initial template was there, I was happy to just guide the agent to fill in the boilerplate.
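For illustration, a minimal sketch of that kind of design (the names are made up, not the actual project code): fields that carry a "cli" flag in their dataclasses.field metadata become command-line arguments automatically.

```python
import argparse
from dataclasses import dataclass, field, fields

@dataclass
class TrainConfig:
    lr: float = field(default=1e-3, metadata={"cli": True, "help": "learning rate"})
    epochs: int = field(default=10, metadata={"cli": True, "help": "training epochs"})
    run_name: str = "default"  # no metadata: not exposed on the CLI

def make_parser(cfg_cls) -> argparse.ArgumentParser:
    # Generate one flag per field whose metadata opts into the CLI.
    parser = argparse.ArgumentParser()
    for f in fields(cfg_cls):
        if f.metadata.get("cli"):
            parser.add_argument(f"--{f.name}", type=f.type, default=f.default,
                                help=f.metadata.get("help", ""))
    return parser

if __name__ == "__main__":
    args = make_parser(TrainConfig).parse_args()
    cfg = TrainConfig(**vars(args))   # run_name keeps its default
    print(cfg)
```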
I agree that we should keep the fun in programming/art, but how we do that depends on the what, the who, and the when.
At some point, society and culture does separate the wheat from the chaff, but it takes a generation.
Writing, however, is perhaps the one area where it really is quite literally nothing but a rubber duck for me. I think this past week I have written ~10k words, and the AI suggestions I've taken straight up amount to maybe 10 words, and even those were likely modified.
I straight up hate all its suggestions for how to word stuff; maybe it has something to do with the number of prompt responses I've read over the past year. I imagine if I could generate a nice display of the physical eyerolls I've done this past year, topping that list would be a chatbot starting its response with "you're touching on something" or some other output that's painfully common.
Also, I wouldn't say it's worthless for my writing; it helps me really pinpoint my weak portions. I just don't take any of its suggestions to strengthen them, and find my own.
A lot of American business communication is packing, fluff, or filler, meant either to disguise a lack of knowledge or to avoid making firm statements.
Unless you are very careful, a standard LLM output will wrap a bunch of obvious points in lots of filler language. This works in business because the most toxic phrase you can utter is "I don't know". So we are used to verbal noise, and pick through the filler to glean clues to what the writer actually knows or wants to assert (or not assert)
If you look at modern tech journalism, it's either thinly reworded PR pieces or reiterations of other, not-relevant opinions (see Meta's AR glasses). You skim them to pull out the interesting points (full colour, speakers, battery life, etc.). The rest is just packing.
But for "pleasure" reading, ie stuff thats not directly related to your chain of command, there is no use in reading that shit.
Either it's a story, where you need to impart emotion or a novel viewpoint. Or it's an argument, where you also have a story, with some "facts" that also support an emotion.
That requires some level of understanding of the subject matter, to make a coherent narrative, that doesn't feel empty.
TLDR:
LLMs generally produce business passive, which is almost useless as a form of communication. Just send bullet points.
The money from sponsors that comes with building a popular account goes a long way to mitigate that though.
Within the social milieu of industrialized society, mass production is the point. It's harder to see that when the goods are essentials like clothing or food, where we obtain some utility and any artfulness is secondary to that utility. But when we switch that around, and the artfulness of the good is the primary quality, it becomes very obvious that 10 trillion nearly-identical pictures of cake is just production for its own sake.
If you agree with that, then like they say about prostitutes, it is just a matter of cost, appearance and complexity.
You answered yourself: to get something other than a feeling of accomplishment.
Not realizing there can be some is a failure of imagination.
It seems to me that it still takes a significant amount of luck to end up actually racking up likes that way.
I don't disagree, but on the other hand, searches are not useless. They're limited because you do need to create a query capturing what it is that you're looking for, in advance. But we do that all the time.
Surely, an AI generated text would have been pedantically correct and used the subjunctive mood there, "If that were..."
> When you’re stuck and sit there, thinking, trying to come up with what’s next, that’s the valuable part of writing.
Not just what’s next, but the question of what to write in the first place.
I’ve pointed it out before, but this idea of quiet contemplation is exactly where LLMs completely pratfall. The fewer details or instructions you give them, the less novel the output.
I can’t speak for everyone, but when I want to write a new blog post on my site, it’s precisely the opposite. I dim the lights, sit quietly, and let the neurological brownian motion machine do its thing.
I think this works both ways. Your average plumber doesn't enjoy writing. It's something they might need to do every now and then, but if you give them a magic box that solves the problem, they're gonna be overjoyed. One less chore.
Plumbing or writing, I don't think you can convince people not to take shortcuts by telling them "but the fact it's hard is what makes it worthwhile for you!"
But you can still make the case that writers and plumbers alike who enjoy doing their work for its own sake should embrace the reward effected by conquering the tedium of their trade and not take shortcuts.
The goal of plumbing is to fix / repair; certainly it's possible to enjoy fixing and repairing. But is the joy of writing in "repairing ideas"? How is that a separate concept from creating new ones?
Now, code by gen AI is straightforward in comparison. Coding is not writing poetry, even if the lines also don't reach the right margin.
Now, I had my house replumbed to relocate the well tank, water softener, water heater, and whole-house filter, and to relocate the washing machine and slop sink. That I left to the professionals. But I watched, to learn more.
If I outsource my thinking and can’t see how the box does it, what is the purpose?
- except the cost of materials and gas to drive to the hardware store, which you'll likely do twice or thrice as you realize you bought the wrong thing or need some other specific tool that you'll use once a year or less
- except the cost of your own time away from personal projects and family
- except the cost of hiring a plumber afterwards to professionally fix the problem you caused by DIY'ing it without the knowledge and experience that a professional brings
And now you understand each other!
I have zero interest to learn about plumbing and I would pay a professional even if I could do it myself to avoid any doubts or fears of messing it up.
I think the issue is that it's gotten far easier to be a poseur than ever before.
https://www.linkedin.com/pulse/when-architects-plumb-why-you...
It's easier than ever to be a p99.999% oil painter, but compared with p99.999% film directors basically no one cares at all. Because painting is not in high demand, and film still is, for now.
If the demand for your particular kind of writing vastly diminishes, it is to your detriment. AI's supply effect is changing the demand.
George Eliot already wrote a p99.99999% novel, Middlemarch. It is only thanks to massive population growth that the number of readers of her novel has increased or remained steady. As a proportion of the population, Middlemarch has no readership, and is a side show of a side show. It has almost completely lost its once hallowed place in society and culture.
I had some musings about this with respect to blogging. Especially because search engines are now placing their own summaries above SEO-optimized junk posts. Those posts become disincentivized. Hopefully, it leaves us with more people writing blogs for the sake of writing rather than trying to sell clicks.
The entities doing endless reposts are building bigger audiences faster, and might or will repost your standout piece.
If you care that you wrote it and that people enjoy it, all is good. But if you wanted to stand out with it or build around it, the low-effort reposters might take that from you.
On rare occasions I've even seen a reposter shorten or better edit the original piece.
Though I am still hopefully optimistic that in the long run it will be for the good.
Call me old-fashioned, but when has this ever not been true? Like, yeah, does someone read CliffsNotes and go, "that was really edifying and I gleaned incredible insights into myself and the world!!!"?
I think this just widens the gap between people who give a shit and those who don’t
the big thing that changes is the economics of laziness and slop
Of course this isn't always true but it's true quite often.
Take one random example - Spark: The Revolutionary Science of Exercise and the Brain.
The idea is in the title. You don't need to read more than that to benefit from the idea. But all the different varieties of benefit and pathways and studies the author cites are still valuable.
Range: Why Generalists Triumph in a Specialized World
The idea is also in the title, and it displays so many different scenarios of people engaging with specialized fields and interacting with them in ways that relate to their past experiences.
Productivity hacks and pop psychology are not what we're talking about here. We're talking about interesting works of non-fiction. And if it's fiction, and you think that there is "one idea" and you can skip the rest, I don't know what to tell you.
We had a literary explosion in the last few decades where the competitive advantage of reading may have reached its nadir. (The supply side also screwed the pooch. Recent non-fiction has been polluted with fluff. Literature, on the other hand, is in a renaissance.)
In the last two years, on the other hand, I’ve found significant advantage in being able to speak, write and read clearly. The only thing I can think of is people marking themselves through LLM use, directly and indirectly.
I'd hazard a guess that from the writer's perspective, novelty scales with volume of thought / connections, which is (at present) a fragmented process and not that well-assisted by AI. OTOH, can "writing quality" be better approximated by LLMs?
I really want this to be the case, but what I've observed so far is that slop networks with thousands of domains and millions generated articles simply drown out everything else. It's becoming increasingly difficult to tell apart pages written by humans from those written by conmen, especially if I'm not an expert on the subject matter.
As an incredibly egregious example, here's one of the top results (#1/#2 on DuckDuckGo) for "wireguard mesh": https://www.ltwireworks.com/blog/how-to-configure-wireguard-.... Yes, it's a grill mesh manufacturer.
A work-related example I have is using AI to generate project plans. LLMs can probably generate an OK project plan for straightforward projects with plenty of examples to be trained on. But perhaps the most important value of generating a plan is the thinking that goes into it. Considering alternatives, likely failures, unlikely failures, etc. In generating the plan you are starting to practice dealing with problems that would come up while implementing it. The knowledge in your head is more valuable than the document produced. The document is just a summary of all the thinking you have done. Essentially a collection of mnemonics. Many details in your head will never make it into the formal plan, but will be needed during implementation.
You can build strength-focused programming around cable machines --- and people have. "They're safer and work target muscle groups more efficiently" is usually the argument. A Life Fitness Synergy system is also much more practical to own inside one's house than a power rack and 1000+ lb of plates that will make quick work of most home flooring.
This strategy works. It's sure as shit better than doing nothing. But quadriceps, delts and lats don't work in isolation. They rely on secondary and tertiary muscles and entire kinetic chains to help them accomplish tasks.
Cables do hit muscle groups directly, but they also lead to diminishing strength and physique returns much more quickly than boring traditional weight training. They also lead to problematic muscle imbalances that, ironically, can cause overuse injuries later in life (super heavy leg extensions with improper knee flexion come to mind).
All this fluff about targeting specific muscles etc. is simply not analogous to LLMs. Maybe old-school barbells are paper files and fax machines, and cable machines are Slack, Asana, and Excel?
- https://claytonwramsey.com/blog/prompt/

This book is probably not the greatest resource for establishing a zettelkasten, but it is very good at demonstrating how a good externalized writing system is critical for getting good learning done and finding unique insights. Also, it addresses the Wikipedia issue I had specifically. As another person mentioned in this thread, making the notes more like proper publishable writing (even if just a sentence) made a big difference.
However, what really clicked with me about the book was the hypothesis that true human thinking can only be done externally, through writing, due to the limitations of our brain as a platform. The book lists out things like recency bias and short term memory limitations that get in the way of proper, structural thinking that results in actual insights. Whereas maintaining a zettelkasten, or a simulacrum of one at least, externalizes your thought process and allows you to achieve genuinely your maximum potential for thought.
The arguments went beyond the normal ones about the recorded benefits of note-taking for learning, memory, and creativity, and got into the aspects unique to a zettelkasten that make it an enabler for thinking. However the book also pitches this as a productivity boost for authors and researchers, and doesn't really seem to care about people who are just learning for the sake of learning (but it does make a solid case that building a zettelkasten makes learning more fun).
Personally I've been reading criticisms of the book as a way to learn how to maintain a zettelkasten that I agree with: it's not specific or clear enough, and it defines too many different kinds of notes (and not all at once; some note types are defined like 3/4 of the way through the book). For me it was just a very convincing argument to stop trying to make my brain do things it isn't good at - stop beating myself up trying to memorize super detailed facts, let my external system handle that. Stop worrying about forgetting bits and bobs of the various books I've read, let my external system slowly create a map of ideas of everything I'm reading. Stop over-optimizing all my note taking systems and just scratch shit into a paper pad, to be indexed as a good zettel later (or just thrown away if I decide it's not helpful).
So, though I do intend to use this system to fuel my blog, I think I'd still find value in it just in feeding the conversations I have as well. I'm deeply interested in non-traditional politics, leadership, and activism, and with the system I've adopted I'm finding myself making connections I don't think I'd have made before; for example, this very idea of externalization and scaffolding of human thought as a means to make up for our flaws - I'm finding similar threads in all sorts of things I read now.
If you're interested in zettelkasten, I would recommend a different resource for learning how to actually set one up (just the internet plus ChatGPT is probably fine, plus some FOSS software). I will say, if it's taking too long, whatever you're doing is too complicated. It should take a single click or button press to make a new note, and it should be very easy to scan through your notes and make links every once in a while, and making a link should be no more than a highlight, a button click or press, a search, and a confirmation. If you're anything like me, you may spend more time setting something like this up and agonizing over it than you will using it... that's why I moved from org-roam to Trilium, so I could just stop hyper-optimizing and start using the damn thing.
As the author says, the time you spend stuck is the time you're actually thinking. The friction is where the work happens.
But being stuck doesn't have to suck. It does suck, most of the time, for most people; but most people have also experienced flow, where you are still thinking hard, but in a way that does not suck.
Current psychotechnology for reducing or removing the suck is very limited. The best you can do is like... meditate a lot. Or take stimulants, maybe. I am optimistic that within the next few decades we will develop much more sophisticated means of un-suckifying these experiences, so that we can dispense with cope like "it's supposed to be unpleasant" once and for all.
The other day I was on LinkedIn and a Chief Design Officer at a notable company posted her reflections on leadership for the year. There were some potentially interesting insights, but they never got past a surface level. The AI-ness of the writing was as clear as day (and GPTZero tagged it as 100% likely to be AI).
It’s disappointing when you see leaders and so-called stewards of taste farming out that part of their voice.
The bland platitudes of corporate management were mind-bogglingly boring drivel even before LLMs went mainstream. What we get nowadays is just more of it. That stuff should be skipped anyway: what was not worth writing is not worth reading.
I totally disagree with this point. It's a combination of wishful thinking and denial. LLMs do a very fine job at writing if you give them the right base of information and insights. I think this will totally obliterate 'writing' as a differentiating skill.
What will happen, IMO, is that people who have interesting ideas and experiences but suck at writing will have the upper hand. The market for content will be flooded by articles from people who would normally not write. They will feed the LLM bullet points of interesting facts and observations and let it fill in the gaps and actually make the article engaging. What matters is that the core points are interesting. The AI cannot come up with brilliant insights, but it can convey brilliant insights really well.
I think even if, hypothetically, some people could tell AI-generated content apart from manually written content, some AI-generated content may actually be more interesting and valuable to read than the manually written kind...
At the end of the day, writing by itself doesn't matter; it's just a communication medium. What matters are insights, ideas, concepts, perspectives... It was always about substance, not form. It's a flaw of the human mind that some people used form as a proxy for substance.
There are a lot of people who know a lot and have a lot to say, but they were so busy experiencing and learning that they never had time to write... and even if they did, they couldn't convey their ideas effectively.
Now that LLMs have mastered the superficial aspects of communication, those aspects are no longer valuable and substance matters more. But IMO nobody will care in the future whether articles or books were written by AI. It won't have much effect on the quality or value of the book or article.
I think what will matter in the future are:
- Insights, ideas, perspectives.
- Media (still the most important): whoever intermediates content distribution gets to decide what people consume and can shape their perception of quality to a significant extent.
I'm hoping that as more people get involved in writing with LLMs, it will force more people to confront the second point... People will be forced to pay more attention to substance, as it will be the only real differentiator. I'm hoping people will begin to feel disgusted by the low level of substance that current media platforms purvey... It's already kind of happening; people invented the term "AI slop", but really it's not just AI that produces slop. The media has been guilty of spreading slop for quite some time, and it kept getting worse. Now AI is just a convenient scapegoat to bash.
I think this may be a form of denial. The reality is likely the opposite: AI will commoditize the act of writing entirely, shifting the value solely to insight.
For too long, we’ve confused "good writing" with "good thinking." We assumed that if someone wrote beautifully, they had something smart to say. Conversely, we ignored brilliant people simply because they couldn't articulate their complex ideas effectively.
AI fixes this market inefficiency. It allows experts who are too busy actually doing things to finally compete with professional writers. They provide the raw brilliance (the substance), and the AI provides the polish (the form).
Yes, I actually did this as an experiment.
From my perspective they are both different ways to communicate the same idea (with different effectiveness, different levels of detail, to different audiences). I don't regard my Gemini-generated one as being any less 'my own work' than the one I painstakingly wrote by hand.
It gets to the core of what writing should be about. It should be about substance, not form. LLMs are equalizers when it comes to form. Time to focus on substance now!
I don't see how AI helps here. If you can't articulate your idea, then
1) how clear is that idea in your head anyway
2) how are you going to articulate it to the LLM?
For example, I'm bilingual and I tend to think in visual and abstract concepts and then translate to the target language as a separate step. It doesn't necessarily come out exactly right the first time. I often re-read what I wrote and see ambiguities which could cause someone to misinterpret what I'm trying to say.
Also, I tend to over-elaborate and struggle to understand other people's mental models. You need to understand your audience really well in order to convey points effectively, or else you might bore them, or your ideas might seem to go off on a tangent when you're actually laying the foundation for the point you're trying to convey...
For example, as an experiment, I posted my previous comment twice: once handwritten, once transformed by Gemini (the one you responded to). The transformed one did better and got more engagement... It said the same thing, but punchier and shorter. It doesn't waste words laying the groundwork because it has a better sense of what you, as the audience, already know given the conversation context.
This comment here is handwritten. I suspect it's probably not as punchy or to-the-point from your perspective.
So, to summarize: I think LLMs can help some people more than others, which fits the point I was trying to make, that they will empower people to write who previously would not.
My point is that LLMs require language as input. Therefore you need to be able to articulate your ideas in language for an LLM to even be of use.
It's fine if you want to use the LLM as an editor etc., but for what it's worth, I found this comment more engaging to read than the LLM-generated one. The LLM one has a very "generic" feel to it.
If you can't articulate your complex idea to a human, what's the reason to believe an LLM would understand it better?
Casual painting also "makes you remember how to see" and so on; that doesn't mean taking photos stops you. It's just different.
If your rebuttal is "Michelangelo would've only painted the broad strokes and the faces" you're still missing the point that he still /did some painting/.
Both photography and AI are literally "click a (shutter) button", so the photo analogy is perfect.
And Michelangelo is a bad example because it's "ye olde paintings" (you could've at least tried Picasso or something), while my argument would be "painters got replaced by photographers".
> This is why reading actual books in full might now be more valuable than it ever has been: Only if you’ve seen every word will you discover insights and links an AI would never include in its average-driven summary.
Is summarizing by a human much different? Let's check if the author has a consistent stance on reading every word.
> The 4 Minute Millionaire: 44 Lessons to Rethink Money, Invest Wisely, and Grow Wealthy in 4 Minutes a Day
> This book compiles 44 lessons from some 20 of history’s best books about money, finance, and investing. Each lesson can be read in about 4 minutes and comes with a short action item.
Hmmmm
Aside from that, it seems more valuable to consider the ideas in the blog on their own merit, rather than attacking the writer for not having been true to those ideas in every past action.
One thing I have noticed about AI-generated summaries, and it drives me up the wall, is that most of the time they aren't decent summaries at all. They read like summaries of a summary.
For instance: "This document describes a six-step plan to deploy microservices to any cloud using the same user code, leading to various new trade-offs."
OK, so what are these six steps and what are the trade-offs? That would be the real summary I want, not the blurb.
The point of a summary is to tell me what the most important ideas are, not make me read the damn document. This also happens with AI summaries of meetings: "The team had a discussion on the benefits of adopting a new technology." OK, so what, if any, were the conclusions?
Unfortunately, LLMs have learned to summarize from bad examples, but a human can and ought to be able to provide a better one.
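For what it's worth, I get closer to the summary I actually want when the prompt demands the specifics up front. A rough sketch of what I mean, in Python; call_llm is a deliberate placeholder for whatever client you use, and nothing here is a claim about a particular API:

    # Ask for the contents, not a blurb about the contents.
    SUMMARY_PROMPT = """Summarize the document below. Do NOT tell me what
    the document is about; extract what it actually says:
    1. Name every step, item, or decision (e.g. all six steps of a
       six-step plan), each with a one-line explanation.
    2. State every trade-off and every conclusion explicitly.
    3. If the document reaches no conclusion, say so outright.

    Document:
    {document}
    """

    def summarize(document: str, call_llm) -> str:
        """Fill the template and hand it to the model."""
        return call_llm(SUMMARY_PROMPT.format(document=document))

It still flattens things sometimes, but at least the blurb-style output becomes an obvious failure you can see, instead of the default you have to accept.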
It is time for a new web. A new standard, a new everything. A new start without the AI bloat. Either something like this will emerge, or we will lose the web we have.
My thoughts exactly. In all my interactions with gen AI it was always the same: on the surface it looks pretty convincing, but once you look more deeply it's obvious nonsense. AI is great at superficial imitation of human-created work. It fails miserably at anything deeper.
I think the biggest problem with AI is that most people just don't take the time or effort anymore to really look at an image, really read a text, or really listen to a piece of music, or a podcast. We've become so habituated to mindlessly consuming content that we can't even tell anymore if it's just a bunch of stochastic nonsense.
You can try running a Turing test. I've met several people claiming they can always spot AI art; none of them actually can (and AI art has gotten even better now!)
For four billion people, using an LLM to create things is a marked improvement. I'm not sure how you'd explain the phenomenally widespread use of LLMs otherwise.
By the way: Can you tell whether my comment (this one) was written by an LLM or not?
Most non-technical people (including most creators) have an extremely naive view of what LLMs are, driven mostly by the media and by shills targeting audiences that aren't creative, and their response to LLMs is shaped by that.
So far we’ve trained it only on the outputs of our weird thinking process.
The suck is why we're here.
Your suck is my profit margin.
Thinking/writing isn't "the suck".
> because the more people take shortcuts, the less quality will remain for readers to flock to, even if the overall quantity of options is much larger.
The creators of The Enhanced Games/Olympics would disagree with you.
Which brings me to my point: Are we satisfied being "Top Slave" or do we want to be Free? Or do you believe that Freedom is an illusion?
I’m afraid people will just get used to AI-written books and go on with it. Just like what happened on YouTube.
It’s only going to reduce writers’ opportunities, because their earnings will decrease.
I'm always surprised when people say they use LLMs to do stuff in their Journal/Obsidian/Notion. The whole point of those systems is to make you think better, and then you just offload all of that to a computer.
A tale as old as time.
EDIT: Trying to stay on topic and score some po--, cargo I mean...
Doesn't apply to all people, but maybe there should be phases: cosplay hard, then reflect and realign.
I am noticing that I am very quick to get excited about a thing and also very quick to lose the motivation to pursue that new thing to a meaningful level of understanding and mastery.
Yesterday I was excited about something that I wanted to build a proof-of-concept of and blog about proudly. It might take 2-3 days of intermittent effort, juggling between other things, but god was I excited to see it through.
I reaped great dopamine learning the first 30% of the stuff by the end of the day.
Today I woke up wondering what got me so excited yesterday. Of course, now that I know the basics, parts of it even seem obvious; would anyone really be interested in me talking about it?
If I threw my hat over the fence by cosplaying an active builder and blogger... maybe I would have seen it through the 3 days of commitment?
In your opinion, what is the differentiating factor?
I think AI is a great tool in certain circumstances, but this sounds like one of the clearest examples where it is brain rot.
I've found LLMs work reasonably well for that: just copy-paste the blob of thoughts in and have them summarize the key points back to me in a more coherent form.
If I understand something well, I can write something coherent easily.
What you describe feels to me along the lines of studying for an exam by photocopying a textbook over and over.
To imagine LLMs have no use case here seems dishonest. If I don't understand a particularly hard part of the subject matter and the textbook doesn't expand on it enough, I can tell the LLM to break it down further, with sources. I know this works because I've been doing it with Google (slowly, very slowly) for decades. Now it's just way more convenient to get to the ideas you want to learn about and expand on them as far as you want to go.
In some cases yes I'll synthesize that myself into something more coherent. In other cases an LLM can offer a summary of certain themes I'm coming back to, or offer a pseudo-outsider's take on what the core themes being explored are.
If something is important to me I'll spend the time to understand it well enough to frame my own coherent argument, but if I'm doing extremely explorative thinking I'm OK with having a rapid process with an LLM in the loop.
I'm not using LLMs for my notes, but "think better" has never been a goal for me.