Not to take any words it gives, but to read what it says, decide whether those things are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.
For one, the world doesn't need to be that way, i.e. we don't need to "leave behind" anyone who doesn't immediately adopt every single piece of new technology. That's simple callousness and doesn't need to be ruthlessly obeyed.
And for two, it's provably false. What is "the future?" VR? The metaverse? Blockchain? NFTs? Hydrogen cells? Driverless cars? There has been exactly ZERO penalty for not embracing any of these, all sold to us by hucksters as "the future".
We're going to have to keep using a classic piece of technology for a while yet, the Mark 1 Human Brain, to properly evaluate new technology and its place in our society, and we oughtn't be reliant on profound-seeming but overly simplistic quotes like that.
Be a little more discerning, and think for yourself before you lose the ability to.
Do you have kids? Outside of discipline (and even there), I want to have a positive relationship with my sons.
My oldest knows that I am not a writer. There are a ton of areas where I can give legit good advice, and I can actually have a fun conversation about his stories, but I have no qualifications to tell him what he might want to change. I can say what I like, but my likes/dislikes are not what an editor offers. I actually stay away from dislikes on his writing, because who cares what I don't like.
I would rather encourage him to write, write more, and get some level of feedback even if I don’t think my feedback is valuable.
LLMs have likely been trained on all published books; in that sense, it IS more qualified than me.
If he continues to write and gets good enough, should he seek a human editor? Sure.
But I never want to be the reason he backs away from something because my feedback was wrong. It is easier for people to take critical feedback from a computer than from their parents. Kids want to please, and I don't want him writing stuff because he thinks it will be up my alley.
It has also been trained on worthless comments on the internet, so that's not a great indicator.
Do you want an LLM to be the reason? You can explain that your feedback is opinionated or biased. And you know him better than any machine ever will.
You think you shouldn't give advice because your feedback is not valuable and may even cause your son to give up writing, but you have so far given no reason why AI wouldn't. From the whole ChatGPT "glazing" incident, I can also argue that AI can give bad feedback. Heck, most mainstream models are fine-tuned to sound like a secretary that never says no.
Sorry if this sounds rude, but it feels like the real reason you ask your son to get AI feedback is to avoid being personally responsible for mistakes. You are not using AI as a tool; you are using it as a scapegoat in case anything goes wrong.
- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.
- School (sometimes library) writing workshops. These help students develop bonds with their peers and benefit both sides: the students giving feedback are learning to be better editors.
Both of these offer a lot of value in terms of community building and getting feedback from people vested in the craft of writing.
I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.
I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.
Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.
I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents, which are then positioned as the only mechanism for receiving certain information.
As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says, "Hey, did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30-minute commute and pick up your girlfriend! Want me to send her a notification that you want to do this?"
People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.
MCP will probably kill the web as we know it.
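To make that concrete: here's a hypothetical sketch of a theater exposing showtimes as an MCP tool instead of a web page. It assumes the official `mcp` Python SDK; the server name, tool, and data are all made up for illustration.

```python
# Hypothetical sketch: a theater publishing showtimes to agents over MCP
# rather than via a website. Assumes the official `mcp` Python SDK;
# the server name, tool, and data are invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("regal-cinema")

# Stand-in for a real booking backend.
SHOWTIMES = {
    "F1": ["17:00", "19:30 (IMAX)", "22:00"],
}

@mcp.tool()
def showtimes(title: str) -> list[str]:
    """Return tonight's showtimes for a given movie title."""
    return SHOWTIMES.get(title, [])

if __name__ == "__main__":
    mcp.run()  # agents connect here; no human-facing site required
```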
This. I need access to banking, maps, and 2FA. If I could use a dumb phone with just a camera, GPS, and WhatsApp, I would.
The ability to transfer funds is something I'm now fine doing via a dedicated device with a dedicated password manager account, and I'm fine uninstalling banks' apps from my phone and dis-enrolling cell phone numbers.
Given the wanton collection and sale of my data by many entities I hadn't expected (naivety on my part), I've restricted access to critical services to specific devices and/or web browsers only. It's had the added bonus of making me more purposeful in what I'm doing, albeit at the expense of some convenience. Ultimately, I'm not saying my approach is right for everyone, but for me it's felt great to take stock of historical behavior and act accordingly.
It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.
And the richer you are, the more freedom you'll have to opt out and manage your own decisions.
Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.
That's AI.
It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.
When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or suggest ways to troubleshoot a problem I am having. I don't always follow its advice, either, but that depends on how much I understand the reply. Sometimes it outputs something that makes sense at my current skill level; sometimes it proposes things I know nothing about, in which case I ask it to break them down further so I can go search the internet for more info, see if I can learn more, and push the limits of my skill level.
It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.
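For concreteness, that review loop can also be scripted instead of pasted into a chat UI. A minimal sketch, assuming the `anthropic` Python SDK; the model alias and file path are placeholders:

```python
# Sketch of a "review, don't rewrite" request to Claude.
# Assumes the anthropic Python SDK; model alias and path are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("my_module.py") as f:
    code = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Review this code. Suggest improvements or ways to troubleshoot, "
            "but do not rewrite it wholesale:\n\n" + code
        ),
    }],
)
print(message.content[0].text)
```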
When I talk to other people, they accuse me of having the AI do all the work for me, because that's how they approach their own use of it. They want the AI to produce the whole project, as opposed to using it as a second brain to offload some mental chunking. That's where Gen AI fails, and the user ends up spending all their time correcting convoluted mistakes caused by confabulation, unless they're making a simple monolithic program or script, and even then there are often hiccups.
Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.
It works, but it's simply not what most people want. If you love to code, then you've just abstracted away the most fun parts and now only get to do the boring parts. If you love to manage, well, managing actual humans and seeing them grow and become independent is much more fulfilling.
On a side note, I feel like prompting and context management come easier to me personally as a person with ADHD, since I am already used to working with forms of intelligence different from my own. I am used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When I nudge them to give it more context and explain better what they need, they often resist and say they shouldn't have to. Of course I am stereotyping a bit here, but it's still an interesting observation.
Prompting is indeed a skill. Though I believe the skill ceiling will lower once tools get better so I wouldn't bank too much on it. What is going to be valuable for a long time is probably general software architecture skills.
People keep writing this sentence as if they aren't talking to the most tool-ed up group of humans in history.
I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.
Hell, I have preferred ligature fonts for different languages.
Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.
We have intelligent people using AI and claiming it's useful.
And we have other intelligent people saying it's not useful.
I'm inclined to believe the former. You can't really be deluded about usefulness in the positive direction, but you can be in the negative, simply by using the LLM in a half-assed way and picking the most convenient conclusion without nuance.
If LLMs eventually become useful to me, I'll adopt LLMs, I suppose. Until then, well, fool me once…
Then again, maybe I'm too old now and being left behind, if I'm remembering the old hype like this...
The entirety of the tech field is constantly hyping the current technology out of FOMO. Whether or not it works out in the future, it's always the same damn argument.
now imagine he can be scaled indefinitely
you thought software was bad today?
imagine Microsoft Teams in 5 years' time
In my case, I find the value of LLMs with respect to writing is consolidation. Use them to make outlines, not writing. One example: I record voice memos when driving or jogging and turn them into documents that can be the basis for all sorts of things. At the end of the day it saves me a lot of time and arguably makes me more effective.
AI goes bad because it's not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit them.
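For what it's worth, that memo-to-outline consolidation is easy to wire up. A minimal sketch, assuming OpenAI's Python SDK; the model names and memo path are placeholders:

```python
# Sketch of a voice-memo-to-outline pipeline: transcribe, then consolidate.
# Assumes OpenAI's Python SDK; model names and the path are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def memo_to_outline(audio_path: str) -> str:
    # 1. Transcribe the raw voice memo.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        )

    # 2. Consolidate into an outline; the human does the actual writing.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Turn this rambling transcript into a terse, "
                        "hierarchical outline. Do not add new ideas."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

print(memo_to_outline("morning_jog_memo.m4a"))
```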
I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.
I used to have a sort of puritanical view of art, and I think a younger version of myself would have been low-key horrified at the amount of work in great art that was delegated to assistants. E.g., a sculptor (say, Michelangelo) would typically make a miniature to get approval from patrons, and the final sculpture would be scaled up. Hopefully for major works the master was closely involved in the scaling up, but I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.
The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.
I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.
[0] Aside from internet comments of course, which are mostly stream of consciousness.
When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes afterward. That's what would inevitably turn my papers from a B into an A. It's basically the same thing in principle: I need to see something written in a way I would not write it, or I start talking in circles/getting too wordy. And yes, this comment is an example of that, haha.
https://en.wikipedia.org/wiki/David_(Michelangelo)#Process
Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Even research was something many authors simply could not afford to delegate.
Well, it has a problem with my use of the Oxford comma, for one. Because a huge amount of the corpus is American English, and mine ain't. So it fails on grammar repeatedly.
And if you introduce any words of your own, it will sometimes randomly correct them to something else, and randomly correct other words into your made-up ones. And it can't always tell when it has made such a change. And sometimes it does that even if you're just mixing existing languages like French and English. So you can make it useless for spellchecking by touching more than one language.
I do keep trying, despite the fact that my stuff has been stolen and is in the training data, because of all the proselytising, but right now... no AI is useful for my writing. Not even just for grammar and spelling.
Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential profit over the remainder of their careers, even when taking the new tech into account.
Writing is entirely different, and for some reason generic writing, even when polished (the ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And the end user of a product cares 0 or next to 0 about AI code.
Coding, robotics, navigation of constrained data spaces such as translation, tagging, indexing, logging, parsing, data transformations… those are all strong target candidates for transformer architecture automation.
Creative thought is not.
But I still don't like that the same model struggles w/ my projects...