I'm also at the peak of my game, and I automated templates, snippets, and Stack Overflow lookups a decade-plus ago. I prefer reading a discussion of the tradeoffs between approaches before picking one. It may take up to ten more minutes up front but save hours later.
So I'm waiting for the dust to settle.
Maybe other people are better at prompting or designing VSCode integrations, but what I've experienced so far has been a mess: utterly nonsensical design decisions, and it doesn't seem to understand basic linear algebra or the LAPACK API. (I tried adding the Fortran or C source to its context, to no avail.) I asked it to rewrite a well-documented scalar function using AVX intrinsics and... woof. No good.
Hopefully the field either improves dramatically in a couple years or goes back into hibernation for the next AI winter.
> Photo by Nate Edwards/BYU Photo
So is it a photograph or AI generated?
Wait, so if you’re worried about an AI apocalypse, you’re not using it. What does that solve?
> Steffen and Wells found that most non-users are more concerned with issues like trusting the results, missing the human touch or feeling unsure if GenAI is ethical to use.
Are the ethical users avoiding LLMs entirely? Or only for certain use cases?
It solves acting according to one's principles and against what one perceives as harmful? I don't understand the question.
True engineering requires discipline; anything short of that philosophy is brain rot, and you will pay the price in the long term.
AI is built essentially on averages. It’s the summary of the most common approach to everything. All the art, writing, podcasts, and code look the same. Is that the bland, unimaginative world we’re looking for?
I love the bit in the study about the “fear” of AI. I’m not “afraid” it’ll produce bad code. I know it will; I’ve seen it do it 100 times. AI is fine as one tool to help you learn and think about things, but don’t use it as a replacement for thinking and learning in the first place.
It is, but that also means if you prompt it correctly it will give you the answer of the average graduate student working on theoretical physics, or the average expert on the historical inter-cultural conflict of the country you are researching. Averages can be very powerful as well.
In my experience the way you prompt is less important than the “averageness” of the answer you’re looking for.
Quoting https://buttondown.com/hillelwayne/archive/ai-is-a-gamechang... about https://zfhuang99.github.io/github%20copilot/formal%20verifi... "In the post, Cheng Huang claims that Azure successfully used LLMs to examine an existing codebase, derive a TLA+ spec, and find a production bug in that spec." This is not the behavior of the "average" anything.
As a side note: LLMs by definition do not demonstrate “understanding” of anything.
Getting an average response by necessity gives you something dumbed down and smoothed over that nobody in the field would actually write (except maybe to train an LLM or contribute an encyclopedia entry).
Not that having general knowledge is a bad thing, but LLM output is not representative of what a researcher would do or write.
We must have dramatically different approaches to writing code with LLMs. I would never implement AI-written code that I can't understand or prove works immediately. Are people letting LLMs write entire controllers or modules and then just crossing their fingers?
Doing security reviews for this content can be a real nightmare.
To be fair, though, I have no issue with using LLM-created code, with the caveat being YOU MUST UNDERSTAND IT. If you don’t understand it well enough to review it, you’re effectively copying and pasting from Stack Overflow.
Having said that, LLMs have saved me a ton of time, caught my dumb errors and typos, helped me improve code performance (especially database queries), and even clued me in to some better code-writing conventions and updated syntax that I hadn't been using.
Most AI code doesn't come with its prompts, and even if it does, there's no guarantee that the same prompt will produce the same output. So it's like reading human code, except the human can't explain themselves even if you have access to them.
And even if you understand the code, that doesn't mean it is maintainable code.
> We must have dramatically different approaches to writing code with LLMs.
I’ve seen this same conversation occur on HN every day for the past year and a half. Help! I think I’m stuck in an llm conversation where it keeps repeating itself and is unable to move onto the next point.
But claiming those strings as one's own is a bridge too far. Of course one might want to avoid inadvertently creating strings that others have already created. Autocomplete can prevent that. But people will inevitably need to create new strings that no one else has created before. There is no substitute for the thinking behind the creation of new strings. Recombining old strings is not a substitute.
"AI" is being marketed as a substitute. Recombination of past work is not, by itself, new work or new thinking. As with autocomplete, there are limits to its usefulness.
For software developers who hate "intellectual property" and like to take ideas from others, this may be 100% acceptable. But for non-software developers who seek originality, it might fall short.
When the people invested in "AI", e.g., Silicon Valley wonks, start throwing around terms like "intelligence" to describe a new type of autocomplete, when they fake demos to mislead people about its limits, then some people are going to lose interest. Software developers betting on "AI" may not be among them. The irony is that software development is already so rife with economically justified mindless copying and unoriginality that software quality is in a free fall. "AI" is only going to supercharge the race to the bottom.
Like it or not, the market wants "bad code". It loves mindless copying. It has no notion of "code quality". It demands minimisation of "developer time". Perhaps "AI" will deliver.
With that said, getting it to create boilerplate code is pretty useful, but not all that important a part of my job.
Resistance to Generative AI: Investigating the Drivers of Non-Use - https://scholarspace.manoa.hawaii.edu/server/api/core/bitstr...
All the reasons given are fears:
Output Quality - Fears that...
Ethical - Fears about...
Risk - Fears that...
Human Connection - Fears that...
Impairment - Fears that...
Creativity - Fears that...
My disuse is all about flow and value, not fear. The way I use it is for refining ideas at a higher level, not outputting code/content/etc. (except for rote work). They also only surveyed a few hundred people via Prolific.
The product's success (millions of users) implies that for most people, concerns over "ease of use" (which is what I'd code your reason of "flow" as) aren't common, because it's quite easy to use for many scenarios. But I'd still expect the concern to come up for those talking about using it for artwork, because even with things like inpainting in a graphics editor it's still not exactly easy to get exactly what you want... The study mentions they consolidated 29 codes into the 8 in table 2 (you missed the two general concerns, Societal and Barrier). Perhaps "ease of use" slides into "Barrier", as they highlight "lack of skill" as fitting there and that's similar. It would be nice to see a sample of the actual survey questions, answers, and coding decisions, but hey, what is open data, am I right.
Anyway, the table headings are "General Concerns" and "Specific Concerns". I wouldn't get too hung up on the use of the term "fear" as the authors seem to use it interchangeably with "concern". I'd also read something like "Output Quality: fears that GenAI output is inaccurate..." synonymously as "has low confidence in the output quality of GenAI". (I'd code your "value" issue as one about output quality.) All of these fears/concerns/confidence questions can be justifiable, too, the term is independent of that.
Human thought is implemented by a system that has adapted for hundreds of millions of years in diverse environments. We are adapted to huge variations in resources, threats of innumerable kinds, climates, opportunities, social and ecological relationships, etc, and many of its adaptations may be adaptations to control, balance or modify its other adaptations. It would be crazy to expect human intelligence to be what we could describe as optimized for something, and it would be crazy to expect humans to be able to figure out what that something is even if that were true. Perhaps our minds have gotten us here, and they cannot get us out of here, but they maintain some pretty strong links to our natural environment, which is still our landlord.
AI, OTOH, is a new kind of creature of a single time and a monoculture -- the internet. I don't talk to AI; perhaps someone has asked AI how much fear we should have of AI, and what the odds are.
While I'm sure it's a useful tool in some situations, and I don't begrudge anyone who finds value in it, it simply doesn't fit into my life as something that would be useful on a regular basis.
In a similar vein, when people find out that I ride a bicycle, their first question is why I don't ride an e-bike.
I only occasionally try it out for specific tasks and have never felt the inclination to make it part of any daily process, but his mindset was such that he couldn't conceive of anyone not wanting to fully dive in every day, and that those who didn't were missing out on significant value in their lives.
Devs and others recognize that the tech is very useful but not “magic”.
The vast majority of uses are your typical Silicon Valley hype, jargon filled bullshit that sells half-baked products to the tech illiterate folks in the C-suite.
TL,DR: I don't use it for writing (I want to say something original in my own voice), but I do use it for copy editing (improving wording, helping with title ideas, etc.).
Our CTO became extremely enthusiastic about ChatGPT, said that programming would be a dying job, and tried to show during a presentation how good ChatGPT was by asking it to write basic code related to our tasks. It produced total garbage that could not be used even as a starting point. The CTO tried to prompt it in the needed direction, but that only made things worse.
After the presentation I searched for the task from the presentation. It turned out there were very few Stack Overflow or GitHub entries about it, as the topic was rather specialized, and ChatGPT had tried to average those few entries into a solution.
Within a month, another recent hire and I had departed from the company. And a year later the company was hiring programmers again.
Out of curiosity I repeated the task a few times with different models, each time getting the same garbage.
So my rule of thumb is that if a task generates a lot of search hits, then perhaps an LLM can average that knowledge into something reasonable. If not, averaging is not possible.
On one hand, training it isn't "copying" per se, but "learning", so maybe it isn't straight-up copyright infringement, unless it can reproduce large parts identically. It could also allow small teams or individuals to have a much larger impact in the world and could lower the barrier to entry for research and experimentation, maybe even other endeavors. It certainly could help with knowledge sharing and accessibility, where downstream creativity and usefulness can outweigh diffuse individual harm. Maybe it expands the creative field rather than shrinks it; that'd be a good thing.
But on the other hand, many models (datasets) are built from copyrighted works without permission or royalties, and LLM availability could reduce demand for the work people do for a living, eroding fields instead of expanding them. Most releases today are fairly opaque about their training data: most datasets are undisclosed, and it's hard if not impossible for authors to have any agency over whether their work is included. Maybe, if LLMs remain, it'll become hard to sustain cultural production; that'd be good for no one.
So then what is the best approach for someone who doesn't want to forfeit the usefulness they themselves experience, but also doesn't want to go directly against what the ethical considerations bring up? In the end I don't know if there is an easy or right side to take; I guess the optimum usually sits somewhere around the middle, not at the extremes at least.