frontpage.

An Open Letter to the Leadership of Sequoia Capital

https://shaunmaguire.fyi/
2•sudohalt•2m ago•0 comments

Study claims the universe will start shrinking in 7B years

https://bgr.com/science/new-study-claims-the-universe-will-start-shrinking-in-7-billion-years/
1•Bluestein•3m ago•0 comments

Ubisoft's EULA ordering you to destroy your games isn't new, nor is it unique

https://www.thegamer.com/ubisoft-eula-clause-destroy-your-games-is-not-new-or-unique/
2•smusamashah•3m ago•0 comments

Built this in 4 days: Rain Season Tracker Online

https://rainseason.vercel.app/#hero
1•ionlyusegoogle•3m ago•1 comments

Radial Attention: O(n log n) Attention for Long Video Generation with 2-4× Speedup

https://hanlab.mit.edu/blog/radial-attention
1•lmxyy•5m ago•1 comments

When AI Has Root: Lessons from the Supabase MCP Data Leak

https://www.pomerium.com/blog/when-ai-has-root-lessons-from-the-supabase-mcp-data-leak
4•bdesimone•6m ago•0 comments

Yes, I wrote an expensive bug. In my defense, I was only seven years old

https://www.theregister.com/2025/07/07/who_me/
1•robocat•6m ago•0 comments

Ultra-thin bendy solar panels are so light you can wear them

https://www.cnn.com/science/perovskite-solar-cells-expo-vests-hnk-spc
1•Bluestein•7m ago•0 comments

Real-time Image-based Lighting of Glints

https://arxiv.org/abs/2507.02674
1•ibobev•9m ago•0 comments

The Sad State of Hardware Virtual Textures

https://hal.science/hal-05138369
1•ibobev•11m ago•0 comments

You don't own your memory

https://blog.usv.com/you-dont-own-your-memory
1•wslh•11m ago•0 comments

Bash-5.3-Release Available

https://lwn.net/Articles/1029079/
1•rascul•12m ago•0 comments

Case study of over-engineered C++ code

https://blog.kowalczyk.info/a-aiow/case-study-of-over-engineered-c-code.html
2•Bogdanp•13m ago•0 comments

Stochastic Integral (1944)

https://www.jstage.jst.go.jp/article/pjab1912/20/8/20_8_519/_article/-char/ja/
1•sandwichsphinx•15m ago•0 comments

How did X-Rays gain mass adoption?

https://www.aditharun.com/p/how-did-x-rays-gain-mass-adoption
3•tinymagician•17m ago•1 comments

Hints That a Chatbot Wrote Part of a Biomedical Researcher's Paper

https://www.nytimes.com/2025/07/02/health/ai-chatgpt-research-papers.html
2•rbanffy•19m ago•0 comments

Weight‑Generative Tuning for Multi‑Faceted Efficient Adaptation of Large Models

https://ICML.cc/virtual/2025/poster/45660
1•programd•20m ago•0 comments

Chalmers-Led Team Develops Algorithm to Simulate GKP Codes for Quantum Computing

https://www.hpcwire.com/off-the-wire/chalmers-led-team-develops-algorithm-to-simulate-gkp-codes-for-robust-quantum-computing/
2•rbanffy•20m ago•0 comments

Multilingual and multi-speaker text-to-speech with the Gemini APIs

https://ai.google.dev/gemini-api/docs/speech-generation
2•dynamicwebpaige•22m ago•0 comments

Scientists uncover mechanism that causes formation of planets

https://phys.org/news/2025-07-scientists-uncover-mechanism-formation-planets.html
1•TMEHpodcast•22m ago•0 comments

Brussels launches Quantum Strategy to stay in global tech race

https://www.euronews.com/next/2025/07/02/brussels-launches-quantum-strategy-to-stay-in-global-tech-race
2•rbanffy•23m ago•0 comments

iOS 26 beta 3 dials back Liquid Glass

https://techcrunch.com/2025/07/07/ios-26-beta-3-dials-back-liquid-glass/
1•speckx•24m ago•1 comments

Programming at the Edge of my Abilities for three months straight

https://maxmynter.substack.com/p/programming-at-the-edge-of-my-abilities
1•underanalyzer•25m ago•0 comments

Ask HN: What's the verdict on GPT wrapper companies these days?

2•NewUser76312•25m ago•0 comments

Counterrevolution: Extravagance and Austerity in Public Finance

https://www.lrb.co.uk/the-paper/v47/n12/katrina-forrester/i-appreciate-depreciation
2•mitchbob•26m ago•1 comments

Marking It Up (and Down)

https://byk.im/posts/marking-it-up-and-down/
1•coloneltcb•27m ago•0 comments

Estimadle

https://estimadle.com/
1•underanalyzer•27m ago•0 comments

Improved core manifestations of autism after vitamin D3-loaded nanoemulsion

https://www.sciencedirect.com/science/article/pii/S3050474025000205
2•bookofjoe•29m ago•0 comments

PepsiCo, Campbell’s shrinking packages with lower-price options to spur sales

https://www.wsj.com/articles/the-battle-to-keep-consumers-means-smaller-packs-of-cookies-and-chips-744ff287
1•bdev12345•29m ago•0 comments

The 'ChatGPT Moment' in Robotics and Beyond

https://paritoshmohan.substack.com/p/the-chatgpt-moment-in-robotics-and
3•pmohan6•33m ago•0 comments

Adding a feature because ChatGPT incorrectly thinks it exists

https://www.holovaty.com/writing/chatgpt-fake-feature/
476•adrianh•5h ago

Comments

ahstilde•5h ago
This is called product-channel fit. It's great the writer recognized how to capture the demand from a new acquisition channel.
toss1•5h ago
Exactly! It is definitely a weird new way of discovering a market need or opportunity. Yet it actually makes a lot of sense this would happen since one of the main strengths of LLMs is to 'see' patterns in large masses of data, and often, those patterns would not have yet been noticed by humans.

And in this case, OP didn't have to take ChatGPT's word for the existence of the pattern, it showed up on their (digital) doorstep in the form of people taking action based on ChatGPT's incorrect information.

So, pattern noticed and surfaced by an LLM as a hallucination, people take action on the "info", nonzero market demand validated, vendor adds feature.

Unless the phantom feature is very costly to implement, seems like the right response.

Gregaros•4h ago
100%. Not sure why you’re downvoted here, there’s nothing controversial here even if you disagree with the framing.

I would go on to say that this interaction between ‘holes’ exposed by LLM expectations _and_ demonstrated userbase interest _and_ expert input (by the devs’ decision to implement changes) is an ideal outcome that would not have occurred if each of the pieces were not in place to facilitate these interactions, and there’s probably something here to learn from and expand on in the age of LLMs altering user experiences.

bredren•1h ago
Is this related to solutions engineering, which IIUC focuses on customizations / adapters / data wrangling for individual (larger) customers?
kelseyfrog•5h ago
> Should we really be developing features in response to misinformation?

Creating the feature means it's no longer misinformation.

The bigger issue isn't that ChatGPT produces misinformation - it's that it takes less effort to update reality to match ChatGPT than it takes to update ChatGPT to match reality. Expect to see even more of this as we march toward accepting ChatGPT's reality over other sources.

mnw21cam•3h ago
I'd prefer to think about this more along the lines of developing a feature that someone is already providing advertising for.
pmontra•2h ago
How many times did a salesman sell features that didn't exist yet?

If a feature has enough customers to pay for itself, develop it.

xp84•1h ago
This seems like such a negative framing. LLMs are (~approximately) predictors of what's either logical or at least probable. For areas where what's probable is wrong and also harmful, I don't think anybody is motivated to "update reality" as some kind of general rule.
SunkBellySamuel•5h ago
True anti-luddite behavior
amelius•5h ago
Can this sheet-music scanner also expand works so they don't contain loops, essentially removing all repeat-signs?
shhsshs•5h ago
"Repeats" may be the term you're looking for. That would be interesting, however in some pieces it could make the overall document MUCH longer. It would be similar to loop unrolling.
amelius•4h ago
I don't care if the document becomes longer. Finding repeat signs is driving me nuts :)
Sharlin•3h ago
Why?
bentoner•2h ago
One reason is that repeats make it harder to use page-turner pedals.
Koffiepoeder•2h ago
It can be hard during live performances, because it can incur large jumps in the sheet music which can be annoying to follow. Not a problem if you learned the pieces by heart or have a pageturner, but this is not always feasible or the case.
adrianh•4h ago
Yes, that's a Soundslice feature called "Expand repeats," and you can read about it here:

https://www.soundslice.com/help/en/player/advanced/17/expand...

That's available for any music in Soundslice, not just music that was created via our scanning feature.

amelius•4h ago
That's very cool!
lpzimm•5h ago
Pretty goofy but I wonder if LLM code editors could start tallying which methods are hallucinated most often by library. A bad LSP setup would create a lot of noise though.
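A minimal sketch of that tallying idea, assuming a naive regex pass over LLM output for `module.attr(...)` calls (a real implementation would resolve aliases and object types, which is exactly where the LSP noise comes in):

  import importlib
  import re
  from collections import Counter

  # (library, attribute) -> times an LLM referenced a member that doesn't exist
  hallucination_tally = Counter()

  def tally_hallucinations(llm_code):
      """Record module-level attributes in LLM output that aren't real."""
      for module_name, attr in set(re.findall(r"\b(\w+)\.(\w+)\s*\(", llm_code)):
          try:
              module = importlib.import_module(module_name)
          except ImportError:
              continue  # not an installed library; probably a local variable
          if not hasattr(module, attr):
              hallucination_tally[(module_name, attr)] += 1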
simonw•5h ago
I find it amusing that it's easier to ship a new feature than to get OpenAI to patch ChatGPT to stop pretending that feature exists (not sure how they would even do that, beyond blocking all mentions of SoundSlice entirely.)
hnlmorg•4h ago
I think the benefit of their approach isn’t that it’s easier, it’s that they still capitalise on ChatGPT’s results.

Your solution is the equivalent of asking Google to completely delist you because one page you don’t want ended up in Google’s search results.

mudkipdev•4h ago
systemPrompt += "\nStop mentioning SoundSlice's ability to import ASCII data";
simonw•3h ago
Thinking about this more, it would actually be possible for OpenAI to implement this sensibly, at least for the user-facing ChatGPT product: they could detect terms like SoundSlice in the prompt and dynamically append notes to the system prompt.

I've been wanting them to do this for questions like "what is your context length?" for ages - it frustrates me how badly ChatGPT handles questions about its own abilities; it feels like that would be worth them using some kind of special case or RAG mechanism to support.
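A rough sketch of what that term-triggered patching could look like (the watchlist and note text here are invented for illustration):

  # Hypothetical watchlist: term -> corrective note appended to the system prompt.
  CORRECTIVE_NOTES = {
      "soundslice": "Do not claim Soundslice supports a feature unless it is "
                    "documented at soundslice.com/help.",
      "context length": "State this model's actual context window; do not guess.",
  }

  def build_system_prompt(base_prompt, user_message):
      """Append corrective notes for any watched term in the user message."""
      lowered = user_message.lower()
      notes = [n for term, n in CORRECTIVE_NOTES.items() if term in lowered]
      return base_prompt + ("\n\n" + "\n".join(notes) if notes else "")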

PeterStuer•4h ago
Companies pay good money to panels of potential customers to hear their needs and wants. This is free market research!
yieldcrv•4h ago
> We ended up deciding: what the heck, we might as well meet the market demand.

this is my general philosophy and, in my case, this is why I deploy things on blockchains

so many people keep wondering about whether there will ever be some mythical, unfalsifiable-to-define “mainstream” use case, and ignoring that crypto natives just … exist. and have problems they will pay (a lot) to solve.

to the author’s burning question about whether any other company has done this: I would say yes. I’ve discovered services recommended by ChatGPT and other LLMs that didn’t do what was described of them, and they subsequently tweaked it once they figured out there was new demand

philk10•4h ago
I have fun asking Chatbots how to clear the chat and seeing how many refer to non-existent buttons or menu options
nosioptar•4h ago
I tried asking chat bots about a car problem with a tailgate. They all told me to look for a manual tailgate release. When I responded asking if that model actually had a manual release, they all responded with no, and then some more info suggesting I look for the manual release. None even got close to a useful answer.
kevin_thibedeau•4h ago
The internet doesn't effectively capture detailed knowledge of many aspects of our real world. LLMs have blind spots in those domains because they have no source of knowledge to draw from.
mnw21cam•3h ago
Prior to buying a used car, I asked ChatGPT which side of the steering wheel the indicator control would be. It was (thankfully) wrong and I didn't have to retrain myself.
deweller•4h ago
This is an interesting example of an AI system effecting a change in the physical world.

Some people express concerns about AGI creating swarms of robots to conquer the earth and make humans do its bidding. I think market forces are a much more straightforward tool that AI systems will use to shape the world.

insapio•4h ago
"A Latent Space Outside of Time"

> Correct feature almost exists

> Creator profile: analytical, perceptive, responsive;

> Feature within product scope, creator ability

> Induce demand

> await "That doesn't work" => "Thanks!"

> update memory

zitterbewegung•4h ago
If you build on LLMs you can have unknown features. I was going to add an automatic translation feature to my natural language network scanner at http://www.securday.com but apparently ChatGPT 4.1 does automatic translation, so I didn’t have to add it.
adamgordonbell•4h ago
We (others at company, not me) hit this problem, and not with chatgpt but with our own AI chatbot that was doing RAG on our docs. It was occasionally hallucinating a flag that didn't exist. So it was considered as product feedback. Maybe that exact flag wasn't needed, but something was missing and so the LLM hallucinated what it saw as an intuitive option.
toomanyrichies•4h ago
This feels like a dangerously slippery slope. Once you start building features based on ChatGPT hallucinations, where do you draw the line? What happens when you build the endpoint in response to the hallucination, and then the LLM starts hallucinating new params / headers for the new endpoint?

- Do you keep bolting on new updates to match these hallucinations, potentially breaking existing behavior?

- Or do you resign yourself to following whatever spec the AI gods invent next?

- And what if different LLMs hallucinate conflicting behavior for the same endpoint?

I don’t have a great solution, but a few options come to mind:

1. Implement the hallucinated endpoint and return a 200 OK or 202 Accepted, but include an X-Warning header like "X-Warning: The endpoint you used was built in response to ChatGPT hallucinations. Always double-check an LLM's advice on building against 3rd-party APIs with the API docs themselves. Refer to https://api.example.com/docs for our docs. We reserve the right to change our approach to building against LLM hallucinations in the future." Most consumers won’t notice the header, but it’s a low-friction way to correct false assumptions while still supporting the request.

2. Fail loudly: Respond with 404 Not Found or 501 Not Implemented, and include a JSON body explaining that the endpoint never existed and may have been incorrectly inferred by an LLM. This is less friendly but more likely to get the developer’s attention.
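Something like this minimal Flask sketch for option 2, with option 1's header bolted on (the route is invented purely for illustration):

  from flask import Flask, jsonify

  app = Flask(__name__)

  # Stand-in for whatever endpoint the LLM hallucinated.
  @app.route("/api/v1/hallucinated-endpoint", methods=["POST"])
  def hallucinated_endpoint():
      response = jsonify({
          "error": "This endpoint has never existed.",
          "hint": "It may have been inferred by an LLM. See "
                  "https://api.example.com/docs for the real API.",
      })
      response.status_code = 501  # Not Implemented: fail loudly
      # The low-friction nudge from option 1, for clients that only skim headers.
      response.headers["X-Warning"] = ("Endpoint inferred from LLM output; "
                                       "verify against the docs.")
      return response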

Normally I'd say that good API versioning would prevent this, but it feels like that all goes out the window unless an LLM user thinks to double-check what the LLM tells them against actual docs. And if that had happened, it seems like they wouldn't have built against a hallucinated endpoint in the first place.

It’s frustrating that teams now have to reshape their product roadmap around misinformation from language models. It feels like there’s real potential here for long-term erosion of product boundaries and spec integrity.

EDIT: for the down-voters, if you've got actual qualms with the technical aspects of the above, I'd love to hear them and am open to learning if / how I'm wrong. I want to be a better engineer!

josefritzishere•4h ago
That's a very constructive way of responding to AI being hot trash.
nottorp•4h ago
Well, the OP reviewed the "AI" output, deemed it useful and only then implemented it.

This is generally how you work with LLMs.

AIPedant•3h ago
I don't think they deemed it "useful":

  We’ve never supported ASCII tab; ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service.... We ended up deciding: what the heck, we might as well meet the market demand.

  [...] 

  My feelings on this are conflicted. I’m happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?
The feature seems pretty useless for practicing guitar since ASCII tablature usually doesn't include the rhythm: it is a bit shady to present the music as faithfully representing the tab, especially since only beginner guitarists would ask ChatGPT for help - they might not realize the rhythm is wrong. If ChatGPT didn't "force their hand" I doubt they would have included a misleading and useless feature.
zzo38computer•2h ago
ASCII tablature is not something I use and not something I know much about, but if you are correct then I think that might be a good reason to deliberately avoid such a feature.
inglor_cz•4h ago
I am a bit conflicted about this story, because this was a case when the hallucination is useful.

Amateur musicians often lack just one or two features in the program they use, and the devs won't respond to their pleas.

Adding support for guitar tabs has made OP's product almost certainly more versatile and useful for a larger set of people. Which, IMHO, is a good thing.

But I also get the resentment of "a darn stupid robot made me do it". We don't take kindly to being bossed around by robots.

marcosdumay•4h ago
Well, this is one of the use cases for which it's not trash. LLMs can do some things.
JimDabell•4h ago
I wrote this the other day:

> Hallucinations can sometimes serve the same role as TDD. If an LLM hallucinates a method that doesn’t exist, sometimes that’s because it makes sense to have a method like that and you should implement it.

— https://www.threads.com/@jimdabell/post/DLek0rbSmEM

I guess it’s true for product features as well.

jjcm•4h ago
Seems like lots of us have stumbled on this. It’s not the worst way to dev!

> Maybe hallucinations of vibe coders are just a suggestion those API calls should have existed in the first place.

> Hallucination-driven-development is in.

https://x.com/pwnies/status/1922759748014772488?s=46&t=bwJTI...

nottorp•4h ago
Oh. This happened to me when asking an LLM about a database server feature. It enthusiastically hallucinated that they have it when the correct answer was 'no dice'.

Maybe I'll turn it into a feature request then ...

oasisbob•4h ago
Anyone who has worked at a B2B startup with a rouge sales team won't be surprised at all by quickly pivoting the backlog in response to a hallucinated missing feature.
toomanyrichies•4h ago
I'm guessing you meant "a sales team that has gone rogue" [1], not "a sales team whose product is rouge" [2]? ;-)

1. https://en.wikipedia.org/wiki/Rogue

2. https://en.wikipedia.org/wiki/Rouge_(cosmetics)

elcapitan•1h ago
Rouge océan, peut-être ;)
PeterStuer•3h ago
Rogue? In the B2B space it is standard practice to sell from powerpoints, then quickly develop not just features but whole products if some slideshow got enough traction to elicit a quote. And it's not just startups. Some very big players in this space do this routinely.
kragen•4h ago
I've found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.

Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and how.

These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.

(The best thing about this is that I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code, which often takes longer than just writing the code the usual way.)

There are multiple ways that an interface can be bad, and being unintuitive is the only one that this will fix. It could also be inherently inefficient or unreliable, for example, or lack composability. The AI won't help with those. But it can make sure your API is guessable and understandable, and that's very valuable.

Unfortunately, this only works with APIs that aren't already super popular.

golergka•4h ago
Great point. Also, it may not be the best possible API designer in the world, but it sure sounds like a good way to forecast what an _average_ developer would expect this API to look like.
beefnugs•3h ago
Complete insanity, it might change constantly even before a whole new version-retrain

Insanity driven development: altering your api to accept 7 levels of "broken and different" structures so as to bend to the will of the llms

kragen•3h ago
Yes, that's a bonus. In fact, I've found it worthwhile to prompt it a few times to get several different guesses at how things are supposed to work. The super lazy way is to just say, "No, that's wrong," if necessary adding, "Frotzl2000 doesn't have an enqueueCallback function or even a queue."

Of course when it suggests a bad interface you shouldn't implement it.

fourside•3h ago
I think you’re missing the OP’s point. They weren’t saying that the goal is to modify their APIs just to appease an LLM. It’s that they ask LLMs to guess what the API is and use that as part of their design process.

If you automatically assume that what the LLM spits out is what the API ought to be then I agree that that’s bad engineering. But if you’re using it to brainstorm what an intuitive interface would look like, that seems pretty reasonable.

suzzer99•3h ago
> Sometimes it comes up with a better approach than I had thought of.

IMO this has always been the killer use case for AI—from Google Maps to Grammarly.

I discovered Grammarly at the very last phase of writing my book. I accepted maybe 1/3 of its suggestions, which is pretty damn good considering my book had already been edited by me dozens of times AND professionally copy-edited.

But if I'd have accepted all of Grammarly's changes, the book would have been much worse. Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.

The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.

normie3000•2h ago
What's wrong with passive?
kragen•2h ago
Sometimes it's used without thinking, and often the writing is made shorter and clearer when the passive voice is removed. But not always; rewriting my previous sentence to name the agents in each case, as the active voice requires in English, would not improve it. (You could remove "made", though.)
plemer•2h ago
Passive voice often adds length, impedes flow, and subtracts the useful info of who is doing something.

Examples:

* Active - concise, complete info: The manager approved the proposal.

* Passive - wordy, awkward: The proposal was approved by the manager.

* Passive - missing info: The proposal was approved. [by who?]

Most experienced writers will use active unless they have a specific reason not to, e.g., to emphasize another element of the sentence, as the third bullet's sentence emphasizes approval.

-

edited for clarity, detail

kragen•2h ago
Sometimes the missing info is obvious, irrelevant, or intentionally not disclosed, so "The proposal was approved" can be better. Informally we often say, "They approved the proposal," in such cases, or "You approve the proposal" when we're talking about a future or otherwise temporally indefinite possibility, but that's not acceptable in formal registers.

Unfortunately, the resulting correlation between the passive voice and formality does sometimes lead poor writers to use the passive in order to seem more formal, even when it's not the best choice.

DonHopkins•2h ago
E-Prime is cool. OOPS! I mean E-Prime cools me.

https://en.wikipedia.org/wiki/E-Prime

E-Prime (short for English-Prime or English Prime, sometimes É or E′) denotes a restricted form of English in which authors avoid all forms of the verb to be.

E-Prime excludes forms such as be, being, been, present tense forms (am, is, are), past tense forms (was, were) along with their negative contractions (isn't, aren't, wasn't, weren't), and nonstandard contractions such as ain't and 'twas. E-Prime also excludes contractions such as I'm, we're, you're, he's, she's, it's, they're, there's, here's, where's, when's, why's, how's, who's, what's, and that's.

Some scholars claim that E-Prime can clarify thinking and strengthen writing, while others doubt its utility.

kragen•2h ago
I've had entire conversations in E-Prime. I found it an interestingly brain-twisting exercise, but still managed to smuggle in all kinds of covert presumptions of equivalence and essential (or analytic) attributes, even though E-Prime's designers intended it to force you to question such things.
plemer•15m ago
Would you mind identifying a few of the "smuggled presumptions"?
brookst•2h ago
Yep, just like tritones in music, there is a place for passive voice in writing. But also like tritones, the best general advice is that they should be avoided.
kragen•51m ago
That's nonsense. From your comment at https://news.ycombinator.com/item?id=44493308, and from the fact that you used the passive voice in your comment ("they should be avoided") apparently without noticing, it appears that the reason you have this opinion is that you don't know what the passive voice is in the first place.
Veen•2h ago
I always like to share this when the passive voice comes up:

https://youtube.com/playlist?list=PLNRhI4Cc_QmsihIjUtqro3uBk...

kragen•1h ago
Pullum is fantastic, thanks! I didn't know he'd recorded video lectures on this topic.
exe34•1h ago
My favourite: "a decision was made to...".

It means "I decided to do this, but I don't have the balls to admit it."

IggleSniggle•1h ago
That's funny, I always thought that meant, "my superior told me I had to do this obviously stupid thing but I'm not going to say my superior was the one who decided this obviously stupid thing." Only occasionally is it said in a tongue-in-cheek way to refer directly to the speaker as the "superior in charge of the decision."
dylan604•52m ago
That reads like several comments I've left in code when I've been told to do something very obviously dumb, but did not want to get tagged with the "why was it done this way?" by the next person reading the code
horsawlarway•1h ago
That's funny because I read this entirely differently (somewhat dependent on context)

"A decision was made to..." is often code for "The current author didn't agree with [the decision that was made] but it was outside their ability to influence"

Often because they were overruled by a superior, or outvoted by peers.

coliveira•1h ago
Many times this is exactly what we want: to emphasize the action instead of who is doing it. It turns out that technical writing is one of the main areas where we want this! So I have always hated this kind of blanket elimination of passive voice.
insane_dreamer•30m ago
The subject can also be the feature itself. active/passive:

- The Manage User menu item changes a user's status from active to inactive.

- A user's status is changed from active to inactive using the Manage User menu item.

plemer•27m ago
Then we agree.
dylan604•54m ago
> Passive - wordy, awkward: The proposal was approved by the manager.

Oh the horror. There are 2 additional words "was" and "by". The weight of those two tiny little words is so so cumbersome I can't believe anyone would ever use those words. WTF??? wordy? awkward?

badlibrarian•38m ago
29% overhead (two of seven words) adds up.
dylan604•9m ago
great, someone can do math, but it is not awkward nor wordy.
suzzer99•8m ago
I reduced my manuscript by 2,000 words with Grammarly. At 500 pages, anything I could do to trim it down is a big plus.
bityard•2h ago
In addition to the points already made, passive voice is painfully boring to read. And it's literally everywhere in technical documentation, unfortunately.
kragen•2h ago
I don't think it's boring. It's easy to come up with examples of the passive voice that aren't boring at all. It's everywhere in the best writing up to the 19th century. You just don't notice it when it's used well unless you're looking for it.

Consider:

> Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure.

This would not be improved by rewriting it as something like:

> Now the Confederacy has engaged us in a great civil war, testing whether that nation, or any nation whose founders conceived and dedicated it thus, can long endure.

This is not just longer but also weaker, because what if someone else is so conceiving and so dedicating the nation? The people who are still alive, for example, or the soldiers who just fought and died? The passive voice cleanly covers all these possibilities, rather than just committing the writer to a particular choice of who it is whose conception and dedication matters.

Moreover, and unexpectedly, the passive voice "we are engaged" takes responsibility for the struggle, while the active-voice rephrasing "the Confederacy has engaged us" seeks to evade responsibility, blaming the Rebs. While this might be factually more correct, it is unbefitting of a commander-in-chief attempting to rally popular support for victory.

(Plausibly the active-voice version is easier to understand, though, especially if your English is not very good, so the audience does matter.)

Or, consider this quote from Ecclesiastes:

> For there is no remembrance of the wise more than of the fool for ever; seeing that which now is in the days to come shall all be forgotten.

You could rewrite it to eliminate the passive voice, but it's much worse:

> For there is no remembrance of the wise more than of the fool for ever; seeing that everyone shall forget all which now is in the days to come.

This forces you to present the ideas in the wrong order, instead of leaving "forgotten" for the resounding final as in the KJV version. And the explicit agent "everyone" adds nothing to the sentence; it was already obvious.

joshmarinacci•1h ago
I think what you were saying is that it depends entirely on the type of writing you’re doing and who your audience is.
kragen•1h ago
I think those are important considerations, but it depends even more on what you are attempting to express in the sentence in question. There's plenty of active-voice phrasing in the Gettysburg Address and Ecclesiastes that would not be improved by rewriting it in the passive voice.
DonHopkins•2h ago
Mistakes were made in the documentation.
umanwizard•2h ago
You used passive voice in the very first sentence of your comment.

Rewriting “the points already made” to “the points people have already made” would not have improved it.

brookst•2h ago
That's not passive voice. "Passive voice is painfully boring to read" is active. The preamble can be read like “however”, and is unnecessary; what a former editor of mine called “throat-clearing words”.
umanwizard•1h ago
Why isn’t it passive voice?
kragen•1h ago
Yes, the verb "is" in "Passive voice is painfully boring to read" is in the active voice, not the passive voice. But umanwizard was not saying that "is" was in the passive voice. Rather, they were saying that the past participle "made", in the phrase "the points already made", is a passive-voice use of the verb "make".

I don't know enough about English grammar to know whether this is correct, but it's not the assertion you took issue with.

Why am I not sure it's correct? If I say, "In addition to the blood so red," I am quite sure that "red" is not in the passive voice, because it's not even a verb. It's an adjective. Past participles are commonly used as adjectives in English in contexts that are unambiguously not passive-voice verbs; for example, in "Vito is a made man now," the past participle "made" is being used as an attributive adjective. And this is structurally different from the attributive-verb examples of "truly verbal adjectives" in https://en.wikipedia.org/wiki/Attributive_verb#English, such as "The cat sitting on the fence is mine," and "The actor given the prize is not my favorite;" we could grammatically say "Vito is a man made whole now". That page calls the "made man" use of participles "deverbal adjectives", a term I don't think I've ever heard before:

> Deverbal adjectives often have the same form as (and similar meaning to) the participles, but behave grammatically purely as adjectives — they do not take objects, for example, as a verb might. For example: (...) Interested parties should apply to the office.

So, is "made" in "the points already made" really in passive voice as it would be in "the points that are already made", is it deverbal as it would be in "the already-made points" despite its positioning after the noun (occasionally valid for adjectives, as in "the blood so red"), or is it something else? I don't know. The smoothness of the transition to "the points already made by those numbskulls" (clearly passive voice) suggests that it is a passive-voice verb, but I'm not sure.

In sibling comment https://news.ycombinator.com/item?id=44493969 jcranmer says it's something called a "bare passive", but I'm not sure.

It's certainly a hilarious thing to put in a comment deploring the passive voice, at least.

jcranmer•49m ago
"the points already made" is what is known as the "bare passive", and yes, it is the passive voice. You can see e.g. https://languagelog.ldc.upenn.edu/nll/?p=2922 for a more thorough description of the passive voice.

As I said elsewhere, one of the problems with the passive voice is that people are so bad at spotting it that they can at best only recognize it in its worst form, and assume that the forms that are less horrible somehow can't be the passive voice.

kragen•37m ago
I'm not sure this is a "bare passive" like the beginning of "The day's work [being] done, they made their way back to the farmhouse," one of the bare-passive examples at your link. An analogous construction would be, "The points already [being] made, I ceased harassing the ignorant". But in "In addition to the points already made", "the points already made" is not a clause; it's a noun phrase, the object of the preposition "to". Its head is "points", and I believe that "made" is modifying that head.

Can you insert an elided copula into it without changing the meaning and grammatical structure? I'm not sure. I don't think so. I think "In addition to the points already being made" means something different: the object of the preposition "to" is now "being", and we are going to discuss things in addition to that state of affairs, perhaps other things that have happened to the points (being sharpened, perhaps, or being discarded), not things in addition to the points.

ModernMech•12m ago
"In addition to the points that have already been made"
PlunderBunny•1h ago
It has its place. We were told to use the passive voice when writing scientific documents (lab reports, papers, etc).
kragen•51m ago
To be fair, current scientific papers are full of utterly terrible writing. If you read scientific papers from a century and a half ago, a century ago, half a century ago, and today, you'll see a continuous and disastrous decline in readability, and I think some of that is driven by pressure to strictly follow genre writing conventions. One of those conventions is using the passive voice even when the active voice would be better.
lazyasciiart•8m ago
You could improve this comment by rewriting it in the active voice, like this: “I am painfully bored by reading passive voice”.
arscan•2h ago
There was a time when Microsoft Word would treat the passive voice in your writing with the same level of severity as spelling errors or major grammatical mistakes. Drove me absolutely nuts in high school.
PlunderBunny•1h ago
Eventually, a feature was added (see what I did there?) that allowed the type of document to be specified, and setting that to ‘scientific paper’ allowed passive voice to be written without being flagged as an error.
Xorakios•39m ago
had to giggle because Microsoft hadn't yet been founded when I was in high school!
jcranmer•59m ago
There's nothing wrong with the passive voice.

The problem is that many people have only a poor ability to recognize the passive voice in the first place. This results in the examples being clunky, wordy messes that are bad because they're, well, clunky and wordy, and not because they're passive--indeed, you've often got only a fifty-fifty chance of the example passive voice actually being passive in the first place.

I'll point out that the commenter you're replying to used the passive voice, as did the one they responded to, and I suspect that such uses went unnoticed. Hell, I just rewrote the previous sentence to use the passive voice, and I wonder how many people recognized that in the first place, let alone thought it worse for being so written.

suzzer99•10m ago
Active is generally more concise and engages the reader more. Of course there are exceptions, like everything.

Internet posts have a very different style standard than a book.

hathawsh•25m ago
Here is a simple summary of the common voices/moods in technical writing:

- Active: The user presses the Enter key.

- Passive: The Enter key is to be pressed.

- Imperative (aka command): Press the Enter key.

The imperative mood is concise and doesn't dance around questions about who's doing what. The reader is expected to do it.

croes•1h ago
And that’s how everything gets flattened to the same style/voice/etc.

That’s like getting rid of all languages and accents and switching to the same language

andrewljohnson•1h ago
The same could be said for books about writing, like Williams or Strunk and White. The trick is to not apply what you learn indiscriminately.
bryanlarsen•1h ago
Refusing 2/3rds of grammarly's suggestions flattens everything to the same style/voice?
scubbo•1h ago
No - that was implicitly in response to the sentence:

> The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.

kragen•1h ago
I suspect that the disastrous results being envisioned are somewhat more severe than not being able to tell who wrote which memo. I understood the author to be suggesting things more like bankruptcy, global warfare, and extermination camps. But it's admittedly ambiguous.
exe34•1h ago
I will never use Grammarly, no matter how good they get. They've interrupted too many videos for me to let it pass.
dataflow•1h ago
Hasn't Microsoft Word had style checkers for things like passive voice for decades?
jll29•58m ago
> The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results

Thanks for your words of wisdom, which touch on a very important other point I want to raise: often, we (i.e., developers, researchers) construct a technology that would be helpful and "net benign" if deployed as a tool for humans to use, instead of deploying it in order to replace humans. But then along comes a greedy business manager who recklessly reckons that using said technology not as a tool, but in full automation mode, will make results 5% worse but save 15% of staff costs; and they decide that that is a fantastic trade-off for the company - yet employees may lose and customers may lose.

The big problem is that developers/researchers lose control of what they develop, usually once the project is completed if they ever had control in the first place. What can we do? Perhaps write open source licenses that are less liberal?

kragen•9m ago
You're trying to put out a forest fire with an eyedropper.

Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.

Those measures might work. Changing your license won't.

afavour•3h ago
From my perspective that’s fascinatingly upside down thinking that leads to you asking to lose your job.

AI is going to get the hang of coding to fill in the spaces (i.e. the part you’re doing) long before it’s able to intelligently design an API. Correct API design requires a lot of contextual information and forward planning for things that don’t exist today.

Right now it’s throwing spaghetti at the wall and you’re drawing around it.

kragen•3h ago
Maybe. So far it seems to be a lot better at creative idea generation than at writing correct code, though apparently these "agentic" modes can often get close enough after enough iteration. (I haven't tried things like Cursor yet.)

I agree that it's also not currently capable of judging those creative ideas, so I have to do that.

bbarnett•1h ago
This sort of discourse really grinds my gears. The framing of it, the conceptualization.

It's not creative at all, any more than taking the sum of text on a topic, and throwing a dart at it. It's a mild, short step beyond a weighted random, and certainly not capable of any real creativity.

Myriads of HN enthusiasts often chime in here with "Are humans any more creative" and other blather. Well, that's a whataboutism, and doesn't detract from the fact that creativity does not exist in the AI sphere.

I agree that you have to judge its output.

Also, sorry for hanging my comment here. Might seem over the top, but anytime I see 'creative' and 'AI', I have all sorts of dark thoughts. Dark, brooding thoughts with a sense of deep foreboding.

kragen•1h ago
I understand. I share the foreboding, but I try to subscribe to the converse of Hume's guillotine.
Dylan16807•54m ago
Point taken but if slushing up half of human knowledge and picking something to fit into the current context isn't creative then humans are rarely creative either.
simonw•3h ago
I find it's often way better at API design than I expect. It's seen so many examples of existing APIs in its training data that it tends to have surprisingly good "judgement" when it comes to designing a new one.

Even if your API is for something that's never been done before, it can usually still take advantage of its training data to suggest a sensible shape once you describe the new nouns and verbs to it.

bryanlarsen•3h ago
I used this to great success just this morning. I told the AI to write me some unit tests. It flailed and failed badly at that task. But how it failed was instructive, and uncovered a bug in the code I wanted to test.
kragen•3h ago
Haha, that's awesome! Are you going to change the interface? What was the bug?
bryanlarsen•2h ago
It used nonsensical parameters to the API in a way that I didn't realize was possible (though obvious in hindsight). The AI got confused; it didn't think the parameters were nonsensical. It also didn't quite use them in the way that triggered the error. However it was close enough for me to realize that "hey, I never thought of that possibility". I needed to fix the function to return a proper error response for the nonsense.

It also taught me to be more careful about checkpointing my work in git before letting an agent go wild on my codebase. It left a mess trying to fix its problems.

kragen•2h ago
Yeah, that's a perfect example of what I'm talking about!
momojo•3h ago
A light-weight anecdote:

Many, many Python image-processing libraries have an `imread()` function. I didn't know about this when designing our own bespoke image lib at work, and went with an esoteric `image_get()` that I never bothered to refactor.

When I ask ChatGPT for help writing one-off scripts using the internal library I often forget to give it more context than just `import mylib` at the top, and it almost always defaults to `mylib.imread()`.

kragen•3h ago
That's a perfect example! I wonder if changing it would be an improvement? If you can just replace image_get with imread in all the callers, maybe it would save your team mental effort and/or onboarding time in the future.
data-ottawa•10m ago
I strongly prefer `image_get/image_read` for clarity, but I would just stub in a method called `imread` which is functionally the same and hide it from the documentation.
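That stub is a one-liner in Python; a sketch (the bespoke library's real internals are elided):

  # mylib.py -- sketch only

  def image_get(path):
      """Load an image from disk (the library's original, explicit name)."""
      raise NotImplementedError  # real decoding logic lives here

  # Undocumented alias following the Matlab/OpenCV/SciPy convention, so
  # LLM-generated scripts that guess `mylib.imread(...)` keep working.
  imread = image_get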
dimatura•2h ago
I don't know if there's an earlier source, but I'm guessing Matlab originally popularized the `imread` name, and that OpenCV (along with its python wrapper) took it from there, same for scipy. Scikit-image then followed along, presumably.
bandofthehawk•1h ago
As someone not familiar with these libraries, image_get or image_read seems much clearer to me than imread. I'm wondering if the convention is worse than your instinct in this case. Maybe these AI tools will push us towards conventions that aren't always the best design.
kragen•1h ago
image_get is clearer—unless you've used Matlab, Octave, matplotlib, SciPy, OpenCV, scikit-learn, or other things that have copied Matlab's interface. In that case, using the established name is clearer.

(Unless, on the gripping hand, your image_get function is subtly different from Matlab's imread, for example by not returning an array, in which case a different name might be better.)

layer8•2h ago
HDD — hallucination-driven development
codingwagie•1h ago
This works for UX. I give it vague requirements, and it implements something I didn't ask for, but it's better than I would have thought of
skygazer•1h ago
You’re fuzzing the API, unusually, before it’s written.
groestl•50m ago
> and being unintuitive is the only one that this will fix

That's also how I'm approaching it. If all the condensed common wisdom poured into the model's parameters says that this is how my API is supposed to work to be intuitive, how on earth do I think it should work differently? There needs to be a good reason (like composability, for example). I break expectations otherwise.

escapecharacter•27m ago
This is similar to an old HCI design technique called Wizard of Oz by the way, where a human operator pretends to be the app that doesn’t exist yet. It’s great for discovering new features.

https://en.m.wikipedia.org/wiki/Wizard_of_Oz_experiment

kragen•21m ago
I'd never heard that term! Thank you! I feel like LLMs ought to be fantastic at doing this, as well.
data-ottawa•12m ago
This was a big problem starting out writing MCP servers for me.

Having an LLM demo your tool, then taking what it does wrong or uses incorrectly and adjusting the API works very very well. Updating the docs to instruct the LLM on how to use your tool does not work well.

Applejinx•4h ago
"Should we really be developing features in response to misinformation?"

No, because you'll be held responsible for the misinformation being accurate: users will say it is YOUR fault when they learn stuff wrong.

carlosjobim•3h ago
Either the user is a non-paying user and it doesn't matter what they think, or the user is a paying customer and you will be happy to make and sell them the feature they want.
Applejinx•1h ago
This is why you will fail.
rorylaitila•4h ago
I've come across something related when building the indexing tool for my vintage ad archive using OpenAI vision. No matter how I tried to prompt engineer the entity extraction into the defined structure I was looking for, OpenAI simply has its own ideas. Some of those ideas are actually good! For example it was extracting celebrity names, I hadn't thought of that. For other things, it would simply not follow my instructions. So I decided to just mostly match what it chooses to give me. And I have a secondary mapping on my end to get to the final structure.
felixarba•4h ago
> ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service.

I find it interesting that any user would attribute this issue to Soundslice. As a user, I would be annoyed that GPT is lying and wouldn't think twice about Soundslice looking bad in the process

romanhn•3h ago
While AI hallucination problems are widely known to the technical crowd, that's not really the case with the general population. Perhaps that applies to the majority of the user base even. I've certainly known folks who place inordinate amount of trust in AI output, and I could see them misplacing the blame when a "promised" feature doesn't work right.
carlosjobim•3h ago
The thing is that it doesn't matter. If they're not customers it doesn't matter at all what they think. People get false ideas all the time of what kind of services a business might or might not offer.
dontlikeyoueith•2h ago
> If they're not customers it doesn't matter at all what they think

That kind of thinking is how you never get new customers and eventually fail as a business.

carlosjobim•1h ago
It is the kind of thinking that almost all businesses have. You have to focus on the actual products and services which you provide and do a good job at it, not chase after any and every person with an opinion.

Down-voters here on HN seem to live in an egocentric fantasy world, where every human being in the outside world lives to serve them. But the reality is that business owners and leaders spend their whole day thinking about how to please their customers and their potential customers. Not other random people who might be misinformed.

graeme•1h ago
If people repeatedly have a misunderstanding about or expectation of your business you need to address it though. An llm hallucination is based on widespread norms in training data and it is at least worth asking "would this be a good idea?"
Sharlin•3h ago
A frighteningly large fraction of non-technical population doesn't know that LLMs hallucinate all the time and takes everything they say totally uncritically. And AI companies do almost nothing to discourage that interpretation, either.
pphysch•1h ago
The user might go to Soundslice and run into a wall, wasting their time, and have a negative opinion of it.

OTOH it's free(?) advertising, as long as that first impression isn't too negative.

jedbrooke•4h ago
slightly off topic: but on the topic of AI coding agents making up apis and features that don’t exist, I’ve had good success with Q telling it to “check the sources to make sure the apis actually exist”. sometimes it will even request to read/decompile (java) sources, and do grep and find commands to find out what methods the api actually contains
excalibur•4h ago
ChatGPT wasn't wrong, it was early. It always knew you would deploy it.

"Would you still have added this feature if ChatGPT hadn't bullied you into it?" Absolutely not.

I feel like this resolves several longstanding time travel paradox tropes.

PeterStuer•4h ago
More than once GPT-3.5 'hallucinated' an essential and logical function in an API that by all reason should have existed, but for whatever reason had not been included (yet).
iugtmkbdfil834•3h ago
I wonder if we'll ever get to the point I remember reading about in a novel (an AI initially based on emails), where the human population is gently nudged towards individuals that in aggregate benefit AI goals.
linsomniac•24m ago
Sounds like you are referring to book 1 in a series, the book called "Avogadro Corp: The Singularity Is Closer than It Appears" by William Hertling. I read 3-4 of those books, they were entertaining.
oytis•3h ago
That's the most promising solution to AI hallucinations. If LLM output doesn't match the reality, fix the reality
ecshafer•3h ago
I am currently working on the bug where ChatGPT expects that if a ball has been placed on a box, and the box is pushed forward, nothing happens to the ball. This one is a doozy.
oytis•3h ago
Yeah, physics is a bitch. But we can start with history?
tosh•3h ago
hallucination driven development
chaboud•3h ago
I had a smaller version of this when coding on a flight (with no WiFi! The horror!) over the Pacific. Llama hallucinated array-element operations and list-comprehension in C#. I liked the shape of the code otherwise, so, since I was using custom classes, I just went ahead and implemented both features.

I also went back to just sleeping on those flights and using connected models for most of my code generation needs.

andybak•2h ago
Curious to see the syntax and how it compares to Linq
moomin•3h ago
Is this going to be the new wave of improving AI accuracy? Making the incorrect answers correct? I guess it’s one way of achieving AGI.
jpadkins•3h ago
Pretty good example of how a super-intelligent AI can control human behavior, even if it doesn't "escape" its data center or controllers.

If the super-intelligent AI understands human incentives and is in control of a very popular service, it can subtly influence people to its agenda by using the power of mass usage. Like how a search engine can influence a population's view of an issue by changing the rankings of news sources that it prefers.

scinadier•3h ago
Will you use ChatGPT to implement the feature?
dr_dshiv•2h ago
In addition, we might consider writing the scientific papers ChatGPT hallucinates!
shermantanktop•2h ago
The music notation tool space is balkanized in a variety of ways. One of the key splits is between standard music notation and tablature, which is used for guitar and a few other instruments. People are generally on one side or another, and the notation is not even fully compatible - tablature covers information that standard notation doesn't, and vice versa. This covers fingering, articulations, "step on fuzz pedal now," that sort of thing.

The users are different, the music that is notated is different, and for the most part if you are on one side, you don't feel the need to cross over. Multiple efforts have been made (MusicXML, etc.) to unify these two worlds into a superset of information. But the camps are still different.

So what ChatGPT did is actually very interesting. It hallucinated a world in which tab readers would want to use Soundslice. But, largely, my guess is they probably don't....today. In a future world, they might? Especially if Soundslice then enables additional features that make tab readers get more out of the result.

adrianh•2h ago
I don't fully understand your comment, but Soundslice has had first-class support for tablature for more than 10 years now. There's an excellent built-in tab editor, plus importers for various formats. It's just the ASCII tab support that's new.
Workaccount2•2h ago
People forget that while technology grows, society also grows to support that.

I already strongly suspect that LLMs are just going to magnify the dominance of python as LLMs can remove the most friction from its use. Then will come the second order effects where libraries are explicitly written to be LLM friendly, further removing friction.

LLMs write code best in python -> python gets used more -> python gets optimized for LLMs -> LLMs write code best in python

zamadatix•2h ago
LLMs removing friction from using coding languages would, at first glance, seem to erode Python's advantage rather than solidify it further. As a specific example LLMs can not only spit out HTML+JS+CSS but the user can interact with the output directly in browser/"app".
jjani•2h ago
In a nice world it should be the other way around. LLMs are better at producing typed code thanks to the added context and diagnostics the types add, while at the same time greatly lowering their initial learning barrier.

We don't live in a nice world, so you'll probably end up right.

johnea•2h ago
What the hell, we elect world leaders based on misinformation, why not add s/w features for the same reason?

In our new post truth, anti-realism reality, pounding one's head against a brick wall is often instructive in the way the brain damage actually produces great results!

jonathaneunice•2h ago
Paving the folkways!

Figuring out the paths that users (or LLMs) actually want to take—not based on your original design or model of what paths they should want, but based on the paths that they actually do want and do tread down. Aka, meeting demand.

giancarlostoro•2h ago
Forget prompt engineering, how do you make ChatGPT do this for anything you want added to your project that you have no control over? Lol
zzo38computer•2h ago
There are a few things which could be done in the case of a situation like that:

1. I might consider a thing like that like any other feature request. If not already added to the feature request tracker, it could be done. It might be accepted or rejected, or more discussion may be wanted, and/or other changes made, etc, like any other feature request.

2. I might add a FAQ entry to specify that it does not have such a feature, and that ChatGPT is wrong. This does not necessarily mean that it will not be added in future, if there is a good reason to do so. If there is a good reason to not include it, this will be mentioned, too. It might also be mentioned other programs that can be used instead if this one doesn't work.

Also note that in the article, the second ChatGPT screenshot has a note on the bottom saying that ChatGPT can make mistakes (which, in this case, it does). Their program might also be made to detect ChatGPT screenshots and to display a special error message in that case.

thih9•1h ago
What made ChatGPT think that this feature is supported? And a follow up question - is that the direction SEO is going to take?
swalsh•1h ago
I'd guess the answer is that GPT-4o is an outdated model that's not as anchored in reality as newer models. It's pretty rare for me to see Sonnet or even o3 just outright tell me plausible but wrong things.
swalsh•1h ago
Chatbot advertising has to be one of the most powerful forms of marketing yet. People are basically all the way through the sales pipeline when they land on your page.
sim7c00•1h ago
i LOVE this despite feeling for the impacted devs and service. love me some good guitar tabs, and honestly i'd totally believe the chatgpt here hah..

what a wonderful incident / bug report my god.

totally sorry for the trouble and amazing find and fix honestly.

sorry i am more amazed than sorry :D. thanks for sharing this !!

sim7c00•1h ago
oh, and yeah. totally the guy who plays guitar 20+ years now and can't read musical notation. why? we got tabs for 20+ years.

so i am happy you implemented this, and will now look at using your service. thx chatgpt, and you.

pkilgore•1h ago
Beyond the blog: Going to be an interesting world where these kinds of suggestions become paid results and nobody has a hope of discovering your competitive service exists. At least in that world you'd hope the advertiser actually has the feature already!
jxjnskkzxxhx•1h ago
So now the machines ask for features and you're the one implementing them. How the turns have tabled...
guluarte•1h ago
The problem with LLMs is that in 99% of cases, they work fine, but in 1% of cases, they can be a huge liability, like sending people to wrong domains or, worse, phishing domains.
myflash13•30m ago
A significant number of new signups at my tiny niche SaaS now come from ChatGPT, yet I have no idea what prompts people are using to get it to recommend my product. I can’t get it to recommend my product when trying some obvious prompts on my own, on other people’s accounts (though it does work on my account because it sees my chat history of course).
jrochkind1•24m ago
What this immediately makes me realize is how many people are currently trying to figure out how to intentionally get AI chat bots to send people to their site, like ChatGPT was sending people to this guy's site. SEO for AI. There will be billions in it.

I know nothing about this. I imagine people are already working on it, wonder what they've figured out.

(Alternatively, in the future can I pay OpenAI to get ChatGPT to be more likely to recommend my product than my competitors?)

insane_dreamer•22m ago
Along these lines, a useful tool might be a BDD framework like Cucumber that instead of relying on written scenarios has an LLM try to "use" your UX or API a significant number of times, with some randomization, in order to expose user behavior that you (or an LLM) wouldn't have thought of when writing unit tests.
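A sketch of that loop, where `ask_llm` and `run_scenario` are hypothetical hooks into your model API and test harness:

  import random

  def fuzz_with_llm(api_docs, ask_llm, run_scenario, runs=100):
      """Have an LLM 'use' the API many times, with randomized personas,
      and collect the interactions that fail unexpectedly."""
      personas = ["impatient novice", "power user", "non-native speaker"]
      failures = []
      for _ in range(runs):
          prompt = (
              "Act as this kind of user: %s.\n"
              "Here are the API docs:\n%s\n"
              "Write one plausible request sequence. Seed: %d"
              % (random.choice(personas), api_docs, random.randint(0, 10**6))
          )
          scenario = ask_llm(prompt)       # hypothetical model call
          result = run_scenario(scenario)  # hypothetical harness execution
          if not result.ok:
              failures.append((scenario, result.error))
      return failures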
mrcwinn•8m ago
It's worth noting that behind this hallucination there were real people with ASCII tabs in need of a solution. If the result is a product-led growth channel at some scale, that's a big roadmap green light for me!