> "Good luck with all the lawsuits," added another. "This might read like a gritty founder hustle story," said software engineer Mauricio Idarraga. "But it's actually one of the most reckless and tone-deaf posts I've seen in a while."
> "We told our customers there's an 'AI that'll join a meeting'," said Udotong. "In reality it was just me and my co-founder calling in to the meeting sitting there silently and taking notes by hand."
They charged $100/month for this. If it were free, then whatever, but lying to paying customers about the service is not okay.
How do you get from 'AI that'll join a meeting' to 'an MIT engineering grad as your note taker'?
The rest about note takers is irrelevant when the problem is lying about the "note taker"; that could be the deciding factor for choosing a service, not price.
AND you didn't have context or interest in the content?
AND you were required to write an essay at the end proving that you paid attention?!
Wait...
But when it’s a SaaS product, it becomes an inspirational hustle-culture story.
I would bet the TOS mentioned manual reviews.
If I invest in your AI startup and find out it's really people doing the work, I'm going to be pissed.
https://en.wikipedia.org/wiki/Elizabeth_Holmes
There was also this, which originally claimed to be AI:
https://spectrum.ieee.org/untold-history-of-ai-mechanical-tu...
The expectation is that sensitive meetings run through a pipeline without being exposed to actual people (and if they are, it's for very specific reasons, with audit trails).
Here, they literally listen to sensitive information and can act on it.
How do you trust they won't do it again to "enhance summaries" or something in the future?
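To make the "audit trails" point concrete, here is a minimal sketch of what an auditable human-in-the-loop exception could look like. This is purely illustrative (not the startup's code); the function and identifiers are hypothetical, and it assumes a Python pipeline where any manual look at a transcript has to pass through a single logged gate.

```python
import logging
from datetime import datetime, timezone

# Hypothetical illustration: every human access to a transcript goes
# through one gate that records who looked, at what, and why.
audit_log = logging.getLogger("transcript_audit")
logging.basicConfig(level=logging.INFO)

def human_access(transcript_id: str, reviewer: str, reason: str) -> None:
    """Record a manual review before the transcript is exposed to a person."""
    audit_log.info(
        "human_review transcript=%s reviewer=%s reason=%s at=%s",
        transcript_id,
        reviewer,
        reason,
        datetime.now(timezone.utc).isoformat(),
    )

# Usage: the automated pipeline never calls this; a support engineer
# debugging one specific failure would, leaving a reviewable trail.
human_access("mtg-2017-042", "oncall@example.com", "customer-reported gap in summary")
```

The point isn't the logging library; it's that exposure to humans is an exceptional, recorded event rather than the undisclosed default.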
What this startup did isn't that, AFAICT. It wasn't manual work in service of learning...it was just fraud as a business model, no? Like, they were pretending the technology existed before it actually did. There's a bright line between unscalable hustle and misleading customers about what your product actually is.
Doing unscalable things is about being scrappy and close to the problem. Pretending humans are AI is just straight up deceiving people.
A similar example is "Make something people want". This is generally sound advice for focusing your efforts on solving customers' problems. Yet it is disastrous advice if taken literally to the fullest extent (you can only imagine).
Bias towards bullshit
> this was for our first few beta customers from 2017 and we made it clear that there was a human in the loop of the service. LLMs didn't exist yet. It was like offering an EA for $100/mo - several other startups did that as well, but obviously it doesn't scale.
So not necessarily fraud unless they deceived investors. Or he’s covering up his mistake. Getting the popcorn!!
mmcdermott•51m ago
Claiming that the transcripts were generated by a nonexistent AI is fraud and should be treated as such.
deepfriedbits•53m ago