At the same time that Tesla was making actual electric cars, Nikola was rolling fake "electric trucks" downhill.
Grifters exist, but not everyone is a grifter.
/s
>> Capitalism rewards dishonesty.
> It's a wonderful thing that no other economic systems reward dishonesty.
This is whataboutism. To rephrase: "all economic systems reward dishonesty." That's the point. Saying that not every market participant is a grifter is a form of denial.
At least capitalism in a free society is largely self-correcting.
Capitalism makes capitalism hard to appropriately regulate. Concentration of capital means concentration of power.
And yet many countries have a better handle on it than the USA. I just never, ever buy into the whole "it's too hard for America to do a thing that other countries do". I think the US is capable of anything we put our collective will towards, we just need leaders who want to lead the whole nation rather than personal profiteers.
The talk in the EU now is that we have to ease regulation because we can't compete with countries with laxer regulation. The same race to the bottom has happened in e.g. taxes and labor protections ever since capital controls were lifted.
> I think the US is capable of anything we put our collective will towards, we just need leaders who want to lead the whole nation rather than personal profiteers.
I sure hope so but I can't really see it happening. The whole US political system seems to be FUBAR, largely because the concentrations of capital bought it.
Too bad that capital owners deeply hate regulation and do everything they can to deregulate everything.
Hell, it doesn't even need to be capital owners. Plenty of bootlickers here on HN that always cry about any kind of government regulation.
I think it's obvious that this is incorrect. Part of the point of regulation is to curtail grifting. It's not possible to prevent grifting entirely, and it is the tendency of the system.
Google "China melamine contaminated milk".
Humans are dishonest, and implying that that doesn't manifest in other economic systems is willfully and maliciously false, a statement only used by propagandists.
Full self driving is the mislabeled one. Should be “90% self driving”.
Full self driving, however, may be limited to the speed limit. I don’t know, since I don’t use it.
Auto landing test 3 days ago.
(2023) https://www.theautopian.com/elon-musk-predicts-level-4-or-5-...
https://dictionary.cambridge.org/us/dictionary/english/grift
ways of getting money dishonestly that involve tricking someone
So, they got money dishonestly by tricking someone into buying their cars based on the belief that they'd offer real full self driving now or very soon, then they didn't actually deliver that.

What they want from you are comments like this. What they want to do with those comments is to preserve the consensus amongst Tesla bulls that they are on their way to selling robots and renting robot taxis by the ride.
Tesla can make electric vehicles but the company valuation is based on grift.
Although I guess your point is that it's also cheap to train them, probably cheaper than doing this. But startups are started by social people, not technical people. Stuff like this will always be expensive for social people, since they have to pay one of us to do it. YC interviews their CEOs from time to time; it's really clear that's how it works.
From what I read online, the real issue was "Natasha", their virtual assistant powered by a dedicated foundation model. They ran out of money before it got anywhere.
Yeeeah... that's a fairly disingenuous take.
The difference between Builder.ai and every other offshore dev shop backed by developers in India is that - and I say this as someone who thinks Infosys is a shit company - Infosys and all those other dev shops are at least up front about how their business works, where your app will be built, and who will be building it. Whereas Builder.ai spent quite a long time pretending that AI was doing the work when it was actually a lot of devs in India.
That is deliberately misleading and it is not OK. It's fraudulent. It's literally what Theranos did with their Edison machines that never worked: while they claimed they had this wondrous new blood-testing technology, they were actually running tests on Siemens machines, diluting blood samples, etc. The consequences of Theranos's actions were much more serious (misdiagnoses and, indeed, missed diagnoses for thousands of patients) than apps built by humans rather than AI, but lying and fraud is lying and fraud.
https://www.infosys.com/services/cloud-cobalt/offerings/ai-i...
Every big dev shop does this. Overselling tech happens all the time in this space. The line between marketing and misleading isn't always so clear. The difference is Builder.ai pushed the AI angle harder, but that doesn't make it Theranos-level fraud.
Everyone in the industry incentivizes and participates in this behavior, but once in a while, let's grab a few stand-out individuals to scapegoat for all the harm caused by/to the entire group with this behavior. Make sure you pick someone big/ugly enough to be credibly dangerous to the whole group, but who isn't too dangerous and well connected, so that you can be sure that when the card flips on them everyone around them scatters.
It's the same reason groups of individual humans do it: Scapegoating is a much lower resistance path to follow than the horrifying alternative (self-consciousness, reflection, love)
They did plenty of shady shit, including producing poor results, but that's largely incompetence, which is a separate question from fraud, i.e. from intentionally putting people's lives on the line.
IMO, the fraud kind of hides the equally important story, where an incompetent 19-year-old college dropout shockingly doesn't know how to effectively set up and manage complex systems.
We also don't know what was discussed in private. For example, it could have been something like: "We want to be part of this investment opportunity, we'll give you $40 million. But if regulators start asking questions, we want the money back."
Without full context or legal findings, everything else is just speculation.
I'm surprised no one is talking about Microsoft's investment in BuilderAI, a total loss. It's unlikely they'll recover much, if anything. So why aren't they suing the CEO and CFO? Maybe some of the issues were handled quietly behind the scenes to avoid public exposure or reputational damage? I don't know.
Theranos was clear fraud. She claimed scientific advances that did not exist.
There are always unsolved engineering and scientific challenges that stand between today and future product, and nothing is guaranteed, but you have to sell investors on the future technology (see: frontier model makers pushing AGI/ASI hype)
Obviously there are differences between Toyota's SS battery claims and Theranos' claims, but it's not a black and white line, it's a spectrum.
Saying "We will have great batteries 10 years from now" is not fraud. It's your belief about the future. Everyone knows no one can predict the future.
Saying "this hydrogen powered truck works, here is a video of it running on the road right now" but the video is edited so you don't see that it's going down hill and the car isn't actually running" that's fraud.
Theranos wasn't in trouble for saying their machines would be great one day. They got in trouble for lying about the current state of things, saying they were performing blood tests on their machines when they were not.
Take a minute to visit their site and get informed. We live in a time where people form opinions just by reading a headline.
Overselling becomes fraud, and a crime, at a certain point, which they clearly passed; otherwise they wouldn't have had their lenders pull back money and leave them bankrupt.
Kind of like how these guys lied about the volume of sales they had. Textbook fraud. They aren't in trouble for saying "AI is going to be great"
The company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, the Indian founders and accountants reportedly engaged in round-tripping with VerSe Innovation. That raised red flags for investors, regulators, and prosecutors, and led to bankruptcy proceedings.
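For anyone unfamiliar with round-tripping, here's a toy sketch of the mechanics in Python (hypothetical numbers, nothing to do with the actual figures):

    # Round-tripping: two companies "sell" each other offsetting services
    # so both can book revenue, even though no real business happened.
    a_revenue = b_revenue = 0
    a_cash = b_cash = 0

    # A bills B 10M for "marketing"; B bills A 10M for "licensing".
    a_revenue += 10_000_000; a_cash += 10_000_000; b_cash -= 10_000_000
    b_revenue += 10_000_000; b_cash += 10_000_000; a_cash -= 10_000_000

    print(a_revenue, b_revenue)  # 10000000 10000000: both report "sales"
    print(a_cash, b_cash)        # 0 0: net cash unchanged

Auditors look for exactly this pattern: reported revenue matched by near-equal spending with the same counterparty.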
In general, I kind of disagree with this. I am not a lawyer, so I don't know all the details, but if you look for it, you should be able to find the line, since it's generally illegal to mislead customers. There's also a whole set of contractual and perhaps even legal obligations when it comes to investors.
For contracts and the law to be enforceable, they need to draw lines as clearly as possible. There's always some amount of detail that's up to interpretation, but companies pay legal counsel to make sure they don't cross those lines.
Now, specifically in this case, I do agree with you. This case doesn't seem to be a legal matter of misleading customers or investors (thus far). Viola Credit did seize $37 million, so IMO there clearly was a breach of contract in all this, but it seems like that had nothing to do with the whole AI overselling.
There's no way that a team of programmers can ever produce code quickly enough to mimic anything close to the response time of a coding LLM.
Coding LLMs do not solve the problems of hallucination, of using antiquated libraries and technologies, or of screwing up large code bases because of limited context size.
Given a well-architected component library and set of modules, I would bet that on average I could build a correct website faster.
Builder.ai didn't tell investors they were competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. They told investors they were building a virtual assistant for customers. This assistant was meant to "talk" to clients, gather requirements and automate parts of the build process. Very different space.
And like I said in another comment, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.
Questions:
1. Did Craig Saunders, the VP of AI (and ex-Amazon), ever show investors or clients any working demo of Natasha? Or a product roadmap?
2. Was there a technical team behind Saunders capable of building such a model?
3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?
Creating a dedicated pretrained model is a prerequisite of any LLM. What do you mean by "full LLM"?
LLMs are a type of foundation model, but not all foundation models are LLMs (a vision model like CLIP, for example, is a foundation model but not an LLM). What Builder.ai was building with Natasha sounded more like a domain-specific assistant, not a general-purpose LLM.
We always thought it was just a joke...
Shameless plug, but we built this (https://v1.slashml.com) in roughly 2 weeks. Granted it's not as mature, but we don't have billions :)
Insider trading charges filed over Long Island Iced Tea’s blockchain ‘pivot’ https://www.cnn.com/2021/07/10/investing/blockchain-long-isl...
Besides simple one page self-contained apps, yes, it's quite hard. So hard that it's still an unsolved problem.
I did my research before jumping into this space :)
Thanks for the search though:
> https://news.ycombinator.com/item?id=30397201
This one from 3 yrs ago had some interesting comments
> off topic, but they have a very suspect pricing page: https://www.builder.ai/studio-store
> "Delivery: 12 weeks"
> is Builder.ai just a CRUD app for indian sweatshops to build the apps?
> > It would not have spawned an entire industry and no code websites every other week or so if it was ‘just a CRUD app’.
Phrased differently, 30% of all transactions were still entered by a human overseas watching cameras at the time they decided to pull the plug, years after the initial launch.
There was a 2 year period in which I bought lunch at an Amazon Go daily. I was naive to the magic so I thought it was the greatest innovation ever.
https://tech.walmart.com/content/walmart-global-tech/en_us/b...
Shame to see another project fall to the strategy of AI = "actually Indians". I wonder how many other companies have engaged in this stuff.
If you want to compete with the likes of GPT-4, Claude, or Gemini today, you're looking at billions, just for training, not counting infra, data pipelines, evals, red teaming, and everything else that comes with it.
Builder.ai wasn't able to use GenAI to actually build software. And when the money ran out and no model was ever announced, investors lost trust and clients lost patience.
I worked on the first iteration of Amazon Go in 2015/16 and can provide some context on the human oversight aspects.
The system incorporated human review in two primary capacities:
1. Low-confidence event resolution: A subset of customer interactions resulted in low-confidence classifications that were routed to human reviewers for verification. These events typically involved edge cases that were challenging for the automated systems to resolve definitively. The proportion of these events was expected to decrease over time as the models improved. This was my experience during my time with Go.
2. Training data generation: Human annotators played a significant role in labeling interactions for model training, particularly when introducing new store fixtures or customer behaviors. For instance, when new equipment like coffee machines was added, the system would initially flag all related interactions for human annotation to build training datasets for those specific use cases. Of course, that results in a surge of humans needed for annotation while the data is collected.
Scaling from smaller grab-and-go formats to larger retail environments (Fresh, Whole Foods) would require expanded annotation efforts due to the increased complexity and variety of customer interactions in those settings.
This approach represents a fairly standard machine learning deployment pattern where human oversight serves both quality assurance and continuous improvement.
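For what it's worth, the low-confidence routing in (1) is a standard human-in-the-loop pattern. A minimal sketch in Python (hypothetical names and threshold, not Amazon's actual system):

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Interaction:
        shopper_id: str
        predicted_item: str
        confidence: float  # model confidence in [0, 1]

    REVIEW_THRESHOLD = 0.85              # hypothetical cutoff
    human_review_queue: Queue = Queue()  # drained by human reviewers elsewhere

    def route(event: Interaction) -> str:
        # Auto-commit high-confidence classifications; everything else goes
        # to human review, which doubles as labeled training data.
        if event.confidence >= REVIEW_THRESHOLD:
            return f"charge {event.shopper_id} for {event.predicted_item}"
        human_review_queue.put(event)
        return "queued for human review"

    # A clean pick vs. an ambiguous reach into a crowded shelf
    print(route(Interaction("s1", "sandwich", 0.97)))
    print(route(Interaction("s2", "coffee", 0.41)))

The fraction landing in the queue is the number to watch; as noted above, it was expected to fall as the models improved.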
The news story is entertaining but it implies there was no working tech behind Amazon Go which just isn't true.
They probably had to train people to talk like ChatGPT.
Step 0: Make sure you have an em dash shortcut on your keyboard and use that as often as possible.
Step 1: Be extremely polite and apologize profusely.
Step 2: ...
Another failure of due diligence. I really wonder how high-profile investors pour hundreds of millions into a company without doing something simple like ordering an app using a burner account.
You'd think that if you're investing $1M+, there's budget for at least getting an intern / assistant to do that.
The VP of AI was Craig Saunders, the same person who helped create Amazon Alexa. The problem is, they ran out of money. $500 million sounds like a lot, but it's not even close to what you need to build and train a real LLM. You need billions. Most people just don't realise that.
See: https://www.businesswire.com/news/home/20240611122778/en/Bui...
AI all around is purely about consolidation of power and money. It's bad for workers, and ultimately probably bad for the startup world and competition more broadly.
That was right around the time this company had a new $250M funding round, so lack of resources to invest in actual AI is a terrible excuse.
I'm just telling you what I read online. Builder.ai wasn't competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. Builder.ai was building a virtual assistant for customers. It was meant to "talk" to clients, gather requirements and automate parts of the build process. Very different space.
And like I said before, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.
Everyone talks about models and infrastructure, but without the right data, you've got nothing. And that's where the biggest hidden cost is.
According to the company's own website, they were creating the data themselves. Labelled datasets, transcripts, customer conversations, documents, and more. That's where the real money goes. Millions and millions.
Or having an AI Agent do it...
That’s surely worth the $30 I paid.
Claiming you can do something specific (use AI to do something) and then using humans to do the labor is something else entirely. If you raise money on that, it's just fraud.
I think the story has been exaggerated a lot, though. The original story was that the admins were doing real submission activity (links, etc.) but they had a mechanism to create a new user account with the submission. So they created a lot of new user accounts for themselves, but the activity was real and driven by the founders.
We all have test accounts on our production systems. If it's a tiny number of the overall users at the time of fundraising, it doesn't matter. On the other hand, if they created 10,000 accounts and then claimed they had 11,000 users, that would be blatant fraud. I really don't think they did anything like that, though. I think they seeded the very initial site with content and made different "accounts" for it, but by the time they raised they had real traffic.
Because at the very least they killed most countermeasures to bots, and a serious percentage of activity on Twitter is "fake engagement".
I also have a much more difficult question: Could you explain how this fraud works/applies if nation states are the ones developing the bots? Is there a difference between foreign and US bots?
Demonstrating this in court might get pretty complicated, though. Legal terms often have a way of obscuring the complexity of real life (which is understandable, of course).
I'm guessing the number of well-known startups who have committed fraud by "faking it until they make it" is somewhere between 1 and N. What that number is might well be subjective to the judge or jury rendering a verdict. Unfortunately, lack of serious insight into this might also be evidence that "faking it until you make it" works even if it's fraud, so long as you can spin revenue that investors demand out of it eventually.
Edit: forgive my claiming lack of evidence = evidence; I'm just tired. I think my point is that it's kind of unknowable, and this might prompt people to accept it as proof positive (even irrationally). I hope my comment can be received in good faith.
Also, a bunch of the bots are run by nation states. In that case I would expect that at least some courts would not cooperate with any such fraud case (Russia, India, China; I don't know about Europe but I doubt there aren't a few examples... and maybe the US, probably at least a few states). Best of luck making anything stick if the courts do not cooperate.
So when founders are starting a new site, they need to bootstrap by getting enough content in there to drive browsing. Only then will the audience grow, and only then will users start to post their own stuff. This is what Reddit did, and it’s not unique to them. YouTube’s founders did the same thing when they started.
Note that this is not “fake it til you make it.” This is investment in audience growth.
I get $100M. Maybe even $200M.
But $400M?
Unforgivable.
So by this math each employee got 1,900ish hookers. Since I figure male hookers for the female employees were cheaper, we'll round up to 2,000.
That is in fact unforgivable. 1,000 would have been acceptable. 2,000... just excess.
That estimate seems off. Please crunch the numbers once again. Make sure to factor in inflation.
https://www.levels.fyi/t/software-engineer/locations/greater...
But more importantly, we're all pretending the only cost of building anything is salaries. A company that size could blow a million dollars a month just on AWS, and the AI stuff is waaaay more expensive.
And, like many things in this world, you'll find you'll pay for what you get.
My guess? Most of the cash is socked away in BTC or some such wealth sink just waiting for the individuals to clear their bothersome legal issues.
Had they done this years ago, they would be so rich it would be worth keeping builder.ai going just to avoid legal problems.
OpenAI is on track to spend $14 billion this year.
Wouldn't surprise me if the developers were hired from sweatshop staffing agencies, or were just working directly for minimum wage, if even that.
There’s a lot of poor Indians forced into slave labor conditions there by tricking them into job opportunities. But there is not a lot of call center scams run today in India. Not at the scale at which Cambodia runs them.
My personal estimate is that it is about 80% of the startups you see around.
Of course, "Indian-as-a-Service" doesn't sound as cool as AI, but besides this, I think it's a valid solution and a business model for many use cases.
> If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.
Shameful.
I was hoping for something interesting, but it is just plain old fashioned accounting fraud.
When I say apparent, it took less than 15 minutes and a couple of google searches to get a sniff of it.
Somehow, you can still raise $500MM ++.
I think about that a lot
Microsoft-backed UK tech unicorn Builder.ai collapses into insolvency - https://news.ycombinator.com/item?id=44080640 - May 2025 (136 comments)
The boring claim is that the company inflated its sales through a round-tripping scheme: https://www.bloomberg.com/news/articles/2025-05-30/builder-a... (https://archive.ph/1oyOw). That's consistent with other recent reporting (e.g. https://news.ycombinator.com/item?id=44080640)
The lurid claim is that the company's AI product was actually "Indians pretending to be bots". From skimming the OP and https://timesofindia.indiatimes.com/technology/tech-news/how..., the only citation seems to be this self-promotional LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7334521... (https://web.archive.org/web/20250602211336/https://www.linke...).
Does anybody know of other evidence? If not, then it looks bogus, a case of "il faudrait l'inventer" ("it would have to be invented") which got traction by piggybacking on an old-fashioned fraud story.
To sum up: the substantiated claim is boring and the lurid claim is unsubstantiated. When have we ever seen that before? And why did I waste half an hour on this?
(Thanks to rafram and sva_ for the links in https://news.ycombinator.com/item?id=44172409 and https://news.ycombinator.com/item?id=44175373.)
Update: They mention AI to assemble features, not to generate code. So it's impossible to know whether they were actually using ML (traditional AI) to resolve dependencies and pull packages from a repo.
---
Message to HN:
Instead of founding yet another startup, please build the next Tech Vice News and fucking go to the far corners of the tech world with a camera, like Shane Smith did with North Korea. I promise to be a founding subscriber at whatever price you got.
Things you’ll need:
1) Credentialed Ivy League grad. Make sure they are sporadic like that WeWork asshole.
2) Ex VC who exudes wealth with every footstep he/she takes
3) The camera
4) And as HBO Silicon Valley suggests, the exact same combination of white guy, Indian guy, Chinese guy to flesh out the rest of the team.
See, I need to know what it's like working for a scrum master at Tencent, for example, during crunch time. Also, whatever the fuck goes on inside a DeFi company in executive meetings. And of course, find the next Builder.ai, or at least the Microsoft funding round discussions. We've yet to even get a camera inside those Arab money meetings where Sam Altman begs for a trillion dollars. We shouldn't live without such journalism.
My gut feeling is that a lot of people, including developers, are posting hate messages and spreading fake news because of their fear of AI, which they see as a threat to their jobs.
If you look at their website, builder.ai, they tell customers that their virtual assistant, "Natasha", assigns a developer (I assume from India):
> Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked.
Source: https://www.builder.ai/how-it-works
They also have another page explaining how they use deep learning and transformers for speech-to-text processing. They list a bunch of libraries like MetaPath2Vec, Node2Vec, GraphSage, and Flair:
Source: https://www.builder.ai/under-the-hood
It sounds impressive, but listing libraries doesn't prove they built an actual LLM.
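To make that concrete: the libraries they list are off-the-shelf. Assuming the standard Flair API, for example, tagging entities in text takes a handful of lines and zero model-building:

    from flair.data import Sentence
    from flair.models import SequenceTagger

    # Load a pretrained NER tagger (downloaded on first use); no training involved
    tagger = SequenceTagger.load("ner")

    sentence = Sentence("Builder.ai was a London startup backed by Microsoft.")
    tagger.predict(sentence)

    for entity in sentence.get_spans("ner"):
        print(entity)  # spans tagged ORG / LOC etc., with confidence scores

Name-dropping Node2Vec or Flair tells you a team pip-installed some packages, not that they trained a foundation model.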
So, the questions that remain unanswered are:
1. Did Craig Saunders, the Head of AI at Builder.ai (and ex-Director of AI at Amazon), ever show investors or clients a working demo of Natasha, or a product roadmap? How do we know Natasha was actually an LLM and not just someone sitting in a call centre in India?
2. Was there a technical team behind Saunders capable of building such a model?
3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?
Having said that, the company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, BuilderAI reportedly engaged in "round-tripping" with VerSe Innovation. This raised red flags for investors, regulators and prosecutors, and led to bankruptcy proceedings.
I've seen a lot of posts coming out of India claiming "we were the AI". So I looked into it to see if Builder AI was lying, or if this was just a case of unpaid developers from India spreading rumours after the company went bust.
Here's what some of the devs are saying:
> "We were the AI. They hired 700 of us to build the apps"
Sounds shocking, but it doesn't hold up.
The problem is, BuilderAI never said development was done using AI. Quite the opposite. Their own website explains that a virtual assistant called "Natasha" assigns a human developer to your project. That developer then customises the code. They even use facial recognition to verify it's the same person doing the work.
> "Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked."
Source: https://www.builder.ai/how-it-works
I also checked the Wayback Machine. No changes were made to that site after the scandal. Which means: yes, those 700 developers were probably building apps, but no, they weren't "the AI". Because the company never claimed the apps were built by AI to begin with.
Verdict: FAKE NEWS
Notice how (aside from MS) most participants were not experienced Enterprise SaaS or AI/ML investors.
Seems like a bad bet that went south.
Almost definitionally, VCs are investing someone else's money (the people providing the capital are called the "limited partners" (LPs); the VCs who raise and invest the money are "general partners" (GPs).) The LPs are often pension funds, university endowments, and charitable organizations.
Yes, GPs do typically have a capital contribution requirement, but it's generally in the area of 1% of the fund, so the vast majority of what VCs are investing is other people's money, for which they definitely have fiduciary responsibility.
Startups will always carry a risk, and VCs are not betting that the company will be asymptotically good, just good enough to make an exit.
This is a misunderstanding of VC investment. Any competent VC expects most of their investments to go to zero. They're hoping a small percent of their investments will make up for the losses. The goal of a decent VC isn't to avoid bad investments so much as it is to make sure they get one good investment. A good investment in AirBnB/Google/Facebook will make up for dozens of speculative bets that go to zero.
I'll be doing a linguistic nitpick now, as I felt it was a bit harsh to label my statement as a misunderstanding.
The bet is still on each individual investment having a good exit, with the implied assumption that betting is a probabilistic game.