
Builder.ai Collapses: $1.5B 'AI' Startup Exposed as 'Indians'?

https://www.ibtimes.co.uk/builderai-collapses-15bn-ai-startup-exposed-actually-indians-pretending-bots-1734784
355•healsdata•1d ago

Comments

mountainriver•1d ago
Do folks think that this was utter negligence by the VCs, or just a pump and dump?
fakedang•1d ago
Not just VCs. Microsoft was an investor too.
pyman•1d ago
Microsoft invested £250M in Inflection AI, £250M in Builder.ai, and has backed several other companies working on LLMs. They’ve been placing strategic bets across the AI space, but only a few of those companies actually had the talent, infrastructure, and funding needed to build real models.

The VP of AI was Craig Saunders, the same person who helped create Amazon Alexa. The problem is, they ran out of money. $500 million sounds like a lot, but it's not even close to what you need to build and train a real LLM. You need billions. Most people just don't realise that.

See: https://www.businesswire.com/news/home/20240611122778/en/Bui...

seanp2k2•1d ago
...and given the massive success of Amazon Alexa...
alephnerd•1d ago
For every investor in Builder, there were multiple that passed.

Notice how (aside from MS) most participants were not experienced Enterprise SaaS or AI/ML investors.

s1artibartfast•1d ago
Neither? Negligence doesn't make sense because it was the VCs' own money, so there was no duty of care. It doesn't seem like anyone cashed out either.

Seems like a bad bet that went south.

sib•1d ago
>> Negligence doesn't make sense because it was the VCs own money

Almost definitionally, VCs are investing someone else's money (the people providing the capital are called the "limited partners" (LPs); the VCs who raise and invest the money are "general partners" (GPs).) The LPs are often pension funds, university endowments, and charitable organizations.

Yes, GPs do typically have a capital contribution requirement, but it's generally in the area of 1% of the fund, so the vast majority of what VCs are investing is other people's money, for which they definitely have fiduciary responsibility.

s1artibartfast•1d ago
That's fair.
petesergeant•1d ago
I dunno, there's a world where this ends differently, and the company was liquid for six more months and transitioned to "actual" AI instead, having already built the customer base and sales channels, and everyone's happy. Launching your AI product before the AI works appears not to be especially unusual and is only a problem if the money runs out before you finish building the AI.
flowerthoughts•1d ago
There's a more-or-less useful adage in investing: scared money don't make money.

Startups will always carry a risk, and VCs are not betting that the company will be asymptotically good, just good enough to make an exit.

compiler-guy•1d ago
And even more than that, just that some company they invest in will be a winner. It's OK for most to fail, as long as one of them does well. So they invest in lots of long-shots.
MegaButts•1d ago
> VCs are not betting that the company will be asymptotically good, just good enough to make an exit.

This is a misunderstanding of VC investment. Any competent VC expects most of their investments to go to zero. They're hoping a small percent of their investments will make up for the losses. The goal of a decent VC isn't to avoid bad investments so much as it is to make sure they get one good investment. A good investment in AirBnB/Google/Facebook will make up for dozens of speculative bets that go to zero.
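The power-law logic in the comment above can be made concrete with toy numbers (all figures below are hypothetical, not any real fund's returns):

```python
# Toy portfolio illustrating power-law VC returns (all numbers hypothetical).
# 20 checks of $5M each; 19 companies go to zero, one returns 100x.
checks = [5.0] * 20                  # $M invested per company
outcomes = [0.0] * 19 + [500.0]      # 19 wipeouts, one 100x exit ($M returned)

invested = sum(checks)               # total capital deployed
returned = sum(outcomes)             # total capital returned
multiple = returned / invested
print(f"Invested ${invested:.0f}M, returned ${returned:.0f}M -> {multiple:.1f}x fund")
```

With these made-up numbers, a 95% failure rate still yields a 5x fund, which is why a single Airbnb-sized hit covers dozens of zeros.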

flowerthoughts•19h ago
> This is a misunderstanding of VC investment. Any competent VC expects most of their investments to go to zero.

I'll be doing a linguistic nitpick now, as I felt it was a bit harsh to label my statement a misunderstanding.

The bet is still on each investment to have a good exit. With the implied assumption that betting is a probabilistic game.

MegaButts•15h ago
No, this is wrong. VCs regularly bet on companies they expect to fail, and occasionally even know will fail. They sometimes put money into companies knowing they will never get it back. They do not expect a positive return on every investment.
Ancalagon•1d ago
Really weird considering how much AI is actually available now
immibis•1d ago
Almost like it doesn't work as well as they market it as working?
klipt•1d ago
Not all companies are equal.

At the same time that Tesla was making actual electric cars, Nikola was rolling fake "electric trucks" downhill.

Grifters exist, but not everyone is a grifter.

Supermancho•1d ago
Capitalism rewards dishonesty. Every company is a grifter to some degree. This is more widespread in technical service companies.
sib•1d ago
It's a wonderful thing that no other economic systems reward dishonesty.

/s

Supermancho•1d ago
>>> Grifters exist, but not everyone is a grifter.

>> Capitalism rewards dishonesty.

> It's a wonderful thing that no other economic systems reward dishonesty.

This is a whataboutism. To rephrase, "all economic systems reward dishonesty." - That's the point. Saying not every market participant is a grifter is a form of denial.

deadeye•1d ago
To an extent. Just not to the extent that dishonesty is rewarded in socialism and communism.

At least Capitalism in a free society is largely self correcting.

burnte•1d ago
Capitalism does no such thing; the market available to the company does. Every problem people blame on "capitalism" is solvable with appropriate regulation. That's literally the point of regulation, too. Dozens of other countries show you can have a vibrant economy that isn't beholden to a few billionaires.
jampekka•1d ago
> Every problem people blame on "capitalism" is solvable with appropriate regulation.

Capitalism makes capitalism hard to appropriately regulate. Concentration of capital means concentration of power.

burnte•1d ago
> Capitalism makes capitalism hard to appropriately regulate. Concentration of capital means concentration of power.

And yet many countries have a better handle on it than the USA. I just never, ever buy into the whole "it's too hard for America to do a thing that other countries do". I think the US is capable of anything we put our collective will towards, we just need leaders who want to lead the whole nation rather than personal profiteers.

jampekka•1d ago
The general trend has been towards deregulation, and capital concentration, almost everywhere for decades. US is just ahead.

The talk in EU is now that we have to ease regulation because we can't compete with countries with laxer regulation. The same race to the bottom has happened in e.g. taxes and labor protections ever since capital controls were lifted.

> I think the US is capable of anything we put our collective will towards, we just need leaders who want to lead the whole nation rather than personal profiteers.

I sure hope so but I can't really see it happening. The whole US political system seems to be FUBAR, largely because the concentrations of capital bought it.

otherme123•1d ago
Capturing the regulators is a great way to make yourself immune to market pressure.
surgical_fire•1d ago
> Capitalism does no such thing; the market available to the company does. Every problem people blame on "capitalism" is solvable with appropriate regulation

Too bad that capital owners deeply hate regulation and do everything they can to deregulate everything.

Hell, it doesn't even need to be capital owners. Plenty of bootlickers here on HN that always cry about any kind of government regulation.

Supermancho•1d ago
> Capitalism does no such thing,

I think it's obvious that this is incorrect. Part of regulation is to try to curtail grifting. It is not possible to prevent it, and grifting is the tendency of the system.

burnte•1d ago
Not being able to eliminate it doesn't mean controlling it is impossible or that we shouldn't try. In any system there will be cheaters, it's how you deal with it that counts. A well regulated economy makes wealth spread around.
ahazred8ta•1d ago
In Eastern Europe (COMECON) during the Cold War days, the factories were famous for cranking out products that claimed to do something but did not deliver on the promise. Did they do that because they were run by capitalists?

Google China melamine contaminated milk.

AngryData•1d ago
I find it hard to believe any claim that they weren't capitalists. Almost nothing in the USSR was operated or managed by worker collectives. Nor in China. A handful of people managed the capital of these businesses and profited off them, and the people who own, operate, and profit off of capital investments are capitalists, even if they slap a PR sign up claiming they aren't.
earnestinger•1d ago
Technically you are wrong. The best kind of wrong.
ahazred8ta•19h ago
That's why I specified the Eastern European countries where many of the factories actually were controlled by the unions. They themselves acknowledged that they regularly fudged their numbers to get undeserved bonus money for workers.
throw10920•18h ago
Thank you for pointing out that this has nothing to do with capitalism.

Humans are dishonest, and implying that that doesn't manifest in other economic systems is willfully and maliciously false, a statement only used by propagandists.

mjmsmith•1d ago
Not sure Tesla is the poster child for non-grifters in the context of AI.
andrei_says_•1d ago
Or “auto pilot”
lotsofpulp•1d ago
Autopilot is pretty good at driving within the same lane. Just as a plane pilot still has to do the more complicated stuff like takeoff and landing, one can expect the same with Autopilot (the driver handles changing lanes, stopping at lights or stop signs, and turning).

Full self driving is the mislabeled one. Should be “90% self driving”.

seanp2k2•1d ago
Not when that lane is the HOV lane and traffic is going (speed limit + 15mph). You can always spot the white Tesla on 101 doing the speed limit in the HOV lane during rush hour with a mile of cars behind them and a mile of nothing in front.
lotsofpulp•1d ago
The driver can set the auto pilot speed to whatever number they want. If they are going the speed limit, it’s because the driver chose to cruise at that max speed.

Full self driving, however, may be limited to the speed limit. I don’t know, since I don’t use it.

pixl97•1d ago
> plane pilot needs to do the more complicated stuff like take off and landing

Auto landing test 3 days ago.

https://www.youtube.com/watch?v=eijEPsSdqg8

lotsofpulp•20h ago
Interesting. For marketing purposes, however, I would think almost all people still associate auto pilot with just cruising in the air, similar to cruising on a highway.
bufferoverflow•1d ago
How are they grifters? Tesla gave us FSD to try for free for a month, twice now. If you don't like its performance, don't pay.
seanp2k2•1d ago
Elon Musk Predicts Level 4 Or 5 Full Self-Driving ‘Later This Year’ For the Tenth Year In A Row

(2023) https://www.theautopian.com/elon-musk-predicts-level-4-or-5-...

https://dictionary.cambridge.org/us/dictionary/english/grift

    ways of getting money dishonestly that involve tricking someone
So, they got money dishonestly by tricking someone into buying their cars based on the belief that they'd offer real full self driving now or very soon, then they didn't actually deliver that.
bufferoverflow•1d ago
Who cares what Elon says? You can literally try the product and decide. Or you can watch any of hundreds of videos on YouTube of people trying it.
a4isms•1d ago
You the customer are only a minor part of the grift. In fact, you're an unwitting prop for the grift. The entire point of the grift is the stock price, not your $8,000 or monthly subscription.

What they want from you are comments like this. What they want to do with those comments is to preserve the consensus amongst Tesla bulls that they are on their way to selling robots and renting robot taxis by the ride.

jampekka•1d ago
Coast-to-coast self-driving Teslas were promised by 2017. And have been promised next year almost every year since.

Tesla can make electric vehicles, but the company valuation is based on grift.

wongarsu•1d ago
If you have an idea for a cool AI startup, it's faster to build your first prototype without the actual AI, just faking that part. But if your Actual Indians had 95% accuracy and you can't get an AI to do better than 85%, then you are kind of stuck, having raised money and gotten customers while pretending that your Actual Indians are Artificial Intelligence.
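The 95%-vs-85% gap in the comment above looks small until translated into error rates; a quick back-of-the-envelope sketch (the workload size is made up, the accuracies are the commenter's hypotheticals):

```python
# Error rates implied by the hypothetical human vs. model accuracies above.
human_acc, model_acc = 0.95, 0.85
tasks = 1000  # hypothetical number of customer tasks

human_errors = round(tasks * (1 - human_acc))   # failures with Actual Indians
model_errors = round(tasks * (1 - model_acc))   # failures with the AI
print(f"Per {tasks} tasks: humans miss {human_errors}, the model misses "
      f"{model_errors} ({model_errors / human_errors:.0f}x the rework)")
```

A 10-point accuracy drop triples the failures customers see, which is why swapping the humans out quietly is so hard.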
TYPE_FASTER•1d ago
This is the way. Funny how AI could also stand for Actual Intelligence. Or, Artisanal Intelligence? "Now 100% organic handcrafted thoughts, unique for your business problem."
more_corn•1d ago
Not true. It's super easy to fine-tune and deploy one of the open models. I should teach a course.
msgodel•19h ago
The technical aspects of training and tuning are trivial. GP is pointing out that you might not be able to get the model to succeed at the task as often for any number of reasons that you won't know before you actually train one.

Although I guess your point is that it's also cheap to train them, probably cheaper than doing this. But startups are started by social people, not technical people. Stuff like this will always be expensive for social people since they have to pay one of us to do it. YC interviews their CEOs from time to time, it's really clear that's how that works.

mrweasel•1d ago
Also, it can't have been fast. Didn't customers and investors find it weird that Copilot spits out code as fast as you can type, while Builder.ai needed days or weeks to generate your app? Or were these Indian developers just really, really fast?
givemeethekeys•1d ago
Maybe they use GPT :)
helloplanets•1d ago
There's this one super secret agentic framework that beats all the benchmarks...
hyperadvanced•1d ago
Available, sure, but cost effective? My guess is that they tried a lot of things to get ChatGPT to work and ran out of money before it got cheap enough to fit a reliable business model. Early but not wrong, I guess.
Havoc•1d ago
I've read indications that this was always aimed at a hybrid model rather than pure AI, but it's hard to tell now because all the news is riding this "Indians" train.
bboygravity•1d ago
That makes more sense, would explain the unbelievable clickbait headlines (as usual).
giarc•1d ago
Not only that..."The deception wasn't new. As early as 2019, The Wall Street Journal exposed Builder.ai's questionable AI claims, revealing that the platform relied heavily on human contractors rather than artificial intelligence."
aitchnyu•20h ago
That sounds a little more reasonable. In 2016, the year of Builder's founding, Uber was pursuing self-driving as well as hiring human drivers.
rafram•1d ago
Article from a slightly better source: https://timesofindia.indiatimes.com/technology/tech-news/how...
pyman•1d ago
This is fake news. Builder.ai, like any other dev shop, had clients and was building apps using developers in India, pretty much like Infosys or any other Indian dev shop. Nothing wrong with that.

From what I read online, the real issue was "Natasha", their virtual assistant powered by a dedicated foundation model. They ran out of money before it got anywhere.

bartread•1d ago
> This is fake news. Builder.ai, like any other dev shop, had clients and was building apps using developers in India, pretty much like Infosys or any other Indian dev shop. Nothing wrong with that.

Yeeeah... that's a fairly disingenuous take.

The difference between every other offshore dev shop backed by developers in India and Builder.ai is that - and I say this as someone who thinks Infosys is a shit company - Infosys and all those other dev shops are at least up front about how their business works and where and who will be building your app. Whereas Builder.ai spent quite a long time pretending like they had AI doing the work when actually it was a lot of devs in India.

That is deliberately misleading, and it is not OK. It's fraudulent. It's literally what Theranos did with their Edison machines that never worked: while they claimed they had this wondrous new blood-testing technology, they were actually running tests with Siemens machines, diluting blood samples, etc. The consequences of Theranos's actions were much more serious (misdiagnoses and, indeed, missed diagnoses for thousands of patients) than apps built by humans rather than AI, but lying and fraud is lying and fraud.

pyman•1d ago
I don't agree. Even Infosys markets AI as part of their offering, just look at their "AI for Infrastructure" pitch:

https://www.infosys.com/services/cloud-cobalt/offerings/ai-i...

Every big dev shop does this. Overselling tech happens all the time in this space. The line between marketing and misleading isn't always so clear. The difference is Builder.ai pushed the AI angle harder, but that doesn't make it Theranos-level fraud.

mistercheph•1d ago
Arguably, Theranos was also somewhere in a gray area between marketing and fraud.

Everyone in the industry incentivizes and participates in this behavior, but once in a while we grab a few stand-out individuals to scapegoat for all the harm caused by (and to) the entire group. Make sure you pick someone big/ugly enough to be credibly dangerous to the whole group, but not so dangerous and well connected that you can't be sure everyone around them scatters when the card flips on them.

It's the same reason groups of individual humans do it: Scapegoating is a much lower resistance path to follow than the horrifying alternative (self-consciousness, reflection, love)

pyman•1d ago
Theranos was dealing with people's health. Misdiagnoses, delayed treatments, etc, that's real harm. Imo, comparing that to building web apps isn't the same.
Retric•1d ago
Theranos was using the same testing equipment and techniques as any other lab for most of their diagnostic services. Which is how they avoided being instantly exposed when their results ended up being meaningless. “In October 2015, John Carreyrou of The Wall Street Journal reported that Theranos was using traditional blood testing machines instead of the company's Edison devices to run its tests, and that the company's Edison machines might provide inaccurate results.” https://en.wikipedia.org/wiki/Theranos

They did plenty of shady shit including producing poor results, but that’s largely incompetence independent of fraud vs intentionally putting people’s lives on the line.

IMO, the fraud kind of hides the equally important story in which an incompetent 19-year-old college dropout shockingly doesn't know how to effectively set up and manage complex systems.

aprilthird2021•1d ago
The actual crime Theranos founder went to jail for was not misdiagnosing people. It was defrauding investors because they made them believe their machines were doing the tests when really they were sending them out to separate labs
pyman•1d ago
Completely different story. With Theranos the investors sued the founders, with Builder AI they didn't. This suggests they knew what was really going on, so it wasn't fraud in their eyes.
aprilthird2021•1d ago
It is not a completely different story. The lender yanked back the money they lent because they found out about fraudulent sales numbers. That led to the bankruptcy. It was still the people whose money was in the game who brought the company down in both scenarios because fraud is a big red line for anyone whose money is on the line
pyman•23h ago
I understand where you're coming from, but we need to stick to the facts. If there are no court cases, we can't imply that fraud was committed. We don't know what kind of agreements were in place, why the money was being transferred, or what the expectations were on both sides.

We also don't know what was discussed in private. For example, it could have been something like: "We want to be part of this investment opportunity, we'll give you $40 million. But if regulators start asking questions, we want the money back."

Without full context or legal findings, everything else is just speculation.

I'm surprised no one is talking about Microsoft's investment in BuilderAI, a total loss. It's unlikely they'll recover much, if anything. So why aren't they suing the CEO and CFO? Maybe some of the issues were handled quietly behind the scenes to avoid public exposure or reputational damage? I don't know.

lotsofpulp•1d ago
>Arguably, Theranos was also somewhere in a gray area between marketing and fraud.

Theranos was clear fraud. She claimed scientific advances that did not exist.

mistercheph•1d ago
What about traditional auto manufacturers making claims about solid state battery technology they will achieve in the next decade that they haven't yet?

There are always unsolved engineering and scientific challenges that stand between today and future product, and nothing is guaranteed, but you have to sell investors on the future technology (see: frontier model makers pushing AGI/ASI hype)

Obviously there are differences between Toyota's SS battery claims and Theranos' claims, but it's not a black and white line, it's a spectrum.

aprilthird2021•1d ago
Why are so many people here pretending fraud is ambiguous?

Saying "We will have great batteries 10 years from now" is not fraud. It's your belief about the future. Everyone knows no one can predict the future.

Saying "this hydrogen powered truck works, here is a video of it running on the road right now" but the video is edited so you don't see that it's going down hill and the car isn't actually running" that's fraud.

Theranos wasn't in trouble for saying their machines would be great one day. They got in trouble for lying about the current state of things, saying they were performing blood tests on their machines when they were not.

mistercheph•11h ago
I'll give you a more temporally synchronous example if you like, Microsoft's deliberately misleading claims about their quantum computing progress: https://www.science.org/content/article/debate-erupts-around...
pyman•9h ago
BuilderAI never actually told customers that development was done using AI, that's something people made up after the company went bust. If you look at their website (builder.ai), they explain that their virtual assistant "Natasha" assigns a developer, and then uses face recognition to verify the identity of the developer.

Take a minute to visit their site and get informed. We live in a time where people form opinions just by reading a headline.

aprilthird2021•1d ago
> Overselling tech happens all the time in this space.

Overselling is fraud, and it becomes a crime at a certain point, which they clearly passed; otherwise they wouldn't have had their lenders pull back money and leave them bankrupt.

pyman•1d ago
Just to play the devil's advocate: if a software company tells you your data is secure and then someone hacks their server and steals your photos and personal data, did their CEO and marketing department oversell their level of security? Is this fraud as well?
aprilthird2021•1d ago
"Your data is secure" is known to never be 100%. But what assessments and technology they say they use for security needs to be followed. And if it's found out that those are lies, then it's fraud.

Kind of like how these guys lied about the volume of sales they had. Textbook fraud. They aren't in trouble for saying "AI is going to be great"

pyman•23h ago
I agree. But using the "your data is secure" analogy, BuilderAI never actually told customers that development was done using AI, that's something people made up. If you look at their website (builder.ai) they explain that their virtual assistant "Natasha" assigns a developer (I assume from India). That part doesn’t sound like fraud to me, and it's the part everyone seems to be focusing on.

The company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, the Indian founders and accountants reportedly engaged in round-tripping with VerSe Innovation. That raised red flags for investors, regulators, and prosecutors, and led to bankruptcy proceedings.

aylmao•1d ago
> The line between marketing and misleading isn't always so clear.

In general, I kind of disagree with this. I am not a lawyer, so I don't know all the details, but if you look for it, you should be able to find the line, since it's generally illegal to mislead customers. There's also a whole set of contractual and perhaps even legal obligations when it comes to investors.

For contracts and the law to be enforceable, they need to draw lines as clearly as possible. There's always some amount of detail that's up to interpretation, but companies pay legal counsel to make sure they don't cross these lines.

Now, specifically in this case, I do agree with you. This case doesn't seem to be a legal matter of misleading customers or investors (thus far). Viola Credit did seize $37 million, so IMO there clearly was a violation of contract in all this, but it seems that had nothing to do with the whole AI overselling.

osigurdson•1d ago
It doesn't matter to customers, but investors would be interested in whether AI or a bunch of devs is being used, due to the differences in scaling potential.
profsummergig•1d ago
This is so obviously fake news that it's a good litmus test of the people who are boosting it.

There's no way that a team of programmers can ever produce code quickly enough to mimic anything close to the response time of a coding LLM.

threeseed•1d ago
But it's not just about coding quickly; it's also about coding correctly.

Coding LLMs do not solve the problems of hallucination, antiquated libraries and technologies, or screwing up large code bases because of limited context size.

Given a well architected component library and set of modules I would bet that on average I could build a correct website faster.

pyman•1d ago
I did a bit of research…

Builder.ai didn't tell investors they were competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. They told investors they were building a virtual assistant for customers. This assistant was meant to "talk" to clients, gather requirements and automate parts of the build process. Very different space.

And like I said in another comment, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.

Questions:

1. Did Craig Saunders, the VP of AI (and ex-Amazon), ever show investors or clients any working demo of Natasha? Or a product roadmap?

2. Was there a technical team behind Saunders capable of building such a model?

3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?

daveguy•1d ago
> creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.

Creating a dedicated pretrained model is a prerequisite of any LLM. What do you mean by "full LLM"?

pyman•1d ago
Just to clarify: I said "pre-trained foundation model".

LLMs are a type of foundation model, but not all foundation models are LLMs. What Builder.ai was building with Natasha sounded more like a domain-specific assistant, not a general-purpose LLM.

sva_•1d ago
Also https://www.bloomberg.com/news/articles/2025-05-30/builder-a...
dang•1d ago
Thanks as well! More at https://news.ycombinator.com/item?id=44176241.
dang•1d ago
Thanks! It took an annoying amount of time to try to sort this out, but I made a consolidated reply here: https://news.ycombinator.com/item?id=44176241.
gamblor956•1d ago
Have been joking with friends that "AI" stands for "Actually Indians".

We never thought it was anything more than a joke...

ceejayoz•1d ago
See also: Amazon Go. https://www.businessinsider.com/amazons-just-walk-out-actual...
profstasiak•1d ago
many such cases
fazkan•1d ago
This is so weird; it's not that hard to actually build an app builder. There are multiple open-source repos (Bolt etc.), so they could have just paid their "AI engineer" to actually build an AI engineer.

Shameless plug, but we built (https://v1.slashml.com) in roughly two weeks. Granted, it's not as mature, but we don't have billions :)

glutamate•1d ago
They launched in 2016
throwaway314155•1d ago
Should have pivoted faster.
hnuser123456•1d ago
Nice, I'll try this out tonight.
fazkan•1d ago
thanks, do ping me if you run into any issues faizank@slashml.com
xkcd-sucks•1d ago
It's plausible they started with a typical software consultancy and its crappy in house app builder scripts, and reformed it as an AI thing in order to inflate its value?
downrightmike•1d ago
That'd be shameful, and a complete disgrace. It'd be like adding "bitcoin" to your company name or 10-K filings a few years ago to boost your stock.
fazkan•1d ago
I mean, Zapier is also calling their workflows "agents". I remember someone ranting about it on Twitter.
mikestew•1d ago
In case anyone thinks parent is speaking hypothetically:

Insider trading charges filed over Long Island Iced Tea’s blockchain ‘pivot’ https://www.cnn.com/2021/07/10/investing/blockchain-long-isl...

1oooqooq•1d ago
worked for GameStop
driverdan•1d ago
> its not that hard to actually build an app builder

Besides simple one-page self-contained apps, yes, it's quite hard. So hard that it's still an unsolved problem.

nadermx•1d ago
https://v0.dev?
fazkan•1d ago
Not really. Lovable, v0, and Bolt are all multi-page. They connect to Supabase for db and auth. Replit can spin up custom dbs on demand, and has a full-fledged IDE.

I did my research before jumping into this space :)

aitchnyu•20h ago
Which ones prevent anybody with a browser from accessing other users' data? I have been discussing vibe coding and Supabase's Postgres row-level security misconfigurations.
fazkan•13h ago
replit, from what I know, and lovable to a certain extent.
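The RLS misconfiguration discussed above can be sketched in miniature. This is illustrative Python, not Supabase's actual API; the table contents and policy name are made up:

```python
# Hypothetical sketch of the Supabase/Postgres row-level security issue
# discussed above. Without a row-level policy, any authenticated client
# can read every row; a policy restricts reads to rows the requester owns.

rows = [
    {"owner": "alice", "note": "alice's secret"},
    {"owner": "bob", "note": "bob's secret"},
]

def query_without_rls(requesting_user: str) -> list:
    """Misconfigured table: the query ignores who is asking."""
    return rows

def query_with_rls(requesting_user: str) -> list:
    """Rough equivalent of a Postgres policy like:
    CREATE POLICY owner_only ON notes USING (owner = auth.uid());
    """
    return [r for r in rows if r["owner"] == requesting_user]
```

With the policy in place, "alice" sees only her own row; without it, she sees everyone's.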
belter•1d ago
Beaten to death: https://hn.algolia.com/?q=Builder.ai
joshuahedlund•1d ago
This looks like the first submission to get traction on this latest development.

Thanks for the search though:

> https://news.ycombinator.com/item?id=30397201

This one from 3 yrs ago had some interesting comments

> off topic, but they have a very suspect pricing page: https://www.builder.ai/studio-store

> "Delivery: 12 weeks"

> is Builder.ai just a CRUD app for indian sweatshops to build the apps?

> > It would not have spawned an entire industry and no code websites every other week or so if it was ‘just a CRUD app’.

belter•1d ago
You need to sort by Date, there was this one just 10 days ago: https://news.ycombinator.com/item?id=44080640
andrewinardeer•1d ago
AI = Actual Indians, apparently.
CobrastanJorji•1d ago
Yep. Also happened with Amazon's "Just Walk Out Technology:" https://www.businessinsider.com/amazons-just-walk-out-actual...
dmazin•1d ago
My understanding is this turned out not to be true. People were used to label stuff for new stores, but the actual implementation did not depend on some sort of fakery.
CobrastanJorji•1d ago
I imagine it's possible the truth was somewhere in between. But if it worked, why did they stop using it in their grocery stores after putting so much money into it?
martinald•1d ago
In the UK at least the grocery stores are completely empty. I've barely seen anyone in them. Bizarrely they are shutting loads down in London but opening new ones at the same time. Absolutely no idea what the strategy is, they must be throwing out the majority of the fresh food stock they have.
rrrrrrrrrrrryan•1d ago
That was the original idea, and that's what Amazon claimed, but IIRC they never got over 70% automated.

Phrased differently, 30% of all transactions were still entered by a human overseas watching cameras at the time they decided to pull the plug, years after the initial launch.

kylecazar•1d ago
Interesting that they can automate some of the transactions and not others... Wonder what was special about those other 30%.

There was a 2 year period in which I bought lunch at an Amazon Go daily. I was naive to the magic so I thought it was the greatest innovation ever.

vel0city•1d ago
I wish more stores would just do something like Scan and Go at Sam's Club. By far the smoothest checkout experience I've ever used.

https://tech.walmart.com/content/walmart-global-tech/en_us/b...

thatguy0900•1d ago
I wonder if that only really works because Sam's Club can just revoke your membership if you steal
mikestew•1d ago
Apple stores work the same way, except that it is truly “scan and go”, whereas Walmart makes you show a digital receipt. What happens if you steal? I dunno, without checking I’m pretty sure you need an Apple account to use the app, so maybe that gets revoked. Or maybe Apple’s stuff simply works well enough.
vel0city•1d ago
At least at Sam's, cameras glimpse into your cart and it seems to apply some kind of trust score. Usually the person at the exit just waves me by; sometimes, if the cart is really loaded with odd items or I've already bagged some things, they'll want to take a peek.
RollingRo11•1d ago
I remember being 13 years old and stepping into an Amazon Go store in Seattle. Little me lost my mind. I think I walked in and out of the store like 5 times just to see the Amazon charge. Sucks that half of the magic was a lie.

Shame to see another project fall to the strategy of AI = "Actually Indians". I wonder how many other companies have engaged in this stuff.

pyman•1d ago
See: https://www.businesswire.com/news/home/20240611122778/en/Bui...
calmbell•1d ago
Depends on how you define real. I would argue that GPT-2 was a real LLM and it almost certainly cost a lot less than a billion. I'm sure there are much better examples.
pyman•1d ago
Can you imagine Builder.ai using a model that argues with their clients or discriminates against them? I don't think so. GPT-2 is like bringing a knife to a gunfight in 2025.

If you want to compete with the likes of GPT-4, Claude, or Gemini today, you're looking at billions, just for training, not counting infra, data pipelines, evals, red teaming, and everything else that comes with it.

Builder.ai wasn't able to use GenAI to actually build software. And when the money ran out and no model was ever announced, investors lost trust and clients lost patience.

davidst•1d ago
[Disclaimer: Former Amazon employee and not involved with Go since 2016.]

I worked on the first iteration of Amazon Go in 2015/16 and can provide some context on the human oversight aspects.

The system incorporated human review in two primary capacities:

1. Low-confidence event resolution: A subset of customer interactions resulted in low-confidence classifications that were routed to human reviewers for verification. These events typically involved edge cases that were challenging for the automated systems to resolve definitively. The proportion of these events was expected to decrease over time as the models improved. This was my experience during my time with Go.

2. Training data generation: Human annotators played a significant role in labeling interactions for model training-- particularly when introducing new store fixtures or customer behaviors. For instance, when new equipment like coffee machines were added, the system would initially flag all related interactions for human annotation to build training datasets for those specific use cases. Of course, that results in a surge of humans needed for annotation while the data is collected.

Scaling from smaller grab-and-go formats to larger retail environments (Fresh, Whole Foods) would require expanded annotation efforts due to the increased complexity and variety of customer interactions in those settings.

This approach represents a fairly standard machine learning deployment pattern where human oversight serves both quality assurance and continuous improvement.

The news story is entertaining but it implies there was no working tech behind Amazon Go which just isn't true.
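The two-tier pattern described in this comment (auto-resolve high-confidence events, escalate the rest to human reviewers, and feed reviewed labels back into training) can be sketched roughly as follows. The function names and threshold are illustrative, not Amazon's actual system:

```python
# Minimal sketch of the human-in-the-loop deployment pattern described above.
# All names and the threshold are made up for illustration.

CONFIDENCE_THRESHOLD = 0.9

def route_event(event_id: str, confidence: float,
                review_queue: list, training_data: list) -> str:
    """Auto-resolve a high-confidence event, or escalate it to humans."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "auto_resolved"
    review_queue.append(event_id)    # a human verifies the event
    training_data.append(event_id)   # the reviewed label feeds future training
    return "sent_to_review"
```

As the comment notes, the share of events falling below the threshold was expected to shrink as the models improved.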

CobrastanJorji•1d ago
That's some fascinating background, thanks! Probably explains why they keep operating it in stadiums but not grocery stores. Works pretty well with a small handful of items, does not scale up reliably to shopping carts full of stuff.
thomassmith65•1d ago
Yes, this article's full title is "Builder.ai Collapses: $1.5bn 'AI' Startup Exposed as 'Actually Indians' Pretending to Be Bots"
blitzar•1d ago
What's the process for hallucinations? Do 1 in 10 of the workers have to be tripping on shrooms all shift?
nickdothutton•1d ago
No, lack of sleep.
blitzar•1d ago
Elegant cost saving (or redistribution of drugs to head office) solution.
bluefirebrand•1d ago
No they're just extremely low paid overseas workers scrambling to do work fast enough that it looks like "AI"?
DebtDeflation•1d ago
It's happening in every industry. CEOs moving back office jobs to India and telling Wall St they replaced the jobs with "AI" to get a stock price boost. I'm convinced this dynamic is a major cause of the "white collar recession" we're experiencing now. Perhaps the intent is to eventually replace the Indians with AI (Artificial Intelligence), but right now it's very much AI (Anonymous Indians) doing the work.
goatlover•1d ago
Is this how the Trump administration imagines bringing jobs back to America by not regulating tech companies?
kristianc•1d ago
GenAI.. generate another Indian..
perryh2•1d ago
Similar to EvenUp: https://www.businessinsider.com/evenup-ai-errors-hallucinati...
pkkkzip•1d ago
Definitely not helping stereotypes
terminatornet•1d ago
speak on that
rdtsc•1d ago
> Linas Beliūnas, Director of the financial company Zero Hash, recently exposed that Builder.ai lacked true AI, instead utilising a group of Indian developers who were merely pretending to be bots writing code.

They probably had to train people to talk like ChatGPT.

Step 0: Make sure you have an em dash shortcut on your keyboard and use that as often as possible.

Step 1: Be extremely polite and apologize profusely.

Step 2: ...

zingababba•1d ago
Step 2: Do the needful
TZubiri•1d ago
Probably they just wrote code and that was fed into an LLM which LLMified the responses.
felineflock•1d ago
Not to be confused with builder.io: the poor founder has been posting "FOR THE LAST TIME GUYS THIS IS A DIFFERENT COMPANY".
seydor•1d ago
actually he should buy the bankrupt name and be the same company
nikcub•1d ago
I expect the builder.ai story will break into the mainstream via a book / documentary. There are some insane details and it's the first large-scale AI hype failure - which people are hungry to get the details on - and some big names involved.

Another failure of due diligence. I really wonder how high-profile investors pour hundreds of millions into a company without doing something simple like ordering an app using a burner account.

ethbr1•1d ago
> I really wonder how high-profile investors pour hundreds of millions into a co without doing something simple like ordering an app using a burner account.

You'd think that if you're investing $1M+, there's budget for at least getting an intern / assistant to do that.

pyman•1d ago
Microsoft invested £250M in Inflection AI, £250M in Builder.ai, and has backed several other companies working on LLMs. They’ve been placing strategic bets across the AI space, but only a few of those companies actually had the talent, infrastructure, and funding needed to build real models.

The VP of AI was Craig Saunders, the same person who helped create Amazon Alexa. The problem is, they ran out of money. $500 million sounds like a lot, but it's not even close to what you need to build and train a real LLM. You need billions. Most people just don't realise that.

See: https://www.businesswire.com/news/home/20240611122778/en/Bui...

trilbyglens•1d ago
This is why I think AI is basically the death of startups as we know them. Only big players can even take a swing. No more underdog garage startups, unless you're just downstream getting dirty bathwater from the big boys.

AI all around is purely about consolidation of power and money. It's bad for workers and ultimately probably bad for the startup world and competition more broadly.

pyman•1d ago
I agree. The infra side is dominated by VCs and big players. And the data is in the hands of regulators, who are looking the other way.
ricardobeat•1d ago
DeepSeek cost just over $5M to train. StarCoder cost around $1M, there is no info for Starcoder2 but unlikely to be more than a few million. The idea of spending billions in training is OpenAI trying to build a moat that might not actually exist.
pyman•1d ago
These architectures didn't exist last year. The Chinese are innovating thanks to massive government backing, access to talent, and a clear focus on winning the AI race.
ricardobeat•1d ago
StarCoder was released in 2023 by French/American companies, and there were other coding models before it.

That was right around the time this company had a new $250M funding round, so lack of resources to invest in actual AI is a terrible excuse.

pyman•1d ago
StarCoder is an open-source LLM for code, not text. Builder.ai told investors they were building a virtual assistant called "Natasha", not a code assistant.

I'm just telling you what I read online. Builder.ai wasn't competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. Builder.ai was building a virtual assistant for customers. It was meant to "talk" to clients, gather requirements, and automate parts of the build process. Very different space.

And like I said before, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.

ricardobeat•17h ago
The point is that one doesn't need billions to create useful models — there is no reason they would need new foundation models in the first place — and that 'running out of money' is unlikely to have been their main problem, 250M should be more than enough to create an AI website builder.
pyman•15h ago
I get what you are saying, but you're missing the most important piece of the puzzle: the data.

Everyone talks about models and infrastructure, but without the right data, you've got nothing. And that's where the biggest hidden cost is.

According to the company's own website, they were creating the data themselves. Labelled datasets, transcripts, customer conversations, documents, and more. That's where the real money goes. Millions and millions.

SteveNuts•1d ago
>You'd think that if you're investing $1M+, there's budget for at least getting an intern / assistant to do that.

Or having an AI Agent do it...

bobthepanda•1d ago
maybe this is the first large solely AI failure but algorithms and AIs have done lots of damage before. There have been flash crashes on Wall St, Zillow lost $1B using an algorithm to try and house flip, Klarna is circling the drain after hyping up AI, etc.
seanp2k2•1d ago
Opening up gTLDs was a mistake.
mkl•1d ago
Both those are ccTLDs.
noworriesnate•16h ago
Yeah thanks for calling this it. I’ve been following builder.io for a while and seeing builder.ai recently made me think they had possibly pivoted because builder.io has always been on the code generation / design to code / form building space from what I’m aware of.
koakuma-chan•1d ago
Oh no, qwik was my favourite JavaScript framework
zachncst•1d ago
Isn’t this what they always tell startups to do? Fake it and get product market fit. I recall the stories of task rabbit where the founder was delivering all the meals.
pyman•1d ago
See: https://www.businesswire.com/news/home/20240611122778/en/Bui...
tacheiordache•1d ago
You say that as if Amazon Alexa was some kind of amazing product. It succeeded because it had Amazon's backing, and it's kind of a crappy product.
oblio•1d ago
All voice assistants, at least the original iterations, are only good for 3-4 trivial things people actually want. And they've been around for a decade at this point.
sokoloff•1d ago
It’s a kitchen timer, music player, and weather sayer.

That’s surely worth the $30 I paid.

jaymzcampbell•1d ago
I had to laugh, these are literally the only three things my wife and I use ours for. At a stretch, I'll count the multi-room speaker sync as a great value add to the OOTB audio playback. Anything else, forget it.
codegrappler•1d ago
Don’t forget grocery list add-er!
Yeul•1d ago
I will never talk to a computer until they are sentient.
Gothmog69•1d ago
What makes it crappy? I find it really good at voice detection
manuelisimo•1d ago
so, yeah, but not enough?
mrtksn•1d ago
IMHO you are not supposed to fake your core value proposition when taking money. If the AI part was an implementation detail probably wouldn’t have been a problem.
Aurornis•1d ago
"Fake it till you make it" was about presenting yourself as an established, stable company to overcome objections about using a startup. Things like having a "Customer Support" phone number and e-mail address that just go to the founders, for example. It's fair game if the founders are actually picking up the phone and doing customer support, and it overcomes one objection people might have about using a startup instead of a big company.

Claiming you can do something specific (use AI to do something) and then using humans to do the labor is something else entirely. If you raise money on that, it's just fraud.

MangoToupe•1d ago
There's a good deal of grey-area there, for instance in faking user activity in social media startups. Reddit did this for instance, although I don't know if they reported active user numbers as part of fundraising.
Aurornis•1d ago
If Reddit create a material number of fake accounts and reported those as a key metric for fundraising, that would be fraud.

I think the story has been exaggerated a lot, though. The original story was that the admins were doing real submission activity (links, etc.) but they had a mechanism to create a new user account with the submission. So they created a lot of new user accounts for themselves, but the activity was real and driven by the founders.

We all have test accounts on our production systems. If it's a tiny number of the overall users at time of fundraising it doesn't matter. On the other hand if they created 10,000 accounts and then claimed they had 11,000 users that would be blatant fraud. I really don't think they did anything like that, though. I think they seeded the very initial site with content and made different "accounts" for it, but by the time they raised they had real traffic.

spwa4•1d ago
... and what if Twitter does it?

Because at the very least they killed most countermeasures to bots and a serious percentage of activity on twitter is "fake engagement".

I also have a much more difficult question: Could you explain how this fraud works/applies if nation states are the ones developing the bots? Is there a difference between foreign and US bots?

Aurornis•1d ago
It's not complicated. If a company knowingly misrepresents their user activity then it's fraud. Knowing that a significant portion of your user activity is bots but then claiming you don't have bots would be fraud.
MangoToupe•1d ago
> If a company knowingly misrepresents their user activity then it's fraud.

Demonstrating this in court might get pretty complicated, though. Legal terms often have a way of obscuring the complexity of real life (which is understandable, of course).

I'm guessing the number of well-known startups who have committed fraud by "faking it until they make it" is somewhere between 1 and N. What that number is might well be subjective to the judge or jury rendering a verdict. Unfortunately, lack of serious insight into this might also be evidence that "faking it until you make it" works even if it's fraud, so long as you can spin revenue that investors demand out of it eventually.

Edit: forgive my claiming lack of evidence = evidence; i'm just tired. I think my point that it's kind of unknowable, and this might prompt people to accept it as proof positive (even irrationally). I hope my comment can be received in good faith

spwa4•1d ago
Really? Because just about every dating site, every forum, every ... has been doing this for decades. If this were true, where are the many court cases where management loses against investors? Because I don't see them. The only one I see is the whole shitshow around Elon Musk buying Twitter.

Also a bunch of the bots are by nation states. In that case I would expect that at least some courts would not cooperate with any such fraud case (Russia, India, China, I don't know in Europe but I doubt there aren't a few examples ... and maybe US. Probably at least a few states). Best of luck to make anything stick if the courts to not cooperate.

snowwrestler•1d ago
Most people on web forums or social media sites are browsing, reading, watching. Only a small percentage are posting UGC, user-generated content.

So when founders are starting a new site, they need to bootstrap by getting enough content in there to drive browsing. Only then will the audience grow, and only then will users start to post their own stuff. This is what Reddit did, and it’s not unique to them. YouTube’s founders did the same thing when they started.

Note that this is not “fake it til you make it.” This is investment in audience growth.

drewda•1d ago
"Do things that don't scale" to quote Sir PG.
ricardobeat•1d ago
So.. where did the $450M go? A team of 700 developers in India over eight years would have cost a fraction of that.
monksy•1d ago
The chai budget is a completely justifiable expense. (Probably more so than the difference being run away with.)
more_corn•1d ago
[flagged]
pryelluw•1d ago
$400M!

I get $100M. Maybe even $200M.

But $400M?

Unforgivable.

nadermx•1d ago
You figure 700 employees and $400M. Avg cost per hooker can't be more than a few hundred.

So by this math each employee got 1,900-ish hookers. Since I figure male hookers for the female employees were cheaper, we'll round up to 2,000.

That is in fact unforgivable. 1,000 would have been acceptable. 2,000... just excess

pryelluw•1d ago
Did you factor in the nose candy?

That estimate seems off. Please crunch the numbers once again. Make sure to factor in inflation.

kridsdale1•1d ago
Shit, those benefits are way better than Suicide Bomber.
dang•1d ago
Please don't do this here.
CSMastermind•1d ago
How do you figure? $450M / 8 years / 700 developers = $80k / year per developer.
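A quick sanity check of the arithmetic above, using the figures as stated in the thread:

```python
# Back-of-envelope burn-rate check (figures from the thread, not audited numbers).
total_funding = 450_000_000  # USD raised, per the thread
years = 8
developers = 700

per_dev_per_year = total_funding / years / developers
print(f"${per_dev_per_year:,.0f} per developer per year")  # roughly $80k
```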
casion•1d ago
Average salary for a developer in India is about 1/10th of that.
darth_avocado•1d ago
Median salary of a reasonable developer is about half of that, and if you are talking about Microsoft, Uber, Google etc., then that's the salary of a senior dev.

https://www.levels.fyi/t/software-engineer/locations/greater...

But more importantly, we're all pretending the only cost of building anything is salaries. A company that size could blow a million dollars a month just on AWS, and the AI stuff is waaaay more expensive.

aprilthird2021•1d ago
No, it's not
polyaniline•22h ago
It is
spamizbad•19h ago
That hasn't been the case in like 20 years. Engineering salaries are around $40K USD, though they can stretch into the six figures at major companies with deep pockets wanting to attract elite talent. The band is pretty wide and largely depends on whether you work in a body-shop consultancy (low end) or a major tech company like Google (high end).

And, like many things in this world, you'll find you get what you pay for.

bigfatkitten•1d ago
Only if they’re all ex FAANG staff/principal.
cubano•1d ago
Typically, scams like this are very top-heavy with the vast majority of the pilfered cash going to a few well-placed "bros" at the top of the company pyramid.

My guess? Most of the cash is socked away in BTC or some such wealth sink just waiting for the individuals to clear their bothersome legal issues.

owebmaster•1d ago
> My guess? Most of the cash is socked away in BTC

Had they done this years ago they would be so rich it would be worth keeping builder.ai going just to avoid legal problems.

rokob•1d ago
Why do you think it would be to pay for actual costs? The whole point of running a scam is to spend the money.
antithesizer•1d ago
I really wish I'd read this before starting my career as a scammer ten years ago.
pyman•1d ago
Elon Musk spent $6 billion training his model. Sam Altman spent $40 billion. Where did Builder AI's $500 million go? Probably into building a foundation model, not even a full LLM.
1oooqooq•1d ago
shhhh. we don't talk about the ongoing scams. those you keep hyping and try to sell your SaaS around it.
paxys•1d ago
They have been operating since 2016. Companies can and have burned through $450M in funding a hell of a lot faster than that.

OpenAI is on track to spend $14 billion this year.

TrackerFF•1d ago
These kinds of scumbags pocket 90% of the cash.

Wouldn't surprise me if the developers were hired from sweatshop staffing agencies, or just working directly for minimum wage - if that even.

tartoran•1d ago
What happens with all the money they collected from investors? Was it all just squandered away? Pocketed?
xyst•1d ago
The Theranos of AI. What a joke.
LeicaLatte•1d ago
Do things that don’t scale taken to another level :)
Yeul•1d ago
Honestly I feel bad for Indians but yeah everything annoying comes from them. Scamming, call centers and worst of all Microsoft agents.
orochimaaru•1d ago
Scamming is Cambodia, via the Chinese triads. That's where all the scam farms are.

There are a lot of poor Indians forced into slave-labor conditions there, tricked with fake job opportunities. But there are not a lot of call-center scams run out of India today. Not at the scale at which Cambodia runs them.

moralestapia•1d ago
The elephant in the room is how many builder.ai(s) are still out there.

My personal estimate is that it is about 80% of the startups you see around.

bigfatkitten•1d ago
The WITCH consultancies make tons of money delivering code of similar quality without pretending that it was done by AI.
anal_reactor•1d ago
In this whole AI revolution we sometimes forget the power of cheap human labour... and if I recall correctly, this isn't the first time such a thing has happened. Amazon made a "no-checkout AI automated store" which was a bunch of cameras connected to a bunch of Indians. At this point, I think we should consider "Indians" a valid element of any engineering architecture, because they perfectly fill the niche where you have work that is almost easy to automate, but not quite.

Of course, "Indian-as-a-Service" doesn't sound as cool as AI, but beyond that, I think it's a valid solution and business model for many use cases.

axus•1d ago
https://en.wikipedia.org/wiki/Amazon_Mechanical_Turk , "conceived by Venky Harinarayan in a U.S. patent disclosure in 2001".
bartread•1d ago
This is not news, or at least not fresh news. The FT reported the collapse ~9 days ago and it was discussed here: https://news.ycombinator.com/item?id=44080640
apsurd•1d ago
News to me, buddy. This is perhaps a useless comment, but then I think: articles resurface every now and again, and it's intentional and welcome for those that missed them. This isn't exactly that, of course, but it makes me think it's worth a comment: news is relative. Discussion ensues; it's all good.
macintux•1d ago
Except that it's contrary to the site FAQ.

> If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.

ManBeardPc•1d ago
Another AI scam. Wasn’t there a similar case with the Amazon stores? Just walk out I think. Could be understood as sound advice if someone pitches you something groundbreaking done by AI.
cubano•1d ago
We wanted flying cars, but instead got fake AI.

Shameful.

paxys•1d ago
> Less than two months ago, Builder.ai admitted to revising down core sales numbers and engaging auditors to inspect its financials for the past two years. This came amidst concerns from former employees who suggested sales performance had been inflated during prior investor briefings.

I was hoping for something interesting, but it is just plain old fashioned accounting fraud.

moonikakiss•1d ago
I did due diligence on Builder.ai for a venture firm I was interning at (circa 2019). It was extremely apparent (Glassdoor, talking to any employee) that it was complete BS.

When I say apparent, it took less than 15 minutes and a couple of google searches to get a sniff of it.

Somehow, you can still raise $500MM ++.

I think about that a lot

aprilthird2021•1d ago
You have to elaborate! What were the signs? When you did due diligence what were you told about the company? Was the marketing or premise itself fishy or you only realized it was fraudulent after starting the due diligence?
dang•1d ago
Recent and related:

Microsoft-backed UK tech unicorn Builder.ai collapses into insolvency - https://news.ycombinator.com/item?id=44080640 - May 2025 (136 comments)

dang•1d ago
Two claims are being made here, one boring and one lurid.

The boring claim is that the company inflated its sales through a round-tripping scheme: https://www.bloomberg.com/news/articles/2025-05-30/builder-a... (https://archive.ph/1oyOw). That's consistent with other recent reporting (e.g. https://news.ycombinator.com/item?id=44080640)

The lurid claim is that the company's AI product was actually "Indians pretending to be bots". From skimming the OP and https://timesofindia.indiatimes.com/technology/tech-news/how..., the only citation seems to be this self-promotional LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7334521... (https://web.archive.org/web/20250602211336/https://www.linke...).

Does anybody know of other evidence? If not, then it looks bogus, a case of "il faudrait l'inventer" which got traction by piggybacking on an old-fashioned fraud story.

To sum up: the substantiated claim is boring and the lurid claim is unsubstantiated. When have we ever seen that before? And why did I waste half an hour on this?

(Thanks to rafram and sva_ for the links in https://news.ycombinator.com/item?id=44172409 and https://news.ycombinator.com/item?id=44175373.)

pyman•1d ago
I couldn't find any reference on the BuilderAI website claiming they use GenAI to build software. So the second claim lacks evidence.

Update: They mention AI to assemble features, not to generate code. So it's impossible to know whether they were actually using ML (traditional AI) to resolve dependencies and pull packages from a repo.

ivape•1d ago
Speculating: don’t they offer dev services that are supposed to be done by AI? If the dev services were provided by devs, then that would be the scam. Now that I’ve said the second part, it does seem lurid, because who the hell is paying for AI-first code deliverables?

—-

Message to HN:

Instead of founding yet another startup, please build the next Tech Vice News and fucking go to the far corners of the tech world, like Shane Smith did with North Korea, with a camera. I promise to be a founding subscriber at whatever price you’ve got.

Things you’ll need:

1) Credentialed Ivy League grad. Make sure they are sporadic like that WeWork asshole.

2) Ex VC who exudes wealth with every footstep he/she takes

3) The camera

4) And as HBO Silicon Valley suggests, the exact same combination of white guy, Indian guy, Chinese guy to flesh out the rest of the team.

See, I need to know what’s it like working for a scrum master in Tencent for example during crunch time. Also, whatever the fuck goes on inside a DeFi company in executive meetings. And of course, find the next Builder.ai, or at least the Microsoft funding round discussions. We’ve yet to even get a camera inside those Arab money meetings where Sam Altman begs for a trillion dollars. We shouldn’t live without such journalism.

pyman•1d ago
The short answer is no, their website doesn't claim that development is done using AI.

My gut feeling is that a lot of people, including developers, are posting hate messages and spreading fake news because of their fear of AI, which they see as a threat to their jobs.

If you look at their website, builder.ai, they tell customers that their virtual assistant, "Natasha", assigns a developer (I assume from India):

> Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked.

Source: https://www.builder.ai/how-it-works

They also have another page explaining how they use deep learning and transformers for speech-to-text processing. They list a bunch of libraries like MetaPath2Vec, Node2Vec, GraphSAGE, and Flair:

Source: https://www.builder.ai/under-the-hood

It sounds impressive, but listing libraries doesn't prove they built an actual LLM.

So, the questions that remain unanswered are:

1. Did Craig Saunders, the Head of AI at Builder.ai (and ex-Director of AI at Amazon), ever show investors or clients a working demo of Natasha, or a product roadmap? How do we know Natasha was actually an LLM and not just someone sitting in a call centre in India?

2. Was there a technical team behind Saunders capable of building such a model?

3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?

Having said that, the company went into insolvency because the CEO and CFO misled investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, Builder.ai reportedly engaged in "round-tripping" with VerSe Innovation. This raised red flags for investors, regulators, and prosecutors, and led to bankruptcy proceedings.

kamikazechaser•23h ago
There are personal testimonials in the indiandevelopers subreddit from quite a while ago, if those are to be believed.

stuartd•1d ago
cowboys

a_void_sky•1d ago
Nobody has mentioned that they were reselling the AWS credits they had. We had them as our billing partner, with very good discounts. The day it happened, AWS sent us an email telling us to remove them as our billing partner.