There is a secret funnel from YC to a select group of top tier VCs.
b) Everyone needs to stop perpetuating the YC lie that they invest in the best founders and they just happen to want to do AI. It's rubbish and insulting because it implies that only young, male, SF-based founders can be the best. Instead it's clear that YC has been aggressively pushing AI which makes sense given they are a significant investor in OpenAI.
Of course it does.
Founders look at YC batches, see that they are 99% AI companies, and are then forced to also go in that direction if they want the benefits of the accelerated YC path.
And YC deliberately chooses founders with AI companies because they have an investment thesis that is different from "request for startups". Garry Tan has been a massive e/acc fanboy since the beginning and genuinely believes that AI in every use case will advance humanity. And the partners all align with this.
This is all inarguable because amongst the tens of thousands of applications there are surely many amazing non-AI companies. Is the implication that they are all worse than what was selected in the batch?
YC's leadership nowadays seems to lack any kind of vision and just follows whatever shiny tech is currently in vogue, which is a huge red flag for accelerators.
Sell your AI hot potato quickly!
a) What is the growth rate when measuring customers that are *not* YC companies?
b) What is the churn rate for that same group?
Measuring by Series A (assuming investors are the compass, not the users, aka the market) is completely anti-YC, at least the way I perceive the YC philosophy from afar. It might prove to be their downfall.
The economic purpose of seed investors is to take on technology-market risk. It's a necessary part of the economy, since the only way to find out if a technology has business value is to have companies build things with it. Without investors willing to take on that risk (and lose frequently -- more than half the time) there'd be only incremental technology progress.
there's quantitative evidence in this article
unfortunately it doesn't support your position
There appears to be a pattern. Unmet need is identified: "I want ChatGPT -- but able to read PDFs" or "I want ChatGPT -- but able to do research and produce lengthy reports." Startup gets funding for this and, if they're lucky, releases a rough beta that leans heavily on the OpenAI API. Two months later OpenAI launches a better, much more polished and seamless version, which is integrated into ChatGPT itself.
I had briefly considered forming a Legal AI startup (going so far as to download the fulltext of every legal ruling ever made in the US -- something like 400GB) but then o3/DeepResearch got so good that it became apparent that there'd be little point.
Steve Hsu claims to have solved hallucinations in a customer service context, which might be the only "startup-type" idea that has a head-start over the giants.
Google has all the technical infrastructure, talent, and everything to make something like Airbnb, DocuSign, and hell, even IntelliJ. Why not?
"AI startups," if they make sense, seem to have a very short shelf-life. They're either overtaken by the continuing improvement in LLM context windows, or, if there's a real and general unmet need for what they offer, the giants will tend to integrate it.
It probably comes down to the fact that code is not that crucial; it's all the other non-technical aspects, like distribution, supplier relations, and marketing, that make a product.
Maybe LLM wrappers turn out to be that way: the model may not matter, but distribution, customer relationships, etc. would matter more.
There's only so much product bandwidth a company can sensibly take on; look at the graveyard of Google products.
To a technical user there may be little to no difference between prompt engineering into a chat box and clicking a button with premade text slots.
But to the average non-programmer, a chat app like ChatGPT is somewhat pigeonholed into the chat format, and so use cases that don’t lend themselves to this interface will be outcompeted by specific apps that do.
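As a toy sketch of that distinction (everything here is hypothetical: the complete() stand-in, the function name, and the template text), a "premade text slot" is just an app-owned prompt with blanks the user fills in:

# Toy sketch: the same LLM call, exposed two ways. complete() stands in for
# whatever LLM API is being used; the template and function name are made up.
def complete(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM API call")

# "Chat box": the user must know what to type.
def free_form(user_prompt: str) -> str:
    return complete(user_prompt)

# "Button with premade text slots": the app owns the prompt, the user only
# fills in the blanks, so non-programmers never see the prompt at all.
def summarize_meeting(transcript: str, audience: str = "executives") -> str:
    prompt = (
        f"Summarize the following meeting transcript for {audience} "
        f"in five bullet points:\n\n{transcript}"
    )
    return complete(prompt)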
- This is the best AI tool ever!
- Does it fix the reliability issue?
- Not now, but wait 6 months, because that’s when better models will be coming out
And so it has been since ChatGPT came out. You can see examples of "AI startups" there: a company that manages your business's outbound communication with lots of AI features, AI-powered code generation for business operations, accounting services with lots of AI features, business finance software with lots of AI features.
Where can I find this?
The "opinions" are what you want.
These are huge files heavily compressed, so they're quite difficult to handle.
If for nothing else, OpenAI won't be able to market itself for every single use case, and so long as people aren't using ChatGPT for some use case (even if it could perform the task), there's still an opening.
Microsoft owns GitHub and VSCode, yet Cursor was able to out-execute them. Legora is moving very quickly in the legal space. Not clear yet who will win.
Really? My startup is under 30 people. We develop in the open (source available) and are extremely willing to try new process or tooling if it'll gain us an edge -- but we're also subject to SOC2.
Our own evaluation was that Cursor et al. aren't worth the headache of the compliance paperwork. Copilot + VSCode is playing rapid catch-up and is a far easier "yes".
How large is the intersection of companies who a) believe Cursor has a substantive edge in capability, and b) have willingness to send Cursor their code (and go through the headaches of various vendor reviews and declarations)?
I'm doubtful. Remember when Google said their strategy was AI First? Baidu too? I'm old enough to remember that the criticism then was along the lines of "AI is technology. What problems do you want to solve?" That line of thinking still seems relevant to me today.
This makes sense. The entire engineering/tooling field is going to change so much that picking a winner isn't really possible. Most people are just starting to solve real problems with it and starting to build patterns that aren't complete nonsense. But it will still change a lot.
> “AI for X” verticals are surprisingly narrow.
I think that makes sense too. Those were a significant part of the initial hype: a lot of people promising they'd take a "generic" LLM (and you've all seen how smart those already are) and train it specifically on parenting, or trivia, or your emails, or your help center. It's a service type that will continue to exist. Perhaps it needs to be tailored to specific enterprise scenarios to gain traction as a startup. Though these companies still need to reconcile their customers' privacy concerns with their own need to inspect, look at, and clean the data, and that might not be fully solved yet.
> Reducto - Reducto is an AI-driven API that specializes in converting unstructured documents like PDFs and images into structured data.
This is an example of the type of company whose pitch is "extracting LLM-relevant context from X", which is relevant for any company doing the "AI for X" schtick or any enterprise doing AI development on its own. This one is specifically about PDFs and images, but we'll probably see others for videos, archives, ISOs, MS Office docs, and even the ultimate holy grail of a "universal binary => very rich structured data" API.
> Developer Tools & Infrastructure
The picks in this category are the most perplexing to me.
B2B opportunities are inherently easier to identify during early-stage evaluations due to clearer revenue models and problem-solution alignment.
YC’s network amplifies B2B growth by directly connecting startups to other batch companies as potential early adopters or customers. I found it interesting that https://www.ycombinator.com/companies/legora (formerly Leya) is using Reducto, if I'm reading page 9 of the pitch deck correctly: https://www.pitchdeckinspo.com/deck/Reducto_02c1f2af-3fa2-4a...
Early-stage B2B startups require less capital and shorter timelines to demonstrate product-market fit compared to B2C.
YC’s founder archetype (technical, execution-driven, and efficiency-focused) naturally gravitates toward building scalable B2B solutions.
Thanks for pointing out Reducto! I added it to my market overview: https://idp-software.com/vendors/reducto-ai/
TLDR
The IDP market remains a massive and growing space. There will be a new segment of the market for simple cases that do not need domain expertise, validation, and integration. Generic Document AI tools, so-called AI wrappers, provide easy wins for basic input extraction/categorization and splitting tasks.
Operational complexity, on-premises deployment, direct integration with enterprise infrastructure, and domain-specific validation across fields mean different workflows require specialized handling. I think this is why Hyland, Abbyy, and others can compete in the market, even with their tech stack lagging.
I was pretty shocked that of 275 companies in the Winter 2023 batch, only 12 have received Series A deals. Granted, I know a huge part of that is that the VC environment has just collapsed due to the end of the ZIRP era, but those numbers at least sound pretty brutal to me.
What does he/she mean by tooling in this context?
A later section talks about developer tools, so not that, by the sounds of it. So tooling around inference, maybe? Pretty sure Ollama is in YC, and that's surely "tooling"?
This was trickier 18 months ago, but every major LLM provider has solid support for this now. You can just drop an API call to Google, OpenAI, etc. into your existing pipeline. What am I missing? Maybe the selling point was batch, but all LLM providers have a batch product now too.
import requests

# `headers` (auth) and `job_id` (returned by a prior upload/parse call) are
# assumed to be defined earlier in the pipeline; they are placeholders here.
classification_response = requests.post(
    "https://platform.reducto.ai/extract",
    json={
        # Reference the previously parsed document by its job id
        "document_url": f"jobid://{job_id}",
        # JSON schema describing the structured output we want back
        "schema": {
            "type": "object",
            "properties": {
                "document_type": {"type": "string", "enum": ["W2", "Passport", "Other"]}
            },
            "required": ["document_type"],
        },
    },
    headers=headers,
)
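For comparison, here is a minimal sketch of the "just call an LLM provider directly" route, assuming the OpenAI Python SDK and its structured-output support; the model name, prompt, and document_text placeholder are illustrative, not details from the thread.

# Minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; model, prompt, and document_text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
document_text = "..."  # text of the document, extracted upstream (placeholder)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Classify the document."},
        {"role": "user", "content": document_text},
    ],
    # Constrain the output to the same schema as the Reducto call above
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "document_classification",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "document_type": {
                        "type": "string",
                        "enum": ["W2", "Passport", "Other"],
                    }
                },
                "required": ["document_type"],
                "additionalProperties": False,
            },
        },
    },
)
print(response.choices[0].message.content)  # e.g. {"document_type": "W2"}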
I have 15 years of hardware development experience and I would stay far, far away from anyone in the VC space.
I simply don’t believe that anyone in VC is capable of aligning their own incentives to the timescale that a hardware based business requires to show a return.
- We only use models to simulate the chip, and this is at best partial. Verification coverage of the code is one thing, but there are thousands of effects, from the power supply network to thermals, reliability (device aging, electromigration, etc.) and a bunch of analog stuff with weird failure modes that are almost impossible to fully cover before shipping. So we never actually know what will fail before tapeout.
- Tapeout cost is immense. A full mask set for an advanced node easily costs $10 million or more. You can always go to an MPW, but those are rare for advanced nodes (1-2 times per year), putting immense pressure on schedules.
- Chip production takes time: ~3 months for old nodes, and getting close to ~5 months for advanced nodes.
- Package design, test PCB design, and their production take a lot of time and money too. Typically the package costs as much as the silicon to produce if the design is heavily IO-limited and uses an advanced packaging solution.
- Lab test preparation and the test itself take time. Typically you need months of testing to get a meaningful picture of the issues. You would need to go through temperature and voltage cycles, on/off cycles, etc. This of course depends on the end application; automotive and data centers are quite demanding.
- There is a lot of competition for pretty much the same product, and there is a lot of vendor lock-in, as customers never want to redesign their system.
So at the end, if you are designing a complex ASIC, you will spend a lot of money and time per tapeout cycle. If you have a big issue, your customer will go to the next guy. You've lost them forever (or for this product cycle of 4-5 years if you are lucky, but that's a death sentence for a start-up). Now you are tens of millions in the negative without your main customer. Again, if you are lucky, you can either find another customer or repurpose your design. This is increasingly difficult as complex chips often aim at a narrow market. This makes everyone very risk averse, including your customers.
For less complex chips in old process technology nodes, things can be sped up, and already are being sped up, by a lot of IP reuse or buying ready, silicon-proven IPs. The problem there is that time to market isn't the determining factor anymore, as anyone can make a functional chip relatively quickly; what matters is who can do it cheapest. There's a reason why most audio codecs and 1Gbit Ethernet PHYs in PCs are Realtek. This type of product isn't attractive for start-ups.
What often makes a good beginning for a HW start-up is a happy middle ground between these: a niche application that resonates well with the experience and talent of the engineering team. Even with the best team, you need a minimum of 2 years to show something, though.
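A rough back-of-envelope sketch of one tapeout cycle, using only the figures cited above plus an explicitly assumed design/verification phase; these are illustrative numbers, not a real budget.

# Back-of-envelope estimate for one advanced-node tapeout cycle, using the
# figures from the comment above; the design-phase duration is an assumption.
mask_set_cost_usd = 10_000_000        # "easily $10 Million or more"
design_and_verification_months = 12   # assumption: roughly a year before tapeout
fab_months = 5                        # advanced-node production time cited above
lab_test_months = 3                   # "months of test" (low end)

total_months = design_and_verification_months + fab_months + lab_test_months
print(f"One cycle: >= ${mask_set_cost_usd / 1e6:.0f}M in masks alone, ~{total_months} months")
# A single respin roughly doubles the calendar time, which is consistent with
# "minimum 2 years to show something" even for a strong team.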
Those investors rarely brand themselves as VCs, either to their portfolio companies or to their LPs.
Not in practice.
Depends on what hardware, who is developing it, and for whom. A proof of concept is almost always cheap enough to make. The beauty of HW is that you get PMF straight away: even if you have the worst, completely broken product, if it brings value to someone, they will pay for it. From there you can bootstrap, take on credit, etc. The capital-intensive part can wait: refinement, certifications, patents, packaging, documentation, mass production, etc. This is of course all under the assumption that the core team knows how to build everything; if you're outsourcing in the prototype phase then you're probably toast anyway.
> So the right kind of investor could add significant value if they were aligned.
I always perceived value of investors to be everything but the money. If you need just the money then get a loan.
Bootstrapped companies were much better but they lacked the capital to develop their own products, so they were often reliant on one big customer. Growth was slow but more organic.
Those numbers seem quaint now compared to OpenAI's "can we borrow $1B, actually $10B, hang on we need $50B, sorry we meant $500B, also we might need $7T" investment death spiral. Maybe when this is over VCs will be glad for how cheap and low-risk hardware development is, relatively speaking.
Put on some ambient music, grab a trackball, do some CAD, don't think about equity.
It isn't a no-brainer. Many founders in the new era are weighing bootstrapping, seed-strapping, and VC money without a clear answer.
If you can see a path to growing to millions in revenue with little need for staff, you may just not go do a follow-on round.
The report from a year ago had lots of AI text infra companies, and now it's mostly AI text product companies. We needed the first to get to the second layer.
And next we have a few AI audio infra companies, so probably a year from now, we will have AI audio product companies. Maybe the same with video.
$1M or $2M ARR doesn't have the same impact in fundraising, so later Series A rounds are bound to happen and aren't a great indicator, imo, of how well a company is doing.
throwaway but several from my batch (including us) are cash flow positive, higher ARR than these Series A companies and not on the fundraising treadmill
that cohort of YC companies is not represented in data anywhere because from what I’ve seen it is a bit of an anomaly vs previous YC batches
There is a clear business case and buying large trucks is already a capex play. Then slowly work your way through more complex logistic problems from there. But no! The idea to sell was clearly the general problem including small cars that drive children to school through a suburban ice storm with lots of cyclists. Because that's clearly where the money is?
It's the same with AI. The consumer case is clearly there, people are easily impressed by it, and it is a given that consumers would pay to use it in products such as Illustrator, Logic Pro, modelling software etc. Maybe yet another try in radiology image processing, the death trap of startups for many decades now, but where there is obvious potential. But no! We want to design general purpose software -- in general purpose high level languages intended for human consumption! -- not even generating IR directly or running the model itself interactively.
If the technology really was good enough to do this type of work, why not find a specialized area with a few players limited by capex? Perhaps design a new competitive CPU? That's something we already have both specifications and tests for, and should be something a computer could do better than human. If an LLM could do a decent job there, it would easily be a billion dollar business. But no, let's write Python code and web apps!
The other thing people have been trying to do is build general agents e.g. Manus.
I just think this misses the key value add that agents can add at the moment.
A general agent would need to match the depth of every vertical agent, which is basically AGI. Until we reach AGI, verticalized agents for specific real issues will be where the money/value is at.
Congratulations, you just reinvented the railroad.
The railroad is an amazingly low cost way to move tonnage, if you’re going from a place where the railroad stops to another place where the railroad stops. There aren’t really companies that _could_ be using rail and aren’t.
But it just isn’t cost effective in many cases once you add in last-mile costs. If we built more rail (politically infeasible), you might see more usage but ultimately you still suffer from needing at least one locomotive per train.
Solving the last mile by having stores that get shipments near a local train station that serves both cargo and passenger needs, and using kei trucks for small local deliveries is definitely a different set of tradeoffs.
On the other hand, trucks are very popular in Europe and Asia. 75% of land freight in Europe is by truck [0].
[0] https://www.worldatlas.com/articles/highest-railway-cargo-tr...
So the hypothetical trucks can't handle freeways but can self-drive on much more complex urban and suburban roads?
One EMD SD70ACe locomotive moves over 10,000 tonnes of cargo using 1,300 L of diesel per 1,000 km. The equivalent 286 trucks would consume 107,250 L, while needing 55.8 km of a single-lane highway, compared to the 2.16 km freight train.
Similarly, the average US car has 1.5 passengers per ~30 m² of space, so 20 m² per person. An average bike is about 2 m² per person. A typical trolley car holds ~160 passengers per 200 square meters, so 1.25 m² per person. A tram reliably moves at 60–80 km/h on interurban routes, or 30 km/h in urban centers with frequent stops, a considerable improvement over San Francisco's 16 km/h by car for the last mile.
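A quick sketch recovering the per-vehicle figures implied by the totals above (the derived per-truck payload and fuel burn are inferences from those numbers, not stated directly):

# Recover the per-truck assumptions implied by the totals in the comment above.
train_cargo_t = 10_000
train_fuel_l_per_1000km = 1_300
trucks = 286
truck_fleet_fuel_l_per_1000km = 107_250
truck_convoy_km = 55.8

payload_per_truck_t = train_cargo_t / trucks                 # ~35 t per truck
fuel_per_truck_l = truck_fleet_fuel_l_per_1000km / trucks    # ~375 L per 1,000 km (~37.5 L/100 km)
road_per_truck_m = truck_convoy_km * 1000 / trucks           # ~195 m of lane per truck, incl. spacing
fuel_ratio = truck_fleet_fuel_l_per_1000km / train_fuel_l_per_1000km  # trucks burn ~82x more diesel

# Space per passenger, straight from the figures above
car_m2_per_person = 30 / 1.5        # 20 m^2
trolley_m2_per_person = 200 / 160   # 1.25 m^2

print(round(payload_per_truck_t), round(fuel_per_truck_l), round(road_per_truck_m), round(fuel_ratio))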
getting new tracks built takes waaay too long (because of NIMBY and simply because the road is usually already there)
there's no long-term thinking from politics, and no market forces converging over the years to produce compounding improvements (so the inefficiencies don't really translate into some big, visible problem -- well, apart from climate change and slower GDP growth)
I'm not actually sure where tractors fit in. I haven't heard of them in the equation early on. I think at some point they were probably viable, but I never heard of a coal powered tractor (maybe there were some). I suppose tractors could leave piles of coal and stuff if they needed to by the fields.
You would rather have a thing that solves a specific problem in a completely reliable way. An application that knows what you want to do because there is only one thing to do in the universe. AI can write it but never be it.
How about it just working? No need to ask. The way a great assistant just makes the things you need and want happen.
You seem unfamiliar with the space; there are plenty of players outside of OpenAI, Anthropic, and Google bringing AI to the consumer space: https://a16z.com/100-gen-ai-apps-4/
Consumer AI is arguably doing better than enterprise, where 99% of the spend is poorly scaling undertakings that don't deliver on even 1/10th of their cost.
Ah wait, you were talking out of your ass and want to deflect.
1. YC startups target consumers. (B2C)
2. YC startups target businesses. (B2B)
3. YC network becomes large enough that startups can exist purely to serve other YC startups. (B2YC)
4. A new accelerator is launched which aims to fund YC companies that serve other YC companies. (YC4YC)
5. ?
Mostly joking, but I do sometimes look at the social media accounts of people in YC / Silicon Valley and wonder if they are living in an increasingly insular world. I think they would benefit from stepping outside of that into the greater world economy more deliberately.
OpenAI is lighting boatloads of money on fire to provide the ChatGPT free version. Same with Google for their search results AI, and Perplexity, which has also raised a lot. Unless you can raise a billion and find a unique wedge, it’s hard to even be in the game.
You can try to use small cheap models, but people will notice that free ChatGPT is 10x better.
In both cases backing 1 company with significant investment is not rational.
I personally have a consumer AI product that had 3 competitors get into YC, and they just didn't perform very well:
- One has so little distribution the only sign of life in the last 3 months was that they updated their landing page.
- Another released a disappointing app, didn't really iterate on it, and eventually pivoted into being a legal AI answering machine after that flopped.
- The third took down their app shortly after YC and pivoted to a content creation site for YT channels... then randomly let their site start going down, ignored their customers, and doesn't seem to be doing anything anymore.
Meanwhile some competitors that didn't get into YC are now at 7 figure MRR (I'm at a measly 5 figure MRR). So it's not like the space these apps were in is as disastrous as these comments are making them out to be: YC took a chance and unfortunately these teams just weren't the right teams.