So if you assume their revenue is in that range, you're looking at a 66x to 133x ARR multiple. In today's market that's quite a markup. Standard SaaS right now is probably more like 5-15x. AI commands a lot more (but Supabase isn't AI). But they are a key leader in their market, so they probably get a meaningful bonus for that. And I'm sure a lot of big industry investors were competing against each other for the Supabase deal, so that definitely would have helped the valuation too. Also, at their maturity today, they are probably showing some great success signing big enterprise deals and telling a story about how that will grow.
That being said, those factors alone don't answer 66-133x. Perhaps Supabase's strongest angle is their opportunity for product-led growth:
- They have a huge number of people on a free tier
- The growth rate of free tier users might be accelerating
- The conversion rate of free tier users to paid users might also be increasing
- They're adding more things that people can pay for, increasing customer LTV. E.g., for my business, we've probably 20x'd our Supabase spend in the last 6 months - most of that is due to our growth, but there are also a lot of things we can buy from Supabase beyond compute.
So I would assume, in addition to the above, they're telling a story about how their actual revenue growth rate will accelerate meaningfully because of all of these factors working together.
Lots of assumptions in here, but you can start to see how a lot of different factors + a hype multiple could lead to such a valuation.
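For anyone who wants to check the math, here's a quick sketch. Both numbers are assumptions: a $2B valuation, and a $15-30M ARR band reverse-engineered from the 66-133x range.

```python
# Back-of-envelope: implied multiple = valuation / ARR.
# Assumed numbers: $2B valuation, $15M-$30M ARR band.
valuation = 2_000_000_000

def arr_multiple(valuation: float, arr: float) -> float:
    """Implied valuation-to-ARR multiple."""
    return valuation / arr

low = arr_multiple(valuation, 30_000_000)   # higher revenue -> lower multiple
high = arr_multiple(valuation, 15_000_000)  # lower revenue -> higher multiple
print(f"{low:.1f}x to {high:.1f}x ARR")
```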
> The startup supports Postgres, the most popular developer database system that’s an alternative to Google’s Firebase. Supabase’s goal: To be a one-stop backend for developers and "vibe coders."
Makes sense to me, vibe coding basically shifts your burden to specification and review, which are traditionally things a senior developer should be good at.
I have a limited intuition for this based off my AI usage the past few years, but I want to learn from the pros.
The problem we have now is we have people who aren't engineers trying to make an app and they end up creating insecure and buggy messes, then struggle with figuring out how to deploy, or they end up destroying all their code with no recovery because they didn't know anything about version control.
I used to pride myself on knowing all the little ins and outs of the tech stack, especially when it comes to ops-type stuff. That's still required; the difference is you don't need to spend 4 hours writing code - you can use the experience to get to the same result in 4 minutes.
I can see how "ask it for what you want and hope for the best" might not end well but personally - I am very much enjoying the process of distilling what I know we need to do next into a voice dictated prompt and then watching the AI just solve it based on what I said.
With Vercel/Netlify, you're paying for ease of use. For a lot of people, that tradeoff is worth it. Not everything can be free.
Starts?!
I remember getting a sheet from an employer early in my career that fully broke down the cost of benefits and taxes and showed me the full cost of just my employment, not including overhead, profit, etc. It was rather eye-opening because although I kind of knew it from accounting and finance, it never really hit me quite as much before seeing the numbers.
But the market rate for a freelance midlevel US-based engineer would be about double per hour what you'd pay a full-time employee of the same level, to account for taxes/PTO/health care/etc.
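To make the 2x claim concrete, here's a rough back-of-envelope sketch. Every number here (salary, benefit load, PTO) is an illustrative assumption, and the result is only a break-even floor - bench time and profit margin push real market rates higher still.

```python
# Rough sketch of why freelance rates run well above a full-timer's
# nominal hourly rate. All numbers are illustrative assumptions.
base_salary = 150_000          # assumed mid-level salary
working_hours = 2_080          # 52 weeks * 40 hours

nominal_hourly = base_salary / working_hours   # the "on paper" hourly rate

# Employer-side extras a freelancer must cover themselves (assumed rates):
payroll_tax = 0.0765           # employer FICA share
benefits = 0.20                # health care, 401k match, etc.
pto_weeks = 5                  # vacation + holidays that aren't billable

loaded_annual = base_salary * (1 + payroll_tax + benefits)
billable_hours = (52 - pto_weeks) * 40
freelance_floor = loaded_annual / billable_hours   # break-even freelance rate

print(round(nominal_hourly), round(freelance_floor))
```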
https://money.usnews.com/careers/best-jobs/computer-programm...
Do you have a better source for your number?
As far as cost, $200/month is nothing, but those are not the numbers we hear about when things spiral out of control due to a DDoS or a sudden surge in popularity.
All this is to say: even if all progress on AI halted today, it would remain the case that, after the Internet, LLMs are the most impactful thing to happen to software development in my career. It would be weird if companies like Supabase weren't thinking about them in their product plans.
I have two main issues, first the tooling is changing so rapidly that as I start to hone in on a process it changes out from under me. The second is verifying the output. I’m at like 90% success rate on getting code generated correctly (if not always faster than I could do it) but boy does that final 10% bite when I don’t notice.
An aside, I think the cloud ought to make your (perhaps especially your) list. At least for me that changed the whole economy of building new software enterprises.
For “real work” done by a “real engineer”, I approach it almost exactly as you say.
For side projects/personal software that I most likely would have never started pre-llms? I’ll just go full vibe code and see how far I get. Sometimes I have to just delete it, but sometimes it works. That’s cool.
An unsuccessful project might be unsuccessful because it got eaten by costs before it became successful.
A wildly successful project is risky to migrate.
If not, then it’s poor price controls.
IIUC Pieter Levels talks a lot about not prematurely optimizing engineering solutions because most ideas will flop.
Most startups fail. Optimizing for getting revenue is more important than optimizing cost in the beginning.
If you get revenue you can solve the cost problem. If you don’t, it doesn’t matter.
Anything that gives you more shots at the goal is a win in a startup.
I've seen many colleagues bootstrap something - even if they're not themselves very technical - because they've leveraged these well integrated low cost platforms.
I think it’s rare that a project fails to show potential because of the underlying technology that’s chosen.
Sure, Vercel is relatively expensive. But I just don’t see how you’d throw in the towel because the costs are too high without first evaluating how to lower them.
If you’re saying that the evaluation is likely to show that you’re stuck - I have never seen that be the case personally.
Yes, “vibecoding” still has issues (and likely will for the foreseeable future). I’m sure the next decade will be an absolute boon for security researchers working with new companies. But you shouldn’t dismiss people based on their use of these tools.
And other commenters are right that these expensive infra tools can be replaced later when the idea has actually been validated.
Based on the “vibe coders” crowd I see on X, they are a superset of indie hackers with lower barrier to entry when it comes to coding skills and less patience for mediocre success. They seem to have the “go big or go home” mindset.
As long as they have a popular product, they don’t mind forking over some of their profit to OpenAI or a hosting provider. None of the Ghibli generator app creators complained about paying OpenAI… If the product is not popular, no outrageous costs, and the product will be abandoned anyway very fast.
Not necessarily applicable to vibing with Supabase specifically, right?
There are several ways to host Supabase on your own computer, server, or cloud.
Migrating from it is not that hard so far. I did it in an afternoon for a customer.
Also a couple friends are running the open source version in their own containers.
Maybe there are (or will be) cloud only features, but for the basic service there isn’t as much lock-in as something like AWS.
Making it easy for engineers, experienced or aspiring, is huge.
I don't mean to demean "vibe coders" exactly either, but rather jumping on the hype train of using that term for your funding pitch. You're using AI to learn to become a software developer? Great! No problem with that.
But also — if you now have a database involved and you're handling people's data, you better learn what you're doing. A database provider pushing "vibe coding" is not a good look imo.
Nevertheless congrats to the Supabase team!
A non-technical family member is working on a tech project, and giving them Lovable.dev with Supabase as a backend was like complete magic. No fiddling with terminals or propping up Postgres required.
We technical people always underestimate how fast things change when non-technical users can finally get things done without opening the hood.
This is good and bad. Non-technical users throwing up a prototype quickly is good. Non-technical users pushing that prototype into production with its security holes and non-obvious bugs is bad. It's easy for non-technical users to get a false sense of confidence if the thing they make looks good. This has been true since the RAD days of Delphi and VisualBasic.
I think there's going to be the same problems as there are fixing bad body shop code. The companies that pushed their "vibe code" for a few dollars worth of AI tokens will expect people to work for pennies and/or have unreasonable time demands. There's also no ability to interview the original authors to figure out what they were thinking.
Meanwhile their customers are getting screwed over with data leaks if not outright hacks (depending on the app).
It's not a whole new issue, shitty contractors have existed for decades, but AI is pushing down the value of actual expertise.
For nearly 50 years now, software causes disruption, demand drives labor costs, enterprise responds with some silver bullet, haircuts in expensive suits collect bonuses, their masters pocket capital gains, and the chickens come home to roost with a cycle of disruption and labor cost increases. LLMs are being sold as disruption but they're actually another generation of enterprise tech. Hence the confusion. Vibe coding is just PR. Karpathy knows what he's doing.
I beg to differ. Non-technical users pushing anything into production is GREAT!
For many, that's the only way they can get their internal tool done.
For many others, that's the only way they might get enough buyers and capital to hire a "real" developer to get rid of the security holes and non-obvious bugs.
I mean, it's not like every "senior developer" is immune from having obvious-in-retrospect security holes. Wasn't there a huge dating app recently with a glaring issue where you could list and access every photo and conversation ever shared, because nobody on their professional tech team secured the endpoints against enumeration of IDs?
I agree it is great that more people can build software, but let's not pretend there are zero downsides.
it's just cybersecurity people fearing for their jobs :-)
I agree with you on the downsides.
There was a reason the industry was regulated, and circumventing these reasons with an app has been a net negative to society.
Even us entrepreneurially minded technical devs cut corners on personal projects that we just want to throw a Stripe integration or Solana Wallet connect on.
And large companies with FTC and DOJ involved data breaches just wind up offering credits to users as compensation
so for non-technical creators to get into the mix, this just expands how many projects there are that get big enough to need dedicated UX and engineers
Feels like we're skipping these steps and "generating" prototypes that may or may not satisfy the need, then moving forward with that code into the final product.
One of the huge benefits of things like Invision, Marvel, Avocode, Figma, etc. was to allow the idea and flow to truly get its legs and skip the days where devs would plop right into code and do 100s of iterations and updates in actual code. This was a huge gain in development and opened up roles for PMs and UI/UX, while keeping developer work more focused on the actual implementation.
Feels like these design-and-code generation tools are regressing back to direct-to-code prototypes without all that workflow and understanding of what should actually be happening BEFORE the code, and instead we'll return to the distractions of the "How", and its millions of iterations and updates, rather than the "What".
Some of this was already unfortunately happening due to Figma's loss of focus on workflow and collaboration, but it seems these AI generation tools have made many completely lose sight of what was nice about the improved workflow of planning. Just because we CAN now generate the things we think we want doesn't mean we should, especially before we know what we actually want / need.
Maybe I'm just getting old, but that's my .02 :).
you can vibe code a fully working UI+backend that requires way less effort so why bother with planning and iterating on the UI separately at all?
anybody who actually knows what they are doing gets 10x from these tools plus they enable non-coders to bring ideas to the market and do it fast.
My point isn't to stitch things to Figma; that's abhorrent to me as well. My point is to not get bogged down in the implementation details - in this case an actually working DB, those tables, etc. - but rather to generate and iterate on lower-fidelity, full-flow concepts.
Then that can be fed into a magic genie GPT that generates the front-end, back-end, and all that good jazz.
The thing is, the cost of producing websites is already pretty low, but the value of websites mostly derives from network effects. So a rising flood of micro crud saas products will not be likely to generate much added value. And since interoperability will drive complexity, and transformer based LLMs are inherently limited at compositional tasks, any unforeseen value tapped by these extra websites will likely be offset by the maintainability and security breaks I mentioned. And because there is a delay in this signal, there is likely to be a bullwhip effect: an explosion of sites now and a burnout in a couple of years in which a lot of people will get severely turned off by the whole experience.
> you can vibe code a fully working UI+backend
…is gonna bring a lot of houses crashing down sooner or later.
One thing I will agree on though is that LLMs make it easier to iterate or try ideas and see if they'll work. I've been doing that a ton in my projects where I'll ask an LLM to build an interface and then if I like it I'll clean it up and or rebuild it myself.
I doubt that I'll ever use Figma to design; it's just too foreign to me. But LLMs let me work in a medium that I understand (code) while iterating quickly and trying ideas that I would never attempt otherwise, because I wouldn't be sure if they'd work out and it would take me a long time to implement them visually.
Really, that's where LLMs shine for me: trying out an idea that you're fully capable of doing yourself, but that would take you a long time. I can't tell you how many scripts I've asked ChatGPT or similar to write that I am fully capable of writing, but the return on investment just would not be there if I had to write it all by hand. Additionally, I will use them to write scripts to debug problems or analyze logs/files. Again, things that I am perfectly capable of doing but would never do in the middle of a production issue because they would take too long and wouldn't necessarily yield results. With an LLM, I feel comfortable trying it out because at worst I'd burn a minute or two of time and at best I can save myself hours. The return on investment just isn't there if it would take me 30 minutes to write that script and only then find out whether it was useful.
They are great products that cover 95% of what a CRUD API does without hacks. They’re great tools in the hands of engineers too.
To me it’s not about vibe coding or AI. It is that it's pointless to reinvent the wheel on every single CRUD backend once again.
I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns. Your entire schema is exposed. Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly. You're pushed into fake open source where you can't always run the software independently. Who knows what will happen when the VC backers demand returns or the company deems the version you're on as not worth it to maintain compared to their radically different but more lucrative next version.
I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
As a long-time Hasura stan, I can't agree with this in any way.
> Your entire schema is exposed
In what sense? All queries to the DB go thru Hasura's API, there is no direct DB access. Roles are incredibly easy to set up and limit access on. Auth is easy to configure.
If you're really upset about this direct access, you can just hide the GQL endpoint and put REST endpoints that execute GQL queries in front of Hasura.
> Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper
> Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops
... How is an API that queries Hasura via GQL any different than an API that queries PG via SQL? Put your business logic in an API. Separating direct data access from API endpoints is a long-since solved problem.
Colocating Hasura and PG or Hasura and your API makes these network hops trivial.
Since Hasura also manages roles and access control, these "extra hops" are big value adds.
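For what it's worth, that wrapper pattern is only a few lines. A minimal sketch: the `users_by_pk` field and `viewer` role are hypothetical, while the payload shape and `x-hasura-role` header follow standard GraphQL-over-HTTP and Hasura conventions.

```python
import json

# A fixed, parameterized query pinned server-side; clients never see it.
GET_USER = """
query GetUser($id: uuid!) {
  users_by_pk(id: $id) { id name }
}
"""

def build_hasura_request(user_id: str, role: str = "viewer") -> dict:
    """Build the HTTP pieces for one REST endpoint fronting Hasura.

    The caller supplies only user_id; the query shape and role are fixed
    here, so the GraphQL schema is never exposed to the outside world.
    """
    return {
        "url": "http://localhost:8080/v1/graphql",  # assumed local Hasura
        "headers": {
            "content-type": "application/json",
            "x-hasura-role": role,                   # Hasura role-based access
        },
        "body": json.dumps({"query": GET_USER, "variables": {"id": user_id}}),
    }

req = build_hasura_request("00000000-0000-0000-0000-000000000001")
print(req["headers"]["x-hasura-role"])
```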
> You're pushed into fake open source where you can't always run the software independently
... Are you implying they will scrub the internet of their docker images? I always self-host Hasura. Have for years.
> I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I think your arguments pretty much sum up why people think it's just about backend engineers feeling threatened - your sole point with any merit is that there's one extra network leg, but in a microservices world that's generally completely inconsequential.
Backends are far messier (especially when built over time by a team), more expensive and less flexible than a GraphQL or PostgREST's api.
> I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns
Writing backend code without knowing what you're doing is also an insecure nightmare that forces anti-patterns. All good engineering practices still need to apply to Hasura.
Nothing says that "everything must go through it". Use it for the parts it fits well, use a normal backend for the non-CRUD parts. This makes securing tables easier for both Hasura and PostgREST.
> Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly
I'm gonna disagree a bit with the sibling post here. If you think that going through Hasura for everything is not working: just don't.
This is 100% a self-imposed limitation. Hasura and PostgREST still allow you to have a separate backend that goes around it. There is nothing forbidding you from accessing the DB directly from another backend. This is not different from accessing the same database from two different classes. Keep the 100% CRUD part on Hasura/PostgREST, keep the fiddly bits in the backend.
The kind of dogma that says that everything must be built with those tools produces worse apps. You're describing it yourself.
> I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I have heard the arguments and all I hear is people complaining about how hard it is to shove round pieces in square holes. These tools can be used correctly, but just like anything else they have a soft spot that you have to learn.
Once again: "use right tool for the job" doesn't mean you can only use a single tool in your project.
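A sketch of what "two tools in one project" can look like in practice: a trivial gateway rule that sends known CRUD tables to PostgREST and everything else to a hand-written backend. Paths and table names are made-up examples.

```python
# Tiny routing rule: plain CRUD goes to PostgREST, fiddly bits go to
# a custom backend. Table names and paths are hypothetical.
CRUD_TABLES = {"todos", "notes", "tags"}   # tables safely covered by RLS

def route(path: str) -> str:
    """Pick the upstream that should serve a request path."""
    first = path.strip("/").split("/")[0]
    return "postgrest" if first in CRUD_TABLES else "backend"

print(route("/todos"), route("/email-change"))
```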
Mike can edit his name and his bio. He could edit some karma metric that he's got view access to but no write access to. That's fine, I can introduce an RLS policy to control this. Now Mike wants to edit his e-mail.
Now I need to send a confirmation e-mail to make sure the e-mail is valid, but at this point I can't protect the integrity of the database with RLS because the e-mail/receipt/confirm loop lives outside the database entirely. I can attach webhooks for this and use pg_net, but I could quickly have a lot of triggers firing webhooks inside my database and now most of my business logic is trapped in SQL and is at the mercy of how far pg_net will scale the increasing amount of triggers on a growing database.
Even for simple CRUD apps, there's so much else happening outside of the database that makes this get really gnarly really fast.
Congratulations: that's not basic CRUD anymore, so you ran into the 5% of cases not covered by an automatic CRUD API.
And I don't see what's the dilemma here. Just use a normal endpoint. Keep using PostgREST to save time.
You don't have to throw the baby out with the bathwater just because it doesn't cover 5% of cases the way you want.
It's a rite of passage to realize that "use the right tool for the job" means you can use two tools at the same time for the same project. There are nails and screws. You can use a hammer and a screwdriver at the same time.
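To make that concrete, here's a minimal sketch of the email-change loop from the earlier comment living in an ordinary endpoint instead of RLS and triggers. The in-memory dicts stand in for whatever store a real app would use, and the mailer is stubbed out; all names are hypothetical.

```python
import secrets

pending: dict[str, tuple[str, str]] = {}            # token -> (user_id, new_email)
users: dict[str, str] = {"mike": "old@example.com"}  # stand-in for the users table

def request_email_change(user_id: str, new_email: str) -> str:
    """Issue a one-time token; nothing in the DB changes yet."""
    token = secrets.token_urlsafe(16)
    pending[token] = (user_id, new_email)
    # send_confirmation_mail(new_email, token)  # stub: real app emails a link
    return token

def confirm_email_change(token: str) -> bool:
    """Apply the change only when the emailed token comes back."""
    entry = pending.pop(token, None)
    if entry is None:
        return False               # unknown/expired token: DB never touched
    user_id, new_email = entry
    users[user_id] = new_email     # the only write, done by trusted code
    return True

t = request_email_change("mike", "new@example.com")
print(confirm_email_change(t), users["mike"])
```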
It's Postgres, but bundled with some extensions and PostgREST. And a database UI. But hosted, and it also runs locally by pulling the separate parts. Running it locally has issues though - so much so that I found it easier to run a docker compose of the separate parts from scratch, and at that point just carry that through to deployment. At which point, is there still a reason to use Supabase rather than another hosted Postgres with the extensions?
It's a bit of a confusing product story.
Realistically 99% of the users would still be screwed if they ever shut down, regardless of whether it's open (see: Parse)... but it gives people some confidence to hear they're building on a platform that they could (strictly in theory) spin up their own instance of should a similar rug pull ever occur.
I agree you might prefer to choose the stack yourself, but for total n00bs and vibe coders supabase is a great start / boilerplate vs say the MEAN stack that was a hit 5y ago
The developer experience is first rate. It’s like they just read my mind and made everything I need really easy.
- Deals with login really nicely
- Databases for data
- Storage for files
- Both of those all nicely working with permissions
- Realtime is v cool
- Great docs
- Great SDK
- Great support peeps
Please never sell out.
The major issue is cost. It is way more expensive than I realized, as they have so many little ways they charge you. It's almost like death by a thousand paper cuts. My bill for my app with just a few thousand users was $70 last month.
I do like the tooling and all, but the pricing has been very confusing.
When I see valuations like this, they are overvalued until they use that money to acquire another company for a total addressable market expansion.
I don't think this is a good sign.
I was a speaker at a local Supabase event just a few weeks ago, https://shorturl.at/JwWMk. We had a local event in Abuja, Nigeria. There we promoted their Launch Week 14 series, highlighting new features from Supabase. In reality, it became an event showing people how to bootstrap a quick backend for their SME business in a weekend.
While the funding is impressive, I haven’t come across too many people touting Supabase or using it in production.
It is good to get started and no doubt useful for simple CRUD apps. But once you want to start doing more complicated stuff, a lot of the RLS primitives become very hard to maintain and test, for example. You could say that that's Postgres's fault, but Supabase strongly pushes you in that direction.
The tooling, while looking quite polished, just felt pretty half baked along with docs (at least a year ago when we pulled the plug). Try to implement even a halfway complicated permissions scheme with it and RLS and you are in for a world of hurt and seemingly unmaintainable code.
So we ditched Supabase Auth for AuthJS, and are using vanilla postgres with Prisma. That's worked well for us. All the tooling is relatively mature, it's easy to write tests, etc.
Maybe if AI is writing some of the code, it might get easier, but for now, I'm avoiding Supabase like the plague until I see a project that's relatively complex that's actually easy to maintain.
My experience of Supabase really demonstrates to me that the ideals of all of the Postgres-layer technologies (PostgREST, realtime via the WAL, JWT auth in the DB) just don't make for an easy experience. It all works (mostly), but I find it more annoying than useful and have to work around it more often than I'd like. I suppose I'm old school, but just building the things that one needs is often more robust and less work than trying to plug into what they've provided.
I really don't know what they're going to do with a series D. It seems they now _have_ to go for a high-value exit, but I really don't see which company would provide that exit.
How many of those users are paid? You can sign up for free without a credit card.
It's cool, for certain use cases. I ended up trying it for a few months before switching to Django.
If you ONLY need to store data behind some authentication, and handle everything else on the frontend, it's great. Once you need to try some serverside logic it gets weird. I'm open to being wrong, but I found firebase phenomenally more polished and easier to work with particularly when you get to firebase functions compared to edge functions.
Self hosting requires magical tricks, it's clearly not a focus for them right now.
I hope they keep the free tier intact. While it's not perfect, if you're in a situation where you can spend absolutely no money, you can easily use it for learning (or for a portfolio piece).
But if your use case involves Supabase auth, using a service account to bypass RLS is kind of like hardcoding connection strings.
Has anything changed recently? ~1 year ago I installed a local instance (that I still use today for logging LLM stats) and IIRC all I had to do was `docker compose up`. All the containers still start for me at boot from that 1-year-old install, to this day. (I use it on 127.0 so no SSE & stuff; perhaps that's where the pain points are? Dunno, but for my local logging needs it's perfect.)
This isn't documented anywhere. Deep deep in their GitHub issues you'll find a script for generating this magic string which needs to be set as an environment variable.
See https://github.com/supabase/supabase/issues/17164#issuecomme...
I had done something similar in Firebase and it was easy. Supabase wasn't straightforward here. It got to a point where I'm sure I could eventually get it working, but I also think I'm outside the expected use case.
Django is much more flexible in this regard.
The whole growth of vibe coding really did help them, because I don't think actual developers use it - putting things like functions and authorization in the database is something we learned a few decades ago is a bad idea.
So I would guess they are used by massive amounts of developers who are new to coding or do not fully know how to code, but are becoming developers and who love the free databases Supabase provides.
Would love to know what their actual revenue is.
Why are those things bad ideas? You could be right, but if you insist on making value judgments without explanation or elaboration, you're going to sound like a whiny old crank who is scared of becoming obsolete.
AWS needs to get their act together and start prioritizing developer experience
Also, Supabase is looking like the go-to database for AI-created apps, which will be a major tailwind.
And I believe both Supabase and Vercel run all their services on AWS anyways, so AWS gets paid no matter what.
"They ship buggy, insecure messes" "They don't know how to fix what AI gave them" etc etc etc
Right. Like that same thing hasn't been happening literally during the entire existence of programming. I, for one, welcome the vibe coders. I hope it grows their interest in the field and encourages them to go deeper and learn more. Will some be lazy and not even try? Of course! Will some get curious and learn the ins and outs? Absolutely.
Google were late to the game but they've built perhaps one of the easiest cloud platforms to work with.
1. oxc (oxlint)
2. vercel
3. fly.io
probably more! and more every day
https://github.com/dbos-inc/dbos-docs/blob/main/docs/python/...
I did :) I made a browser-based MMO with Phoenix to test out liveview and learn the language: https://shopkeep.gg
And it was pretty annoying. Elixir doesn't really lend itself to vibe coding due to namespacing and aliasing of modules, pattern matching, all without static typing (I know, Dialyzer...). It also struggles to understand the difference between LiveComponents and LiveViews, where to send/handle messages between layers.
Without references to filenames, the agent perpetually decides "this doesn't exist, so I'll write it :)". I found it to be pretty challenging before figuring out I could force the agent to run `mix xref callers <Some.Module>` when trying to do cross-module refs.
(caveat: this was all with claude 3.5 sonnet)
Either that or they need to add features and products alongside the DB to essentially replace the likes of Vercel.
Having said that, Supabase is probably the best 'cloud DB' I've played around with, so I hope they succeed.
I've always taken issue with branding Supabase as an alternative to Firebase. Firebase is a PaaS whereas Supabase is more of a BaaS.
My prediction: They're banking on a big exit to OpenAI or Claude as the defacto backend for an AI IDE.
They're the only big alternative to Firebase, and Firebase just got pulled into Google AI Studio.
What’s Supabase’s exit strategy? Are they sustainable long term as a standalone business?
You can also see how money is starting to chase “vibe coding” — as long as you say the magic words, even if your product is only tangentially related to it, you can get funding!
Supabase defo has a much higher mindshare.
They also offer so much more than just postgres. Though I use them only for postgres myself.
This is like if Google Spanner were open sourced tomorrow morning: realistically how many people are going to learn how to deploy a thing that was built by Google for Google to serve an ultra-specific persona?
Maybe you might get some Amazon-sized whale peeking at it for bits to improve their own product, but the entire value prop is that it's a managed service: you're probably going to continue paying for it to be managed for you.
Acquisition best case, Private Equity worst case.
Do you see Supabase going public on the stock market? Unless they do what Cloudflare did and replicate AWS, it may be hard to see a stock market debut.
Could be wrong though.
Also they can't run on AWS postgres with all their postgres plug-ins AFAIK.
The point at which it's cheaper to host everything yourself is a lot higher than most people estimate.
My only concern is that if Supabase goes out of business or goes evil you're gonna have a bad time; however, everything is open source.
It's bananas to me that questions like these could be unanswered even 5 years after the business started. This can't possibly be the most efficient way of finding new solutions and "disrupting" stale industries?
Those are rookie numbers, Discord is coming up on 10 years old and has made zero dollars to date, yet is supposedly considering an IPO soon.
Was Meteor? They are exactly the same thing. And I really liked Meteor!
To me, the more money pouring in, the better. That said:
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRCVKYR...
(The Silicon Valley Economy cartoon)
If they truly have 3.5 million databases, that's only ~$500 per database to recoup the investment, which doesn't seem too crazy. Companies like OpenAI or Twitter/X are never going to be profitable enough to cover what they've already spent/cost. Supabase could, because the amount is so much lower and they have paying customers, but I'd like to emphasize the "could".
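Taking the ~$500 figure at face value, the implied total lines up with recouping something on the order of the full valuation rather than just this round:

```python
# Sanity check on the per-database figure: what total does ~$500/db imply?
databases = 3_500_000
per_database = 500
implied_total = databases * per_database
print(implied_total)  # 1_750_000_000, i.e. roughly valuation-scale money
```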