
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
34•theblazehen•2d ago•4 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
636•klaussilveira•13h ago•187 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
933•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•28 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
111•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
12•kaonwarb•3d ago•10 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
44•videotopia•4d ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•104 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
323•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
372•ostacke•19h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•21h ago•235 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
275•eljojo•16h ago•165 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
406•lstoll•19h ago•273 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
16•jesperordrup•3h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•192 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
13•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•10h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•64 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
281•surprisetalk•3d ago•38 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1060•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
178•limoce•3d ago•96 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
134•SerCe•9h ago•120 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Google AI Ultra

https://blog.google/products/google-one/google-ai-ultra/
320•mfiguiere•8mo ago

Comments

codydkdc•8mo ago
I really want Google to launch a Claude Code/OpenAI Codex CLI alternative. also if they included a small VM in one of these subscriptions I'd seriously consider it!
boole1854•8mo ago
They are working on it: https://jules.google/
unshavedyak•8mo ago
I got the feeling Jules was targeted at Web (ala Github) PR workflows. Is it not?

The Claude Code UX is nice imo, but i didn't get the impression Jules is that.

kridsdale1•8mo ago
At Google, our PR flow and editing is all done in web based tools. Except for the nerds who like vi.
codydkdc•8mo ago
people don't use local editors? it's weird to lock people into workflows like that
johnisgood•8mo ago
Damn... you guys don't use proper text editors?
incognito124•8mo ago
https://firebase.studio/
bn-l•8mo ago
It’s absolutely garbage. I was annoyed because there’s a lot of hype on Reddit.
johnmlussier•8mo ago
I’m not interested in about 75% of this offering. Really wish they had pieced this out or done a credit system for it all. I want higher coding and LLM limits, Deep Think, and would like to try agent stuff, but don’t care at all about image or video generation, Notebook limits, YouTube Premium, or more storage.

$250 is the highest-cost AI sub now. Not loving this direction.

camillomiller•8mo ago
What other direction would you expect to be possible? Even with these rates, most AI companies are still bleeding money.
piskov•8mo ago
If anyone, it’s Google that would be very, very inference-efficient (given their custom TPUs and what have you).

However, if all this power is wasted on video generation, then even they will probably choke.

Then again, I guess your average Joe/Jane will looove to generate some 10 seconds of their daily WhatsApp stuff to share.

karmakurtisaani•8mo ago
I wonder how long this free access to LLMs can continue. It's like early days of Facebook, before the ads and political interference started to happen. The question is, when will we see the enshittification of LLMs?
camillomiller•8mo ago
Exactly. When Masayoshi Son’s money will run out I guess.
Workaccount2•8mo ago
I'm assuming there will be an a la carte API offer too.
causal•8mo ago
Yeah it says "designed for coding" but it's missing the one thing programmers need which is just higher Gemini API usage limits.
_heimdall•8mo ago
Building and running these models is expensive, there's no way around that in the near future.

LLM companies have just been eating the cost, hoping that people find the tools useful enough while drastically subsidized that they'll stay on the hook once prices rise to actually cover the expense.

linsomniac•8mo ago
At $125/mo for 3 months I'm tempted to try it, but I don't understand how it would interfere with my existing youtube/2TB storage family plan, and that's a big barrier for me.
paxys•8mo ago
What's wrong with just using the API for that?
logicchains•8mo ago
Deep Think isn't available on the API.
paxys•8mo ago
Deep Think is only available on the API. It's just restricted to "trusted testers" right now before a wide launch.
lallysingh•8mo ago
$250 a month. Oh my.
akomtu•8mo ago
"Rent a brain for only $250/mo"
croes•8mo ago
Rent a hallucinating brain
SoftTalker•8mo ago
About an hour of a senior developer's fully-loaded cost.
paxys•8mo ago
And less than an hour of an external consultant's time.
throwaway2037•8mo ago
To be clear, 8hrs/day, 40hrs/week, 50weeks/year is about 2,000 hours. Are you really saying that "senior developers" make 250USD * 2,000 = 500K USD? 250K is more like it, and only in very high cost locations -- Silicon Valley or NYC. More like 150K in rich countries, but not the richest city. Hell, 100K EUR can get you some very good developers on the European continent.
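The hourly comparison in this subthread is easy to sanity-check. A quick back-of-envelope sketch in Python (the salary figures are the commenter's examples, not market data):

```python
# Back-of-envelope check of the "hour of a senior developer" comparison.
# Assumes 8 h/day, 5 days/week, 50 weeks/year, per the comment above.

HOURS_PER_YEAR = 8 * 5 * 50  # = 2,000 working hours
ULTRA_PER_YEAR = 250 * 12    # = $3,000/year for the subscription

for salary in (150_000, 250_000, 500_000):
    hourly = salary / HOURS_PER_YEAR
    share = ULTRA_PER_YEAR / salary
    print(f"${salary:>7,} salary -> ${hourly:>3.0f}/hour; sub = {share:.1%} of salary")
```

So the "$250/hour" framing does imply roughly a $500K fully-loaded annual cost, which is the point being disputed.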
hollerith•8mo ago
In the US, the salary is only about half the cost of an employee: the rest is taxes, cost of benefits like health care, etc.
throwaway2037•8mo ago
I believe it for normies making 60-80k, but not elite level devs making 250k+.
piskov•8mo ago
Given the lack of comments after an hour passed, we have a strong case of maybe five Google AI Ultra subscribers worldwide.

I, personally, try to stay as far as possible from google: Kagi for search, Brave for browsing (Firefox previously), Pro on OpenAI, etc.

We’ll see how fair OpenAI will be with tracking and what have you (given “off” for improve for everyone), but Google? Nah.

rohansood15•8mo ago
"I think there is a world market for maybe five computers." -- Thomas Watson, chairman of IBM, 1943.
sunaookami•8mo ago
This is an urban legend btw, Thomas Watson never said that.
__natty__•8mo ago
> YouTube Premium: An individual YouTube Premium plan lets you watch YouTube and listen to YouTube Music ad-free, offline and in the background.

It seems weird to me that they included an entertainment service in a "work"-related plan.

jihadjihad•8mo ago
They don't even have the decency to make it a family plan, either.
Aeolun•8mo ago
The whole family can get it for only $996/month
dewey•8mo ago
It's not though, it's just the highest tier of the regular "Google One" account that also has Google Photos etc. included.
kumarm•8mo ago
So everyone who want Youtube Premium can explain to their boss why they need Gemini AI Ultra for work?
Keyframe•8mo ago
Hmm, interesting. There's basically no information on what makes Ultra worth that much money in concrete terms except "more quota". One interesting tidbit I've noticed is that it seems Google One (or whatever it is called now) also carries a sub for YouTube. So far, I'm still on the "old" Google One for my family's and my own storage and have a separate YouTube subscription for the same. I still haven't seen a clear upgrade path, or even a discount based on how much I have left from the old subscription, if I ever choose to do so (why?).

edit: also, the Google AI Ultra link leads to AI Pro and there's no Ultra to choose from. GG Google, as always with their "launches".

flakiness•8mo ago
I believe Imagen 4 and Veo 3 (the newest image/video models) and the "deep think" variant are for Ultra only. (Is it worth it? It's a different question.)
skybrian•8mo ago
I just tried it and Whisk seems to be using Imagen 4 and Veo 2 when used without a subscription.
ComplexSystems•8mo ago
The problem with all of these is that SOTA models keep changing. I thought about getting OpenAI's Pro subscription, and then Gemini flew ahead and was free. If I get this then sooner or later OpenAI or Anthropic will be back on top.
SirensOfTitan•8mo ago
This is even the case with Gemini:

By Google’s own reported benchmarks, the Gemini 2.5 Pro 05/06 release was worse in 10 of 12 cases than the 3/25 version. Google rerouted all traffic for the 3/25 checkpoint to the 05/06 version in the API.

I’m also unsure who needs all of these expanded quotas because the old Gemini subscription had higher quotas than I could ever anticipate using.

magicalist•8mo ago
> I’m also unsure who needs all of these expanded quotas because the old Gemini subscription had higher quotas than I could ever anticipate using.

"Google AI Ultra" is a consumer offering though, there's no API to have quotas for?

MisterPea•8mo ago
I'm afraid they're going to lower the limits once Ultra is available. I use Gemini Pro every day for at least 2 hours but never hit the limit.
tmpz22•8mo ago
I have the same concerns. To push people to the Ultra tier and get their bonuses, they're going to use dark patterns.

The only reason I maintain Claude and OpenAi subscriptions is because I expect Google to pull the rug on what has been their competitive advantage since Gemini 2.5.

Have you also noticed a degradation in quality over long chat sessions? I've noticed it in NotebookLM specifically, but not Gemini 2.5. I anticipate this to become the standard, your chat degrades subtly over time.

airstrike•8mo ago
This 100%. Unless you are building a product around the latest models and absolutely must squeeze the latest available oomph, it's more advantageous to just wait a little bit.
pc86•8mo ago
I am willing to pay for up to 2 models at a time but I am constantly swapping subscriptions around. I think I'd started and cancelled GPT and Claude subscriptions at least 3-4 times each.
xnx•8mo ago
> If I get this then sooner or later OpenAI or Anthropic will be back on top.

The Gemini subscription is monthly, so not too much lock-in if you want to change later.

Ancapistani•8mo ago
I wonder if there's an opportunity here to abstract away these subscription costs and offer a consistent interface and experience?

For example - what if someone were to start a company around a fork of LiteLLM? https://litellm.ai/

LiteLLM, out of the box, lets you create a number of virtual API keys. Each key can be assigned to a user or a team, and can be granted access to one or more models (and their associated keys). Models are configured globally, but can have an arbitrary number of "real" and "virtual" keys.

Then you could sell access to a host of primary providers - OpenAI, Google, Anthropic, Groq, Grok, etc. - through a single API endpoint and key. Users could switch between them by changing a line in a config file or choosing a model from a dropdown, depending on their interface.

Assuming you're able to build a reasonable userbase, presumably you could then contract directly with providers for wholesale API usage. Pricing would be tricky, as part of your value prop would be abstracting away marginal costs, but I strongly suspect that very few people are actually consuming the full API quotas on these $200+ plans. Those that are are likely to be working directly with the providers to reduce both cost and latency, too.

The other value you could offer is consistency. Your engineering team's core mission would be providing a consistent wrapper for all of these models - translating between OpenAI-compatible, Llama-style, and Claude-style APIs on the fly.

Is there already a company doing this? If not, do you think this is a good or bad idea?
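The virtual-key scheme described above can be sketched in a few lines. This is a toy illustration only: the model names, endpoints, and key IDs are made up, and a real gateway like LiteLLM or OpenRouter also handles auth, quota accounting, and on-the-fly API translation.

```python
# Toy sketch of a "virtual key" router: globally configured models,
# per-key access control. All names and endpoints are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VirtualKey:
    owner: str
    allowed_models: set[str]

@dataclass
class Router:
    # model name -> upstream provider endpoint (illustrative only)
    models: dict[str, str]
    keys: dict[str, VirtualKey] = field(default_factory=dict)

    def issue_key(self, key_id: str, owner: str, allowed: set[str]) -> None:
        self.keys[key_id] = VirtualKey(owner, allowed)

    def route(self, key_id: str, model: str) -> str:
        """Return the upstream endpoint for a request, enforcing per-key model access."""
        vk = self.keys.get(key_id)
        if vk is None:
            raise PermissionError("unknown virtual key")
        if model not in vk.allowed_models:
            raise PermissionError(f"key not allowed to use {model}")
        return self.models[model]

router = Router(models={
    "gpt-4o": "https://api.openai.com/v1",
    "gemini-2.5-pro": "https://generativelanguage.googleapis.com",
})
router.issue_key("vk-alice", owner="alice", allowed={"gpt-4o"})
```

A user would then switch providers by changing only the model name in a request, which is the "single endpoint and key" value proposition the comment describes.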

planetpluta•8mo ago
I think the biggest hurdle would be complying with the TOS. I imagine that OpenAI etc. would not be fans of sharing quotas across individuals in this way.
Ancapistani•8mo ago
How does it differ from pretty much every SaaS app that's using OpenAI today?
wild_egg•8mo ago
Isn't that https://openrouter.ai? Or do you have something different in mind?
Ancapistani•8mo ago
I haven't seen this, but it looks like it solves at least half of what I was thinking.

I'll investigate. Thanks!

mrnzc•8mo ago
I think what Langdock (YC-backed, https://www.langdock.com) offers might be matching to your proposal?!
Ancapistani•8mo ago
Looks like this is at least the unified provider. I'll dig in - thanks :)
chw9e•8mo ago
This is t3 chat, from what I understand, but many people are probably already doing this. It's a good approach for wrappers.
UncleOxidant•8mo ago
You can just surf between Gemini, DeepSeek, Qwen, etc. using them for free. I can't see paying for any AI subscription at this point as the free models out there are quite good and are updated every few months (at least).
diggan•8mo ago
> as the free models out there are quite good

Have you tried say O1 Pro Mode? And if you have, do you find it as good as whatever free models you use?

If you haven't, it's kind of weird to do the comparison without actually having tried it.

otabdeveloper4•8mo ago
Define "good". If it solves your problem then it's good.

If you don't really have a problem to solve and you're just chatting, then "good" is just, like, your vibe, man.

diggan•8mo ago
> Define "good". If it solves your problem then it's good.

Why? Define it however you want, it's the comparison I'm interested in, regardless of the minute details of their definition.

devjab•8mo ago
I wonder why anyone would pay these days, unless it's for features outside of the chatbot. Between Claude, ChatGPT, Mistral, Gemini, Perplexity, Grok, DeepSeek and so on, how do you ever really run out of free "wannabe pro"?
Wowfunhappy•8mo ago
So subscribe for a month to whatever service is in the lead and then switch when something new comes along.
smeeth•8mo ago
I hate hate hate putting Deep Think behind a paywall. It's an ease-of-use tax. I fully expect to be able to get it over API through Poe or similar for way cheaper.

Just have usage limit tiers!

lvl155•8mo ago
Google is a bit tone deaf with these offerings. Are they interested in competing?
vFunct•8mo ago
Does this include Gemini 2.5 Pro API access? What are the API limits?

I blew through my $40 monthly fee in Github Copilot Pro+ in a few hours. =^/

Workaccount2•8mo ago
I suspect Google sees the writing on the wall, and needs to move to a more subscription based business model. I don't think the ad model of the internet is dead, but I also don't think it was particularly successful. People block ads rather than forgo those services, ads conditioned people to think everything on the internet is free, and the actual monetization of ad views makes people pretty uncomfortable.

So here we are, with Google now wading into the waters of subscriptions. It's a good sign for those who are worried about AI manipulating them to buy things, and a bad sign for those who like the ad model.

Is the future going to be everyone has an AI plan, just like a phone plan, or internet plan, that they shell out $30-$300/mo to use?

I honestly would greatly prefer it if it meant privacy, but many people seem to greatly prefer the ad-model or ad-subsidized model.

ETA: Subscription with ads is ad-subsidized. You pay less but watch more ads.

aceazzameen•8mo ago
It will eventually be subscriptions PLUS ads combined.
continuational•8mo ago
YouTube is probably the most expensive streaming app, and there are still ads (sponsored sections) in nearly every video.
tintor•8mo ago
Sponsored sections are baked into video and very easy to skip.

Unlike platform ads which disable video control while the ad is playing.

jeffbee•8mo ago
Cannot find any factual basis for the claim that YTP is the most expensive streaming app. Is this the case in some non-US market?
jerjerjer•8mo ago
> ads (sponsored sections) in nearly every video.

SponsorBlock for YouTube resolves the issue.

conductr•8mo ago
No idea if this is right, but based on AI-summary Google results, annual operating costs are ~$3B-5B for YouTube and ~$25B-30B for Netflix. While YT probably spends the most on CDN/bandwidth, its content is mostly free to it, whereas content is by far Netflix's largest expense.
iamdelirium•8mo ago
By that metric, every streaming platform has ads since they serve movies with product placement.
ninininino•8mo ago
They already do subs for YouTube w/o ads and for storage (email attachments + Google Photos + Google Drive), for Stadia while it was around.
jeffbee•8mo ago
This doesn't seem like new territory ("wading in"). This is an extension of the existing Google One plans to reach people with extreme demands.
Etheryte•8mo ago
I think this is a bit rose-tinted. Being a paying customer doesn't necessarily mean you won't get ads; look at Netflix for a start. Their cheapest paid tier still gets ads. The subscription model will be an addition to the ad revenue, not a replacement.
ljm•8mo ago
It should mean that though.

Ads are well and truly the cancer on the service industry.

It’s an outright abuse to force ads and then make you pay for the bandwidth of those ads on your own plan to render them.

pc86•8mo ago
Anyone can say A should mean B, that doesn't mean it's obviously true.

Very few services still commercially viable today actually force ads - meaning there is no paid tier available that removes them entirely.

I don't particularly like ads but this idea that any advertisement at any point for any good or service is by definition a cancer is a fringe idea, and a pretty silly one at that.

myko•8mo ago
Google used to let you pay a flat rate to avoid (most of) their ads. It was nice. This program was, of course, canceled.
MichaelZuo•8mo ago
How does this relate to the parent?

I don’t think there was a claim that nobody would ever offer a partially subscription-, partially ad-funded service.

skarz•8mo ago
I wonder how viable the referral link/referrer code method is? Based on my own YouTube viewing habits it seems like a lot of prominent channels have gone that route. Seems like it could work for the web overall. Ads would no longer have to target via cookies or browsing history because you could just serve links or offer codes related to your site's content.
abtinf•8mo ago
> I also don't think [the ad model of google] was particularly successful.

Only on HN.

narrator•8mo ago
Yeah, only a few trillion in revenue over the last decades including Facebook and others. Not particularly successful.
jonluca•8mo ago
Actually hilarious, the distribution of comments on HN is truly bimodal
Workaccount2•8mo ago
I mean it in the sense that I don't think it lived up to what Google envisioned. People have extremely hostile views towards ads but fully expect that everything is just an account creation away, if not outright given away.

30% of people who use Google don't view their ads. It's hard to call a business where 30% of people don't pay successful. The news agencies picked up on this years ago, and now it's all paywalls.

This doesn't even get into the downstream effects of needing to coax people into spending more time on the platform in order to view more ads.

tekla•8mo ago
Maybe if you ignore objective reality.

Google ads revenue AND income have consistently risen basically forever. It's ~75% of Alphabet's total revenue and corresponds to over ~50% of all ad revenue in the world.

netsharc•8mo ago
Heh, although I'm a cheapskate, the ad-based world is a fucked up one. We now have an attention-economy, trying to keep you hooked on the content so "the platform" can serve you ads and earn money off you. And they do that by serving content that engages you, and apparently it's content that stirs up a lot of emotions.

"Worried about refugees? Here's some videos about refugees being terrible". Replace "refugee" with "people celebrating Genocide", etc, etc...

add-sub-mul-div•8mo ago
> Is the future going to be everyone has an AI plan, just like a phone plan, or internet plan, that they shell out $30-$300/mo to use?

Not the people who haven't been trained to require the crutch.

kleiba•8mo ago
These prices are nuts, in my opinion. It basically means that only companies can afford access to the latest offerings - this used to be the case for specialist software in the past (e.g., in the medical sector), but AI has the potential to be useful for anyone.

Not a good development.

esafak•8mo ago
And I think it is a good thing. If there are buyers, it means they are getting that much value out of it. That there is a market for it. Competition will bring prices down.
mschuster91•8mo ago
> Competition will do its thing and bring prices down.

It won't. For now the AI "market" is artificially distorted by billionaires and trillion-dollar companies dumping insane amount of cash into NVDA, but when the money spigot dries out (which it inevitably will) prices are going to skyrocket and stay there for a loooong time.

esafak•8mo ago
How will prices skyrocket when there is a flood of open models? Or are you talking about GPU prices? They're already high.
jsheard•8mo ago
Who do you think is paying to train those open models? The notable ones are all released by VC-funded startups or gigacorps which are losing staggering amounts of money to make each new release happen. If nobody is making a profit from closed models then what hope do the companies releasing open models have when the money spigot runs dry?

The open models which have already been released can't be taken back now of course, but it would be foolish to assume that SOTA freebies will keep coming forever.

conductr•8mo ago
It won't be the end of the world if the 'progress' were to slow down a little, I have trouble keeping up with what's available as it is - much less tinkering with it all
delusional•8mo ago
It will because "keeping up" is the sleight of hand. By constantly tweaking the model you don't ever notice anything it's consistently wrong about. If they "slowed progress" you'd notice.

Current AI is Fast Fashion for computer people.

johnisgood•8mo ago
I do not think I will ever be able to afford hardware that is capable of running local LLMs. :(

What I can afford right now is literally the ~20 EUR / month claude.ai pro subscription, and it works quite well for me.

mschuster91•8mo ago
> How will prices skyrocket when there is a flood of open models?

Easy: once the money spigot runs out and/or a proprietary model has a quality/featureset that other open-weight models can't match, it's game over. The open-weight models cost probably dozens of millions of dollars to train, this is not sustainable.

And that's just training cost - inference costs are also massively subsidized by the money spigot, so the price for end users will go up from that alone as well.

msikora•8mo ago
ChatGPT is insanely subsidized. The $20/month sub is such a great value. Just the image gen is about $0.25 a pop through the API. That's 80 image generations for $20.
jeffbee•8mo ago
> used to be the case for specialist software

I think that's a great example of how a competitive market drives these costs to zero. When solid modeling software was new Pro/ENGINEER cost ~$100k/year. Today the much more capable PTC Creo costs $3-$30k depending on the features you want and SOLIDWORKS has full features down to $220/month or $10/month for non-professionals.

gigaflop•8mo ago
Off-topic, but I work 'around' PTC software, and am surprised to see them mentioned. Got much knowledge in the area?

On-topic, yeah. PTC sells "Please Call Us" software that, in Windchill's example, is big and chunky enough to where people keep service contracts in place for the stuff. But, the cost is justifiable to companies when the Windchill software can "Just Do PLM", and make their job of designing real, physical products so much more effective, relative to not having PLM.

jeffbee•8mo ago
I only worked with it decades ago. At the time, the split between wages, software, and hardware was about equal. Then the computers became free, and the software has been getting cheaper all the time.
Aurornis•8mo ago
> It basically means that only companies can afford access to the latest offerings

The $20/month plan provides similar access. They hint that in the future the most intense reasoning models will be in the Ultra plan (at least at first). Paying more for the most intense models shouldn't be surprising.

There's plenty of affordable LLM access out there.

Calwestjobs•8mo ago
Magics !

I do not know what the hate about $250 is; just Flow is worth more.

leoh•8mo ago
I would agree, were I to use flow frequently; but I would guess it’s the most operationally expensive API for Google and they may be subsidizing it (and profit in general) via users that don’t use it (ie software developers).
AJRF•8mo ago
Would love to know how many people end up on this plan.

If I had to guess from the features, I would have said 80 bucks. Absurdly high, but lots of little doodads and prototypes would make that price understandable.

250?!

I actually find that price worrying because it points to a degree of unsustainability in the economics of the products we've gotten used to.

jeffbee•8mo ago
The long-standing Google One plan with 30TB of storage was already $150/mo, so your estimate was a bit low.
blagie•8mo ago
I'm holding out for "Ultra Max Pro."

(Comment is on the horrible naming; good naming schemes plan ahead for next month's offerings)

kylehotchkiss•8mo ago
Somehow, someway you’re gonna need a Dell or Apple Ultra Edition to use it.
ivape•8mo ago
Is this the only price to get Google Flow at? Any alternatives? That seems like the killer app here.
adverbly•8mo ago
Price point here is a bit too high... They have bundled so many things together into this that the sticker shock on the price is too much.

I get what they're trying to do but if they were serious about this they would include some other small subscriptions as well... I should get some number of free movies on YouTube per month, I should be able to cancel a bunch of my other subscriptions... I should get free data with this or a free phone or something... I could see some value if I could actually just have one subscription but I'm not going to spend $250 a month on just another subscription to add to the pile...

ehsankia•8mo ago
They put in anything that makes sense. I don't know if including random movies would make sense.

They got YouTube Premium, which is like $15, and 30TB of storage, which is a bit excessive with no direct equivalent, but 20TB runs around $100 a month.

highwaylights•8mo ago
I’m not seeing the relevance of YouTube and the One services to this at all.

I get that Big Tech loves to try to pull you into their orbit whenever you use one of their services, but this risks alienating customers who won’t use those unrelated services and may begrudge Google making them pay for them.

j_maffe•8mo ago
Idk if anyone will see these offerings as more than just an added bonus, especially when you compare to OAI, which asks for more for only the AI models.
OJFord•8mo ago
It's trying to normalise it, make it just another part of your Google experience, alongside (and integrated with) your other Google tools. (Though that's weakened a bit by calling it 'AI Pro/Ultra' imo.)
quitit•8mo ago
I imagine this could be seen as an anticompetitive lever, whereby Google uses its dominance in one field to reduce competition in another. Adding it here is a way to normalise that addition for when mass-market-priced plans become available.

Tucking it towards the end of the list doesn't change that.

bezier-curve•8mo ago
For $250/mo I would hope it includes API access to Gemini 2.5 pro, but it's nice to want things.
highwaylights•8mo ago
I can’t see a way that anyone would be able to give uncapped access to these models for a fixed price (unless you mean it’s scoped to your own use and not for company use? Even then, that’s still a risk to the provider.)
bezier-curve•8mo ago
I use Msty a lot for personal use. I like its ability to fork conversations. Seems like a simple feature but even ChatGPT's UI, which everyone has tried to copy, is fairly limited by comparison.
pc86•8mo ago
As a consumer it seems to me the low hanging fruit for these super-premium offerings is some substantial amount of API credits included every month. Back when API credits were a relatively new thing I used LLMs infrequently enough I just paid $5-10/mo for API credits and used a desktop UI to talk to ChatGPT.

Now they want $200, $250/mo which is borderline offensive, and you have to pay for any API use on top of that?

Aurornis•8mo ago
Putting API use into the monthly plans doesn't make a lot of business sense. The only people who would sign up specifically to use API requests on a monthly plan would be looking to have a lower overall bill, then they'd pay-per-request after that. It would be a net loss.
mrbluecoat•8mo ago
Amusing they picked a deadly jellyfish attacking earth as their hero image
sharpshadow•8mo ago
Technically, more than one person could use this subscription if it's used through the same device. That could also make it available to users in not-yet-supported countries.
aylmao•8mo ago
I can't decide how I feel about Google's design for this looking so Apple-y.

Didn't they just release Material Design Expressive a few days ago [1]? Instead of bold shapes, bold fonts, and solid colors, it's gradients, simple lines, frosted glass, and a single, clean sans-serif font here. The bento-box slides look quite Apple-y too [2]. Swap Google Sans for SF Pro, pull back on the border radius a bit, and you've essentially got the Apple look. It does look great, though.

[1]: https://news.ycombinator.com/item?id=43975352

[2]: https://blog.google/products/gemini/gemini-app-updates-io-20...

GuinansEyebrows•8mo ago
it makes sense if you believe that Google has zero business interest in UI/UX.

they've learned that they can shovel out pretty much anything and as long as they don't directly charge the end-user and they're able to put ads on it (or otherwise monetize it against the interest of the end user), they just don't care.

they've been criticized for years and years over their lack of standardization and relatively poorly-informed design choices especially when compared with Apple's HIG.

solomatov•8mo ago
Is there a premium option to control which of your data will be used for training? Or is it implemented the same way as Gemini Pro?
rudedogg•8mo ago
If anyone at Google cares: I'd pay for Gemini Pro (not this new $200+ ridiculousness) if they didn't train on my chats. I actually would like to subscribe.
j_maffe•8mo ago
There's already an option for that. The downside is you can't access your chat history.
buildfocus•8mo ago
And lots of other features don't work, particularly external integrations. Gemini on Android refuses to do basic things like set a timer unless chat history is enabled. It is the one key feature I really want to pay extra to get, and that preference goes x2 when the AI provider is Google.
submeta•8mo ago
Google Ultra: USD 250. Claude Pro: 218 EUR. ChatGPT Pro: 220 EUR.

Not included: Perplexity, Openrouter, Cursor, etc

Wow. You gotta have lots of disposable income.

thenaturalist•8mo ago
There are enough people who do. ;)

And from a business perspective, this is enabling everyone from solo freelancers to mid-level managers to do things for a fraction of the time and cost required to outsource them to humans.

Not that I am personally in favor of this, but I can very much see the economics in these offerings.

loudmax•8mo ago
The target market for these offerings are corporations, or self-employed developers. If these tools really do make your developers more productive, $250 a month is easily justifiable. That's only $3000 per year, a fraction of the cost of a full time developer.

Obviously, the benefit is contingent on whether or not the models actually make your developers more productive.

catigula•8mo ago
Do any of these companies actually sell their products as developer replacements?
mwigdahl•8mo ago
No, but they do sell them as developer augmentations.
catigula•8mo ago
Interesting because that post was comparing the cost directly to a developer salary.
6510•8mo ago
I hear it doesn't have to be productive right now, if you have deep pockets it is worth being familiar with the tools even if it is just in case.
mattfrommars•8mo ago
$250 a month is still a lot of money in India to spend on a digital product.
kkarakk•8mo ago
AI has completely eliminated salary-adjusted pricing in software. No discounts for the 3rd world in anything.
ivm•8mo ago
Yup, I was paying $225/mo for three Unity3D subscriptions (basic+iOS+Android) ten years ago, while earning less than $4k/mo – just considered it part of my self-employed expenses.
eastbound•8mo ago
The goal is to capture all your disposable AI income, so they can starve the competitors. The goal is, as long as you subscribe to several, increase the price.

And they haven’t yet found the price that stings.

ZeroTalent•8mo ago
That $250/month can make you $20k/month if you do some automation and subjectively unethical things
lazharichir•8mo ago
Like taking on a gazillion of contract work?
hollowturtle•8mo ago
Apart from the fact that clients usually are not stupid, you still need to understand requirements to guide the AI in the right direction. I don't usually understand my boss's tickets at first look; very often we need to discuss them. I doubt an AI could, despite the hype.
add-sub-mul-div•8mo ago
And this is still early days, the pre-enshittification era.
paxys•8mo ago
This isn't a luxury purchase. If you aren't able to increase your income by $250/mo using these tools or otherwise get $250/mo worth of value out of them then you shouldn't sign up.
i_love_retros•8mo ago
This is all a bit silly
qweiopqweiop•8mo ago
Am I the only one getting the AI fatigue?
hammock•8mo ago
Don’t worry, everyone is. That is why they are switching to quantum soon
charles_f•8mo ago
Quantum AI on the blockchain, now in Rust.
spookie•8mo ago
With data analytics visualized in AR
tsujamin•8mo ago
Obviously not speaking for others experience, but it all makes me feel pretty fatigued, and as if this growing expectation of "AI-enhanced productivity" is coming at the expense of a craft and process (writing software) that I enjoy.
throwaway2037•8mo ago
Real question: Did painters feel the same in the mid to late 1800s? In reality, photography didn't displace painting, it just became a new art form. Maybe LLM-written software will be similar.
zhivota•8mo ago
Ok so I have Google One AI or whatever the previous version of this is called, and what's wild to me is that in Google Sheets, if I ask it to do anything, literally anything, it says it can't do it. The only thing it can do is read a sheet's data and talk about it. It can't help you form formulas, add data to the sheet, organize things, anything, as far as I've seen.

How does Google have the best models according to benchmarks but it can't do anything useful with them? Sheets with AI assist on things like pivot tables would be absolutely incredible.

noosphr•8mo ago
>How does Google have the best models according to benchmarks but it can't do anything useful with them?

KPI driven development with no interest in killing their cash cow.

These are the people who sat on transformers for 5 years because they were too afraid they would eat their core business, e.g. BERT.

One need only look at what Bell Labs did to magnetic storage to realize that a monopoly isn't good for research. In short: we could have had mass magnetic storage in the 1920s/30s instead of the 50s/60s.

A pop sci article about it: https://gizmodo.com/how-ma-bell-shelved-the-future-for-60-ye...

catigula•8mo ago
I mean there's an article in Fortune magazine about the people pushing transformer "research" building doomsday bunkers.

Making Google look like the mature person in the room is a tall order but it seems to have been filled.

wrs•8mo ago
The AI assistant in Sheets doesn’t understand how even the basic features work. When it’s not saying it lacks basic knowledge, it hallucinates controls that don’t exist. Why even bother having it there?
zb3•8mo ago
Good good, please buy it so I can continue using Gemini for free.
retep_kram•8mo ago
Given that the AI scene is not stable at all at the moment (every day there's a new release that makes last month's obsolete), any offer that tries to lock you in with a model or model provider is a bad idea.

Pay-per-use for the moment, until market consolidation and/or commoditization.

Aurornis•8mo ago
> any offer that tries to lock you with a model or model provider is a bad idea.

It's a monthly plan that you can cancel at any time. Not really locking in.

sandspar•8mo ago
30TB of Google storage is a soft lock-in. If you fill it up you're kinda stuck.
charles_f•8mo ago
This is the kind of pricing that I expect most AI companies are gonna try to push for, and it might get even more expensive with time. When you see the delta between what's currently being burnt by OpenAI and what they bring home, the sweet spot is going to be hard to find.

Whether you find that you get $250 worth out of that subscription is going to be the big question

Ancapistani•8mo ago
I agree, and the problem is that "value" != "utilization".

It costs the provider the same whether the user is asking for advice on changing a recipe or building a comprehensive project plan for a major software product - but the latter provides much more value than the former.

How can you extract an optimal price from the high-value use cases without making it prohibitively expensive for the low-value ones?

Worse, the "low-value" use cases likely influence public perception a great deal. If you drive the general public off your platform in an attempt to extract value from the professionals, your platform may never grow to the point that the professionals hear about it in the first place.

garrickvanburen•8mo ago
this is the problem Google search originally had.

They successfully solved it with advertising... and they also had the ability to cache results.

mysterydip•8mo ago
Do LLMs cache results now? I assume a lot of the same questions get asked, although the answer could depend on previous conversational context.
make3•8mo ago
Maybe you could do something like speculative decoding, where you decode with a smaller model until the large model disagrees too much at checkpoints, but use a context-free cache in place of the smaller LLM from the original method. You could also do it multi-level: fixed context-free cache, small model, large model.
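The multi-level idea above can be sketched roughly as follows. This is a toy illustration, not a real system: `big_model_next_token` is a hypothetical stand-in for an expensive LLM call, and real speculative decoding would verify draft tokens in a single batched forward pass rather than one at a time.

```python
# Toy sketch: use a context-free cache of previous answers as the
# "draft model" in a speculative-decoding-style loop.

CACHE = {"capital of france?": ["The", "capital", "is", "Paris", "."]}

def normalize(prompt: str) -> str:
    # Context-free key: ignore case and whitespace, no conversation history.
    return " ".join(prompt.lower().split())

def big_model_next_token(prompt: str, generated: list[str]) -> str:
    # Stub for the large model; here it always "knows" the right answer.
    answer = ["The", "capital", "is", "Paris", "."]
    return answer[len(generated)] if len(generated) < len(answer) else "<eos>"

def generate(prompt: str, max_tokens: int = 16) -> list[str]:
    draft = CACHE.get(normalize(prompt), [])
    out: list[str] = []
    # Speculative phase: accept cached draft tokens while the large
    # model agrees (a real system would verify these in one batch).
    for tok in draft:
        if big_model_next_token(prompt, out) == tok:
            out.append(tok)
        else:
            break  # cache diverged from the large model; stop trusting it
    # Fallback phase: normal decoding with the large model.
    while len(out) < max_tokens:
        tok = big_model_next_token(prompt, out)
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(generate("Capital of France?"))  # cache hit: whole draft accepted
```

On a cache hit the large model only acts as a verifier; on a miss or divergence it decodes normally, so the cache can never change the output, only (in a batched implementation) make it cheaper.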
ethbr1•8mo ago
Something like higher-dimensional Huffman compression for queries?
cj•8mo ago
I imagine caching is directly in conflict with their desire to personalize chats by user.

See: ChatGPT's memory features. Also, the new "Projects" in ChatGPT, which allow you to create system prompts for a group of chats, etc. I imagine caching, at least in the traditional sense, is virtually impossible as soon as a user is logged in and uses any of these personalization features.

Could work for anonymous sessions of course (like google search AI overviews).

AnotherGoodName•8mo ago
Oh god, awful thought. LLMs slipping subliminal advertising into every output…
AnotherGoodName•8mo ago
What follows is a response to an educational query, with guidance to add subliminal ads, just to see if this is viable:

Ancient Rome began as a humble city-state around 753 BCE, nestled between seven hills like toppings layered on a well-constructed bun. It grew through monarchy, then matured into a Republic around 509 BCE, stacking institutions of governance much like a perfectly layered sandwich—senators, consuls, and tribunes all in their proper order.

Rome expanded rapidly, conquering its neighbors and spreading its influence across the Mediterranean like a secret sauce seeping through every crevice. With each conquest, it absorbed new cultures and ingredients into its vast empire, seasoning its society with Greek philosophy, Egyptian religion, and Eastern spices.

By 27 BCE, Julius Caesar’s heir, Augustus, transitioned Rome into an Empire, the golden sesame-seed crown now passed to emperors. Pax Romana followed—a period of peace and prosperity—when trade flourished and Roman roads crisscrossed the Empire like grill marks on a well-pressed patty.

However, no Empire lasts forever. Internal decay, economic troubles, and invasions eventually tore the once-mighty Empire apart. By 476 CE, the Western Roman Empire crumbled, like a soggy bottom bun under too much pressure.

Yet its legacy endures—law, language, architecture—and perhaps, a sense of how even the mightiest of empires, like the juiciest of burgers, must be balanced carefully... or risk falling apart in your hands.

jsheard•8mo ago
I wonder who will be the first to bite the bullet and try charging different rates for LLM inference depending on whether it's for commercial purposes. Enforcement would be a nightmare but they'd probably try to throw AI at that as well, successfully or not.
chis•8mo ago
I think there are always creative ways to differentiate the two tiers for those who care.

“Free tier users relinquish all rights to their (anonymized) queries, which may be used for training purposes. Enterprise tier, for $200/mo, guarantees queries can only be seen by the user”

emzo•8mo ago
This would be great for open source projects
jfrbfbreudh•8mo ago
This is what Google currently does for access to their top models.

AI Studio (web UI, free, will train on your data) vs API (won’t train on your data).

koakuma-chan•8mo ago
Can't train on my data if all my data is produced by them.
BoredPositron•8mo ago
If you use the API for free the data is used for training.
ethbr1•8mo ago
The bigger commercial / enterprise differentiator will probably be around audit and guardrails.

Unnecessary for individual use; required for scaled corporate use.

AbstractH24•8mo ago
The SSO premium of the AI era
ethbr1•8mo ago
Features are better price segmenters than utilization.
otabdeveloper4•8mo ago
> guarantees queries can only be seen by the user

The only way to "guarantee" that is to run your models locally on your own hardware.

I'm guessing we'll see a renaissance of the "desktop" and "workstation" cycle once this AI bubble pops. ("Cloud" will be the big loser.)

chw9e•8mo ago
Probably the idea behind the coding tools eventually. Cursor charges a 20% margin on every token for their Max models, but people still use them.
beefnugs•8mo ago
I think the real problem is that is even an option. I am not a good businessman, but i have seen good ideas fail because the company depends upon the good graces of another company. If someone can decide to just fuck you over for any reason, it will happen sooner or later

Sending all your core IP through another company for them to judge your worthiness of existence, is a nightmare on so many levels , the biggest example being payment processors trying to impose their religious doctrine on entire populations

typewithrhythm•8mo ago
Value capture pricing is a fantasy often spouted by salesmen, the current era AI systems have limited differentiation, so the final cost will trend towards the cost to run the system.

So far I have not been convinced that any particular platform is more than 3 months ahead of the competition.

bryanlarsen•8mo ago
OpenAI claims their $200/month plan is not profitable. So this is cost level pricing, not value capture level pricing.
margalabargala•8mo ago
Not profitable against the cost to train and run the model plus R&D salaries, or just against the cost to run the model?
philistine•8mo ago
While interesting as a matter of discourse, for any serious consideration you must consider the R&D costs when pricing a model. You have to pay for it somehow.
bippihippi1•8mo ago
How long you amortize the R&D costs over is important too. Do significant discoveries remain relevant long enough to spread the cost out? I'd bet that in the current ML market, advances are happening fast enough that they aren't factoring R&D costs into pricing right now. In fact, getting users to use it is probably giving them a lot of value. Think of all the data.
margalabargala•8mo ago
There are multiple pathways here.

Company 1 gets a bucket of investment, makes a model, goes belly up. Company 2 buys Company 1's model in a fire sale.

Company 3 uses some open source model that's basically as good as any other and just makes the prettiest wrapper.

Company 4 resells access to other company's models at a discount, similar to companies reselling cellular service.

panarky•8mo ago
Not profitable given their loss-leader rate limits.

Platforms want Planet Fitness type subscriptions, recurring revenue streams where most users rarely use the product.

That works fine at the $20/month price point but it won't work at $200+ per month because the instant I stop using an expensive plan, I cancel.

And if I want to use $1000 worth of the expensive plan I get stopped by rate limits.

Maybe the ultra-level would generate more revenue with bigger market share (but lower margin) with a pay-per-token plan.

ziofill•8mo ago
I don’t know how, but we’re in this weird regime where companies are happy to offer “value” at the cost of needing so much compute that a 200+$/mo subscription still won’t make it profitable. What the hell? A few years ago they would have throttled the compute or put more resources on making systems more efficient. A 200$/month unprofitable subscription business was a non-starter.
ethbr1•8mo ago
> A 200$/month unprofitable subscription business was a non-starter.

Did we live through the same recent ZIRP period from 2009-2022? WeWork? MoviePass?

tonyhart7•8mo ago
As the Anthropic CEO says, the cash cow is the enterprise offering.

qingcharles•8mo ago
We are currently living in blessed times like the dotcom boom in 1999 where they are handing out free cars if you agree to have a sticker on the side. This tech is being wildly subsidized to try and capture customers, but for average Joe there is no difference from one product to the next, except branding.
tonyhart7•8mo ago
"average Joe there is no difference from one product to the next"

Yeah, that's why OpenAI is building data centers imo; the moat is the hardware.

Software? Even a small Chinese firm would be able to copy that. But 2 million GPUs? That's hard to copy.

briansm•8mo ago
The AI hardware requirements are currently insane; the models are doing with megawatts of power and warehouses full of hardware what an average Joe does with 20 watts and a 'bowl of noodles'.
KineticLensman•8mo ago
They handle many more requests per second than an average Joe
otabdeveloper4•8mo ago
Not really. They have large contexts and lack of proper caching for "reasons".
otabdeveloper4•8mo ago
Skill issue.

You can easily get 10x optimizations with some obvious changes.

You can run a small 100 person enterprise on a single 24 gb GPU right now. (And this is before economies of scale have started optimizing hardware.)

OpenAI needs to keep the illusion of an anthropomorphic AGI chatbot going to keep the investments flowing. This is expensive and stupid.

If you just want to solve the actual typical business problems ("check this picture for offensive content" and similar stuff) you don't need all that smoke and mirrors.

disgruntledphd2•8mo ago
Google have a much, much, much better cost basis for this stuff though, as they have their own chips.
rangestransform•8mo ago
See: nvidia product segmentation by VRAM and FP64 performance, but shipping CUDA for even the lowliest budget turd MX150 GPU. Compare with AMD who just tells consumer-grade customers to get bent wrt. GPU compute
AbstractH24•8mo ago
But both are of tremendous value to advertisers

Much like social media, this will end in “if you aren’t paying for the product, then you are the product.”

tmaly•8mo ago
I pay for both ChatGPT and Grok at the moment. I often find myself not using them as much as I had hoped for the $50 a month it is costing me. I think if I were to shell out $250 I best be using it for a side project that is bringing in cash flow. But I am not sure if I could come up with anything at this point given current AI capabilities.
sushid•8mo ago
Why did you settle on ChatGPT and Grok? I paid annual for Claude and have Perplexity Pro via a promo but if I were to pick two, I think I'd personally settle for ChatGPT and Gemini right now.
tmaly•8mo ago
I started with ChatGPT. I had tried Grok early on and it was very good. I might drop it if 3.5 does not impress and replace it with Gemini.

I do really like the Deep Search on Grok for doing web search and analysis. It is saving me a ton of time.

morkalork•8mo ago
Costs more than seats for Office 365, Salesforce and many productivity tools. I don't see management gleefully running to give access to whole departments. But then again, if you could drop headcount by just 1 on a team by giving it to the rest, you probably come out ahead.
EasyMark•8mo ago
I feel prices will come down a lot for "viable" AI, not everyone needs the latest and greatest at rock-bottom prices. Assuming AGI is just a pipe-dream with LLMs as I suspect.
Wowfunhappy•8mo ago
> When you see the delta between what's currently being burnt by OpenAI and what they bring home, the sweet spot is going to be hard to find.

Moore's law should help as well, shouldn't it? GPUs will keep getting cheaper.

Unless the models also get more GPU hungry, but 2025-level performance, at least, shouldn't get more expensive.

dvt•8mo ago
> Moore's law should help as well, shouldn't it? GPUs will keep getting cheaper.

Maybe I'm misremembering, but I thought Moore's law doesn't apply to GPUs?

Wowfunhappy•8mo ago
I don't know the details, but this feels like it can't be true just from looking at how video games have progressed.
moorelaw282•8mo ago
In modern times Moore’s law applies more to GPUs than CPUs. It’s much easier to scale GPU performance by just adding cores, while real-world CPU performance is inherently limited by single-threaded work.
godelski•8mo ago
Not necessarily. The prevailing paradigm is that performance scales with size (of data and compute power).

Of course, this is observably false as we have a long list of smaller models that require fewer resources to train and/or deploy with equal or better performance than larger ones. That's without using distillation, reduced precision/quantization, pruning, or similar techniques[0].

The real thing we need is more investment into reducing the computational resources needed to train and deploy models, and into model optimization (the best example being llama.cpp). I can tell you from personal experience that there is much lower interest in this type of research, and I've seen plenty of works rejected because "why train a small model when you can just tune a large one?" or "does this scale?"[1] I'd also argue that this is important because there's not infinite data nor compute.

[0] https://arxiv.org/abs/2407.05694

[1] Those works will out perform the larger models. The question is good, but this creates a barrier to funding. Costs a lot to test at scale, you can't get funding if you don't have good evidence, and it often won't be considered evidence if it isn't published. There's always more questions, every work is limited, but smaller compute works have higher bars than big compute works.

jorvi•8mo ago
Small models will get really hot once they start hitting good accuracy & speed on 16GB phones and laptops.
godelski•8mo ago
Much of this already exists. But if you're expecting identical performance as the giant models, well that's a moving goalpost.

The paper I linked explicitly mentions how Falcon 180B is outperformed by Llama-3 8B. You can find plenty of similar cases all over the lmarena leaderboard. This year's small model is better than last year's big model. But the Overton window shifts. GPT-3 was going to replace everyone. Then 3.5 came out and GPT-3 is shit. Then o1 came out and 3.5 is garbage.

What is "good accuracy" is not a fixed metric. If you want to move this to the domain of classification, detection, and segmentation, the same applies. I've had multiple papers rejected where our model with <10% of the parameters of a large model matches performance (obviously this is much faster too).

But yeah, there are diminishing returns with scale. And I suspect you're right that these small models will become more popular when those limits hit harder. But I think one of the critical things that prevents us from progressing faster is that we evaluate research as if they are products. Methods that work for classification very likely work for detection, segmentation, and even generation. But this won't always be tested because frankly, the people usually working on model efficiency have far fewer computational resources themselves. Necessitating that they run fewer experiments. This is fine if you're not evaluating a product, but you end up reinventing techniques when you are.

sgarland•8mo ago
> I've seen plenty of works rejected because "why train a small model when you can just tune a large one?" or "does this scale?" I'd also argue that this is important because there's not infinite data nor compute.

Welcome to cloud world, where devs believe that compute is in fact infinite, so why bother profiling and improving your code? You can just request more cores and memory, and the magic K8s box will dutifully spawn more instances for you.

godelski•8mo ago
My favorite is retconning Knuth's "Premature optimization is the root of all evil" from "get a fucking profiler" to "you heard it! Don't optimize!"
kllrnohj•8mo ago
> GPUs will keep getting cheaper. [...] but 2025-level performance, at least, shouldn't get more expensive.

This generation of GPUs has worse performance for more money than the previous generation. At best, $/perf has been a flat line for the past few generations. Given fab realities nowadays, along with what works best for GPUs (the bigger the die the better), it doesn't seem likely that there will be any price scaling in the near future, not unless there's some drastic change in fabrication prices.

Wowfunhappy•8mo ago
I mean, I upgraded from a GTX 1080 Ti to an RTX 4080 last summer, and the difference in graphical quality I can get in games is pretty great. That was a multi-generation upgrade, but when exactly do you think GPU performance per dollar flat-lined?
kllrnohj•8mo ago

   1080 Ti -> 2080: 10% faster for same MSRP
   2080 -> 3080: ~70% faster for the same MSRP
   3080 -> 4080: 50% faster, but $700 vs. $1200 is *more than 50% more expensive*
   4080 -> 5080: 10% faster, but $1200 (or $1000 for 4080 Super) vs. $1400-1700 is again more than 10% more money.
So yes, your 1080 Ti -> 4080 is a huge leap, but there are basically just two reasons why: 1) the price also took a huge leap, and 2) the 20xx -> 30xx transition was actually a generational leap, which unfortunately is an outlier, as the 20xx, 40xx, and 50xx series were all steaming piles of generational shit. Well, I guess to be fair to the 20xx, it did at least manage to not regress $/performance like the 40xx and 50xx series did. Barely.
ivape•8mo ago
A developer will always get $250 worth of that subscription.
fellowniusmonk•8mo ago
As someone who grew up very poor and first got access to email via Juno's free ad-based email client, I've seen so many people over the years just absolutely shit on ad-based models.

But ad based models are probably the least regressive approach to commercial offerings that we've seen work in the wild.

I love ads. If you are smart you don't have to see them. If you are poor and smart you get free services without ads so you don't fall behind.

I notice that there are no free open-source providers of LLM services at this point. It's almost as if services with high compute costs have to be paid for SOMEHOW.

Hopefully we get a Juno for LLM soon so that whole cycle can start again.

leoh•8mo ago
Ads have really harmed our society imo, despite having some advantages as you mention
danesparza•8mo ago
I don't mean to be snarky, but is this announcement timed just so they take press away from the Microsoft Copilot announcement?
jonas21•8mo ago
Today was the Google I/O keynote. The date was set months in advance.
Analemma_•8mo ago
It’s exactly the opposite: the date for I/O was fixed months ago, Microsoft made their announcement to try and take press away from Google.
Ancapistani•8mo ago
I've toyed with Gemini 2.5 briefly and was impressed... but I just can't bring myself to see Google as an option as an inference provider. I don't trust them.

Actually, that's not true. I do trust them - I trust them to collect as much data as possible and to exploit those data to the greatest extent they can.

I'm deep enough into AI that what I really want is a personal RAG service that exposes itself to an arbitrary model at runtime. I'd prefer to run inference locally, but that's not yet practical for what I want it to do, so I use privacy-oriented services like Venice.ai where I can. When there's no other reasonable alternative I'll use Anthropic or OpenAI.

I don't trust any of the big providers, but I'm realizing that I have baseline hostility toward Google in 2025.

nowittyusername•8mo ago
Understanding that no outside provider is going to care about your privacy, and will always choose to sell you their crappy advertisements and push their agenda on you, is the first step in building a solution. In my opinion that solution will come in the form of a personalized local AI agent which is the gatekeeper of all information the user receives and sends to the outside world: a fully context-aware agent that has the user's interests in mind, and so only provides user-agreed context to other AI systems, and also filters all incoming information for spam, agenda manipulation, etc. Basically a very advanced spam blocker of the future that is 100% local and fully user-controlled and calibrated. I think we should all be working on something like this if we want to keep our sanity in this brave new world.
Ancapistani•8mo ago
Exactly.

To be clear, I don't trust Venice either. It just seems less likely to me that they would both lie about their collection practices and be able to deeply exploit the data.

I definitely want locally-managed data at the very least.

CSMastermind•8mo ago
I pay for OpenAI Pro but this is a clear no for me. I just don't get enough value out of Gemini to justify a bump from $20 / month to $250.

If they really want to win they should undercut OpenAI and convince people to switch. For $100 / month I'd downgrade my OpenAI Pro subscription and switch to Gemini Ultra.

radicality•8mo ago
It does look like it comes with a few other perks that would normally cost a bunch too, specifically, 30TB of Google drive storage
airstrike•8mo ago
Yeah, no, thanks for the cross-sell but I'm not interested.
jameslk•8mo ago
It’s the modern day Comcast Triple Play: internet, cable, and phone
philistine•8mo ago
Except Comcast didn't have a reputation for shutting down services left and right. How much are you willing to bet that one of those services in that bundle is discontinued within a year?
tmpz22•8mo ago
You're coming awfully close to defending Comcast haha. GP's point is more that they're Comcast-like, director level incentives have become the primary focus of the company such that they will ram down dark patterns for short term profit at the cost of long term growth and product excellence - just as Comcast has done for the last two decades.

Throwing the baby out with the bathwater, Google crumbles but a few more vacation homes get purchased and a larger inheritance is built up for the iPad-kid progeny of the Google management class.

croes•8mo ago
Doesn’t matter if you have no use for these 30TB
J_Shelby_J•8mo ago
If they really want to win, they should make a competitor for o1-pro. It's worth $200 to reduce LLM babysitting needs by 10%.
mvdtnz•8mo ago
Perhaps they're not interested in beating OpenAI in the business of selling $1 for $0.50.
CSMastermind•8mo ago
Sure but failure to capture marketshare now could easily snowball into larger failure of the business later.
lnenad•8mo ago
This isn't some 30 people startup. Google's revenue from other sources will easily keep them running and building on top of these products with close to zero chance of failing no matter what happens with AI in the next decade.
crowbahr•8mo ago
I mean OpenAI already loses money on their Pro line. So it's less selling $1 for $0.50 and more selling $1 for $0.25 because the guy down the street sells it for $0.50
hackrmn•8mo ago
Ok, this reminds me of that Black Mirror episode,

*spoilers ahead*

where the lady had a fatal tumor cut out in an emergency procedure, only for it to be replaced by a synthetic neural network backed by a cloud service with a multi-tier subscription model, where even the basic features are "conveniently" shoved into a paying tier, up until the point she's on life support after being unable to afford even the basic subscription.

Life imitates art.

j_maffe•8mo ago
wdym life imitates art? This is exactly what the episode was about lol
johnisgood•8mo ago
Art imitates life!
hackrmn•8mo ago
I was referring to Google's AI Ultra "imitating" Black Mirror (knowingly or not).
OtherShrezzing•8mo ago
The global average salary is somewhere in the region of $1500.

There’s lots of people and companies out there with $250 to spend on these subscriptions per seat, but on a global scale (where Google operates), these are pretty niche markets being targeted. That doesn’t align well with the multiple trillions of dollars in increased market cap we’ve seen over the last few years at Google, Nvda, MS etc.

paxys•8mo ago
New technology always starts off available to the elite and then slowly makes its way down to everyone. AI is no different.
dimitrios1•8mo ago
This is one of those assumed truisms that turns out to be false upon close scrutiny, and there's a bit of survivorship bias in the sense that we tend to look at the technologies that had mass appeal and market forces to make them cheaper and available to all. But there's tons of new tech that's effectively unobtainable to the vast majority of populations, heck even nation states. With the current prohibitive costs (in terms of processing power, energy costs, data center costs) to train these next generation models, and the walled gardens that have been erected, there's no reason to believe the good stuff is going to get cheaper anytime soon, in my opinion.
paxys•8mo ago
> turns out to be false upon close scrutiny

Care to share that scrutiny?

Computers, internet, cell phones, smartphones, cameras, long distance communication, GPS, televisions, radios, refrigerators, cars, air travel, light bulbs, guns, books. Go back as far as you want and this still holds true. You think the majority of the planet could afford any of these on day 1?

kkarakk•8mo ago
The point is not that AI services will be affordable "eventually", it's that the advantage is so crazy that people who don't have access to them will NEVER be able to catch up. First AI wrappers disrupt industries -> developing nations can't compete because the services are priced prohibitively -> AI wrappers take over even more -> automation disrupts the need for anyone -> developing nations never develop further. This seems more and more likely, not less. Cutting-edge GPUs, for example, are already going into the stratosphere pricing-wise and are additionally being sanctioned off.
tekla•8mo ago
How is this different from literally all of human history?
hombre_fatal•8mo ago
It seems you're suggesting that once you start this process of building tech on top of tech, then you get far ahead of everyone because they all have to independently figure out all the intermediate steps. But in reality, don't they get to skip to the end?

e.g. Nations who developed internet infrastructure later got to skip copper cables and go straight to optical tech while US is still left with old first-mover infrastructure.

AI doesn't seem unique.

philistine•8mo ago
> e.g. Nations who developed internet infrastructure later got to skip copper cables and go straight to optical tech

Actually, they skipped cables entirely. Africa is mostly served by wireless phone providers.

sxg•8mo ago
I disagree. There are massive fixed costs to developing LLMs that are best amortized over a massive number of users. So there's an incentive to make the cost as cheap as possible and LLMs more accessible to recoup those fixed costs.

Yes, there are also high variable costs involved, so there’s also a floor to how cheap they can get today. However, hardware will continue to get cheaper and more powerful while users can still massively benefit from the current generation of LLMs. So it is possible for these products to become overall cheaper and more accessible using low-end future hardware with current generation LLMs. I think Llama 4 running on a future RTX 7060 in 2029 could be served at a pretty low cost while still providing a ton of value for most users.
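To make the amortization point concrete, here's a toy back-of-envelope sketch (every number below is made up purely for illustration, not any vendor's actual cost):

```python
# Toy model: per-user monthly cost = amortized fixed (training) cost
# + variable (inference) cost. All figures are hypothetical.

def monthly_cost_per_user(fixed_training_cost, amortization_months, users,
                          variable_cost_per_user):
    amortized = fixed_training_cost / amortization_months / users
    return amortized + variable_cost_per_user

# A hypothetical $100M training run amortized over 24 months:
small_scale = monthly_cost_per_user(100e6, 24, 100_000, 10.0)     # ~$51.67/user
large_scale = monthly_cost_per_user(100e6, 24, 10_000_000, 10.0)  # ~$10.42/user
```

The fixed-cost term shrinks toward zero as the user base grows, which is why the incentive points toward mass-market pricing even when variable inference costs set a floor.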

TulliusCicero•8mo ago
Yeah, GP is overextending by saying it's always true.

The more basic assertion would be: something being expensive doesn't mean it can't be cheap later, as many popular and affordable consumer products today started out very expensive.

timewizard•8mo ago
The technology itself is not useful. What they're really selling is the data it was trained on. Most of which was generated by students and the working class. So there's a unique extra layer of exploitation in these pricing models.
Wowfunhappy•8mo ago
...I don't understand where this take keeps coming from.

You can be upset that the models were trained without compensating the people who made the training data. You can also believe that AI is overhyped, and/or that we're already near the top of the LLM innovation curve and things aren't going to get much better from here.

But I've had LLMs write entire custom applications for me, with the exact feature set I need for my own personal use case. I am sure this software did not somehow exist fully formed in the training data! The system created something new, and it's something that has real value, at least to me personally!

otabdeveloper4•8mo ago
> I am sure this software did not somehow exist fully formed in the training data!

I'm sure it did exist in the training data. It's trained on Github and Stackoverflow. Your "custom" application has already been written many times before.

Wowfunhappy•8mo ago
And every time I tested a feature and changed my mind about the minutia of how it should work, and I gave the AI new instructions and it complied--every permutation of that already existed in some GitHub repository somewhere?

I'm sorry, I just find that exceedingly hard to believe. There is a lot of legacy code out there in the world, but not that much!

pier25•8mo ago
Do you have a source for the $1500 number? Seems pretty high.
bradleybuda•8mo ago
It's 6x that. The median is 2x: https://chatgpt.com/share/682ceb2a-b56c-800b-b49d-1a24c48709...
Aurornis•8mo ago
> The global average salary is somewhere in the region of $1500.

The global average salary earner isn't doing a computer job that benefits from AI.

I don't understand the point of this comparison.

timewizard•8mo ago
> The global average salary earner isn't doing a computer job that benefits from AI.

Do you mean the current half baked implementations or just the idea of AI in general?

> I don't understand the point of this comparison.

I don't understand the point of "AI."

OtherShrezzing•8mo ago
The story being told at Wall Street is that this is a once-in-an-era revolution in work akin to the Industrial Revolution. That’s driving multiple trillions of dollars in market cap into the companies in AI markets.

That story doesn’t line up with a product whose price point limits it to fewer than 25-50mn subscriptions shared between 5 inference vendors.

Quarrel•8mo ago
The $250 is just rate limiting at the moment. It isn't a real price; I doubt it is based on cost-recovery or what they think the market can bear.

They need users to make them a mature product, and this rate-limits the number of users while putting a stake in the ground to help understand the "value" they can attribute to the product.

julianpye•8mo ago
Why do people keep on saying that corporations will pay these price-tags? Most corporations really keep a very tight lid on their software license costs. A $250 license will be only provided for individuals with very high justification barriers and the resulting envy effects will be a horror for HR. I think it will be rather individuals who will be paying out of their pocket and boosting their internal results. And outside of those areas in California where apples cost $5 in the supermarket I don't see many individuals capable of paying these rates.
troupo•8mo ago
Corps will likely negotiate bulk pricing and discounts, with extra layers of guarantees like "don't use and share our data" on top
bryanlarsen•8mo ago
"AI will make us X% more productive. 100%-X% of you are fired, the rest get a $250/month license".
kulahan•8mo ago
I don’t see any benefit to removing humans in order to achieve the exact same level of efficiency… wouldn’t that just straight-up guarantee a worse product unless your employees were absolutely all horrendous to begin with?
bryanlarsen•8mo ago
It'll improve profit margins for a brief moment, long enough for the execs making the decision to cash out.
ctkhn•8mo ago
That's what most execs already believe, it's all just bean counting
throwaway2037•8mo ago
I foresee a slightly different outcome: If companies can genuinely enhance worker productivity with LLMs (for many roles, this will be true), then they can expand their business without hiring more people. Instead of firing, they will slow the rate of hiring. Finally, the 250 USD/month license isn't that much of a cost burden if you start with the most senior people, then slowly extend the privilege to lower and lower levels, carefully deciding if the role will be positively impacted by access to a high quality LLM. (This is similar to how Wall Street trading floors decide who gets access to expensive market data via Reuters or Bloomberg terminal.)

For non-technical office jobs, LLMs will act like a good summer intern, and help to suppress new graduate hiring. Stuff like HR, legal, compliance, executive assistants, sales, marketing/PR, and accounting will all greatly benefit from LLMs. Programming will take much longer because it requires incredibly precise outputs.

One low hanging fruit for programming and LLMs: What if Microsoft creates a plug-in to the VBA editor in Microsoft Office (Word, Excel, etc.) that can help to write VBA code? For more than 25 years, I have watched non-technical people use VBA, and I have generally been impressed with the results. Sure, their code looks like shit and everything has hard-coded limits, but it helps them do their work faster. It is a small miracle what people can teach themselves with (1) a few chapters of an introductory VBA book, (2) some blog posts / Google searches, and (3) macro recording. If you added (4) LLM, then it would greatly boost the productivity of Microsoft Office power users.

verdverm•8mo ago
We just signed up to spend $60+/month for every dev to have access to Copilot because the ROI is there. If $250/month saves several hours per month for a person, it makes financial sense.
delusional•8mo ago
We signed up for that too. 2 quarters later the will to pay is significantly lower.
tacker2000•8mo ago
How are you measuring this? How do you know it is paying off?
afroboy•8mo ago
And why didn't the AI hype train work on the gaming industry? Why didn't it save game devs hundreds of hours and get us the latest GTA any sooner?

I'm not sure it's correct to measure the benefits of AI by the lines of code we write, rather than by how much faster we ship quality features.

Aurornis•8mo ago
$60/month pays off if it saves even an hour of developer time over a month.

It's really not hard to save several hours of time over a month using AI tools. Even the Copilot autocomplete saves me several seconds here and there multiple times per hour.
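The break-even arithmetic here is simple enough to sketch (the hourly rate is an assumed illustrative figure, not anyone's actual cost):

```python
# Break-even: a seat pays for itself once it saves more than
# subscription_cost / loaded_hourly_rate hours per month.

def breakeven_hours(subscription_per_month, loaded_hourly_rate):
    return subscription_per_month / loaded_hourly_rate

# At an assumed $100/hr fully loaded developer cost:
breakeven_hours(60, 100)   # 0.6 hours/month for the $60 seat
breakeven_hours(250, 100)  # 2.5 hours/month for the $250 seat
```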

kikimora•8mo ago
But doesn’t it also waste a few seconds of your time here and there when it fails to autocomplete and writes bad code you have to understand and fix?
verdverm•8mo ago
Typically you have to confirm additions, and cancellation is just a press of the ESC key. Ctrl+Z is available too.

Even when the code is not 100% correct, it's often faster to select it and make the small fix myself than to type all of it out myself. It's surprisingly good about keeping your patterns for naming and using recent edits as context for what you are likely to do next around your cursor position, even across files.

julianpye•8mo ago
Okay, but you're in a S/W team in a corp, where everyone's main task is to code. A coding agent has clear benefits here.

This is not the usecase of AI Ultra.

Aurornis•8mo ago
This isn't really out of line with many other SaaS licenses that companies pay for.

This also includes things like video and image generation, where certain departments might previously have been paying thousands of dollars for images or custom video. I can think of dozens of instances where a single Veo2/3 video clip would have been more than good enough to replace something we previously had to pay a lot of money for and waste a lot of time acquiring.

You might be comparing this to one-off developer tool purchases, which come out of different budgets. This is something that might come out of the Marketing Team's budget, where $250/month is peanuts relative to all of the services they were previously outsourcing.

I think people are also missing the $20/month plan right next to it. That's where most people will end up. The $250/month plan is only for people who are bumping into usage limits constantly or who need access to something very specific to do their job.

browningstreet•8mo ago
The big problem for companies is that every SaaS vendor they use wants to upsell AI add-on licensing upgrades. Companies won’t buy the AI option for every app they’re licensing today. Something will have to give.
ethbr1•8mo ago
BYOLLM is the future.

Nobody outside of the major players (Microsoft, Google, Apple, Salesforce) has enough product suite eyeball time to justify a first-party subscription.

Most companies didn't target it in their first AI release because there was revenue lying on the ground. But the market will rapidly pressure them to support BYOLLM in their next major feature build.

They're still going to try to charge an add-on price on top of BYOLLM... but that margin is going to compress substantially.

Which means we're probably t minus 1 year from everyone outside the above mentioned players being courted and cut revenue-sharing deals in exchange for making one LLM provider their "preferred" solution with easier BYOLLM. (E.g. Microsoft pays SaaS Vendor X behind the scenes to drive BYOLLM traffic their way)

MandieD•8mo ago
When Docker pulled their subscription shenanigans, the global auto parts manufacturer I work for wasn't delighted when they saw $5 (or was it 7?)/month/user, but were ready to suck it up for a few hundred devs.

They noped right out when it turned out to be more like $20/month/user, not payable by purchase order, and instead spent a developer month cobbling together our own substitute involving Windows Subsystem for Linux, because it would pay off within two months.

ir77•8mo ago
people here keep saying that this is targeted at big companies/corporations. the big company that i work for explicitly block uploads of data to these services and we're forbidden to put anything company related in there for many reasons, even if you use your own account, we don't have 'company accounts'.

so no, i can't see companies getting all excited about buying $250mo/user licenses for their employees for google or chatgpt to suck in their proprietary data.

verdverm•8mo ago
These subscriptions explicitly do not suck in your proprietary data, it's all laid out in their ToS.
quantumHazer•8mo ago
Yeah, and who will hold them accountable? How can you verify that they're not stealing your data anyway? These companies don't give a shit about copyright or privacy.
ndriscoll•8mo ago
I don't think I've ever worked somewhere where you wouldn't get fired for sending company data to a party that doesn't have an NDA signed with the company, regardless of whatever ToS they have.
verdverm•8mo ago
What is everyone using GitHub and AWS doing? They certainly all do not have NDAs with their code hosts and cloud providers

It's in their interest to do right by their customers' data, otherwise there will be a heap of legal trouble and a major reputation hit, both of which impact the bottom line more than training on your data would. They can and do in fact make more money by offering to train dedicated models for your company on your data without bringing that back to their own models.

ndriscoll•8mo ago
Everywhere I've worked has been strictly on-prem for source code (even when using Github), but I imagine company lawyers get involved with any kind of SaaS procurement? Like as an individual I'm not authorized to agree to any terms with anyone on behalf of my company. I know I've been in sales calls with suppliers where we had to wait for NDAs to be in place on both sides before we could talk.
verdverm•8mo ago
I can relay that NDAs are very rare in typical saas procurement and that the ToS with data addendums are satisfactory for most people
sigmaisaletter•8mo ago
The same companies who stole... sorry.. fair-used all the worlds artworks and all text on the internet to train their models are now promising you they won't steal...sorry... fair-use your uploaded data?

In unrelated matters, I have a bridge to sell you, if you are interested.

croes•8mo ago
They say that, but how would you know if they lie?
Havoc•8mo ago
There are enterprise offerings that solve that. Guarantees that the data won’t be trained on etc.

I’m at a major financial company and we’ve had access to ChatGPT for over a year along with explicit approval to upload anything while it’s in enterprise mode

It’s a solved problem - technical, regulatory, legal.

bdangubic•8mo ago
no sane company would go for this with proprietary data - local models only. the “solved problem - technical, regulatory, legal” is that… until it isn’t … and that time always comes
Havoc•8mo ago
>no sane company would go for this with proprietary data

Well then you might want to pull your pension and investments and keep it under your pillow in gold bar format. In fact maybe check out of the world's financial system entirely.

I don't know the technical details on how they arrived at that, but I assure you the big dogs have concluded this works.

Besides, half the world runs on Excel files saved in the cloud.

throwaway2037•8mo ago
We have the same for GitLab Duo. Again, I work for a "global mega-corp" who would never want to leak their internal data. Do you know if your ChatGPT runs on-prem? I wondered that about our GitLab Duo access.
spaceman_2020•8mo ago
Honestly, at this point, nation states will have to figure out an AI strategy. Poorer countries where the locals can't afford cutting edge AI tools will find themselves outproduced by wealthier workers.
dbspin•8mo ago
Not just can't afford, but can't access. Most of these new AI tools aren't available outside the US.
Papazsazsa•8mo ago
I just tried it.

8 out of 10 attempts failed to produce audio, and of those only 1 didn't suck.

I suppose that's normal(?) but I won't be paying this much monthly if the results aren't better, or at least I'd expect some sort of refund mechanism.

Papazsazsa•8mo ago
Yeah, after messing around with it all day, it's awful. Someone rushed this out the door.
johnisgood•8mo ago
They are running out of ideas for names. What next, Google AI Ultra Max Pro?
loloquwowndueo•8mo ago
You joke but don’t forget there’s an actual product called “product name” pro max (an iPhone lol)
johnisgood•8mo ago
Ffs... I actually had no idea lmao.

Well, that does make sense then.

Fergusonb•8mo ago
That's a car payment...
backendEngineer•8mo ago
All to put even more AI generated bs down our throats to sell us trash that we don't need and can't afford. God the roaring twenties can't catch a break.
MOARDONGZPLZ•8mo ago
I use it to make meeting agendas.
icelancer•8mo ago
I paid for it and Google Flow was upgraded to Ultra but Gemini still shows Pro, and asks for me to upgrade. When I go to "upgrade," it says I am already at Google Ultra.

Average Google product launch.

siliconc0w•8mo ago
As long as these companies have an API, I imagine it's going to be cheaper to pay a la carte than to pay monthly. $250 is a lot of API calls, especially as competition drives the cost lower. For stuff not behind an API, you're kinda shooting yourself in the foot, because other providers might offer one, and at the very least it means developers aren't going to adopt it.
danenania•8mo ago
It might not be as many API calls as you think. Taking OpenAI as an example, if you're using the most expensive models like o3, gpt-4.5, o1-pro, etc. heavily for coding in large codebases with lots of context, you can easily spend hundreds per month, or even thousands.

So for now, the pro plans are a good deal if you're using one provider heavily, in that you can theoretically get like a 90% discount on inference if you use it enough. They are essentially offering an uncapped amount of inference.

That said, these companies have every incentive to gradually reduce the relative value offered by these plans over time to make them profitable, and they have many levers they can use to accomplish that. So in the long run, API costs and 'pro plan' costs will likely start to converge.
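A rough way to compare a la carte spend against a flat plan (the token prices and usage pattern below are assumptions for illustration, not any vendor's actual rates):

```python
# Compare monthly API cost at assumed per-token prices vs. a flat subscription.

def api_monthly_cost(requests_per_day, input_tokens, output_tokens,
                     price_in_per_m, price_out_per_m, days=30):
    per_request = (input_tokens * price_in_per_m +
                   output_tokens * price_out_per_m) / 1e6
    return requests_per_day * per_request * days

# Hypothetical heavy coding use: 200 requests/day, 20k input / 2k output
# tokens each, at assumed $10/M input and $40/M output token prices.
cost = api_monthly_cost(200, 20_000, 2_000, 10, 40)  # $1,680/month
flat_plan = 250  # well below a la carte at this usage level
```

Under these assumed numbers the flat plan is the 85%+ discount described above; at light usage the inequality flips and the API wins.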

spondyl•8mo ago
Can you... talk to a human for support? Perhaps I'm just used to SSO tax billing pages where I expect the rightmost column to mention that. I was partly expecting it because it'd be ironic to see people complaining that a model hallucinated only for some engineer at Google to shrug and be like "Nothing we can do about it"
paxys•8mo ago
Unless that human is an AI researcher at Google what support are you expecting?
macrolime•8mo ago
It's Google. Obviously not.
throwaway2037•8mo ago

    > Can you... talk to a human for support?
You raise an interesting point. I wonder if there is a (low margin) business waiting to be started that provides technical support for LLMs. As I understand, none of the major, commercial LLMs provide technical support (I don't count stuff like billing or password resets). You could hire some motivated fresh grads in the Philippines and India who speak English, then offer technical support for LLMs. It could be a subscription model (ideal) or per-incident (corps pay 1000 USD upfront, then each incident is 25 USD, etc.). Literally: they will help you write a better prompt for the LLM. I don't think it is a billion dollar business, but it might work. I also think you could easily attract fresh grads because they would be excited to use a wide variety of LLMs and can pad their CV/resume with all of this LLM prompting experience. (This will be a valuable skill going forward!)
bn-l•8mo ago
Deep think, the only thing that’s interesting in the whole i/o day is not accessible via api.

Also I was hoping for a ChatGPT test-time search thing. That would be absolutely killer.

research_pie•8mo ago
The YouTube Premium is hilarious.
danenania•8mo ago
For a limited time they're also throwing in a Sharper Image lava lamp.
bionhoward•8mo ago
I think this could be worth it if:

we could have our chat histories with Gemini apps activity turned off (decouple the training from the storage) — including no googler eyeballs on our work chats (call this “confidential mode”)

Jules had a confidential mode for these folks (no googler eyeballs on our work code tasks)

Deep research connected to GitHub and was also confidential (don’t train on the codebase, don’t look at the codebase)

The other stuff (videos etc) are obviously valuable but not a big draw for me personally right now…

The biggest draw for me is trusting I can work on private code projects without compromise of security

ehnto•8mo ago
Google, and the tech industry in general, has erased all trust I have in their ability to actually keep something confidential. They don't want to, and so they won't.

If they don't use some legal workaround to consume it from the outset, they'll just roll out an automatic opt in service or license change after you've already loaded all your data in.

It's negligence to trust any confidentiality provided by big tech, and they well and truly deserve that opinion.

croes•8mo ago
How would you know if they use your code to train their models?

The risk of being caught is minimal.

And history showed multiple times that companies lie.

Remember when nobody ever had access to the things people say to Alexa?

Turned out that wasn't true.

Aeroi•8mo ago
dear god
CommenterPerson•8mo ago
Stop with the snark, people. Even we have to pay our electricity bills, ya know. We're not sure about the name for the next version though. Gemini keeps suggesting "Google AI Ultra Draft". What do you think?
protocolture•8mo ago
Where is the plan that just removes all their AI tools from my view.
poisonta•8mo ago
Not a fan of anything Google does these days.
bdelmas•8mo ago
Let's all cheer this new era coming. Soon either you pay half your salary for AI, since it will be so advanced, or you will be unemployed, unable to compete with people subscribed to AI. Sure, it may not happen (exactly) this way, but look at this pricing, $200+/month, and the fact that this is just the beginning.
smoovb•8mo ago
As a product manager who has been paying an overseas developer $4,000 a month, I'm getting quite close to replacing him with a $100 a month Claude Max subscription. These high subscription costs are nothing compared to the low level employees that will be replaced or supplemented.
bdelmas•8mo ago
Oh yes, for sure. That’s what is scary. A while back Google did a demo at their I/O event for an AI secretary. It was able to take calls for you, manage your agenda, etc… In 5 years I am sure you will have a secretary plus other jobs packed into this all-included high-cost subscription, and AI will take more and more parts of our professional and personal lives. It will be useful, but oh boy it’s going to look dystopian.
tonyhart7•8mo ago
They spend billions of dollars running and developing these LLMs.

now we get to see the price tag after initial impression

pmcf•8mo ago
And includes a YouTube subscription. Top that OpenAI!
pllbnk•8mo ago
Given that Youtube Premium is ~$20 USD and 30 TB of space would cost ~$130 (extrapolating from current Google Workspace pricing), that leaves all the AI fluff costing $100 more. I can see some people taking that deal but after a while realizing they are not using them as much and canceling.
TuringNYC•8mo ago
This grab-bag seems haphazard to me -- half the tools seem to be perfect for creators or creative professionals. Some of the other stuff seems to be for researchers or technologists. How often do people need all of this? Is this to make tiers/products simple?