
Grace Blackwell Desktop Supercomputer: First Impressions

https://jasoneckert.github.io/myblog/grace-blackwell/
1•speckx•53s ago•0 comments

Redundancy vs. dependencies: which is worse?

https://www.yosefk.com/blog/redundancy-vs-dependencies-which-is-worse.html
1•gavinhoward•1m ago•0 comments

The Internet Archive Wayback Machine Is Down

https://bsky.app/profile/archive.org/post/3m62vw5dsr22r
1•AdmiralAsshat•1m ago•0 comments

rep+: A Lightweight Burp Suite Repeater Inside Chrome DevTools

https://github.com/bscript/rep
1•bscript•2m ago•1 comment

Sequoia's 'imperial' Roelof Botha pushed out by top lieutenants

https://www.ft.com/content/0f16e194-5e9b-4486-988e-6f90f927b153
1•wslh•3m ago•0 comments

First Air-Breathing Spacecraft

https://rdw.com/newsroom/redwire-awarded-44-million-darpa-contract-to-advance-very-low-earth-orbi...
1•bilsbie•7m ago•0 comments

Hacker News Post Formatter – Preview Your HN Posts

https://www.hnpostformatter.com/
2•janpio•7m ago•0 comments

Show HN: My Light & Spirit – A Clever Indie Puzzle Where Light Becomes Your Tool

https://groober.itch.io/my-light-spirit
1•electrodisk•8m ago•0 comments

China-linked hackers hijack Asus routers

https://www.scworld.com/brief/china-linked-hackers-hijack-thousands-of-asus-routers
2•ilamont•8m ago•0 comments

Blockage Mechanism and Unblocking Technology Around Petroleum Wellbores

https://www.mdpi.com/2079-6412/15/11/1293
1•PaulHoule•9m ago•0 comments

Show HN: Rapid-rs – Zero-config web framework for Rust

https://crates.io/crates/rapid-rs
2•ashish_sharda•9m ago•0 comments

Wearable bandage device around finger lets users feel the digital world

https://www.designboom.com/technology/stretchable-bandage-device-voxelite-finger-users-feel-digit...
2•65•11m ago•0 comments

The evolution of neoclouds and their next moves

https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-evolution-of-neoclouds-and-the...
1•gmays•12m ago•0 comments

Show HN: OpenBotAuth – Open-Source Web Bot Auth with Social Registry

https://github.com/OpenBotAuth/openbotauth
2•gauravguitara•14m ago•0 comments

AI Eats the World [video]

https://www.youtube.com/watch?v=niJpDnNtNp4
3•gmays•14m ago•0 comments

Everything you need to know about hard drive vibration

https://www.ept.ca/features/everything-need-know-hard-drive-vibration/
2•asdefghyk•17m ago•1 comment

Show HN: Quick install script for self-hosted Forgejo (Git+CI) server

https://wkoszek.github.io/easyforgejo/
4•wkoszek•18m ago•1 comment

Fast and Specialized Rust CSV Readers Using SIMD Instructions

https://github.com/medialab/simd-csv
2•Yomguithereal•18m ago•0 comments

How much solopreneurs make and how they make their money [research report]

https://www.adrianatica.com/state-of-solopreneurship/
3•adriana_tica•19m ago•1 comment

Move over Harvard and MIT–this university might be winning the AI race

https://fortune.com/2025/11/19/us-china-ai-race-higher-education-tsinghua-university-outpacing-iv...
3•nis0s•19m ago•1 comment

Trillion‑Scale RL in Real Time: Awex Delivers Second‑Level Weight Updates

https://medium.com/@shawn.ck.yang/awex-an-ultra-fast-weight-sync-framework-powering-trillion-scal...
2•chaokunyang•20m ago•2 comments

Saudi Arabia's Prince Has Big Plans, but His Giant Fund Is Low on Cash

https://www.nytimes.com/2025/11/19/business/pif-saudi-arabia-fund-problems.html
4•toomanyrichies•21m ago•2 comments

Foveated streaming is the genius tech behind Valve's Steam Frame

https://www.pcgamer.com/hardware/vr-hardware/foveated-streaming-genius-tech/
3•thunderbong•22m ago•0 comments

Ask HN: What is the best way to see what files are being read in Windows?

2•jacobwilliamroy•22m ago•0 comments

The TikTok 'Ban' Continues to Be One of the Biggest Turds in Tech Policy History

https://www.techdirt.com/2025/11/20/the-tiktok-ban-continues-to-be-one-of-the-biggest-turds-in-te...
2•hn_acker•23m ago•0 comments

Largest US landlord to pay $7M to settle rent‑setting algorithm lawsuit

https://abcnews.go.com/Business/wireStory/largest-us-landlord-pay-7-million-settle-rentsetting-12...
2•senshan•23m ago•1 comment

CNC_Rebel: Command and Conquer Renegade source code enhancement project

https://github.com/winterheart/CnC_Rebel
3•klaussilveira•23m ago•0 comments

Microsoft Embraces AG-UI Protocol

https://learn.microsoft.com/en-us/agent-framework/integrations/ag-ui/
2•swiftlyTyped•24m ago•0 comments

Estes Soyuz II pro series model rocket

https://estesrockets.com/pages/soyuz
2•adam_gyroscope•25m ago•0 comments

I fixed 109 years of open issues with 5 hours of guiding GitHub Copilot

https://github.com/AlaSQL/alasql/releases/tag/v4.10.1
3•mathiasrw•25m ago•1 comment

Nano Banana Pro

https://blog.google/technology/ai/nano-banana-pro/
201•meetpateltech•1h ago

Comments

meetpateltech•1h ago
Developer Blog: https://blog.google/technology/developers/gemini-3-pro-image...

DeepMind Page: https://deepmind.google/models/gemini-image/pro/

Model Card: https://storage.googleapis.com/deepmind-media/Model-Cards/Ge...

SynthID in Gemini: https://blog.google/technology/ai/ai-image-verification-gemi...

varbhat•1h ago
Can anyone please explain the invisible watermarking mentioned in the promo?
nickdonnelly•1h ago
It's called Synth ID. It's a watermark that proves an image was generated by AI.

https://deepmind.google/models/synthid/

VladVladikoff•1h ago
Super important for Google as a search engine so they can filter out and downrank AI generated results. However I expect there are many models out there which don’t do this, that everyone could use instead. So in the end a “feature” like this makes me less likely to use their model because I don’t know how Google will end up treating my blog post if I decide to include an AI generated or AI edited image.
Filligree•59m ago
It’s required by EU regulations. Any public generator that doesn’t do it is in violation, unless it’s entirely inaccessible from the EU…

But of course there’s no way to enforce it on local generation.

airstrike•1h ago
So whoever creates AI content needs to voluntarily adopt this so that Google can sell "technology" for identifying said content?

Not sure how that makes any sense

raincole•1h ago
*by Google's AI.
zamadatix•52m ago
By anybody's AI using SynthID watermarking, not just Google's AI using SynthID watermarking (it looks like partnership is not open to just anyone though, you have to apply).
jsheard•56m ago
In theory, at least. In practice maybe not.

https://i.imgur.com/WKckRmi.png

raincole•52m ago
?

Google doesn't claim that Gemini would call SynthID detector at this point.

Edit: well they actually do. I guess it is not rolled out yet.

jsheard•52m ago
From the OP:

> Today, we are putting a powerful verification tool directly in consumers’ hands: you can now upload an image into the Gemini app and simply ask if it was generated by Google AI, thanks to SynthID technology. We are starting with images, but will expand to audio and video soon.

Re-rolling a few times got it to mention trying SynthID, but as a false negative, assuming it actually did the check and isn't just bullshitting.

> No Digital Watermark Detected: I was unable to detect any digital watermarks (such as Google's SynthID) that would definitively label it as being generated by a specific AI tool.

This would be a lot simpler if they just exposed the detector directly, but apparently the future is doing everything through LLM tool calls.
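
For contrast, a direct detector endpoint, if one existed, could be a single classification call rather than a chat round-trip. A stubbed Python sketch (every name here is hypothetical; Google has not published such an API, and the stub just fakes the decision so the control flow runs locally):

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    watermark_present: bool
    confidence: float  # 0.0 to 1.0

def detect_synthid(image_bytes: bytes) -> DetectionResult:
    """Hypothetical direct-detection call, stubbed locally.

    A real service would run SynthID's decoder model on the image;
    this stub treats a magic prefix as "watermarked" purely so the
    interface can be demonstrated without any network access.
    """
    marked = image_bytes.startswith(b"SYNTHID")
    return DetectionResult(watermark_present=marked,
                           confidence=0.97 if marked else 0.08)

# One call, one structured answer: no ambiguity about whether the
# check actually ran, unlike an LLM narrating its own tool use.
print(detect_synthid(b"SYNTHID...image data...").watermark_present)
```

A yes/no-with-confidence response like this would also remove the doubt in the false-negative transcript above, where it's unclear whether the check ran at all.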

KolmogorovComp•53m ago
Has anyone found out how to use SynthID? If I want to check whether some images are AI, how can I do that?
volkk•1h ago
SynthID seems interesting but, in classic Google fashion, I haven't a clue how to use it and the only button that exists is to join a waitlist. Apparently it's been out since 2023? Also, does SynthID work only within the Gemini ecosystem? If so, is this the beginning of a slew of these products with no one standard way? i.e., "Have you run that image through tool1, tool2, tool3, and tool4 before deciding this image is legit?"

edit: apparently people have been able to remove these watermarks with a high success rate, so this already feels like a DOA product

Razengan•1h ago
Can Google Gemini 3 check Google Flights for live ticket prices yet?

(The Gemini 3 post has a million comments too many to ask this now)

jeffbee•42m ago
https://gemini.google.com/share/19fed9993f06
hbn•1h ago
I wouldn't trust any of the info in those images in the first carousel if I found them in the wild. It looks like AI image slop and I assume anyone who thinks those look good enough to share did not fact check any of the info and just prompted "make an image with a recipe for X"
matsemann•1h ago
Yeah, the weird yellow tint, the kerning/fonts etc still immediately gives it away.

But I wouldn't mind being easily able to make infographics like these, I'd just like to supply the textual and factual content myself.

kccqzy•56m ago
I would do the same. But the reason is that I’m terrible at drawing and digital art, so I would need some help with the graphics in an infographic anyway. I don’t really need help with writing or typesetting the text. I feel like if I were better at creating art I would not want AI involved at all.
jpadkins•1h ago
really missed an opportunity to name it micro banana (or milli banana). Personally I can't wait for mega banana next year.
fouronnes3•1h ago
I guess the true endgame of AI products is naming them. We still have quite a way to go.
awillen•1h ago
Honestly I give Google credit for realizing that they had something that people were talking about and running with it instead of just calling it gemini-image-large-with-text-pro
echelon•39m ago
They tried calling it gemini-2.5-whatever, but social media obsessed over the name "Nano Banana", which was just its codename that got teased on Twitter for a few weeks prior to launch.

After launch, Google's public branding for the product was "Gemini" until Google just decided to lean in and fully adopt the vastly more popular "Nano Banana" label.

The public named this product, not Google. Google's internal codename went viral and upstaged the official name.

Branding matters for distribution. When you install yourself into the public consciousness with a name, you'd better use the name. It's free distribution. You own human wetware market share for free. You're alive in the minds of the public.

Renaming things every human has brand recognition of, eg. HBO -> Max, is stupid. It doesn't matter if the name sucks. ChatGPT as a name sucks. But everyone in the world knows it.

This will forever be Nano Banana unless they deprecate the product.

timenotwasted•1h ago
We just need a new AI for that.
riskable•45m ago
Need a name for something? Try our new Mini Skibidi model!
b33j0r•59m ago
This has always been the hardest problem in computer science besides “Assume a lightweight J2EE distribution…”
jedberg•24m ago
I was at a tech conference yesterday, and I asked someone if they had tried nano banana. They looked at me like I was crazy. These names aren't helping! (But honestly I love it; easier to remember than Gemini-2.whatever.)
guzik•1h ago
Cool, but it's still unusable for me. Somehow all my prompts are violating the rules, huh?
Filligree•58m ago
Can you give us an example?
guzik•36m ago
'athlete wearing a health tracker under a fitted training top'

Failed to generate content: permission denied. Please try again.

raincole•19m ago
It's not the censorship safeguard. Permission denied means you need a paid API key to use it. It's confusing, I know.

If you triggered the safeguard it'll give you the typical "sorry, I can't..." LLM response.

mudkipdev•47m ago
Are you asking it to recreate people?
guzik•34m ago
No, and no nudity, no reference images. Example: 'athlete wearing a health tracker under a fitted training top'
ASinclair•42m ago
Have some examples?
gdulli•38m ago
In 25 years we'll reminisce on the times when we could find a human artist who wouldn't impose Google's or OpenAI's rules on their output.
guzik•33m ago
the open-source models will catch up, 100%
raincole•19m ago
Open models don't seem to be catching up with LLM-based image gen at this point.

ChatGPT's imagegen was released half a year ago, but there isn't anything remotely similar to it in the open-weight realm.

eminence32•1h ago
> Generate better visuals with more accurate, legible text directly in the image in multiple languages

Assuming that this new model works as advertised, it's interesting to me that it took this long to get an image generation model that can reliably generate text. Why is text generation in images so hard?

Filligree•1h ago
It’s not necessarily harder than other aspects. However:

- It requires an AI that actually understands English, i.e. an LLM. Older, diffusion-only models were naturally terrible at that, because they weren’t trained on it.

- It requires the AI to make no mistakes on image rendering, and that’s a high bar. Mistakes in image generation are so common we have memes about it, and for all that hands generally work fine now, the rest of the picture is full of mistakes you can’t tell are mistakes. Entirely impossible with text.

Nano Banana Pro seems to somewhat reliably produce entire pictures without any mistakes at all.

tobr•59m ago
As a complete layman, it seems obvious that it should be hard? Like, text is a type of graphic that needs to be coherent both in its detail and its large structure, and there’s a very small amount of variation that we don’t immediately notice as strange or flat out incorrect. That’s not true of most types of imagery.
saretup•1h ago
Interesting they didn’t post any benchmark results - lmarena/artificial analysis etc. I would’ve thought they’d be testing it behind the scenes the same way they did with Gemini 3.
maliker•1h ago
I wonder how hard it is to remove that SynthID watermark...

Looks like: "When tested on images marked with Google’s SynthID, the technique used in the example images above, Kassis says that UnMarker successfully removed 79 percent of watermarks." From https://spectrum.ieee.org/ai-watermark-remover

mudkipdev•53m ago
We know what it looks like at least https://www.reddit.com/r/nanobanana/comments/1o1tvbm/nano_ba...
willsmith72•59m ago
> Starting to roll out in the Gemini API and Google AI Studio

> Rolling out globally in the Gemini app

wanna be any more vague? is it out or not? where? when?

koakuma-chan•58m ago
I don't see it in AI Studio.
WawaFin•46m ago
I see it, but when I use it, it says "Failed to count tokens, model not found: models/gemini-3-pro-image-preview. Please try again with a different model."
Archonical•56m ago
Phased rollouts are fairly common in the industry.
ZeroCool2u•53m ago
Already available in the Gemini web app for me. I have the normal Pro subscription.
meetpateltech•45m ago
Currently, it’s rolling out in the Gemini app. When you use the “Create image” option, you’ll see a tooltip saying “Generating image with Nano Banana Pro.”

And in AI Studio, you need to connect a paid API key to use it:

https://aistudio.google.com/prompts/new_chat?model=gemini-3-...

> Nano Banana Pro is only available for paid-tier users. Link a paid API key to access higher rate limits, advanced features, and more.

myth_drannon•58m ago
Adobe's stock is down 50% from last year's peak. It's humbling and scary that entire industries with millions of jobs can evaporate in a matter of a few years.
riskable•41m ago
On the contrary, it's encouraging to know that maliciously greedy companies like Adobe are getting screwed for being so malicious and greedy :thumbsup:

I had second thoughts about this comment, but if I stopped typing in the middle of it, I would've had to pay a cancellation fee.

cj•40m ago
There are two takes here. The first is that AI is replacing jobs by making the existing workforce more efficient.

The second is that AI is costing companies so much money that they need to cut the workforce to pay for their AI investments.

I'm inclined to think the latter represents what's happening more than the former.

theoldgreybeard•52m ago
The interesting tidbit here is SynthID. While a good first step, it doesn't solve the problem of AI generated content NOT having any kind of watermark. So we can prove that something WITH the ID is AI generated but we can't prove that something without one ISN'T AI generated.

Like, it would be nice if all photos and videos generated by the big players had some kind of standardized identifier on them - but then you're still left with the bajillion other "grey market" models that won't give a damn about that.

morkalork•50m ago
Labelling open source models as "grey market" is a heck of a presumption
theoldgreybeard•49m ago
It's why I used "scare quotes".
bigfishrunning•20m ago
Every model is "grey market". They're all trained on data without complying with any licensing terms that may exist, be they proprietary or copyleft. Every major AI model is an instance of IP theft.
markdog12•47m ago
I asked Gemini's "dynamic view" how SynthID works: https://gemini.google.com/share/62fb0eb38e6b
slashdev•45m ago
If there was a standardized identifier, there would be software dedicated to just removing it.

I don't see how it would defeat the cat and mouse game.

paulryanrogers•42m ago
It doesn't have to be perfect to be helpful.

For example, it's trivial to post an advertisement without disclosure. Yet it's illegal, so large players mostly comply and harm is less likely on the whole.

aqme28•33m ago
I don't think it will be easy to just remove it. It's built into the image and thus won't be the same every time.

Plus, any service good at reverse-image search (like Google) can basically apply that to determine whether they generated it.

There will always be a way to defeat anything, but I don't see why this won't work for like 90% of cases.

VWWHFSfQ•25m ago
There will be a model trained to remove synthids from graphics generated by other models
flir•15m ago
> I don't think it will be easy to just remove it.

Always has been so far. You add noise until the signal gets swamped. In order to remain imperceptible it's a tiny signal, so it's easy to swamp.

famouswaffles•6m ago
It's an image. There's simply no way to add a watermark to an image that's both imperceptible to the user and non-trivial to remove. You'd have to pick one of those options.
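
The signal-strength trade-off flir and famouswaffles describe can be illustrated with a deliberately naive scheme. The toy below hides one watermark bit in the least-significant bit of each pixel; adding imperceptible ±1 noise, the same amplitude as the mark itself, destroys it. (SynthID is far more sophisticated, spreading its signal across the image; this sketch only illustrates the trade-off, not SynthID's actual method.)

```python
import random

def embed(pixels, bits):
    """Hide one watermark bit in the least-significant bit of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels):
    return [p & 1 for p in pixels]

random.seed(0)  # deterministic demo
pixels = [random.randrange(256) for _ in range(64)]
mark = [random.randrange(2) for _ in range(64)]

marked = embed(pixels, mark)
assert extract(marked) == mark  # clean image: mark fully recoverable

# Add imperceptible noise: +/-1 per pixel, invisible to the eye but the
# same amplitude as the watermark signal itself.
noisy = [min(255, max(0, p + random.choice((-1, 0, 1)))) for p in marked]
errors = sum(a != b for a, b in zip(extract(noisy), mark))
print(f"{errors}/64 watermark bits corrupted")
```

With this scheme roughly two thirds of the bits flip. A higher-amplitude mark would survive the noise but stop being imperceptible, which is exactly the dilemma: you pick one.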
echelon•45m ago
This watermarking ceremony is useless.

We will always have local models. Eventually the Chinese will release a Nano Banana equivalent as open source.

staplers•43m ago

  have some kind of standardized identifier on them
Take this a step further and it'll be a personal identifying watermark (only the company can decode). Home printers already do this to some degree.
theoldgreybeard•28m ago
yeah, personally identifying undetectable watermarks are kind of a terrifying prospect
overfeed•9m ago
It is terrifying, but inevitable. Perhaps AI companies flooding the commons with excrement wasn't the best idea, now we all have to suffer the consequences.
baby•43m ago
It solves some problems! For example, if you want to run a camgirl website based on AI models and want to also prove that you're not exploiting real people
echelon•30m ago
Your use case doesn't even make sense. What customers are clamoring for that feature? I doubt any paying customer in the market for (that product) cares. If the law cares, the law has tools to inquire.

All of this is trivially easy to circumvent ceremony.

Google is doing this to deflect litigation and to preserve their brand in the face of negative press.

They'll do this (1) as long as they're the market leader, (2) as long as there aren't dozens of other similar products - especially ones available as open source, (3) as long as the public is still freaked out / new to the idea anyone can make images and video of whatever, and (4) as long as the signing compute doesn't eat into the bottom line once everyone in the world has uniform access to the tech.

The idea here is that {law enforcement, lawyers, journalists} find a deep fake {illegal, porn, libelous, controversial} image and goes to Google to ask who made it. That only works for so long, if at all. Once everyone can do this and the lookup hit rates (or even inquiries) are < 0.01%, it'll go away.

It's really so you can tell journalists "we did our very best" so that they shut up and stop writing bad articles about "Google causing harm" and "Google enabling the bad guys".

We're just in the awkward phase where everyone is freaking out that you can make images of Trump wearing a bikini, Tim Cook saying he hates Apple and loves Samsung, or the South Park kids deep faking each other into silly circumstances. In ten years, this will be normal for everyone.

Writing the sentence "Dr. Phil eats a bagel" is no different than writing the prompt "Dr. Phil eats a bagel". The former has been easy to do for centuries and required the brain to do some work to visualize. Now we have tools that previsualize and get those ideas as pixels into the brain a little faster than ASCII/UTF-8 graphemes. At the end of the day, it's the same thing.

And you'll recall that various forms of written text - and indeed, speech itself - have been illegal in various times, places, and jurisdictions throughout history. You didn't insult Caesar, you didn't blaspheme the medieval church, and you don't libel in America today.

shevy-java•21m ago
> What customers are clamoring for that feature? If the law cares, the law has tools to inquire.

How can they distinguish real people being exploited from AI models autogenerating everything?

I mean, right now this is possible, largely because a lot of the AI videos have shortcomings. But imagine 5 years from now...

xnx•19m ago
SynthID has been in use for over 2 years.
akersten•18m ago
Some days it feels like I'm the only hacker left who doesn't want government-mandated watermarking in creative tools. Had politicians 20 years ago been this overreactive, they'd have demanded Photoshop leave a trace on anything it edited. The amount of moral panic is off the charts. It's still a computer, and we still shouldn't trust everything we see. The fundamentals haven't changed.
mlmonkey•8m ago
You do know that every color copier comes with the ability to identify US currency and would refuse to copy it? And that every color printer leaves a pattern of faint yellow dots on every printout that uniquely identifies the printer?
swatcoder•14m ago
The incentive for commercial providers to apply watermarks is so that they can safely route and classify generated content when it gets piped back in as training or reference data from the wild. That it's something that some users want is mostly secondary, although it is something they can earn some social credit for by advertising.

You're right that there will exist generated content without these watermarks, but you can bet that all the commercial providers burning $$$$ on state-of-the-art models will gradually coalesce around some means of widespread by-default/non-optional watermarking for content they let the public generate, so that they can all avoid drowning in their own filth.

mortenjorck•5m ago
Reminder that even in the hypothetical world where every AI image is digitally watermarked, and all cameras have a TPM that writes a hash of every photo on the blockchain, there’s nothing to stop you from pointing that perfectly-verified camera at a screen showing your perfectly-watermarked AI image and taking a picture.

Image verification has never been easy. People have been airbrushed out of and pasted into photos for over a century; AI just makes it easier and more accessible. Expecting a “click to verify” workflow is as unreasonable as it has ever been; only media literacy and a bit of legwork can accomplish this task.

dangoodmanUT•51m ago
I've had nano banana pro for a few weeks now, and it's the most impressive AI model I've ever seen

The inline verification of images following the prompt is awesome, and you can do some _amazing_ stuff with it.

It's probably not as fun anymore though (in the early access program, it doesn't have censoring!)

echelon•43m ago
LLMs might be a dead end, but we're going to have amazing images, video, and 3D.

To me the AI revolution is making visual media (and music) catch up with the text-based revolution we've had since the dawn of computing.

Computers accelerated typing and text almost immediately, but we've had really crude tools for images, video, and 3D despite graphics and image processing algorithms.

AI really pushes the envelope here.

I think images/media alone could save AI from "the bubble" as these tools enable everyone to make incredible content if you put the work into it.

Everyone now has the ingredients of Pixar and a music production studio in their hands. You just need to learn the tools and put the hours in and you can make chart-topping songs and Hollywood grade VFX. The models won't get you there by themselves, but using them in conjunction with other tools and understanding as to what makes good art - that can and will do it.

Screw ChatGPT, Claude, Gemini, and the rest. This is the exciting part of AI.

dangoodmanUT•31m ago
I wouldn’t call LLMs a dead end, they’re so useful as-is
echelon•24m ago
LLMs are useful, but they've hit a wall on the path to automating our jobs. Benchmark scores are just getting better at test taking. I don't see them replacing software engineers without overcoming obstacles.

AI for images, video, music - these tools can already make movies, games, and music today with just a little bit of effort by domain experts. They're 10,000x time and cost savers. The models and tools are continuing to get better on an obvious trend line.

refulgentis•34m ago
"Inline verification of images following the prompt is awesome, and you can do some _amazing_ stuff with it." - could you elaborate on this? sounds fascinating but I couldn't grok it via the blog post (like, it this synthid?)
dangoodmanUT•32m ago
It uses Gemini 3 inline with the reasoning to make sure it followed the instructions before giving you the output image
vunderba•11m ago
I'd be curious about how well the inline verification works - an easy example is to have it generate a 9-pointed star, a classic example that many SOTA models have difficulties with.

In the past, I've deliberately stuck a Vision-language model in a REPL with a loop running against generative models to try to have it verify/try again because of this exact issue.

EDIT: Just tested it in Gemini - it either didn't use a VLM to actually look at the finished image or the VLM itself failed.

Output:

  I have finished cross-referencing the image against the user's specific requests. The primary focus was on confirming that the number of points on the star precisely matched the requested nine. I observed a clear visual representation of a gold-colored star with the exact point count that the user specified, confirming a complete and precise match.
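
The generate-then-verify loop vunderba describes can be sketched with stubbed model calls. Both functions below are hypothetical stand-ins (real code would call an image model and a vision-language model); the stub generator deliberately "fails" twice so the retry path is exercised:

```python
from typing import Optional

def generate_image(prompt: str, attempt: int) -> str:
    """Stub for a text-to-image call; returns a fake image handle.
    Pretend the model only gets the point count right on the third try."""
    points = 5 if attempt < 2 else 9
    return f"image(star, points={points})"

def looks_correct(prompt: str, image: str) -> bool:
    """Stub for the VLM check: does the image actually match the prompt?"""
    return "points=9" in image

def generate_with_verification(prompt: str, max_attempts: int = 5) -> Optional[str]:
    for attempt in range(max_attempts):
        image = generate_image(prompt, attempt)
        if looks_correct(prompt, image):
            return image  # verified: hand it back to the user
    return None  # every attempt failed the check; better than a silently wrong image

print(generate_with_verification("a gold nine-pointed star"))
```

The transcript above, which claims "a complete and precise match" without apparently re-counting the points, suggests the verifier either never saw the final pixels or rubber-stamped them; a loop like this only helps if the checking model genuinely runs on the finished image.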
ZeroCool2u•49m ago
I tried the studio ghibli prompt on a photo of me and my wife in Japan and it was... not good. It looked more like a hand-drawn sketch made with colored pencils, but none of the colors were correct. Everything was a weird shade of yellow/brown.

This has been an oddly difficult benchmark for Gemini's NB models. Google's image models have always been pretty bad at the studio ghibli prompt, but I'm shocked at how poorly it still performs at this task.

jeffbee•46m ago
I wonder ... do you think they might not be chasing that particular metric?
ZeroCool2u•43m ago
Sure! But it's weird how far off it is in terms of capability.
skocznymroczny•32m ago
Could be they are specifically training against it. There was some controversy about "studio ghibli style". Similarly how in the early days of Stable Diffusion "Greg Rutkowski style" was a very popular prompt to get a specific look. These days modern Stable Diffusion based models like SD 3 or FLUX mostly removed references to specific artists from their datasets.
xnx•16m ago
You might try it again with style transfer: 1 image of style to apply to 1 target image
ZeroCool2u•11m ago
This is a good idea, will give it a try!
wnevets•43m ago
does it handle transparency yet?
Shalomboy•42m ago
The SynthID check for fishy photos is a step in the right direction, but without tighter integration into everyday tooling it's not going to move the needle much. Like when I hold the power button on my Pixel 9, it would be great if it could identify synthetic images on the screen before I think to ask about it. For what it's worth, it would be great if the power button shortcut on Pixel did a lot more things.
scottlamb•41m ago
The rollout doesn't seem to have reached my userid yet. How successful are people at getting these things to actually produce useful images? I was trying recently with the (non-Pro) Nano Banana to see what the fuss was about. As a test case, I tried to get it to make a diagram of a zipper merge (in driving), using numbered arrows to indicate what the first, second, third, etc. cars should do.

I had trouble reliably getting it to...

* produce just two lanes of traffic

* have all the cars facing the same way—sometimes even within one lane they'd be facing in opposite directions.

* contain the construction within the blocked-off area. I think similarly it wouldn't understand which side was supposed to be blocked off. It'd also put the lane closure sign in lanes that were supposed to be open.

* have the cars be in proportion to the lane and road instead of two side-by-side within a lane.

* have the arrows go in the correct direction instead of veering into the shoulder or U-turning back into oncoming traffic

* use each number once, much less on the correct car

This is consistent with my understanding of how LLMs work, but I don't understand how you can "visualize real-time information like weather or sports" accurately with these failings.

Below is one of the prompts I tried to go from scratch to an image:

> You are an illustrator for a drivers' education handbook. You are an expert on US road signage and traffic laws. We need to prepare a diagram of a "zipper merge". It should clearly show what drivers are expected to do, without distracting elements.

> First, draw two lanes representing a single direction of travel from the bottom to the top of the image (not an entire two-way road), with a dotted white line dividing them. Make sure there's enough space for the several car-lengths approaching a construction site. Include only the illustration; no title or legend.

> Add the construction in the right lane only near the top (far side). It should have the correct signage for lane closure and merging to the left as drivers approach a demolished section. The left lane should be clear. The sign should be in the closed lane or right shoulder.

> Add cars in the unclosed sections of the road. Each car should be almost as wide as its lane.

> Add numbered arrows #1–#5 indicating the next cars to pass to the left of the "lane closed" sign. They should be in the direction the cars will move: from the bottom of the illustration to the top. One car should proceed straight in the left lane, then one should merge from the right to the left (indicate this with a curved arrow), another should proceed straight in the left, another should merge, and so on.

I did have a bit better luck starting from a simple image and adding an element to it with each prompt. But on the other hand, when I did that it wouldn't do as well at keeping space for things. And sometimes it just didn't make any changes to the image at all. A lot of dead ends.

I also tried sketching myself and having it change the illustration style. But it didn't do it completely. It turned some of my boxes into cars but not necessarily all of them. It drew a "proper" lane divider over my thin dotted line but still kept the original line. etc.

simianparrot•38m ago
What is up with these product names!? Antigravity? Nano Banana?

Not only are they making slop machines, they seem to be run by them.

I am too old for this shit.

evrenesat•37m ago
I've tried to repaint the exterior of my house more than 20 times with very detailed prompts. I even tried optimizing the prompt with Claude. No matter what, every time it added one, two, or three extra windows to the same wall.
cj•32m ago
I tried this in AI studio just now with nano banana.

Results: https://imgur.com/a/9II0Aip

The white house was the original (random photo from Google). The prompt was "What paint color would look nice? Paint the house."

vunderba•28m ago
Guess they ran out of paint - notice the upper window.
cj•16m ago
Oops. Original link wasn't using the Pro version. Edited the comment with an updated link.
swatcoder•19m ago
> (random photo from Google)

Careful with that kind of thing.

Here, it mostly poisons your test, because that exact photo probably exists in the underlying training data and the trained network will be more or less optimized on working with it. It's really the same consideration you'd want to make when testing classifiers or other ML techs 10 years ago.

Most people taking on a task like this will be using an original photo -- missing entirely from any training data, poorly framed, unevenly lit, etc. -- and you need to be careful to capture as much of that as possible when trying to evaluate how a model will work in that kind of use case.

The failure and stress points for AI tools are generally kind of alien and unfamiliar because the way they operate is totally different from the way a human operates, and if you're not especially attentive to their weird failure shapes and biases when you test them, you'll easily get false positives (and false negatives) that lead you to misleading conclusions.

cj•15m ago
Yea, the base image was the first google image result for the search term "house". So definitely in the training set.
fumeux_fume•30m ago
I also tried that in the past with poor results. I just tried it this morning with nano banana pro and it nailed it with a very short prompt: "Repaint the house white with black trim. Do not paint over brick."
grantpitt•14m ago
Huh, can you share a link? I tried here: https://gemini.google.com/share/e753745dfc5d
evrenesat•11m ago
https://gemini.google.com/share/79fe1a38e440
vunderba•33m ago
I'll be running it through my GenAI Comparison benchmark shortly - but so far it seems to be failing on the same tests that the original Nano Banana struggled with (such as SHRDLU).

https://genai-showdown.specr.net/image-editing

throwacct•29m ago
Google needs to pace itself. AI studio, Antigravity, Banana, Banana Pro, Grape Ultra, Gemini 3, etc. This information overload doesn't do them any good whatsoever.
arecsu•25m ago
Agree. I can't keep up with it; it's hard to wrap my head around the products, where to go to actually use them, etc.
jasonjmcghee•24m ago
Grape Ultra?
xnx•24m ago
Powell Doctrine, but for AI. No one should dispute that Google is the leader in every(?) category of AI: LLM, image gen, video editing, world models, etc.
abixb•23m ago
I feel it's strategic, like a massive DDoS/"shock and awe" style attack on competitors. Gotta love it as PROsumers though!
shevy-java•20m ago
They are riding the current buzzword wave. It'll eventually subside. And 80% of it will end up on Google's impressive software graveyard:

https://killedbygoogle.com/

reddalo•20m ago
It reminds me of AWS services: I can't tell what they are because they've been named by a monkey with a typewriter.
crazygringo•18m ago
Why? They're mostly different markets. Most people using Nano Banana Pro aren't using Antigravity.

A cluster of launches reinforces the idea that Google is growing and leading in a bunch of areas.

In other words, if it's having so many successes it feels like overload, that's an excellent narrative. It's not like it's going to prevent people from using the tools.

nwsm•13m ago
Google will never beat the "sunset after 2 years" allegations on all products that don't have "Google __" in the name
molticrystal•15m ago
It is going to get really confusing when they release the Nano Banana Pi Pro model. Can't wait until they get to Raspberry.
sib•13m ago
Stock market seems to agree with their strategy....
tnolet•11m ago
Jules, Vertex...
jasonjmcghee•27m ago
Maybe I'm an obscure case, but I'm just not sure what I'd use an image generation model for.

For people that use them (regularly or not), what do you use them for?

vunderba•24m ago
Mostly highly specific images in blog posts but I also use it for occasional comics.

https://mordenstar.com/portfolio/gorgonzo

https://mordenstar.com/portfolio/brawny-tortillas

https://mordenstar.com/portfolio/ms-frizzle-lava

jasonjmcghee•6m ago
I'm kind of reading between the lines, but it sounds like "for fun", which makes sense / is what I generally expected for why people use it
cj•22m ago
Random examples:

1) I have a tricep tendon injury and ChatGPT wants me to check my tricep reflex. I have no idea where on the elbow you're supposed to tap to trigger the reflex.

2) I'm measuring my body fat using skin fold calipers. Show me where the measurement sites are.

3) I'm going hiking. Remind me how to identify poison ivy and dangerous snakes.

4) What would I look like with a buzz cut?

jasonjmcghee•15m ago
First three are interesting - all question / knowledge based where the answer is a picture. Hadn't really considered this.
paulglx•13m ago
You should never rely on AI to do 1, 2 or 3, especially a sloppy model like this.
hemloc_io•22m ago
porn is probably the biggest one?

but concept art, try-it-on for clothes or paint, stock art, etc

xnx•21m ago
Nano Banana is more of an image editing model, which probably has broader use cases for non-generative applications: interior decorating, architecture, picking wardrobes, etc.
jasonjmcghee•8m ago
Yeah... For some reason none of these are use cases in my day to day life. That said, I also don't open Photoshop very often. And maybe that's what this is meant to replace.
shevy-java•23m ago
Not gonna lie - this is pretty cool.

But ... it comes from Google. My goal is to eventually degoogle completely. I am not going to add any more dependencies - I am way too annoyed at having to use the search engine (which keeps getting worse), Google Chrome (long story ...), and YouTube.

I'll eventually find solutions to these.

H1Supreme•21m ago
This is really impressive. As a former designer, I'm equally excited that people will be able to generate images like this with a prompt, and sad that there will be much less incentive for people to explore design / "photoshopping" as a craft or a career.

At the end of the day, a tool is a tool, and the computer had the same effect on the creative industry when people started using it in place of illustrating by hand, typesetting by hand, etc. I don't want my personal bias to get in the way too much, but every nail that AI hammers into the creative industry's coffin is hard to witness.

ruralfam•17m ago
Just last night I was using Gemini "Fast" to test its output for a unique image we would have used in some consumer research if there had been a good stock image back in the day. I have been testing this prompt since the early days of AI images. The improvement in quality has been pretty remarkable for the same prompt. Composition across this time has been consistent. What I initially thought was "good enough" now is... fantastic. Just so many little details got more life-like with each new generation.

Funnily enough, our images must be 3:2 aspect ratio. I kept asking GFast to change its square output to 3:2. It kept saying it would, but each image was square or nearly square. GFast in the end was very apologetic, and said it would alert about this issue. Today I read that GPro does aspect ratios. Tried the same prompt again, burning up some "Thinking" credits, and got another fantastically life-like image in 3:2.

We have a new project coming up. We have relied entirely on stock or in some cases custom-shot images to date. Now, apart from the time needed to get the prompts right whilst meeting with the client, I cannot see how stock or custom images can compete. I mean the GPro images -- again, for a very specific and unusual prompt -- are just "Wow".

Want to emphasize again -- we are looking for specific details that many would not. So the thoughts above are specific to this. Still, while many faults can be found with AI, Nano Banana has certainly proven itself to me.
jimlayman•15m ago
Time to expand my creation catalog. Let's see what we can get out of this pro version. It seems this week is for big AI announcements from Google.
anentropic•7m ago
Is there an "in joke" to this name that I am too old to get? Or it's just a whimsically random name?