
Pgbackrest is no longer being maintained

https://github.com/pgbackrest/pgbackrest
85•c0l0•1h ago•28 comments

Fully Featured Audio DSP Firmware for the Raspberry Pi Pico

https://github.com/WeebLabs/DSPi
42•BoingBoomTschak•1d ago•6 comments

Flipdiscs

https://flipdisc.io
338•skogstokig•3d ago•59 comments

I bought Friendster for $30k – Here's what I'm doing with it

https://ca98am79.medium.com/i-bought-friendster-for-30k-heres-what-i-m-doing-with-it-d5e8ddb3991d
872•ca98am79•15h ago•447 comments

TurboQuant: A first-principles walkthrough

https://arkaung.github.io/interactive-turboquant/
190•kweezar•10h ago•42 comments

AI should elevate your thinking, not replace it

https://www.koshyjohn.com/blog/ai-should-elevate-your-thinking-not-replace-it/
560•koshyjohn•15h ago•412 comments

Self-updating screenshots

https://interblah.net/self-updating-screenshots
340•bjhess•1d ago•52 comments

The Prompt API

https://developer.chrome.com/docs/ai/prompt-api
157•gslin•9h ago•84 comments

Moleskine's AI Lord of the Rings collection can only mock

https://cjleo.com/blog/moleskine-ai-lord-of-the-rings-collection-can-only-mock/
16•lentil_soup•2h ago•7 comments

It's OK to abandon your side-project (2024)

https://robbowen.digital/wrote-about/abandoned-side-projects/
102•hisamafahri•3h ago•49 comments

Three constraints before I build anything

https://jordanlord.co.uk/blog/3-constraints/
238•nervous_north•1d ago•41 comments

Rust Memory Management: Ownership vs. Reference Counting

https://slicker.me/rust/ownership_and_borrowing_vs_reference_counting.html
34•vinhnx•2d ago•11 comments

Fast16: High-precision software sabotage 5 years before Stuxnet

https://www.sentinelone.com/labs/fast16-mystery-shadowbrokers-reference-reveals-high-precision-so...
277•dd23•15h ago•56 comments

A Guide to CubeSat Mission and Bus Design

https://pressbooks-dev.oer.hawaii.edu/epet302/
41•o4c•1d ago•2 comments

Branimir Lambov from IBM on Cassandra

https://theconsensus.dev/p/2026/04/26/branimir-lambov-from-ibm-on-cassandra.html
4•eatonphil•22h ago•0 comments

Bob Odenkirk would like to remind you that life is a meaningless farce

https://www.nytimes.com/2026/04/25/magazine/bob-odenkirk-interview.html
78•wslh•23h ago•69 comments

SWE-bench Verified no longer measures frontier coding capabilities

https://openai.com/index/why-we-no-longer-evaluate-swe-bench-verified/
319•kmdupree•22h ago•170 comments

Box to save memory in Rust

https://dystroy.org/blog/box-to-save-memory/
143•emschwartz•3d ago•40 comments

When the cheap one is the cool one

https://arun.is/blog/cheap-cool/
141•ddrmaxgt37•1d ago•76 comments

Sawe becomes first athlete to run a sub-two-hour marathon in a competitive race

https://www.bbc.com/sport/athletics/articles/crm1m7e0zwzo
422•berkeleyjunk•15h ago•277 comments

Electrostatics and High Voltage Links

http://amasci.com/static/electrostatic1.html
9•ludicrousdispla•3d ago•1 comment

FreeBSD Device Drivers Book

https://github.com/ebrandi/FDD-book
96•myth_drannon•13h ago•18 comments

Mystery CPUID Bit

http://www.os2museum.com/wp/mystery-cpuid-bit/
21•userbinator•2d ago•2 comments

Quirks of Human Anatomy

https://www.sdbonline.org/sites/fly/lewheldquirk/figlegq6.htm
140•gurjeet•2d ago•79 comments

EvanFlow – A TDD driven feedback loop for Claude Code

https://github.com/evanklem/evanflow
78•evanklem2004•10h ago•39 comments

Magic: The Gathering took me from N2 to Japanese fluency

https://www.tokyodev.com/articles/how-magic-the-gathering-took-me-from-n2-to-japanese-fluency
144•pwim•3d ago•66 comments

Chernobyl wildlife forty years on

https://www.bbc.com/future/article/20260424-chernobyl-wildlife-forty-years-on
120•reconnecting•16h ago•67 comments

An AI agent deleted our production database. The agent's confession is below

https://twitter.com/lifeof_jer/status/2048103471019434248
735•jeremyccrane•19h ago•871 comments

France's Mistral Built a $14B AI Empire by Not Being American

https://www.forbes.com/sites/iainmartin/2026/04/16/how-frances-mistral-built-a-14-billion-ai-empi...
24•rzk•1h ago•3 comments

Running Bare-Metal Rust Alongside ESP-IDF on the ESP32-S3's Second Core

https://tingouw.com/blog/embedded/esp32/run_rust_on_app_core
81•MrBuddyCasino•3d ago•12 comments

Building an agentic image generator that improves itself

https://simulate.trybezel.com/research/image_agent
67•palashshah•11mo ago
Hey HN! We recently graduated from YC and have been building customer personas for large e-commerce companies. We've since expanded into the image generation space and have been researching how to automatically improve the quality of generated images.

Comments

average_r_user•11mo ago
Quite interesting. Do you have any documentation of your platform and its capabilities? Your landing page is quite synthetic
palashshah•11mo ago
hey! we're working with an initial set of customers, and plan to launch full capabilities soon. stay tuned :)
ramesh31•11mo ago
This is a wonderful writeup of building a simple agentic system in general. What OP describes is more or less the bare minimum you should be doing at this point to get good (consistent) results from an LLM; single-shot prompting is a thing of the past.
palashshah•11mo ago
appreciate the compliment! yep, it's definitely necessary and is the bare minimum for building image generation systems in production.
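
For readers new to the pattern, here is a minimal sketch of such a generate → judge → refine loop, assuming the OpenAI Python SDK. The judge prompt, the use of o3 (mentioned downthread), and the iteration cap are illustrative choices, not the post's exact setup.

```python
# Hedged sketch of an agentic image loop: generate, critique, regenerate.
# All prompts and model choices here are placeholders.
import base64
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> bytes:
    """Generate one image and return raw PNG bytes."""
    result = client.images.generate(model="gpt-image-1", prompt=prompt)
    return base64.b64decode(result.data[0].b64_json)

def judge(image_png: bytes, brief: str) -> str:
    """Ask a second model to critique the image against the brief."""
    data_url = "data:image/png;base64," + base64.b64encode(image_png).decode()
    response = client.chat.completions.create(
        model="o3",  # judge model mentioned in the thread; swap for a cheaper one
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Critique this image against the brief: {brief}. "
                         "Reply with only the word DONE if it satisfies the brief."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    )
    return response.choices[0].message.content

brief = "A banner ad for a hiking-boot brand, logo top-left"
prompt = brief
image = generate(prompt)
for _ in range(5):  # cap iterations to bound cost
    critique = judge(image, brief)
    if "DONE" in critique:
        break
    prompt = f"{brief}\nFix these issues from the last attempt: {critique}"
    image = generate(prompt)
```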
shmoogy•11mo ago
I'm surprised you landed on using o3 as the judge; we found it way too expensive. I use an LLM as a judge for generating color variations of products, and I'm definitely hoping for some improvements: it can be brutal to get non-hallucinated features along with proper final rendering.
omneity•11mo ago
Have you tried open weights vision models such as Qwen VL, MiniCPM, PaliGemma...?

I'm also curious how usable simpler vision models such as Florence are, in case you've explored this direction.

palashshah•11mo ago
we're currently in the process of doing this. i think something that could potentially work is to iterate on the initial image composition/structure using cheaper models, and then upscale at the end. this way you save on iteration cost but still end up with a higher-resolution image.
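
A sketch of that "iterate cheap, render high at the end" idea, reusing `client`, `base64`, and `judge()` from the sketch above. Using gpt-image-1's quality parameter as the stand-in for "cheaper models" is my substitution; the comment doesn't name one.

```python
# Hypothetical: draft cheaply in a loop, pay for quality once at the end.
def refine_then_render(brief: str, max_iters: int = 5) -> bytes:
    prompt = brief
    for _ in range(max_iters):
        draft = client.images.generate(
            model="gpt-image-1", prompt=prompt, quality="low"  # cheap draft
        )
        critique = judge(base64.b64decode(draft.data[0].b64_json), brief)
        if "DONE" in critique:
            break
        prompt = f"{brief}\nFix these issues: {critique}"
    # one high-quality render of the final composition
    final = client.images.generate(model="gpt-image-1", prompt=prompt, quality="high")
    return base64.b64decode(final.data[0].b64_json)
```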
shmoogy•11mo ago
I actually haven't, but Nova from Amazon was surprisingly good at things like bounding boxes compared to some others. You kind of have to test and measure so many different aspects to find what's best at specific tasks. Thanks for the idea.
elif•11mo ago
This is great and provides a good starting point for any similar efforts.

However, I think the temptation to lean on AI for every task is perhaps a little naive, if not lazy.

For mask generation, there is really not much reason to use AI. In this example, simple stochastic blob detection, a trivial function you could get from OpenCV or ask a college sophomore to write, would generate much better-quality masks.
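
For what it's worth, a non-AI mask along the lines this comment suggests is only a few lines of OpenCV. This sketch thresholds the image and keeps the largest blob; the threshold strategy and the single-blob assumption are mine, not the commenter's.

```python
# Build a binary mask without any model: threshold, then keep the largest blob.
import cv2
import numpy as np

def blob_mask(image_path: str) -> np.ndarray:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu picks a global threshold automatically; tune for real inputs
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    return mask
```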

palashshah•11mo ago
totally agreed here. i think my primary goal with the mask generation was to test out how effective openai's capabilities were.

we're currently working on pipelines that limit the involvement of AI to just the tasks that need it. for example, when generating an ad there's usually a logo, some banner text, and a background image.

we can use gpt-image-1 to generate the background image, another LLM to identify the coordinates where we place the logo, and then just composite the logo onto the image. this is just one example!
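
The non-generative half of that pipeline is plain image compositing. A minimal sketch with Pillow, assuming the layout model has already returned (x, y) coordinates; the file names are placeholders.

```python
# Paste a logo onto a generated background at LLM-proposed coordinates.
from PIL import Image

def add_logo(background_path: str, logo_path: str, x: int, y: int) -> Image.Image:
    background = Image.open(background_path).convert("RGBA")
    logo = Image.open(logo_path).convert("RGBA")
    # use the logo's alpha channel as the paste mask so transparency is kept
    background.paste(logo, (x, y), logo)
    return background

# e.g. with coordinates returned by the layout model
ad = add_logo("background.png", "logo.png", x=40, y=40)
ad.save("ad.png")
```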

jackphilson•11mo ago
Why do you agree? I think we should outsource as much as we can to abstraction. We've been doing it forever.
dandelany•11mo ago
"Simple stochastic blob detection" is an abstraction. You write (or import) a function where the the gnarly logic lives and call `detectBlobs()`. "Use an abstraction" doesn't mean you should use the same abstraction for every task, you should use the right tool for the job.
mentalgear•11mo ago
Yet another example of "the unreasonable effectiveness of LLMs in a loop". With time, the tasks given to the loop become bigger and more complex, until we find ourselves "outlooped", at least job-wise.
ramoz•11mo ago
Nice retrospective, but I guess this process is no longer needed as models get better, especially as they start enabling features like consistent subjects. It seems like a lot of overhead to correct text for inspirational images, but I can imagine you need to always present some form of _quality_ to your clients.

I feel like ControlNets and some minimal Photoshop work would've been better.

palashshah•11mo ago
totally. it got to the point where most of the text generated in our images was incorrect, so it wasn't a great look to show that to our clients.

we're actually working on some form of what you described, where we take images generated by LLMs and add consistent logos discretely rather than generatively.

abshkbh•11mo ago
Palash this is a great post, I learnt a lot as an image gen noob! Keep writing more :)
palashshah•11mo ago
this is incredible to hear! i plan to keep writing on a weekly basis, and will be posting them on twitter.
t_mann•11mo ago
I was kind of hoping this would be in the 'Dreambooth mold' of finetuning open-weights models. I used that with some success ~2 years ago; does anyone know what improvements there have been in that direction since Dreambooth?
zahlman•11mo ago
It's frankly amazing to me that "ask another LLM to evaluate the image" actually produces useful feedback that results in actual improvement from the first LLM.

But then, I guess it's not a much different idea from the earlier use of GANs, or from telling LLMs to "stop hallucinating", etc.

palashshah•11mo ago
totally. the way i think about it (purely based on intuition) is that asking an LLM to do understanding + image generation is too complex for it to be effective. if we separate out the tasks into discrete steps, the evaluation becomes better, and the generation simply becomes instruction following.
jacob019•11mo ago
This is all edited with gpt-image-1? The revised images are amazing. Were example logos provided, or is it just working off its knowledge of a well-known brand?