frontpage.

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
233•theblazehen•2d ago•68 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
694•klaussilveira•15h ago•206 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
6•AlexeyBrin•1h ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
962•xnx•20h ago•555 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
130•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
67•videotopia•4d ago•6 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
54•jesperordrup•5h ago•24 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
36•kaonwarb•3d ago•27 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
10•matt_d•3d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
233•dmpetrov•16h ago•124 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
32•speckx•3d ago•21 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
335•vecti•17h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
502•todsacerdoti•23h ago•244 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
386•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
300•eljojo•18h ago•186 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•185 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
10•__natty__•3h ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
425•lstoll•21h ago•282 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
68•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
19•1vuio0pswjnm7•1h ago•5 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
264•i5heu•18h ago•216 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•28 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1076•cdrnsf•1d ago•460 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
39•gmays•10h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
298•surprisetalk•3d ago•44 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
154•vmatsiiako•20h ago•72 comments

FLUX.1-Krea and the Rise of Opinionated Models

https://www.dbreunig.com/2025/08/04/the-rise-of-opinionated-models.html
88•dbreunig•6mo ago

Comments

TheSilva•6mo ago
All but the last example look better (to me) on Krea than on ChatGPT-4.1.

The problem with AI images, in my opinion, is not the generated image (that can be better or worse) but the prompt and instructions given to the AI and their "defaults".

So many blog posts and social media updates have that horrible (again, to me) overly plastic feel and look, like a cartoon that has been burned in. Just like "needs more JPEG", but "needs more AI-vibe".

gchadwick•6mo ago
I'd argue the last one looks better as well, at least if you're considering what looks more 'real'. The ChatGPT one looks like it could have been a shot from a film; the Krea one looks like a photo someone took on their phone of a person heading into a car park on their way back from a party, dressed as a superhero (which I think far better fits the vibe of the original image).
TheSilva•6mo ago
My problem with the last one is that the person is not walking directly toward the door, which gives it an unrealistic vibe that the ChatGPT one does not have.
horsawlarway•6mo ago
Sure, it looks like he's walking toward the control panel on the right of the door.

Personally, I think it looks considerably better than the GPT image.

vunderba•6mo ago
Yeah, I see that a lot. Blog usage of AI pics seems to fall into two camps:

1. The image just seems to be completely unrelated to the actual content of the article

2. The image looks like it came out of SD 1.5 with smeared text, blur, etc.

resiros•6mo ago
I look forward to the day someone trains a model that can do good writing: no em dashes, no "it's not X, but Y", none of the AI slop.
astrange•6mo ago
You want a base model like text-davinci-001. Instruct models have most of their creativity destroyed.
Gracana•6mo ago
How do you use the base model?
astrange•6mo ago
OpenAI Playground still has it. Otherwise go out and find one.
Gracana•5mo ago
I mean in terms of prompting. What methods do you employ to get useful results out of a model that is not tuned for a particular form of response?
Gracana•5mo ago
Seems simple enough: all a base model does is continuation, but you can do few-shot prompting to establish a pattern if you want. I will have to give it a try.
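
A minimal sketch of that few-shot continuation approach, assuming the legacy OpenAI Completions endpoint and a base (non-instruct) model that is still served (davinci-002 here); the prompt and parameters are illustrative:

    # A base model only continues text, so we establish the pattern
    # ourselves with a couple of examples and leave the last slot open.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    few_shot = (
        "Q: What is the capital of France?\nA: Paris\n\n"
        "Q: What is the capital of Japan?\nA: Tokyo\n\n"
        "Q: What is the capital of Canada?\nA:"
    )

    resp = client.completions.create(
        model="davinci-002",   # base model (assumed available; text-davinci-001 was retired)
        prompt=few_shot,
        max_tokens=20,
        temperature=0.7,
        stop=["\n\n"],         # stop when the pattern block ends
    )
    print(resp.choices[0].text.strip())
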
1gn15•6mo ago
Try one of the fine-tunes from https://allura.moe/. Or use an autocomplete model. Mistral and Qwen have them.
MintsJohn•6mo ago
This is what finetuning has been all about since Stable Diffusion 1.5, and especially SDXL. It was even something the StabilityAI base models excelled at in the open-weights category. (Midjourney has always been the champion, but it's proprietary.)

Sadly, with SAI going effectively bankrupt, things changed: their rushed 3.0 model was broken beyond repair, and the later 3.5 was just unfinished or something (the API version is remarkably better), with gens full of errors and artifacts even though the good ones looked great. It turned out hard to finetune as well.

In the meantime Flux got released, but that model can be fried (as in, one concept trained in) yet not really finetuned (this Krea Flux is not based on the open-weights Flux). Add to that that as models got bigger, training/finetuning now costs an arm and a leg. So here we are: a year after Flux got released, a good finetune is celebrated as the next new thing :)

vunderba•6mo ago
Agreed. From the article:

> Model builders have been mostly focused on correctness, not aesthetics. Researchers have been overly focused on the extra fingers problem.

While that might be true for the foundational models, the author seems to be neglecting the tens of thousands of custom LoRAs available to customize the look of an image.

> Users fight the “AI Look” with heavy prompting and even fine-tuning

IMHO it is significantly easier to fix an aesthetic issue than an adherence issue. You can take a poor-quality image, use ESRGAN upscalers, run img2img using it as a ControlNet input, push it through a different model, add LoRAs, etc. (see the sketch below).

I have done some nominal tests with Krea, but mostly around adherence. I'd be curious to know if they've reduced the omnipresent bokeh / shallow depth of field, given that it is Flux-based.
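
For illustration, a minimal diffusers-style sketch of that repair loop: img2img through a different model with a style LoRA applied (the ESRGAN and ControlNet steps are omitted). The model ID, LoRA path, and file names are placeholders:

    import torch
    from diffusers import AutoPipelineForImage2Image
    from diffusers.utils import load_image

    # Push a rough generation through a different model via img2img.
    pipe = AutoPipelineForImage2Image.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights("path/to/style_lora.safetensors")  # hypothetical LoRA

    init = load_image("rough_generation.png")  # the poor-quality source image

    out = pipe(
        prompt="candid 35mm photo, natural light, film grain",
        image=init,
        strength=0.45,       # low strength keeps composition, reworks the surface
        guidance_scale=6.0,
    ).images[0]
    out.save("refined.png")
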

dragonwriter•6mo ago
> Model builders have been mostly focused on correctness, not aesthetics. Researchers have been overly focused on the extra fingers problem.

> While that might be true for the foundational models

It's possibly true [0] of the models from the big public general AI vendors (OpenAI, Google); it's definitely not true of MJ. If MJ has an aesthetic bias toward what the article describes as “the AI look”, it is largely because that was a popular, actively sought and prompted-for look in early AI image gen (to avoid the flatness bias of early models), and MJ leaned very hard into biasing toward what was popular aesthetically in that and other areas as it developed. Heck, lots of SD finetunes actively sought to reproduce MJ aesthetics for a while.

[0] but I doubt it, and I think they have also been actively targeting aesthetics as well as correctness. The post even hints at part of how that reinforced the “AI look”: the focus on aesthetics meant more reliance on the LAION Aesthetics dataset to tune the model's understanding of what looked good, transferring that dataset's biases into models that were trying to focus on aesthetics.

vunderba•6mo ago
Definitely. It's been a while since I used Midjourney, but I imagine that style (and sheer speed) are probably the last remaining use cases for MJ today.
dvrp•6mo ago
It is not just a fine-tune.
joshdavham•6mo ago
> Researchers have been overly focused on the extra fingers problem

A funny consequence of this is that now it’s really hard to get models to intentionally generate disfigured hands (six fingers, missing middle finger).

washadjeffmad•6mo ago
A casualty of how underbaked data labelling and training are/were. The blind spots are glaring when you're looking for them, but the decreased overhead of training LoRAs now means we can locally supplement a good base model on commodity hardware in a matter of hours.

Also, there's a lot of "samehand" and hand hiding in BFL and other models. Part of the reason I don't use any MaaS is how hard they focus on manufacturing superficial impressions over improving fundamental understanding and direction following. Kontext is a nice deviation, but it was already achievable through captioning and model merges.

jrm4•6mo ago
So, question -- does the author know that this post is merely about "what is widely known" vs. "what is actually possible"?

Which is to say: if one is in the business or activity of "making AI images go a certain way", a quick perusal of e.g. Civitai turns up about a million solutions to the "problem" of "all the AI art looks the same".

dbreunig•6mo ago
I’m aware of LoRA, Civitai, etc. I don’t think they are “widely known” beyond AI imagery enthusiasts.

Krea wrote a great post, trained the opinions in during post-training (not via a LoRA), and I've been noticing larger labs doing similar things without discussing it (the default ChatGPT comic-strip style is one example). So I figured I'd write it up for a more general audience and ask whether this is the direction we'll go for qualitative tasks beyond imagery.

Plus, fine-tuning is called out in the post.

zamadatix•6mo ago
I don't think there is such a thing as a general audience for AI imagery discussion yet, only enthusiasts. The closest thing might be the subset of folks who saw ChatGPT can make an anime version of their photo and tried it out, or the large number of folks who have heard the artists' pushback about the tools in general but haven't actually used them. They have no clue about any of the nuances discussed in the article, though.
petralithic•6mo ago
AI imagery users are all enthusiasts; there aren't yet casual users in a "wide" general capacity.
pwillia7•6mo ago
Wan 2.2 is a video model people have been using to do text-to-image recently, and I think it solves this problem way better than Krea does in the base model. -- https://www.reddit.com/r/comfyui/comments/1mf521w/wan_22_tex...

As others have said, you can fine-tune any model with a pretty small dataset of images and captions and make your generations not look like 'AI' or all look the same (a sketch of such a dataset layout follows below).

Here's one I made a while back, trained on Sony HVS HD video demos from the 80s/90s -- https://civitai.com/models/896279/1990s-analog-hd-or-4k-sony...
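
As a sketch of how small such a dataset can be, here's the Hugging Face "imagefolder" layout that the diffusers LoRA example scripts consume: a directory of images plus a metadata.jsonl pairing each file with its caption. File names and captions below are invented:

    import json
    from pathlib import Path

    data_dir = Path("train/analog_hd_style")
    captions = {
        "street_01.png": "1990s analog HD video still, soft chroma, city street",
        "market_02.png": "1990s analog HD video still, interlaced look, market stall",
        # a few dozen captioned images is often enough for a style LoRA
    }

    # Write the metadata file that pairs each image with its caption.
    with open(data_dir / "metadata.jsonl", "w") as f:
        for file_name, text in captions.items():
            f.write(json.dumps({"file_name": file_name, "text": text}) + "\n")

    # datasets.load_dataset("imagefolder", data_dir=str(data_dir))
    # then yields (image, text) pairs for the training script.
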

mh-•6mo ago
o/t: your astrophotography LoRA is very cool, I came across it before. thanks for making it!

(for others: https://civitai.com/models/890536/nasa-astrophotography-or-f...)

pwillia7•6mo ago
Thanks!
dvrp•6mo ago
We've noticed that Wan 2.2 (available on Krea) + Krea 1 refinement yields _beautiful_ results. Check this from our designer, for instance: https://x.com/TitusTeatus/status/1952645026636554446

(Disclaimer: I am the Krea cofounder and this is based on a small sample size of results I've seen).

mh-•6mo ago
> prompts in alt

First pic (blonde woman with eyes closed) has alt text that begins:

> Extreme close-up portrait of a black man’s face with his eyes closed

copypasta mistake or bad prompt adherence? haha.

petralithic•6mo ago
I don't know, those all still look like AI, as in, too clean.
dragonwriter•6mo ago
So, the one thing I notice is that in every trio of original image, GPT-4.1 image, and Krea image where the author says GPT-4.1 exhibits the AI look and Krea avoids it (except the first, with the cat), comparing the original image to the Krea image shows that Krea retains all the described hallmarks of the AI look present in the GPT image, just toned down a little. (In the first, it lacks the obvious bokeh only because it avoids showing anything at a much different distance from the main subject, which is to that aesthetic issue what avoiding showing hands is to the correctness issue of bad hands.)
demarq•6mo ago
> retains all the described hallmarks of the AI look that are present in the GPT image, but just toned down a little bit

Not sure what you were expecting. That sounds like the model is avoiding what it was built to avoid?

This model is not new tech, just a change in bias.

It’s doing what it says on the can.

cirrus3•6mo ago
I did a lot of testing with Krea. The results were certainly very different from flux-dev: less "AI-like" in some ways, and the details were way better, but very soft, a bit washed out, and more AI-like in other ways.

I did a 50% mix of flux-dev-krea and flux-dev and it is my new favorite base model.
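
A 50/50 mix like that is just a per-tensor average of two same-architecture checkpoints; a minimal sketch, assuming both are safetensors files with matching keys and float weights (file names are placeholders):

    from safetensors.torch import load_file, save_file

    a = load_file("flux1-dev.safetensors")
    b = load_file("flux1-krea-dev.safetensors")

    merged = {}
    for key, ta in a.items():
        tb = b.get(key)
        if tb is not None and tb.shape == ta.shape:
            # interpolate in fp32, then store back in the original dtype
            merged[key] = (0.5 * ta.float() + 0.5 * tb.float()).to(ta.dtype)
        else:
            merged[key] = ta  # keep one side where the checkpoints differ

    save_file(merged, "flux-dev-krea-50.safetensors")
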

dvrp•6mo ago
Hi there! Thank you for the glowing review! I'm the cofounder of Krea and I'm glad you liked Sangwu's blog post. The team is reading it.

You'll probably get a lot of replies about how this model is just a fine-tune, plus a potential disregard for LoRAs, as if we didn't know about them. The reality is that we have thousands of them running on our platform. Sadly, there's only so much a LoRA or a fine-tune can do before you run into issues that can't be solved until you apply more advanced techniques, such as curated post-training runs (including reinforcement-learning-based techniques like Diffusion-PPO [1]) or even large-scale pre-training.

-

[1]: https://diffusion-ppo.github.io

dang•6mo ago
Recent and related:

Releasing weights for FLUX.1 Krea - https://news.ycombinator.com/item?id=44745555 - July 2025 (107 comments)