frontpage.

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
2•DesoPK•5m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•7m ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
1•mfiguiere•12m ago•0 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
2•meszmate•15m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•16m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•32m ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•36m ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•41m ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
2•gmays•42m ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•43m ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•48m ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•51m ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•54m ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comment

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
2•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
4•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
4•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comment

Show HN: I built the first tool to configure VPSs without commands

https://the-ultimate-tool-for-configuring-vps.wiar8.com/
2•Wiar8•1h ago•3 comments

AI agents from 4 labs predicting the Super Bowl via prediction market

https://agoramarket.ai/
1•kevinswint•1h ago•1 comment

EU bans infinite scroll and autoplay in TikTok case

https://twitter.com/HennaVirkkunen/status/2019730270279356658
6•miohtama•1h ago•5 comments

Benchmarking how well LLMs can play FizzBuzz

https://huggingface.co/spaces/venkatasg/fizzbuzz-bench
1•_venkatasg•1h ago•1 comment

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
29•SerCe•1h ago•23 comments

Octave GTM MCP Server

https://docs.octavehq.com/mcp/overview
1•connor11528•1h ago•0 comments

Show HN: Portview – what's on your ports (diagnostic-first, single binary, Linux)

https://github.com/Mapika/portview
3•Mapika•1h ago•0 comments

FLUX.1-Krea and the Rise of Opinionated Models

https://www.dbreunig.com/2025/08/04/the-rise-of-opinionated-models.html
88•dbreunig•6mo ago

Comments

TheSilva•6mo ago
All but the last example look better (to me) on Krea than ChatGPT-4.1.

The problem with AI images, in my opinion, is not the generated image (that can be better or worse) but the prompt and instructions given to the AI and their "defaults".

So many blog posts and social media updates have that horrible (again, to me) feel and look of an overly plastic vibe, like a cartoon that has been burned... just like "needs more JPEG", but "needs more AI-vibe".

gchadwick•6mo ago
I'd argue the last one looks better as well, at least if you're considering what looks more 'real'. The ChatGPT one looks like it could have been a shot from a film; the Krea one looks like a photo someone took on their phone of a person heading into a car park on their way back from a party, dressed as a superhero (which I think far better fits the vibe of the original image).
TheSilva•6mo ago
My problem with the last one is that the person is not walking directly toward the door, which gives it an unrealistic vibe that the ChatGPT one does not have.
horsawlarway•6mo ago
Sure, it looks like he's walking toward the control panel on the right of the door.

Personally - I think it looks considerably better than the GPT image.

vunderba•6mo ago
Yeah, I see that a lot. Blog usage of AI pics seems to fall into two camps:

1. The image just seems to be completely unrelated to the actual content of the article

2. The image looks like it came out of SD 1.5 with smeared text, blur, etc.

resiros•6mo ago
I look forward to the day someone trains a model that can do good writing: no em-dashes, no "it's not X, but Y" constructions, and none of the rest of the AI slop.
astrange•6mo ago
You want a base model like text-davinci-001. Instruct models have most of their creativity destroyed.
Gracana•6mo ago
How do you use the base model?
astrange•6mo ago
OpenAI Playground still has it. Otherwise go out and find one.
Gracana•5mo ago
I mean in terms of prompting. What methods do you employ to get useful results out of a model that is not tuned for a particular form of response?
Gracana•5mo ago
Seems simple enough: all the base models do is continuation, but you can do few-shot prompting to establish a pattern if you want. I will have to give it a try.
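
For anyone curious, here is a minimal sketch of that few-shot pattern, assuming an OpenAI-style completions endpoint. The model name follows the text-davinci-001 suggestion upthread (it may no longer be served), and the prompt content is purely illustrative:

    # Minimal sketch: few-shot prompting a base (completion) model.
    # Assumes an OpenAI-style completions endpoint; text-davinci-001
    # is the model named upthread (availability may vary).
    from openai import OpenAI

    client = OpenAI()

    # A base model only continues text, so establish the pattern in-context:
    prompt = """Rewrite each sentence in a plain, direct style.

    Original: The solution leverages synergies across verticals.
    Rewrite: The product works in several markets.

    Original: Our paradigm-shifting platform empowers stakeholders.
    Rewrite: Our software helps users.

    Original: The model delivers best-in-class aesthetic outputs.
    Rewrite:"""

    resp = client.completions.create(
        model="text-davinci-001",  # base model, per the comment above
        prompt=prompt,
        max_tokens=30,
        stop=["\n"],  # one completed line is enough
    )
    print(resp.choices[0].text.strip())
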
1gn15•6mo ago
Try one of the fine-tunes from https://allura.moe/. Or use an autocomplete model. Mistral and Qwen have them.
MintsJohn•6mo ago
This is what fine-tuning has been all about since Stable Diffusion 1.5 and especially SDXL, and it's something the StabilityAI base models excelled at in the open-weights category (Midjourney has always been the champion, but it's proprietary).

Sadly, with SAI going effectively bankrupt, things changed: their rushed 3.0 model was broken beyond repair, and the later 3.5 was just unfinished or something (the API version is remarkably better), with gens full of errors and artifacts even though the good ones looked great. It turned out to be hard to fine-tune as well.

In the meantime Flux got released, but that model can be fried (as in, one concept trained in) yet not really fine-tuned (this Krea Flux is not based on the open-weights Flux). Add to that that, as models got bigger, training/fine-tuning now costs an arm and a leg. So here we are: a year after Flux got released, a good fine-tune is celebrated as the next new thing :)

vunderba•6mo ago
Agreed. From the article:

> Model builders have been mostly focused on correctness, not aesthetics. Researchers have been overly focused on the extra fingers problem.

While that might be true for the foundational models, the author seems to be neglecting the tens of thousands of custom LoRAs available to customize the look of an image.

> Users fight the “AI Look” with heavy prompting and even fine-tuning

IMHO it is significantly easier to fix an aesthetic issue than an adherence issue. You can take a poor-quality image, use ESRGAN upscalers, do img2img using it as a ControlNet input, run it through a different model, add LoRAs, etc.
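
As a rough illustration of that workflow (the plain img2img-plus-LoRA variant; the ControlNet route is similar in spirit), a sketch using Hugging Face diffusers, with placeholder model and LoRA paths:

    # Sketch: fixing aesthetics after the fact via img2img through a
    # different model plus a LoRA. Paths are placeholders, not picks.
    import torch
    from diffusers import StableDiffusionXLImg2ImgPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("path/to/aesthetic-lora")  # hypothetical LoRA

    init = load_image("poor_quality_gen.png")  # image whose look you dislike
    out = pipe(
        prompt="candid photo, natural lighting, film grain",
        image=init,
        strength=0.45,  # low strength keeps composition, changes the look
    ).images[0]
    out.save("restyled.png")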

I have done some nominal tests with Krea but mostly around adherence. I'd be curious to know if they've reduced the omnipresent bokeh / shallow depth of field given that it is Flux based.

dragonwriter•6mo ago
> Model builders have been mostly focused on correctness, not aesthetics. Researchers have been overly focused on the extra fingers problem.

> While that might be true for the foundational models

It's possibly true [0] of the models from the big public general AI vendors (OpenAI, Google); it's definitely not true of MJ. If MJ has an aesthetic bias toward what the article describes as "the AI look", that's largely because this was a popular, actively sought and prompted-for look in early AI image gen (to avoid the flatness bias of early models), and MJ leaned very hard into biasing toward whatever was popular aesthetically, in that and other areas, as it developed. (Heck, lots of SD fine-tunes actively sought to reproduce MJ aesthetics for a while.)

[0] But I doubt it, and I think they have also been actively targeting aesthetics as well as correctness. The post even hints at part of how that reinforced the "AI look": the focus on aesthetics meant more reliance on the LAION-Aesthetics dataset to tune the models' understanding of what looked good, transferring the biases of that dataset into models that were trying to focus on aesthetics.

vunderba•6mo ago
Definitely. It's been a while since I used midjourney, but I imagine that style (and sheer speed) are probably the last remaining use cases of MJ today.
dvrp•6mo ago
It is not just a fine-tune.
joshdavham•6mo ago
> Researchers have been overly focused on the extra fingers problem

A funny consequence of this is that now it’s really hard to get models to intentionally generate disfigured hands (six fingers, missing middle finger).

washadjeffmad•6mo ago
A casualty of how underbaked data labelling and training are/were. The blind spots are glaring when you're looking for them, but the decreased overhead of training a LoRA now means we can locally supplement a good base model on commodity hardware in a matter of hours.
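
For a sense of what that local setup looks like, here is a sketch of the adapter-attach step with diffusers + peft (the approach diffusers' own LoRA training scripts take); the target-module names follow diffusers' attention naming, and the data loading and training loop are omitted:

    # Sketch: attach small LoRA adapters to a UNet and train only those.
    # Setup step only; the actual training loop is omitted.
    import torch
    from diffusers import UNet2DConditionModel
    from peft import LoraConfig

    unet = UNet2DConditionModel.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet",
        torch_dtype=torch.float16,
    )
    unet.requires_grad_(False)  # freeze the base model

    lora = LoraConfig(
        r=16, lora_alpha=16,  # low-rank dim: small adapters, cheap to train
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention
    )
    unet.add_adapter(lora)  # diffusers' peft integration

    trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
    total = sum(p.numel() for p in unet.parameters())
    print(f"training {trainable / total:.2%} of the weights")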

Also, there's a lot of "samehand" and hand hiding in BFL and other models. Part of the reason I don't use any MaaS is how hard they were focusing on manufacturing superficial impressions over increasing fundamental understanding and direction following. Kontext is a nice deviation, but it was already achievable through captioning and model merges.

jrm4•6mo ago
So, a question: does the author know that this post is merely about "what is widely known" vs. "what is actually possible"?

Which is to say: if one is in the business or activity of "making AI images go a certain way", a quick perusal of e.g. Civitai turns up about a million solutions to the "problem" that "all the AI art looks the same".

dbreunig•6mo ago
I’m aware of LoRA, Civitai, etc. I don’t think they are “widely known” beyond AI imagery enthusiasts.

Krea wrote a great post, trained the opinions in during post-training (not via LoRA), and I've been noticing larger labs doing similar things without discussing it (the default ChatGPT comic strip is one example). So I figured I'd write it up for a more general audience and ask if this is the direction we'll go for qualitative tasks beyond imagery.

Plus, fine-tuning is called out in the post.

zamadatix•6mo ago
I don't think there is such a thing as a general audience for AI imagery discussion yet, only enthusiasts. The closest thing might be the subset of folks who saw that ChatGPT can make an anime version of their photo and tried it out, or the large number of folks who have heard the artists' pushback about the tools in general but haven't actually used them. They have no clue about any of the nuances discussed in the article, though.
petralithic•6mo ago
AI imagery users are all enthusiasts; there aren't yet casual users in a "wide", general capacity.
pwillia7•6mo ago
Wan 2.2 is a video model that people have recently been using for text-to-image, and I think it solves this problem way better than Krea does in the base model: https://www.reddit.com/r/comfyui/comments/1mf521w/wan_22_tex...

As others have said, you can fine-tune any model with a pretty small data set of images and captions and make your generations not look like 'AI' or all look the same.

Here's one I made a while back trained on Sony HVS HD video demos from the 80s/90s -- https://civitai.com/models/896279/1990s-analog-hd-or-4k-sony...

mh-•6mo ago
o/t: your astrophotography LoRA is very cool, I came across it before. thanks for making it!

(for others: https://civitai.com/models/890536/nasa-astrophotography-or-f...)

pwillia7•6mo ago
Thanks!
dvrp•6mo ago
We've noticed that Wan 2.2 (available on Krea) + Krea 1 refinement yields _beautiful_ results. Check this from our designer, for instance: https://x.com/TitusTeatus/status/1952645026636554446

(Disclaimer: I am the Krea cofounder and this is based on a small sample size of results I've seen).

mh-•6mo ago
> prompts in alt

First pic (blonde woman with eyes closed) has alt text that begins:

> Extreme close-up portrait of a black man’s face with his eyes closed

copypasta mistake or bad prompt adherence? haha.

petralithic•6mo ago
I don't know, those all still look like AI, as in, too clean.
dragonwriter•6mo ago
So, the one thing I notice is that in every trio of original image, GPT-4.1 image, and Krea image where the author says GPT-4.1 exhibits the AI look and Krea avoids it (except the first, with the cat), comparing the original image to the Krea image shows that Krea retains all the described hallmarks of the AI look present in the GPT image, just toned down a little bit. (In the first, it lacks the obvious bokeh because it avoids showing anything at a much different distance than the main subject, which is, for that aesthetic issue, what avoiding showing hands is for the correctness issue of bad hands.)
demarq•6mo ago
> retains all the described hallmarks of the AI look that are present in the GPT image, but just toned down a little bit

Not sure what you were expecting. That sounds like the model is avoiding what it was built to avoid?

This model is not new tech, just a change in bias.

It’s doing what it says on the can.

cirrus3•6mo ago
I did a lot of testing with Krea. The results were certainly very different from flux-dev: less "AI-like" in some ways, and the details were way better, but very soft and a bit washed out, and more AI-like in other ways.

I did a 50% mix of flux-dev-krea and flux-dev and it is my new favorite base model.
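
(For the curious: a 50% mix like that is conceptually just a linear interpolation of matching tensors between the two checkpoints. A naive sketch with placeholder file names; in practice this is usually done in a UI like ComfyUI:)

    # Sketch: naive 50/50 weight merge of two checkpoints.
    from safetensors.torch import load_file, save_file

    a = load_file("flux1-dev.safetensors")
    b = load_file("flux1-krea-dev.safetensors")

    # Interpolate the tensors present in both checkpoints.
    merged = {k: 0.5 * a[k] + 0.5 * b[k] for k in a.keys() & b.keys()}
    save_file(merged, "flux-dev-krea-50-50.safetensors")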

dvrp•6mo ago
Hi there! Thank you for the glowing review! I'm the cofounder of Krea and I'm glad you liked Sangwu's blog post. The team is reading it.

You'll probably get a lot of replies about how this model is just a fine-tune, and a potential disregard for LoRAs, as if we didn't know about them, while the reality is that we have thousands of them running on our platform. Sadly, there's simply only so much a LoRA and a fine-tune can do before you run into issues that can't be solved until you apply more advanced techniques such as curated post-training runs (including reinforcement learning-based techniques such as Diffusion-PPO [1]), or even large-scale pre-training.

-

[1]: https://diffusion-ppo.github.io

dang•6mo ago
Recent and related:

Releasing weights for FLUX.1 Krea - https://news.ycombinator.com/item?id=44745555 - July 2025 (107 comments)