frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
620•klaussilveira•12h ago•182 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
921•xnx•18h ago•547 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•23 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•26 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
9•kaonwarb•3d ago•5 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
38•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
218•isitcontent•12h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
209•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
319•vecti•14h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
368•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
357•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
476•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
272•eljojo•15h ago•159 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
401•lstoll•19h ago•271 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
84•quibono•4d ago•20 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
12•jesperordrup•2h ago•6 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
12•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
243•i5heu•15h ago•186 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•19 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
139•vmatsiiako•17h ago•62 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
279•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
130•SerCe•8h ago•116 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•10 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
175•limoce•3d ago•95 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
30•denysonique•9h ago•6 comments

Show HN: I built a playground to showcase what Flux Kontext is good at

https://fluxkontextlab.com
72•Zephyrion•7mo ago
Hi HN,

After spending some time with the new `flux kontext dev` model, I realized its most powerful capabilities aren't immediately obvious. Many people might miss its true potential by just scratching the surface.

I went deep and curated a collection of what I think are its most interesting use cases – things like targeted text removal, subtle photo restoration, and creative style transfers.

I felt that simply writing about them wasn't enough. The best way to understand the value is to see it and try it for yourself.

That's why I built FluxKontextLab (https://fluxkontextlab.com).

On the site, I've presented these curated examples with before-and-after comparisons. More importantly, there's an interactive playground right there, so you can immediately test these ideas or your own prompts on your own images.

My goal is to share what this model is capable of beyond the basics.

It's still an early project. I'd love for you to take a look and share your thoughts or any cool results you generate.

Comments

shekhar101•7mo ago
I tried a picture with instructions and it says "something went wrong". I would love to try and see how well it works for my use case.
Zephyrion•7mo ago
I'm so sorry you ran into that, and thank you for reporting it. This is exactly the kind of feedback I need at this early stage.

You're right, my backend logs show that most requests are succeeding, which means there must be an error happening somewhere between the front-end and the server that I'm not catching properly yet.

Based on this, implementing a more robust error logging system is now my top priority. I'll get on it right away so I can find and fix these issues for everyone. Thanks again for giving it a try.
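As a rough illustration of the kind of error capture being described, here is a minimal sketch, assuming a Python backend (the post doesn't say what the stack is); the decorator name and payload shape are hypothetical:

```python
import functools
import logging
import traceback

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("playground")

def capture_errors(handler):
    """Log the full traceback server-side, and return a structured
    error payload so the frontend can show more than a generic
    'something went wrong' message."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        try:
            return {"ok": True, "result": handler(*args, **kwargs)}
        except Exception as exc:
            logger.error("handler %s failed: %s\n%s",
                         handler.__name__, exc, traceback.format_exc())
            return {"ok": False, "error": type(exc).__name__,
                    "detail": str(exc)}
    return wrapper

@capture_errors
def run_edit(prompt, image_bytes):
    # Stand-in for the real inference call.
    if not image_bytes:
        raise ValueError("empty image upload")
    return {"prompt": prompt, "size": len(image_bytes)}
```

With this shape, `run_edit("remove the text", b"")` comes back as `{"ok": False, "error": "ValueError", ...}`, which the frontend can surface verbatim instead of swallowing.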

mg•7mo ago
Cool. Are you running the model on your own server?
Zephyrion•7mo ago
Thanks! Not yet. To get this launched quickly and validate the idea, I'm currently using a cloud API to handle the inference.

However, my plan is to eventually deploy the model on my own server. I'll be sure to document the entire process—from setup to optimization—and share it as a detailed guide on the site for anyone interested!
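For readers curious what "using a cloud API for inference" typically looks like, here is a hedged sketch with an entirely hypothetical endpoint and field names (the post doesn't name the provider, and real APIs will differ):

```python
import base64
import json
import urllib.request

# Hypothetical endpoint for illustration only; not a real provider URL.
ENDPOINT = "https://api.example.com/v1/image-edit"

def build_request(prompt: str, image_bytes: bytes,
                  api_key: str) -> urllib.request.Request:
    """Package an image-edit job the way most hosted inference APIs
    expect it: a JSON POST with a base64 image and a text instruction."""
    payload = {
        "prompt": prompt,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

def parse_response(body: bytes) -> bytes:
    """Decode the edited image from a (hypothetical) JSON response."""
    return base64.b64decode(json.loads(body)["image"])

# Actually sending it would be:
#   with urllib.request.urlopen(build_request(...)) as resp:
#       edited = parse_response(resp.read())
```

Self-hosting the model later mostly means swapping this HTTP round-trip for a local inference call; the payload shape can stay the same.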

vunderba•7mo ago
Kontext's ability to make InstructPix2Pix [1] level changes to isolated sections of an image without affecting the rest of the image is a game changer. Saves a ton of time without needing to go through the effort of masking/inpainting.

About a month ago I put together a quick before/after set of images that I used Kontext to edit. It even works on old grainy film footage.

https://specularrealms.com/ai-transcripts/experiments-with-f...

> My goal is to share what this model is capable of beyond the basics.

You might be interested to know that it looks like it has limited support for being able to upload/composite multiple images together.

https://fal.ai/models/fal-ai/flux-pro/kontext/max/multi

[1] https://github.com/timothybrooks/instruct-pix2pix

mpeg•7mo ago
I had a project for a big brand a couple of years ago where we experimented with genai and inpainting, and it was a huge hassle to get it working right: it required a big ComfyUI pipeline with masking, then inpainting, then relighting to make it all look natural, etc.

It's crazy how fast genai moves: now you can do all that with just Flux, and the end result looks extremely high quality.

regulalegier•6mo ago
This is a great demo! Kontext's precision with localized edits is impressive – especially how it handles grainy footage without artifacts.

Your multi-image compositing experiments reminded me of how we built https://flux-kontext.io/ to solve a similar problem: enabling real-time collaborative AI edits where multiple users can tweak different image sections simultaneously while seeing live previews. The context preservation feels almost like magic when you see it in action.

Would love to compare notes on your masking-free approach – we've found that combining InstructPix2Pix-style changes with layer-aware diffusion (like in your film example) reduces hallucination by ~40% in our tests. Any plans to open-source the training pipeline?

winterrx•7mo ago
Cool, but how are you paying for the inference if it's free?
Zephyrion•7mo ago
That's a great question. Right now, the inference costs are coming out of my own pocket. My main goal was to let people experience the model's potential without any barriers.

To keep the project sustainable in the long run, I'm exploring some options, like potentially offering a paid tier for heavy users or more advanced features. For now, I'm focused on improving the core experience and will do my best to keep costs low so it remains accessible to as many people as possible.

Timwi•7mo ago
How about a download button so I can run it locally and not cause you any costs?
merelysounds•7mo ago
I was able to try it with one image, I liked the results and wanted to try something more complex. However, I started getting error messages: "Processing Failed An unexpected error occurred. Please try again.", and retrying didn't help.
Zephyrion•7mo ago
Glad you liked the first result. And ugh, sorry you hit that error on the second go. You're right, that "unexpected error" message is pretty useless. Better error handling is at the top of my to-do list now.
roenxi•7mo ago
One of the issues with staggered releases is that they are easy to forget about. BFL have apparently made the weights for the Kontext Dev model available, but because it didn't happen when they announced Kontext, I'd completely forgotten about it.

These models look fantastic; we've finally got something solid in the public sphere that goes beyond Stable Diffusion-style word vomit for prompting. It was obviously coming sooner or later, but happily it seems to be here. It is unfortunate for the public that, as far as I can see, they didn't actually open the weights up, since they aren't free for commercial use.

dragonwriter•6mo ago
> These models look fantastic, we've finally got something solid in the public sphere that goes beyond stable diffusion style word vomit for prompting.

Stable Diffusion models later than 1.x do that: even, e.g., SDXL finetunes that are heavily trained on controlled-vocabulary tags for precision still support (and benefit from) natural-language prompting; and many of the newer “open” models (many of which are a better approximation of open than Flux) even use the same text encoder as Flux (and some use LLMs like Llama).

BFL is really good at promotion, though; it would be nice if open models with similar functionality, like OmniGen2, got a fraction of the attention non-open Kontext gets.

fazza999•7mo ago
Keeps saying "something went wrong" for a variety of images / requests. Can't say I'm hugely impressed. MidJourney does it flawlessly.
chrismorgan•7mo ago
> Restore and colorize old photos with AI precision

I’ve got to admit, I chuckled to myself at the absurdity of the phrase “AI precision”, given how badly these things are known to go off the rails. Sure, sure, things have improved a lot in the last few years, and Kontext’s limitations make such problems far less likely to occur, but still, permit me to be amused. :-)

… but then too, do compare https://fluxkontextlab.com/pages/home/showcase/2/1.jpg and https://fluxkontextlab.com/pages/home/showcase/2/0.webp closely: there are material differences. A few of the most notable ones: the picture is reframed, with a significant amount invented at the bottom (which has realism concerns that you can see when you actually examine it); fog effects have been reduced (perhaps implied by “restore … its clear texture”, which seems a weird instruction to me); and something’s gone wrong with the right wing of the pigeon at the bottom that’s facing the camera.

I think it would be nice to, in each case, align the two as well as possible (even the Product Display example) and present them in such a way that you can rigorously compare the beginning and end points, and see what modifications have been made, intended and unintended.
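One way to make that kind of rigorous comparison concrete is to brute-force a small translation alignment and then flag every pixel that still differs. This is a toy, stdlib-only sketch on grayscale pixel grids (lists of row lists, 0–255); a real tool would use PIL or OpenCV and would also need to handle the reframing and rescaling Kontext introduces:

```python
def best_offset(a, b, max_shift=4):
    """Find the integer (dy, dx) shift of b that best matches a, by
    brute force over small translations, minimizing mean absolute
    difference over the overlapping region."""
    h, w = len(a), len(a[0])
    best = (0, 0, float("inf"))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            total = count = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < len(b) and 0 <= xx < len(b[0]):
                        total += abs(a[y][x] - b[yy][xx])
                        count += 1
            mad = total / count if count else float("inf")
            if mad < best[2]:
                best = (dy, dx, mad)
    return best

def diff_map(a, b, dy, dx, threshold=16):
    """Mark every pixel whose aligned difference exceeds threshold;
    anything flagged is a modification, intended or unintended."""
    h, w = len(a), len(a[0])
    return [[1 if (0 <= y + dy < len(b) and 0 <= x + dx < len(b[0])
                   and abs(a[y][x] - b[y + dy][x + dx]) > threshold)
             else 0
             for x in range(w)] for y in range(h)]
```

Overlaying the resulting map on the "after" image would show exactly where the model touched the picture, which is the kind of presentation the showcase could add.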

bsenftner•7mo ago
So, are you like a multi-millionaire's kid or something? How are you paying for / able to afford giving away public access to this? This is written like it's an individual doing this alone; is that not the case?
cantoranpoirer•6mo ago
This is a fantastic showcase! The before/after comparisons really highlight Flux Kontext's precision – especially the text removal examples, which are cleaner than most inpainting tools I've tested.

Your playground reminds me of how we're using Flux Kontext for real-time collaborative editing (try dragging the 'context strength' slider while multiple users tweak prompts simultaneously – magic happens).

https://flux-kontext.io/

Would love to compare notes on the style transfer parameters you're using. The subtlety in your examples is exactly what most implementations miss!