
Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•4m ago•0 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•4m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•9m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•13m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•14m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•16m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•17m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•20m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•31m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•37m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•41m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•50m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•57m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

FLUX.1 Kontext [Dev] – Open Weights for Image Editing

https://bfl.ai/announcements/flux-1-kontext-dev
137•minimaxir•7mo ago

Comments

minimaxir•7mo ago
The new non-commercial license is a bit of a doozy: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/...
samtheprogram•7mo ago
If I’m understanding this correctly, you can’t run this in a commercial setting, even if you’re not creating a derivative but simply generating outputs?
smerrill25•7mo ago
I believe you can buy licensing? But def not the same as 'Open Weights..'
StevenWaterman•7mo ago
"Weights available" perhaps
stefan_•7mo ago
The same people who claim that using all of humanity's creations is fair use want you to pay for a bunch of MatMul inputs that become unrecognizable to anyone once you quantize them yourself.
cchance•7mo ago
Stupid question: what's to stop someone from quantizing it, shit, even just barely fine-tuning it for one step, and calling it something different? No one's actually checking WTF these models are based on when they're released, especially for the source models, and especially if the release isn't around the same time as the base. I'm 99% sure someone could fine-tune SD3.5 a bit and release it today as Frizz 1.0, and people would just take it as a new model using the same layer structure as SD3.5 lol
elpocko•7mo ago
Not impossible, but you'd have to do a bit more than that. Most people are ignorant, but not all of them. An experienced user can tell what model family was used from a bunch of generated images. Also, no one would believe a nobody who just showed up claiming to have trained a brand new diffusion model.
SV_BubbleTime•7mo ago
I forget which, but HiDream maybe was called out for this when it happened to generate basically the same dude in front of the same archway when compared against Flux.
liuliu•7mo ago
HiDream is a separate architecture. OTOH, it might be finetuned on FLUX generated data, we will never know.
doctorpangloss•7mo ago
HiDream is trained on AI generated outputs.
SV_BubbleTime•7mo ago
Yea, flux ones.
liuliu•7mo ago
There is a simple method to detect this: take the model claimed to be trained from scratch, take the model you suspect is the original, and generate a new model = claimed_model * 0.5 + suspected_model * 0.5.

If the claimed_model really was trained from scratch, the new model will have zero capability (it will basically generate gibberish words or noise). If it is a derivative of the suspected model, it will do something sensible.

It is a bit more interesting for diffusion models because you can fine-tune to a different objective, making this investigation harder to do, but not impossible.
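To make that concrete, here is a rough sketch of the interpolation test, assuming both checkpoints are single safetensors files with matching tensor names and shapes (the file paths are placeholders):

    # Sketch of the 50/50 weight-interpolation test described above.
    # Assumes both checkpoints share an architecture and tensor names;
    # file paths are hypothetical.
    from safetensors.torch import load_file, save_file

    claimed = load_file("claimed_model.safetensors")
    suspected = load_file("suspected_original.safetensors")

    merged = {}
    for name, w in claimed.items():
        if name in suspected and suspected[name].shape == w.shape:
            # Average in float32, then cast back to the original dtype.
            merged[name] = ((w.float() + suspected[name].float()) / 2).to(w.dtype)

    save_file(merged, "merged_model.safetensors")
    # Load merged_model.safetensors in your usual inference stack: coherent
    # outputs suggest a derivative; pure noise suggests independent training.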

doctorpangloss•7mo ago
FLUX watermarks its outputs.

Additionally, certain prompts will produce nonsensical but specific outputs known only to BFL.

BoredPositron•7mo ago
There is no watermarking in Flux. The only artifacts that remain are VAE artifacts. The VAE is Apache-licensed and used by many models now, so you can't identify the specific model from them.
doctorpangloss•7mo ago
I guess you’re in for a surprise if you make a giant application that uses FLUX dev!
BoredPositron•7mo ago
lmao
whywhywhywhy•7mo ago
The double standard is frankly disgusting.

I'm actually all for open training, but I think it's only fair that you treat the model the way you treated the life's work of others.

ronsor•7mo ago
Quite frankly, I still believe that these model licenses are dubiously enforceable at best, and I'm skeptical that models are copyrightable at all.
Hizonner•7mo ago
License for what? They don't have a copyright except maybe in the easily reimplemented Python code.

Model weights are not copyrightable creative works, no matter how much various companies wish they were.

At least they're not copyrightable until either legislatures extend the list of what's copyrightable, or courts have definitively shown their willingness to reinterpret the words in the existing definitions far outside of their established legal meanings, their established meanings in common speech, and/or any sane analogy to those established meanings.

Yes, I am aware that collections and databases are copyrightable. Models don't have the elements required for a copyrightable collection or database. I'm also aware that software is copyrightable. Models don't have the elements required for copyrightable software. They just flat out aren't works of authorship in any way. How much effort goes into creating them is irrelevant; that's not part of what defines a copyrightable work.

kristopolous•7mo ago
I was at a hackathon with this thing last weekend in SF at bfl. It's a pretty good system.
HanClinto•7mo ago
What sorts of things were built with it?
kristopolous•7mo ago
I think this should work: https://docs.google.com/spreadsheets/d/1cxh9oA1ZHkzGRMKutVNb...

I was at the top of the list ... pitched it poorly. That night I made a party game to practice: https://pitchanary.com/

The rules might need some work.

HanClinto•7mo ago
Wow, this is a seriously good turnout for the hackathon. Thank you for posting this list, it's fun to look through these!
kristopolous•7mo ago
It made me realize that the more I believe in the quality of what I'm producing, the more I try to let the product speak for itself, the less I explain it, and the poorer I do.

It's no stretch to say that in the hackathons I won, all the projects were janky, and in the hackathons I lost, all the products worked well and did exactly what I said they would.

whatevsmate•7mo ago
Neat, I plan to check this out.

I really want an AI to jam with on a canvas, rather than just having it generate the final result.

I have been hoping someone would pick up on the time series forecasting innovations in the LLM space, combine them with data from e.g. the Google quick draw dataset, and turn that into a real-time “painting partner” experience, kind of like chatting with an LLM through brush strokes.

vunderba•7mo ago
Using the Kontext models on fal.ai shows you a nice slider of the before-and-after edits and has a button that lets you set the edited image as the new source, so you can continue to make changes.

Now that BFL has released a dev model, I'd love to see a Kontext plugin for Krita, given that it already has one for Stable Diffusion!

https://github.com/Acly/krita-ai-diffusion
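For anyone who wants that set-as-new-source loop locally now that the weights are out, here's a minimal sketch assuming the Hugging Face diffusers integration of Kontext dev (the FluxKontextPipeline class, the guidance value, and the prompts/paths below are assumptions, not BFL's official tooling):

    # Minimal sketch: iterative editing with FLUX.1 Kontext [dev] via diffusers,
    # feeding each edited image back in as the next source image.
    # Assumes the diffusers FluxKontextPipeline and a GPU with ~24 GB of VRAM.
    import torch
    from diffusers import FluxKontextPipeline
    from diffusers.utils import load_image

    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = load_image("input.png")  # starting image (placeholder path)
    edits = [
        "make it nighttime",
        "add falling snow",
        "turn the scene into a watercolor painting",
    ]

    for step, prompt in enumerate(edits):
        # The previous output becomes the next source, like the fal.ai button.
        image = pipe(image=image, prompt=prompt, guidance_scale=2.5).images[0]
        image.save(f"edit_{step}.png")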

dragonwriter•7mo ago
The Krita plugin is a bridge to ComfyUI which can already run Flux and presumably will have native support for Kontext (dev) within a week or so, and the plugin already has limited support for using Flux, so Kontext in the existing plugin (rather than requiring a new one) seems a fairly reasonable expectation.
dragonwriter•7mo ago
> ComfyUI which can already run Flux and presumably will have native support for Kontext (dev) within a week or so

This was pessimistic: native support landed today, with a workflow and a pointer to an alternate FP8 model download for people who can't run the full FP16 checkpoint.

https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux...

rushingcreek•7mo ago
This is awesome, and kudos to BFL for releasing the weights. The financial sustainability of open-source is hard to get right, and giving academics this model for free while charging a reasonable licensing fee for startups is something I think makes sense if it allows BFL and others to continue releasing open-weight models.
doctorpangloss•7mo ago
Would it be financially sustainable if BFL had to pay for express permission for all the image and derived-from-video content it uses? (No)
rushingcreek•7mo ago
I think this is a separate issue. No model provider currently obtains express permission for the content they train on. But some model providers, like BFL, can choose to give back to the open-source/weights community even when they don't have to. I think this outcome is strictly better than them choosing not to give back, which they totally could have done.
vunderba•7mo ago
Here's hoping the distilled [Dev] model can hold up reasonably well against the larger Pro/Max models, which in a lot of ways can completely replace the relatively old-school inpainting techniques of Stable Diffusion.

Some before/after experiments with editing images using Kontext:

https://specularrealms.com/ai-transcripts/experiments-with-f...

thetoon•7mo ago
How much VRAM is this supposed to work with?
SV_BubbleTime•7mo ago
Today… about 18-20GB.

Tomorrow… like 4GB if you have an hour.

dragonwriter•7mo ago
> Today… about 18-20GB.

There's an FP8 version that's the default for the ComfyUI template in the release that just came out with Kontext support; I've seen reports of it running in 12GB or less, and I'm running it at this moment in 16GB.
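As a back-of-the-envelope sanity check on those numbers, assuming the roughly 12B-parameter Kontext transformer (text encoders, VAE, activations, and CUDA overhead come on top of this):

    # Rough VRAM estimate for the transformer weights alone of a ~12B-parameter
    # model at different precisions; real usage is higher.
    params = 12e9

    for name, bytes_per_param in [("fp16/bf16", 2), ("fp8", 1), ("4-bit", 0.5)]:
        gib = params * bytes_per_param / 2**30
        print(f"{name:>9}: ~{gib:.1f} GiB for weights alone")

    # fp16/bf16: ~22.4 GiB, fp8: ~11.2 GiB, 4-bit: ~5.6 GiB -- roughly in line
    # with the 18-20GB, 12GB-or-less, and "like 4GB" figures above.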

treesciencebot•7mo ago
One interesting feature that open weights enable is adding new capabilities (tasks) to these editing models. They generalize quite well from few samples (around 30). We talk about it here: https://blog.fal.ai/announcing-flux-1-kontext-dev-inference-...
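For a sense of what adding a task as a small fine-tune can look like, here is a minimal sketch (not fal.ai's actual recipe) that attaches LoRA adapters to the Kontext transformer; it assumes the diffusers FluxTransformer2DModel class, the peft library, and diffusers-style attention module names:

    # Sketch: LoRA adapters on the FLUX.1 Kontext [dev] transformer for a small
    # custom editing task (~30 image pairs). Target module names are assumptions
    # based on diffusers' attention layers; the training loop itself is omitted.
    import torch
    from diffusers import FluxTransformer2DModel
    from peft import LoraConfig, get_peft_model

    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev",
        subfolder="transformer",
        torch_dtype=torch.bfloat16,
    )

    lora_config = LoraConfig(
        r=16,
        lora_alpha=16,
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
    transformer = get_peft_model(transformer, lora_config)
    transformer.print_trainable_parameters()  # only the adapters are trainable

    # ...flow-matching training loop over the ~30 task examples goes here...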
qingcharles•7mo ago
Absolutely. This is the version of Kontext that everyone has been waiting for. It's far more useful now. This is the first of the new generation of imagegens that allows training. Can't do that with Gemini, GPT, MJ etc.
oTsanony•7mo ago
Yo guys, I think I might’ve found a chill and straightforward way to openly generate NSFW stuff using flux1-context on ComfyUI.
popalchemist•7mo ago
License is a major bummer.