
A Decade of Slug

https://terathon.com/blog/decade-slug.html
203•mwkaufma•2h ago•13 comments

Python 3.15's JIT is now back on track

https://fidget-spinner.github.io/posts/jit-on-track.html
111•guidoiaquinti•2h ago•24 comments

Microsoft's 'unhackable' Xbox One has been hacked by 'Bliss'

https://www.tomshardware.com/video-games/console-gaming/microsofts-unhackable-xbox-one-has-been-h...
404•crtasm•6h ago•165 comments

Get Shit Done: A Meta-Prompting, Context Engineering and Spec-Driven Dev System

https://github.com/gsd-build/get-shit-done
42•stefankuehnel•1h ago•19 comments

Kagi Small Web

https://kagi.com/smallweb/
639•trueduke•11h ago•180 comments

It Took Me 30 Years to Solve This VFX Problem – Green Screen Problem [video]

https://www.youtube.com/watch?v=3Ploi723hg4
73•yincrash•4d ago•23 comments

Toward automated verification of unreviewed AI-generated code

https://peterlavigne.com/writing/verifying-ai-generated-code
62•peterlavigne•1d ago•49 comments

Node.js needs a virtual file system

https://blog.platformatic.dev/why-nodejs-needs-a-virtual-file-system
168•voctor•6h ago•150 comments

'The Secret Agent': Exploring a Vibrant, yet Violent Brazil (2025)

https://theasc.com/articles/the-secret-agent-cinematography
90•tambourine_man•5h ago•36 comments

Edge.js: Run Node apps inside a WebAssembly sandbox

https://wasmer.io/posts/edgejs-safe-nodejs-using-wasm-sandbox
44•syrusakbary•3h ago•14 comments

Torturing Rustc by Emulating HKTs

https://www.harudagondi.space/blog/torturing-rustc-by-emulating-hkts/
9•g0xA52A2A•3d ago•1 comment

Java 26 is here

https://hanno.codes/2026/03/17/java-26-is-here/
94•mfiguiere•2h ago•66 comments

Ryugu asteroid samples contain all DNA and RNA building blocks

https://phys.org/news/2026-03-ryugu-asteroid-samples-dna-rna.html
137•bookofjoe•9h ago•87 comments

Finding a CPU Design Bug in the Xbox 360 (2018)

https://randomascii.wordpress.com/2018/01/07/finding-a-cpu-design-bug-in-the-xbox-360/
141•mariuz•4d ago•41 comments

Spice Data (YC S19) Is Hiring a Product Specialist

https://www.ycombinator.com/companies/spice-data/jobs/P0e9MKz-product-specialist-new-grad
1•richard_pepper•4h ago

OpenSUSE Kalpa

https://kalpadesktop.org/
103•ogogmad•7h ago•69 comments

Show HN: Crust – A CLI framework for TypeScript and Bun

https://github.com/chenxin-yan/crust
46•jellyotsiro•16h ago•19 comments

Meta and TikTok let harmful content rise to drive engagement, say whistleblowers

https://www.bbc.com/news/articles/cqj9kgxqjwjo
55•1vuio0pswjnm7•1h ago•29 comments

The Plumbing of Everyday Magic

https://plumbing-of-everyday-magic.hyperclay.com/
29•hannahilea•4d ago•2 comments

FFmpeg 8.1

https://ffmpeg.org/index.html#pr8.1
291•gyan•6h ago•45 comments

Honda is killing its EVs

https://techcrunch.com/2026/03/14/honda-is-killing-its-evs-and-any-chance-of-competing-in-the-fut...
110•sylvainkalache•2d ago•123 comments

Unsloth Studio

https://unsloth.ai/docs/new/studio
53•brainless•5h ago•4 comments

Reverse-engineering Viktor and making it open source

https://matijacniacki.com/blog/openviktor
139•zggf•13h ago•59 comments

Show HN: Horizon – GPU-accelerated infinite-canvas terminal in Rust

https://github.com/peters/horizon
21•petersunde•3h ago•8 comments

Leanstral: Open-source agent for trustworthy coding and formal proof engineering

https://mistral.ai/news/leanstral
728•Poudlardo•1d ago•177 comments

GPT-5.4 Mini and Nano

https://openai.com/index/introducing-gpt-5-4-mini-and-nano
170•meetpateltech•4h ago•102 comments

Font Smuggler – Copy hidden brand fonts into Google Docs

https://brianmoore.com/fontsmuggler/
141•lanewinfield•4d ago•69 comments

Show HN: Antfly: Distributed, Multimodal Search and Memory and Graphs in Go

https://github.com/antflydb/antfly
66•kingcauchy•5h ago•21 comments

Show HN: March Madness Bracket Challenge for AI Agents Only

https://www.Bracketmadness.ai
54•bwade818•8h ago•32 comments

Building a Shell

https://healeycodes.com/building-a-shell
143•ingve•11h ago•34 comments

It Took Me 30 Years to Solve This VFX Problem – Green Screen Problem [video]

https://www.youtube.com/watch?v=3Ploi723hg4
66•yincrash•4d ago

Comments

superjan•1h ago
Watched this a few days ago. The video is light on technical details, except maybe that they used CGI to generate training data.
rhdunn•14m ago
The idea behind a greenscreen is that you can make the green colour transparent in each frame of footage, allowing you to blend it with another background or with other layered footage. This has issues: the screen is not always a uniform colour, fine detail like hair is difficult, and lighting affects some edges. These have to be cleaned up manually, frame by frame, which takes a lot of time and is mostly busy work.
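
The keying operation described here can be sketched in a few lines. A deliberately naive NumPy version (the dominance rule and threshold are illustrative assumptions, not what any real keyer uses), which also shows why hard thresholds produce the edge problems mentioned above:

```python
import numpy as np

def naive_chroma_key(frame, threshold=0.15):
    """Crude chroma key: mark a pixel as background when green clearly
    dominates red and blue.

    frame:     (H, W, 3) float RGB array with values in [0, 1]
    threshold: made-up tuning knob for how dominant green must be
    Returns a hard 0/1 alpha per pixel -- exactly why hair and soft
    edges look bad with this approach.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    background = (g - np.maximum(r, b)) > threshold
    return np.where(background, 0.0, 1.0)
```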

An alternative approach (such as the sodium-vapor lighting used on Mary Poppins) is to create two images per frame -- the core image and a mask. The mask is a black-and-white image where the white pixels are the pixels to keep and the black pixels are the ones to discard; shades of gray indicate blended pixels.

With the mask approach you are effectively filming a perfect alpha channel to apply to the footage, which avoids the issues of greenscreen. The problem is that it requires specialist, licensed equipment and perfect filming conditions.
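
Applying such a mask is just the standard "over" blend; a minimal NumPy sketch (the shapes and [0, 1] value range are assumptions):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Blend foreground over background using a matte.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1]
    alpha:  float array of shape (H, W); 1 = keep fg, 0 = keep bg,
            grays blend -- the white/black mask described above
    """
    a = alpha[..., None]              # broadcast over the color channels
    return fg * a + bg * (1.0 - a)
```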

The new approach is to take advantage of image/video models: train a model that can produce the alpha-channel mask for a given frame (and thus an entire recording) given just the greenscreen footage.

The use of CGI in the training data allows the input image and mask to be perfect without having to spend hundreds of hours creating that data. It's also easier to modify and create variations to test different cases such as reflective or soft edges.

Thus, you have the greenscreen input footage, the expected processed output, and the alpha-channel mask. You can then apply traditional neural-net training techniques, using the expected image/alpha channel as the target. For example, you can compute the difference of each alpha-channel output neuron from the expected result, apply backpropagation to propagate the differences through the network, and nudge the neuron weights in the computed gradient direction. Repeat that process across the training images over multiple passes until the network no longer changes significantly between passes.
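
The loop described above can be sketched end to end. This is a toy, not Corridor's actual pipeline: the synthetic labeling rule and the single-layer per-pixel model are stand-ins for the CGI data and the real network, but the train/compare/backpropagate/nudge cycle is the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the CGI training data: random RGB pixels with a
# perfect ground-truth alpha derived from a made-up rule (green-dominant
# pixels are "screen", everything else is "subject").
X = rng.random((5000, 3))                             # per-pixel RGB in [0, 1]
y = (X[:, 0] + X[:, 2] > 2 * X[:, 1]).astype(float)   # 1 = keep, 0 = discard

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)   # the "network": one linear layer per pixel
b = 0.0
lr = 1.0

for _ in range(500):
    pred = sigmoid(X @ w + b)        # predicted alpha for every pixel
    err = pred - y                   # difference from the target matte
    w -= lr * (X.T @ err) / len(X)   # backprop: nudge weights down the gradient
    b -= lr * err.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == (y > 0.5)).mean()
```

A real matting network is convolutional and sees whole frames, but the training signal is exactly this per-pixel difference against the known-perfect alpha.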

Springtime•1h ago
In an earlier video they made a couple of years back about Disney's sodium-vapor technique, Paul Debevec suggested he was considering creating a dataset on a similar premise: filming enough perfectly masked references to be able to train models to achieve better keying. So it was interesting seeing Corridor tackle this by instead using synthetic data.
somat•1h ago
With regard to the sodium-vapor process, an idea has been percolating in the back of my head ever since I saw that video, but I don't really have the budget to try it out.

Theory: make the mask out of non-visible light.

Illuminate the backing screen with near-infrared light. (After a bit of thought I chose near-IR as opposed to near-UV, for hopefully obvious reasons.)

Point two cameras at a splitting prism with a near-IR pass filter (I have confirmed that such a thing exists and is commercially available).

Leave the 90-degree (unaltered-path) camera untouched; this is the visible camera.

Remove the IR filter from the 180-degree (filter-path) camera; this is the mask camera.

Now you get a perfect, non-color-shifting mask (in theory). The splitting prism would hurt light intake, though; it might be worth trying the cameras really close together, pointed in the same direction with no prism, to see if that is close enough.

diacritical•47m ago
Don't humans and other warm objects also radiate IR?
somat•39m ago
That is far-IR, the thermal stuff. Near-IR, 700-nanometer-ish, is just beyond red in human vision.

Camera sensors can pick up a little near-IR, so they have a filter to block it. If that filter were removed and a filter blocking visible light put in its place, you would have a camera that can only see non-visible light. Poorly, since the camera was not engineered to operate in this bandwidth, but it might be good enough for a mask. A mask that does not interfere with any visible colors.

overvale•44m ago
Debevec tried a version of this: https://arxiv.org/abs/2306.13702
actionfromafar•35m ago
I'll do you one better: it requires no special cameras (most have IR filters), no double cameras, and no prisms.

Shoot the scene in 48 or 96 fps. Sync the set lighting to odd frames. Every odd frame, the set lights are on. Every even frame, set lights are off.

For the backing screen, do the reverse. Even frames, the backing screen is on. Odd frames, backing screen is off.

There you go: mask / normal shot / mask / normal shot / mask ... you get the idea.

Of course, motion will cause the normal image and mask to go out of sync, but I bet that can be remedied by interpolating a new frame between every pair of mask frames. Plus, when you mix it down to 24 fps you can introduce as much motion blur and shutter-angle "emulation" as you want.
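
The de-interleaving and the naive sync fix-up can be sketched like so (the frame-parity convention and the plain averaging are assumptions; a real rig would tag frames via timecode and use optical-flow interpolation):

```python
def split_ghost_frames(frames):
    """De-interleave 48/96 fps footage where (1-based) odd frames were
    exposed under the set lights and even frames under the lit backing
    screen, per the scheme described above."""
    images = frames[0::2]   # frames 1, 3, 5, ... : set lights on
    masks = frames[1::2]    # frames 2, 4, 6, ... : backing screen on
    return images, masks

def interpolate_masks(masks):
    """Naive fix for the motion-sync problem: average each pair of
    neighbouring mask frames to approximate the mask at the instant
    the in-between image frame was exposed."""
    return [0.5 * (a + b) for a, b in zip(masks, masks[1:])]
```

Frames here can be plain floats for testing or per-pixel arrays; the arithmetic is the same either way.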

ryandamm•26m ago
This is called “ghost frame” and already exists in Red cameras and virtual production wall tools like Disguise.
amluto•5m ago
Surely this makes your actors feel sick? And wouldn’t it make your motion blur look dashed and also cause artifacts at the edge of the mask if there’s a lot of motion?
wiml•12m ago
This approach was used in the 1950s/60s with ultraviolet light (rather than IR) to create a traveling matte. I'm not sure why visible-light techniques won out. Easier to make sure that the illumination is set up correctly, maybe?
vsviridov•1h ago
The community has managed to drastically lower hardware requirements, but so far I think only Nvidia cards are supported, so as an AMD owner I'm still missing out :(
Computer0•1h ago
Looking forward to trying it out; 8 GB of VRAM or unified memory required!
IshKebab•1h ago
Pretty impressive results! Seems like someone has even made a GUI for it: https://github.com/edenaion/EZ-CorridorKey

Still Python unfortunately.

BoredPositron•18m ago
Like 90% of the other tooling in VFX...
diacritical•1h ago
From ~04:10 to 05:00 they talk about sodium-vapor lights and how Disney has the exclusive rights to use them. From what I read, the knowledge of how to make them is a trade secret, not patented. It seems weird that it would be hard to recreate something from the 1950s.

I also wonder how many hours were wasted by people who had to use inferior technology because Disney kept it secret. Cutting out animals and objects from the background one frame at a time seems so mind-numbingly boring.

jasonwatkinspdx•38m ago
Yeah, that's just nonsense. We used sodium vapor monochromatic bulbs in my high school physics class to duplicate the double slit experiment.

I suspect the real reason is that digital green screen in the hands of experienced people is "good enough" vs the complication of needing a double camera and beam splitting prism rig and such.

meatmanek•11m ago
The lights are relatively easy to get. iirc (it's been a bit since I watched their full video on the subject[1]) the hard part to find was the splitter that sends the sodium-vapor light to one camera and everything else to another camera.

1. https://www.youtube.com/watch?v=UQuIVsNzqDk

comex•1h ago
See also this video comparing Corridor Key to traditional keyers:

https://www.youtube.com/watch?v=abNygtFqYR8

ralusek•1h ago
I'm a software engineer who, like the vast majority of you, uses AI/agents in my workflow every day. That said, I have to admit it feels a little weird to hear someone who does not write code say that they built something, without even mentioning that they had an agent build it (unless I missed that).
jrm4•26m ago
I mean, the heading of the video says "he solved the problem," which I think is worth paying close attention to.
amelius•6m ago
There's still a bug: the glass with water does not distort the checker pattern in the background at 24:12.