
Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•17s ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•5m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•11m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•12m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•17m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•19m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•25m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•28m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•29m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•33m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•34m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•35m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•38m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•40m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•41m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•43m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•45m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•47m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•50m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•55m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•56m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Researchers develop a camera that can focus on different distances at once

https://engineering.cmu.edu/news-events/news/2025/12/19-perfect-shot.html
91•gnabgib•1mo ago

Comments

krackers•1mo ago
Isn't this the lytro camera?
analog31•1mo ago
The article mentions a spatial light modulator, which I believe the Lytro camera did not have.
Forgeties79•1mo ago
The image(s) were also trash unfortunately and a PITA to process. Barely usable even in ideal circumstances.
NooneAtAll3•1mo ago
eh??

Processing was as simple as "click on the thing you want in focus", and 4MP was just fine for the casual use it was targeting

Forgeties79•1mo ago
You couldn’t offload and use the refocus feature without their software and 4MP was half what smartphones were doing in 2012 (rapidly increasing above 8MP after that). Fixed lens so you couldn’t improve the image quality with glass - that’s totally understandable given the product but it is still a limitation to the visual quality.

That’s a bad recipe for casual and professional users alike. Can’t ingest into your workflow quickly, images are low res, can’t improve the image, and your smartphone was better just missing one, admittedly neat, feature. If that existed in phones people would use it like crazy I imagine.

Too narrow of a use case IMO, too many compromises for one feature, hence why it failed.

analog31•1mo ago
I looked into the details at the time, and as I recall the camera had lower resolution than the 4 MP of the sensor because of the microlens array. A lot was written about this at the time.

I remember a friend, who was a photography buff, was quite excited about the camera. But he didn't actually buy one.

NooneAtAll3•1mo ago
sensor was 40MP, image was 4MP
cycomanic•1mo ago
I doubt resolution is a limiting factor. LCoS phase modulators are available in 4k resolution, and maybe even higher (transmissive ones are typically lower resolution, though you could build reflective ones). And I don't know if you need that much resolution, because the regions you're trying to focus are quite broad; in fact I suspect the resolution of your phase modulator would limit not the image resolution but the max distance between focused regions, because it sets the max phase ramp you can achieve.

Loss, i.e. equivalent aperture, is a different matter, and I think this would imply quite a bit of light loss.

Forgeties79•1mo ago
> I doubt resolution is a limiting factor LCoS (transmissive ones are lower resolution typically though, but you could build reflective) phase modulators are available in 4k resolution (and maybe even higher)

We’re talking about a specific camera, the Lytro, which had a 4MP resolution. I’m not saying there was a limitation in the technology broadly speaking, just that this camera was not worth it at the time. It sacrificed too much for one feature, and at $400 it just didn’t sell.

stevenjgarner•1mo ago
I believe the lytro camera was a plenoptic, or light field, camera. Light field cameras capture information about the intensity together with the direction of light emanating from a scene. Conventional cameras record only light intensity at various wavelengths.

While conventional cameras capture a single high-resolution focal plane and light field cameras sacrifice resolution to "re-focus" via software after the fact, the CMU Split-Lohmann camera provides a middle ground, using an adaptive computational lens to physically focus every part of the image independently. This allows it to capture a "deep-focus" image where objects at multiple distances are sharp simultaneously, maintaining the high resolution of a conventional camera while achieving the depth flexibility of a light field camera without the blur or data loss.

Something I find interesting is that while holograms and the CMU camera both manipulate the "phase" of light, they do so for opposite reasons: a hologram records phase to recreate a 3D volume, whereas the CMU camera modulates phase to fix a 2D image.
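The light field refocusing described above can be sketched as a shift-and-add over sub-aperture views: a point at a given depth appears shifted in proportion to the aperture coordinate, so shifting each view back by the matching amount and averaging brings that depth into focus. Everything below (array sizes, the 1D disparity model) is an illustrative toy, not Lytro's actual pipeline:

```python
import numpy as np

def refocus(subapertures, aperture_coords, alpha):
    """Shift-and-add refocusing: shift each sub-aperture view in
    proportion to its aperture coordinate, then average."""
    out = np.zeros_like(subapertures[0])
    for img, u in zip(subapertures, aperture_coords):
        out += np.roll(img, int(round(alpha * u)), axis=1)
    return out / len(subapertures)

# Synthetic light field: a point at disparity 1 appears at column 32 + u
# in the view taken from aperture position u.
us = [-2, -1, 0, 1, 2]
disparity = 1
views = []
for u in us:
    img = np.zeros((1, 64))
    img[0, 32 + u * disparity] = 1.0
    views.append(img)

sharp = refocus(views, us, alpha=-disparity)  # refocused on the point's plane
blurry = refocus(views, us, alpha=0)          # focused on a different plane
```

When `alpha` matches the negative disparity, all five copies of the point align at column 32 and the average is a single sharp peak; at any other `alpha` the energy stays spread over five pixels, which is exactly the software "re-focus after the fact" trade described above.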

mastazi•1mo ago
Interesting. So if I understand correctly, it's like a nonlinear version of a "tilt lens"? https://en.wikipedia.org/wiki/Tilt%E2%80%93shift_photography...
stevenjgarner•1mo ago
Yes, I love the analogy. You can think of the CMU Split-Lohmann system as a "per-pixel tilt-shift lens" or a "freeform focal surface" camera.
fainpul•1mo ago
Light field cameras are mentioned under "related work":

https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...

hbarka•1mo ago
I remember Lytro. There was a lot of fanfare behind that company and then they fizzled. They had a lauded CEO/founder and their website demonstrated clearly how the post-focus worked. It felt like they were going to be the next camera revolution. Their rise and demise story would make a good Isaacson-style documentary.
chychiu•1mo ago
I think the product was just too early for its time, and there was not much demand for it. For what it's worth, the founder (Ren Ng) went back to academia and was highly instrumental in computer vision research, e.g. being the PI on the paper for NeRF: (https://dl.acm.org/doi/abs/10.1145/3503250)
dale_glass•1mo ago
I don't think it was quite too early, it just makes tradeoffs that are undesirable.

Lytro, as I understand it, trades a huge amount of resolution for the focusing capability. Some ridiculous amount, like the user gets to see just 1/8th of the pixels on the sensor.

In a way, I'd say rather than too early it was too late. Because autofocus was already quite good and getting better. You don't need to sacrifice all that resolution when you can just have good AF to start with. Refocusing in post is a very rare need if you got the focus right initially.

And time has only made that even worse. Modern autofocus is darn near magic, and people love their high resolution photos.

amelius•1mo ago
There is a limit to the resolution needed by consumers, so in that sense maybe they were too early.
dale_glass•1mo ago
I'd argue the opposite, consumers need more resolution than pros.

A pro will show up with a 300mm f/2.8, a tripod, a camera with good AF and high ISO, and the skills, plan and patience to catch birds in flight.

But all that stuff is expensive. The consumer way to approximate the lack of a good lens is a small, high res sensor. That only works in bright light, but you can get good results with affordable equipment in the right conditions. Greatly reducing the resolution is far from optimal when you can't have a big fancy lens to compensate.

And where is focus the hardest? Mostly where you want to have high detail. Wildlife, macro, sports.

blincoln•1mo ago
I find it very useful for wildlife photos. Autofocus never seems to work well for me on e.g. birds in flight.

It's also possible to generate a depth map from a single shot, to use as a starting point for a 3D model.

They're pretty neat cameras. The relatively low output resolution is the main downside. They would also have greatly benefited from consulting with more photographers on the UI of the hardware and software. There's way too much dependency on using the touchscreen instead of dedicated physical controls.

dale_glass•1mo ago
> I find it very useful for wildlife photos. Autofocus never seems to work well for me on e.g. birds in flight.

The more recent cameras can detect birds specifically and are great at tracking them.

> It's also possible to generate a depth map from a single shot, to use as a starting point for a 3D model.

That is true, but is a very niche need. Wonderful if you do need it, but it's a small market.

hyperific•1mo ago
If I recall correctly they got scooped up by Google and their team was merged into various Google teams. I was disappointed to hear of their fizzling as well. They were just starting to dive into serious movie production light field cameras when it happened. They had an incredible tech demo on their website showcasing its power. I can't seem to locate the original but there are bits of it in the linked video.

https://youtu.be/4qXE4sA-hLQ?si=QsEG2PtAmVjIfwDA

ThePowerOfFuet•1mo ago
Here's that same link without Google's creepy tracking linking you to everyone who clicks on it:

https://youtu.be/4qXE4sA-hLQ

Qbit_Enjoyer•1mo ago
As soon as I saw the headline, I began thinking about microphotography - no more blurry microbes! I could get excited for something like this.
pazimzadeh•1mo ago
If your samples are fixed, you can take a z-stack spanning the entire area you want to capture and then use max intensity projection to collapse them all into one clear image.

But yeah this new camera would be good for living microbes.
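The z-stack trick described above collapses a stack of focal slices by keeping, at each pixel, the brightest value seen anywhere in the stack. A minimal NumPy sketch with a synthetic stack (one in-focus band per slice; the sizes are arbitrary):

```python
import numpy as np

# Simulated z-stack: 5 focal slices of a 64x64 field. Defocused regions
# are dim random texture; each slice has one bright "in focus" band.
rng = np.random.default_rng(0)
stack = rng.uniform(0.0, 0.5, size=(5, 64, 64))
for z in range(5):
    stack[z, z * 12:(z + 1) * 12, :] = 1.0  # band sharp in this slice

# Max-intensity projection: collapse the stack into one image by taking
# the per-pixel maximum across all slices.
mip = stack.max(axis=0)
```

Every row covered by some slice's in-focus band comes out at full brightness in `mip`, which is the "one clear image" effect; rows no slice captured sharply stay dim, which is why the stack has to span the whole depth range of interest.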

achille•1mo ago
Paper has some more useful examples:

https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...

John7878781•1mo ago
It's not even loading for me (probably because it's a huge file).
DarkSucker•1mo ago
The paper describes a split Alvarez (Lohmann) lens [1,2] with a phase modulator between them. I didn't do the math, but it looks like the phase modulator is optically equivalent to a mechanical shift of the Alvarez lenses over regions of the field of view. Alvarez lenses have higher aberrations, and are relatively bulky, compared to normal lenses. AR was referenced in the paper, but this lens will be hard to make compact, and have great image quality, over large fields of view.

1. https://www.laserfocusworld.com/optics/article/16555776/alva... 2. https://pdfs.semanticscholar.org/55af/9b325ba16fa471e55b2e49...

m463•1mo ago
I wonder if this camera might somehow record depth information, or be modified to do such a thing.

That would make it really useful, maybe replacing camera+lidar.

schobi•1mo ago
It even requires depth information -

While this method has no post-processing, it requires a pre-processing step to pre-capture the scene, segment it, estimate depth, and compute the depth map.
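As a rough illustration of that pre-processing step, here is a hypothetical sketch (the function name, the number of regions, and the diopter quantization are my assumptions, not from the paper) that turns an estimated depth map into the per-pixel focusing power the adaptive optic would have to supply, then bins it into discrete focus regions:

```python
import numpy as np

def depth_to_focus_regions(depth_m, n_regions=4):
    """From a depth map in metres, compute the thin-lens focusing power
    (diopters, 1/m) needed per pixel, then quantize into focus regions."""
    power = 1.0 / np.clip(depth_m, 0.1, 100.0)  # clamp to a sane depth range
    edges = np.linspace(power.min(), power.max(), n_regions + 1)
    labels = np.clip(np.digitize(power, edges[1:-1]), 0, n_regions - 1)
    return power, labels

# Toy depth map: near subject (0.5 m), mid-ground (2 m), background (10 m).
depth = np.array([[0.5, 0.5, 10.0],
                  [0.5, 2.0, 10.0]])
power, labels = depth_to_focus_regions(depth)
```

The point of the sketch is the dependency chain: you need a depth estimate per pixel before you can decide what focus each region gets, which is why the quality of the result rides on the depth estimation being good.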

feverzsj•1mo ago
I also like my 3d games without DOF.
schobi•1mo ago
It is a new neat idea to selectively adjust focus distance for different regions of the scene!

- processing: while there is no post-processing, it needs scene depth information, which requires pre-computation, segmentation, and depth estimation. Not a one-shot technique, and quality depends on the computational depth estimates being good

- no free lunch. The optical setup needs to trade in some light for this cool effect to work. Apart from the limitations of the prototype, how much loss is expected in theory? How does this compare to a regular camera setup with lower aperture? F/36 seems excessive for comparison.

- resolution - what resolutions have been achieved? (Maybe not the 12 MPixels of the sensor? For practical or theoretical reasons?) What depth range can the prototype capture? ("photo of Paris Arc de Triomphe displayed on a screen") - this is suspiciously omitted

- how does the bokeh look when out of focus? At the edge of an object? The introduction of weird or unnatural artifacts would seriously limit the acceptance

Don't get me wrong - nice technique! But for my taste the paper omits fundamental properties.

breadwinner•1mo ago
How is this different from using a small aperture size?

When you reduce aperture size the depth of field increases. So for example when you use f/16 pretty much everything from a few feet to infinity is in focus.
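The "few feet to infinity" intuition can be checked with the standard hyperfocal-distance formula H ≈ f²/(N·c): focus at H and everything from H/2 to infinity is acceptably sharp. The lens and circle-of-confusion numbers below are illustrative (a 28 mm wide-angle on 35 mm format):

```python
def hyperfocal_m(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in metres: H = f^2 / (N * c).
    coc_mm is the circle of confusion; 0.03 mm is the usual
    35 mm-format convention."""
    return (focal_mm ** 2) / (f_number * coc_mm) / 1000.0

H = hyperfocal_m(28, 16)  # 28 mm lens at f/16 -> about 1.6 m
# Focusing at H keeps everything from H/2 (under a metre, i.e. a few
# feet) out to infinity acceptably sharp.
```

This is why stopped-down wide lenses behave as the comment describes; the catch, raised elsewhere in the thread, is that very small apertures pay for that depth of field in diffraction and light loss.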

malfist•1mo ago
Is that actually true? I do astrophotography through an f/10 telescope and its focus is very sensitive. I use a focuser that moves the camera 0.04 microns per step.

Not doubting you, just asking to understand. Astrophotography doesn't always behave the same as terrestrial photography

ruined•1mo ago
in addition to aperture, perceived depth of field greatly depends on:

- focal length (wider is deeper)

- crop factor (higher is deeper)

- subject distance (farther is deeper)

compared to your telescope, any terrestrial photography is likely at the opposite extremes, and at a disadvantage everywhere but subject distance.

but, focus is most mechanically sensitive near infinity. adjustment creates an asymptotically larger change in the focal plane as infinity is approached.

in a point-and-shoot camera with a wide lens at f16, "infinity" basically means across the street.

oxw•1mo ago
Very small apertures reduce image quality due to diffraction, which this avoids

Last page in the paper has a comparison between their approach and f/32 https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...
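The diffraction penalty is easy to estimate: the Airy-disk diameter on the sensor is roughly 2.44·λ·N, so at very small apertures the blur spot dwarfs typical pixel pitches. A quick back-of-the-envelope check (wavelength choice is illustrative):

```python
def airy_diameter_um(f_number, wavelength_nm=550):
    """Diameter of the Airy disk on the sensor, in micrometres,
    for green light by default: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

d32 = airy_diameter_um(32)  # ~43 um: far larger than typical pixel pitches
d4 = airy_diameter_um(4)    # ~5 um: comparable to a full-frame pixel
```

At f/32 the diffraction blur is an order of magnitude larger than common sensor pixels, which is why stopping down for depth of field eventually destroys resolution, and why a per-region focusing approach that keeps a wide aperture is attractive.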

RicoElectrico•1mo ago
Ricoh has some nice EDoF cameras that would make for great QR scanners. They do it the classical way, though. That is, distance dependent chromatic aberration. In the industrial context you could use a telecentric lens, but it necessarily needs to have large input aperture to have a reasonable field of view.
mcdeltat•1mo ago
Finally we can shoot macro without f/128
asasidh•1mo ago
We had Lytro many many years ago. https://en.wikipedia.org/wiki/Lytro
mrlonglong•1mo ago
Wish my old eyes could do that.
rajnathani•1mo ago
> Autonomous vehicles might see their surroundings with unprecedented clarity.

This is a pretty good point, which makes me wonder: do the developers of autonomous vehicles use variable focus adjustments as part of their ML stack, or simply set the focal point to infinity?

mxkopy•1mo ago
The image is a byproduct in autonomous driving. Successful implementations (Waymo) use lidar, which doesn’t need focal adjustments. If for some reason quality RGB pixels are needed (e.g. for entity recognition), then they will probably focus on moving objects. This paper ties in nicely with lidar, since it apparently needs depth information to work, which is exactly what lidar provides.