frontpage.

Anthropic acquires Bun

https://bun.com/blog/bun-joins-anthropic
979•ryanvogel•3h ago•474 comments

Claude 4.5 Opus' Soul Document

https://simonwillison.net/2025/Dec/2/claude-soul-document/
158•the-needful•2h ago•75 comments

Paged Out

https://pagedout.institute
57•varjag•1h ago•2 comments

We're Committing $6.25B to Give 25M Children a Financial Head Start

https://www.onedell.com/investamerica/
17•duck•25m ago•4 comments

Amazon launches Trainium3

https://techcrunch.com/2025/12/02/amazon-releases-an-impressive-new-ai-chip-and-teases-a-nvidia-f...
71•thnaks•2h ago•28 comments

I designed and printed a custom nose guard to help my dog with DLE

https://snoutcover.com/billie-story
279•ragswag•2d ago•37 comments

Delty (YC X25) Is Hiring

https://www.ycombinator.com/companies/delty/jobs/aPWMaiq-full-stack-software-engineer
1•lalitkundu•34m ago

OpenAI declares 'code red' as Google catches up in AI race

https://www.theverge.com/news/836212/openai-code-red-chatgpt
264•goplayoutside•6h ago•322 comments

Free static site generator for small restaurants and cafes

https://lite.localcafe.org/
23•fullstacking•1h ago•9 comments

Learning music with Strudel

https://terryds.notion.site/Learning-Music-with-Strudel-2ac98431b24180deb890cc7de667ea92
320•terryds•6d ago•78 comments

100k TPS over a billion rows: the unreasonable effectiveness of SQLite

https://andersmurphy.com/2025/12/02/100000-tps-over-a-billion-rows-the-unreasonable-effectiveness...
200•speckx•3h ago•71 comments

Cursed circuits: charge pump voltage halver

https://lcamtuf.substack.com/p/cursed-circuits-charge-pump-voltage
33•surprisetalk•2h ago•4 comments

All about automotive lidar

https://mainstreetautonomy.com/blog/2025-08-29-all-about-automotive-lidar/
34•dllu•1d ago•15 comments

Zig's new plan for asynchronous programs

https://lwn.net/SubscriberLink/1046084/4c048ee008e1c70e/
155•messe•7h ago•114 comments

Solving the Partridge Packing Problem Using MiniZinc

https://zayenz.se/blog/post/partridge-packing/
13•mzl•6d ago•0 comments

Mistral 3 family of models released

https://mistral.ai/news/mistral-3
574•pember•6h ago•178 comments

The Junior Hiring Crisis

https://people-work.io/blog/junior-hiring-crisis/
144•mooreds•3h ago•170 comments

YesNotice

https://infinitedigits.co/docs/software/yesnotice/
129•surprisetalk•1w ago•48 comments

Addressing the adding situation

https://xania.org/202512/02-adding-integers
235•messe•10h ago•79 comments

Code Wiki: Accelerating your code understanding

https://developers.googleblog.com/en/introducing-code-wiki-accelerating-your-code-understanding/
25•geoffbp•6d ago•6 comments

Advent of Compiler Optimisations 2025

https://xania.org/202511/advent-of-compiler-optimisation
307•vismit2000•11h ago•53 comments

Nixtml: Static website and blog generator written in Nix

https://github.com/arnarg/nixtml
72•todsacerdoti•6h ago•31 comments

Python Data Science Handbook

https://jakevdp.github.io/PythonDataScienceHandbook/
173•cl3misch•8h ago•37 comments

Apple Releases Open Weights Video Model

https://starflow-v.github.io
414•vessenes•16h ago•146 comments

Lowtype: Elegant Types in Ruby

https://codeberg.org/Iow/type
54•birdculture•4d ago•27 comments

What will enter the public domain in 2026?

https://publicdomainreview.org/features/entering-the-public-domain/2026/
459•herbertl•18h ago•321 comments

A series of vignettes from my childhood and early career

https://www.jasonscheirer.com/weblog/vignettes/
126•absqueued•9h ago•84 comments

Show HN: Marmot – Single-binary data catalog (no Kafka, no Elasticsearch)

https://github.com/marmotdata/marmot
82•charlie-haley•6h ago•19 comments

YouTube increases FreeBASIC performance (2019)

https://freebasic.net/forum/viewtopic.php?t=27927
150•giancarlostoro•2d ago•38 comments

Beej's Guide to Learning Computer Science

https://beej.us/guide/bglcs/
336•amruthreddi•2d ago•126 comments

All about automotive lidar

https://mainstreetautonomy.com/blog/2025-08-29-all-about-automotive-lidar/
34•dllu•1d ago

Comments

CGMthrowaway•36m ago
Adding a comment here with some info on LIDAR human safety, since many are asking.

There are two wavelength bands of interest:

  a) 905 nm / 940 nm (roof and bumpers): 70–100 µJ per pulse max, regulated by IEC 60825, since this wavelength is focused onto the retina
  b) 1550 nm systems (e.g. the Laser Bear Honeycomb): 8–12 mJ per pulse allowed (~100x more photons, since this wavelength is absorbed at the cornea and never reaches the retina)
The failure mode of these LIDARs can be akin to a weapon. A stuck mirror or frozen phased array turns a scanned beam into a continuous-wave pencil beam. A 1550 nm LIDAR leaking 1 W continuous will raise corneal temperature by more than 5 C in 100 ms; the threshold for cataract formation is only a 4 C rise. A 905 nm Class 1 system stuck on one pixel puts 10 mW continuous on the retina, enough to create a lesion in 250 ms or less.

20 cars at an intersection means 20 overlapping scanners, so even if each meets single-device Class 1, linear addition could give your retina a 20x dose, enough to push into Class 3B territory. The current regs (IEC 60825-1:2014) assume single-source exposure; there is no standard for multi-source, multi-axis, moving-platform overlap.

Additionally, no LIDAR manufacturer publishes its beam-failure shutoff latency. Most are >50 ms, which can be long enough for permanent injury.
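
If you want to sanity-check the arithmetic, here is a back-of-the-envelope sketch in Python. Every number in it is a figure quoted above, not a value taken from the standard:

  # Back-of-the-envelope exposure arithmetic, using the figures quoted above.
  power_stuck = 10e-3     # W, stuck 905 nm Class 1 beam landing on the retina
  lesion_time = 0.25      # s, quoted time-to-lesion
  lesion_energy = power_stuck * lesion_time        # 2.5 mJ on one retinal spot

  shutoff_latency = 0.05  # s, typical (>50 ms) beam-failure shutoff
  dose_before_shutoff = power_stuck * shutoff_latency
  print(dose_before_shutoff / lesion_energy)       # 0.2 -> 20% of a lesion dose

  # Multi-source overlay, assuming exposures add linearly
  # (the case the single-source standard does not cover):
  scanners = 20           # cars at an intersection
  print(scanners * 1.0)   # 20x a single Class 1 dose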

addaon•34m ago
> There are two wavelengths of interest used

Ouster uses (or at least used to use; I'm not sure if they still do) 840 nm. Much higher quantum efficiency for standard silicon receivers, without having to play games with strained silicon and the like; but 840 nm is also focused much more sharply by the retina, so the permitted power is lower.

krackers•19m ago
I was always curious about this; it's impossible to find any safety certifications or details about the lidars used by e.g. Waymo. Are we supposed to just trust that they didn't cut corners, especially given the financial incentive to convince people that lidar is necessary (because there's a notable competitor that doesn't use it)?

To date most Class 1 lasers have also been hidden/enclosed, I think (and Class 1M is only eye-safe when viewed without magnifying optics), so I'm not convinced that the limits for long-term daily exposure have been properly studied.

addaon•4m ago
A quick note about units -- you correctly quote the limits as an energy-per-pulse limit. The theory behind this is that pulses are short enough that rotation during a pulse is negligible, so each pulse tends to hit a single point (on the retina at focusable wavelengths; on the cornea itself at longer wavelengths), and the absorption of that energy is what causes damage. But LiDAR range is determined not by energy per pulse but by peak power. This drives a desire for minimum-duration pulses, often < 10 ns -- if you can halve your pulse length, you can increase your range substantially while still being eye-safe. GaN FETs are one of the enabling technologies for pulsed lidar, since they're really the only way to steer tens of amps in single-digit nanoseconds. Even once you've solved generating short pulses, though, you still need to interpret short responses, which drives a need for either very fast ADCs (gigasample+) or TDCs, which are themselves fascinating components.
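
To put rough numbers on both halves of that (illustrative values, not any particular product's specs):

  C = 299_792_458.0          # m/s

  # Transmit side: fixed eye-safe energy budget per pulse, so halving the
  # pulse width doubles the peak power available for range.
  energy = 100e-6            # J, illustrative per-pulse budget
  for width in (20e-9, 10e-9, 5e-9):
      print(f"{width * 1e9:.0f} ns pulse -> {energy / width / 1e3:.0f} kW peak")

  # Receive side: one ADC sample of time resolution corresponds to this
  # much range (round trip, so halved).
  for fs in (1e9, 2e9, 5e9):
      print(f"{fs / 1e9:.0f} GS/s -> {C / (2 * fs) * 100:.1f} cm per sample")
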
addaon•36m ago
Having built a LiDAR system for an autonomy company in the past, I can say this is a great write-up, but it omits what I found to be one of the more interesting challenges. For our system (bistatic, discrete edge-emitting laser diodes and APDs; much like a Velodyne system at a high level), we had about an inch of separation between our laser diodes and our photodiodes. With 70 A peak currents through the laser diodes. And nanoamp sensitivity in the photodiodes. EMI is... interesting.

Many similar lidars ignore the problem by blanking out responses very close to firing time, giving a minimum range sensitivity, and by waiting for the maximum delay to elapse before firing the next salvo -- but this gives a maximum fire rate that can be an issue. For example, a 32-channel system running at 20 kHz/channel fires at 640 kHz aggregate, i.e. one pulse every ~1.56 µs, which is 468 m of round-trip light travel; with some blanking time needed, that limits you to ~200 m range. So to get both high rate (horizontal resolution) and high channel count (vertical resolution), you need to be able to ignore your own cross-talk and fire while beams are still in flight.
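
A minimal sketch of that fire-rate arithmetic, assuming all channels share a single fire-and-wait budget:

  C = 299_792_458.0  # m/s

  def max_range_m(channels, rate_hz, blanking_s=0.0):
      # Time between consecutive firings across all channels,
      # converted to the one-way distance light can cover.
      period = 1.0 / (channels * rate_hz)
      return C * (period - blanking_s) / 2

  print(max_range_m(32, 20e3))   # ~234 m; ~200 m once blanking is subtracted
  print(max_range_m(64, 40e3))   # ~59 m -- hence firing while beams are in flight
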
newpavlov•27m ago
>we had about an inch of separation between our laser diodes and our photodiodes

Why can't you place them further away from each other using an additional optical system (e.g. a mirror) and adjust for the additional path length in software?

addaon•26m ago
You can, but customers like compact self-contained units. It's all trade-offs.

Edit: There are basically three approaches to this problem that I'm aware of. Number one is to push the cross-talk below the noise floor -- your suggestion helps with this. Number two is to do noise cancellation by measuring your cross-talk and subtracting it from the signal. Number three is to make the cross-talk signal distinct from a real reflection (e.g. by modulating the pulses so that there's low correlation between an in-flight pulse and a being-fired pulse). In practice, all three work nicely together: getting the cross-talk below saturation allows cancellation to leave the signal in place, and low correlation means the imperfections of the cancellation still get cleaned up later in the pipeline.
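
Number three is easy to play with in simulation. A toy numpy sketch (codes, amplitudes, and lengths all made up) of a matched filter picking the correctly-coded echo out from under a stronger cross-talk burst:

  import numpy as np

  rng = np.random.default_rng(0)
  code_ours = rng.choice([-1.0, 1.0], size=256)    # this salvo's code
  code_other = rng.choice([-1.0, 1.0], size=256)   # an earlier, in-flight salvo

  trace = rng.normal(0.0, 0.05, 2048)              # receiver noise floor
  trace[400:656] += 0.8 * code_ours                # weaker real echo at sample 400
  trace[50:306] += 2.0 * code_other                # stronger cross-talk at sample 50

  # Correlate against our own code: the cross-talk has low correlation
  # with it, so the true echo dominates despite its lower amplitude.
  matched = np.correlate(trace, code_ours, mode="valid")
  print(np.argmax(np.abs(matched)))                # 400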

jandrese•14m ago
200 m range seems adequate for passenger vehicle use. Even at 100 km/h, that's over 7 seconds to cover the distance even if you aren't trying to slow down. I think there are diminishing returns in chasing even longer ranges; even fully loaded trucks are expected to stop in about 160 m or so.
addaon•10m ago
Yep, 200 m is pretty close to standard, which is why 32 channels at 20 kHz is a pretty common design point. But customers would love 64 channels at 40 kHz, for example. Also, it's worth noting that if your design range is 200 m, your beam doesn't just magically stop beyond that. While the inverse square law is on your side in preventing a 250 m target from interfering with the next pulse, a retro-reflector at 250 m can absolutely produce a return that aliases as a ~16 m signal on the next channel (250 m minus the 234 m of one-way light travel between pulses) under the right conditions. This is an edge case -- but it's observable under steady-state conditions; it's not just a single pulse that gets misinterpreted.
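
The aliasing arithmetic, for anyone following along (234 m is the one-way pulse spacing from the example upthread):

  def ghost_range_m(true_range_m, pulse_spacing_m=234.0):
      # Apparent range if the late return gets credited to the next pulse.
      return true_range_m % pulse_spacing_m

  print(ghost_range_m(250.0))   # 16.0 -- the retro-reflector ghost
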
Animats•18m ago
No mention of flash LIDAR, which really ought to be used more for the short-range units covering side and rear views.

Interference between LIDARs can be a problem, mostly with the continuous-wave emitters. Pulsed emitters are unlikely to collide in time, especially if you put some random jitter in the pulse timing to prevent repeated collisions. The radar people figured this out decades ago.
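
A toy model of why the jitter helps (all numbers illustrative): a foreign pulse that lands in your listening window shows up as a ghost return, and the question is whether that ghost sits at a stable range frame after frame.

  import random

  C = 299_792_458.0
  offset = 0.4e-6            # s, foreign pulse arrival relative to our firing

  def ghost_ranges_m(jitter_s, frames=5):
      # Apparent range of the foreign pulse when our own firing time
      # jitters by +/- jitter_s; the foreign lidar is independent of us.
      return [round(C * (offset - random.uniform(-jitter_s, jitter_s)) / 2, 1)
              for _ in range(frames)]

  print(ghost_ranges_m(0.0))      # same ~60 m ghost every frame: a phantom object
  print(ghost_ranges_m(200e-9))   # scattered over +/- 30 m: easy to reject as noise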

rappatic•9m ago
In the current state of self-driving tech, lidar is clearly the most effective and safest option. Yet companies like Tesla refuse to integrate lidar, preferring to rely solely on cameras, partly to keep costs down. The result is that Tesla's self-driving isn't quite as good as Waymo's, which sits pretty comfortably at Level 4 autonomy.

But humans have no lidar; we rely almost solely on sight for driving (and a tiny bit on sound, I guess), so in principle it should be possible for cars to do so too. My question is this: at what point, if ever, will camera-only self-driving get good enough to make automotive lidar redundant? Or will lidar always make self-driving that last 1% better than cameras alone?

convenwis•7m ago
There are unquestionably cases where lidar adds data that cameras can't capture and that is relevant to driving accuracy. So the real question is whether there are cases where lidar actually hurts. I think that's possible, but unlikely.
readthenotes1•6m ago
Many humans do a really bad job of driving, so I'm not sure we should try to emulate that.

And it is certain that in India they use sound for echolocation.

rappatic•3m ago
> Many humans do a really bad job at driving, so I'm not sure we should try to emulate that

Agreed, but there are still really good human drivers who operate on sight alone. It's about the upper bound achievable with only sight, not the human average.

Barathkanna•1m ago
I learned a lot from this article. The breakdown of the different LiDAR types and how they fit into real automotive sensor stacks was especially helpful. Nice to see a clear explanation without the usual hype or ideology around cameras vs. LiDAR.