https://cleantechnica.com/2025/03/20/lidars-wicked-cost-drop...
Meanwhile, vision-based tech is going up in price because it's competing with AI for GPUs, while lidar gets the range/depth side of things for free.
Ideally cars use both, but if you had to choose one or the other on cost, you'd be insane to choose vision over lidar. Musk made an ill-timed decision to go vision-only.
So it’s not a surprise to see the low end models with lidar.
-- but I'm not sure how to get data on, e.g., how much Tesla is charged for an Nvidia whatever or what compute Waymo has --
My personal take is Waymo uses cameras too, so maybe we have to assume the worst case: the full cost of lidar, +$130.
The problem with Tesla is that they need to combine the outputs of those cameras into a 3D view, which takes a LOT more processing power to judge distances. As in needing heavier models > more GPU power, more memory, etc. And it still has issues like a low-hanging sun + white truck = let's ram into that because we do not see it.
And the more edge cases you try to filter out with camera-only setups, the more your GPU power needs increase! As a programmer, you can make something darn efficient, but it's those edge cases that can really hurt your program's efficiency. And it's not uncommon to get 5 to 10x performance drops... Now imagine that with LLM image recognition models.
Tesla's camera-only approach works great ... under ideal conditions. The issue is those edge cases and non-ideal situations. Lidar deals with a ton of edge cases and removes a lot of the processing needed even in ideal situations.
>I'd love to take on this challenge: the article they linked shows the cost add for LIDAR (+$130)
The article claims that, but when you actually try to follow the source, it fails the fact check. Must be a solved problem and something you can already buy, right?
If there are single bulbs displaying red, green and yellow please give clear examples.
Potentially as extraneous as range to a surface that a camera can’t tell apart from background.
More to the point, everyone but Tesla is doing cameras plus Lidar. It’s increasingly looking like the correct bet.
At what proportion? Is it mostly lidar or mostly cameras? Or 50/50?
> Potentially as extraneous as range to a surface that a camera can’t tell apart from background.
I guess, yeah, for the back side of the car you'd probably be better off measuring actual actions.
How about when you come to a 4-way stop? LIDAR is useless there, as it wouldn't recognize anyone's turn signals.
E.g. a driving decision system needs to know object distances AND traffic light colours. It doesn't particularly need to know the source of either. You could have a camera-only system that accurately determines colour and fuzzy-determines distance. Or you could have a LIDAR-only system that accurately determines distance and fuzzy-determines colour.
Or you use both, get accurate LIDAR-distance and accurate camera-colour and skip all the fuzzy-determination steps. Or keep the fuzzy stuff and build a layer of measurement-agreement for redundancy.
So then the question becomes, what's your proportion when deciding whether to stop at a traffic light? Is it mostly light colour or mostly distance to other objects? Or 50/50?
I'd say it's 100/100.
Like them, I don't understand what you're asking by "proportion". Bits/second? Sensor modules/vehicle? Features detected by the AI?
Because it really feels like you're just JAQing off, either to come to some "gotcha" moment or a pseudo-intellectual exercise. In either case, the questions feel bad faith.
What proportion of your vision is rods or cones? Depends on the context. You can do without one. But it’s better with both.
> How about when you come to a 4-way stop? LIDAR is useless as it wouldn't recognize anyone's turn signals
Bad example. 99% of a 4-way stop is remembering who moved last, who moves next by custom and who may jump the line. What someone is indicating is, depending on where you are, between mildly helpful and useless.
With vision you rely on an external source or a floodlight. It's also how our civilization was designed to function in the first place.
Anyway, the whole self driving obsession is ridiculous because being driven around in bad traffic isn't that much better than driving in bad traffic. It's cool, but it can't beat public infrastructure, since you can't make the car disappear when not in use.
IMHO, connectivity to simulate public transport can be the real sweet spot, regardless of sensor types. Coordinated cars can solve traffic and pretend to be trains.
Agreed that public transportation is usually the best option in either case, though.
Right now I don't even have a car but for getting around outside of the city it's difficult sometimes.
Avoiding potholes is the hardest part of driving, really.
It's not that it's hard but I just hate it.
All reasons why I think public transit is the better solution over self driving cars. They're generally much safer, and also you get to do something while you're on the go. Pretty nifty, I think.
I love public transport and an added benefit is: I don't have to go back to where I left it. I often take a metro from A to B, walk to C and then get a bus back to A or something. Can't do that with a car, as such I tend to walk a lot more now. Because it's a hassle-free option now. The world seems more open for exploration when I don't have to worry about returning to the car, or having a drink, or the parking meter expiring. I really don't get that people consider cars freedom.
Of course once you go outside the city it's a different story, even here in Europe. Luckily I don't need to go there so much. But that's something that should be improved. On the weekend here in the city the metro runs 24/7 and the regional trains really should too but they don't.
You know a lot about the light you are sending and what the speed of light is, so you can filter out unexpected timings and understand multiple returns.
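To make the timing-filter idea concrete, here's a minimal sketch. The function name and the range thresholds are illustrative assumptions, not any real lidar SDK; the physics (round-trip time to range via the speed of light) is the real part.

```python
# Hypothetical sketch: gating lidar echoes by time-of-flight plausibility.
# Thresholds and names are made up for illustration.
C = 299_792_458.0  # speed of light, m/s

def plausible_returns(return_times_s, min_range_m=0.5, max_range_m=250.0):
    """Keep only echoes whose round-trip time maps to a physically
    plausible range for this sensor."""
    keep = []
    for t in return_times_s:
        rng = C * t / 2.0  # round trip -> one-way distance
        if min_range_m <= rng <= max_range_m:
            keep.append(rng)
    return keep

# Three echoes from one pulse: a spurious early trigger, a real target
# at ~60 m, and a late multi-bounce return beyond the rated range.
echoes = [1e-9, 4e-7, 5e-6]
print(plausible_returns(echoes))
```

Multiple returns per pulse work the same way: each surviving echo is a candidate surface, which is how lidar sees both foliage and the ground behind it.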
Coordinated cars won't work unless all cars are built the same and all maintained 100% the same and regularly inspected. You can't have a car driving 2 inches from the car in front, if it can't stop just as fast as the car in front. People already neglect their cars, change brake compounds, and get stuck purchasing low quality brake parts due to lack of availability of good components.
Next time you see some total beater driving down the road, imagine that car 2 inches off your rear bumper; not even a computer can make up for poor maintenance. Imagine that 8000lb pickup with its cheap oversized tires right in your rearview mirror with its headlights in your face. It's not going to be able to stop either.
The good news is they're all commodity hardware prices now.
Tesla removing radar and parking ultrasonic sensors was a self-own. Computer vision inference is pretty bad when all the camera sees is a white wall when backing up.
Fog: radar will perceive the car. Multi-car crash: long-range radar picks it up.
Bright glare from the sun: lidar picks it up. Lidar misses something: the camera picks it up.
Waymo has the correct approach on perception. Jam the car with sensors so it has superhuman vision of the environment around it.
If you charged car makers $20m per pedestrian killed by their cars regardless of fault you'd probably see much safer designs.
This is an extremely optimistic view on how companies work
A big reason car companies don't worry much about killing pedestrians at the moment is it costs them ~$0.
About half our road fatalities are pedestrians. About 80% of those are intoxicated with alcohol. When you're driving at 40mph, at night, and some drunk guy chooses to cross the road, no amount of safety features or liabilities can save him.
Sure, cars can be safer for light collisions with pedestrians where the car is going slowly. Especially in the US where half the cars have a very high hood. But where I live the problem is not safer cars, it's drunk pedestrians.
Here in the UK we have a standard 30mph built-up area limit, dropping to 20mph in most residential areas.
Result: a massive reduction in serious injuries and fatalities, especially in car-pedestrian collisions.
And sometimes at places that aren’t even a cross walk.
I'm in DTLA frequently, and I'm almost developing a secondary instinct to cover my brake and have an extra look around when a Waymo stops in a street.
Because it may be dropping off or picking up a rider or it saw something or someone I didn’t. Just happened Saturday in fact. I saw it do an abrupt stop when I was yielding to it at a “T” intersection and expected it to have the right of way and keep going. I didn’t proceed until I could figure out WHY it had just stopped, like “okay WHERE’S the passenger”
and then five or so people started running across the street in front of it that I would not have seen if that Waymo wasn’t there and I was clear to turn left.
As an added bonus it stayed stopped after they all crossed and I decided to be a jerk and turn left in front of it. It stayed stopped for me too. There’s no driver in it. It ain’t mad. XD
I have a good eye for spotting uber drivers who are about to load or unload too, Especially if they have some common sense and are trying to line up to do that so their passenger can get on or off curbside. A Waymo is just.. way more immediately identifiable that I can react that much faster to it or just be like.. alright. I’ll take a cue from it, it’s usually right.
And hell even if it’s wrong, maybe this isn’t a good time to pull out in front of it anyway!
Whoever was in control. This isn’t some weird legal quagmire anymore, these cars are on the road.
And will continue to be until every municipality implements laws about it.
It’s not a conundrum as much as an implementation detail. We’ve decided to hold Waymo accountable. We’re just ticking the boxes around doing that (none of which involve confusion around Waymo being responsible).
The nature of the punishment does not necessarily follow the same rules as for human incompetence, e.g. if the error occurs due to some surprising combination of circumstances that no reasonable tester would have thought to test, which I can't really give an example of because anything I can think of is absolutely something a reasonable tester would have thought to test, but for the sake of talking about it without taking this too seriously consider if a celebrity is crossing a road while a large poster of their own face is right behind them.
Don't get me wrong, perfection should be the long term goal. However I will settle for less than perfection today so long as it is better.
Though better is itself hard to figure out - drunk (or otherwise impaired) drivers are a significant factor in car deaths, as is bad weather, where self driving currently doesn't operate at all. Statistics need to show that self-driving cars are better than non-impaired drivers in all situations where humans drive before they can claim to be better. (I know some data is collected, but so far I haven't seen any independent analysis. The potentially biased analysis looks good, though - but again, it is missing all weather conditions.)
The benefits of self-driving should be irrefutable before requiring it. At least 10x better than human drivers.
Right now… Tesla likes to show off stats that suggest accidents go down while their software is active, but then we see videos like this, and go "no sane human would ever do this", and it does not make people feel comfortable with the tech: https://electrek.co/2025/05/23/tesla-full-self-driving-veers...
Every single way the human vision system fails, if an AI also makes that mistake, it won't get blamed for it. If it solves every single one of those perception errors we're vulnerable to (what colour is that dress, is that a duck or a rabbit, is that an old woman close up facing us or a young woman from a distance looking away from us, etc.) but also brings in a few new failure modes we don't have, it won't get trusted.
This is basically what we have (for reasonable definitions of full).
> So it’s not a surprise to see the low end models with lidar.
They could be going for a Tesla-esque approach, in that by equipping every car in the fleet with lidar, they maximise the data captured to help train their models.
And if he still doesn’t realize and admit he is wrong then he is just plain dumb.
Pride is standing in the way of first principles.
Same deal with his comments about how all anti-air military capability will be dominated by optical sensors.
And he’s not wrong that roads and driving laws are all built around human visual processing.
The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla self driving continued operating normally, makes a good case for the approach.
So to clarify, it wasn't entirely a lidar problem; it was a need to call home to navigate.
And people die all the time.
> The recent example of a power outage in SF where lidar-powered Waymos all stopped working when the traffic lights were out and Tesla self driving continued operating normally makes a good case for the approach.
Huh? Waymo is responsible for injury, so all their cars called home at the same time, DoS'ing themselves rather than kill someone.
Tesla takes no responsibility and does nothing.
I can't see the logic that makes vision-only have anything to do with lights out. At all.
Yes... but people can only focus on one thing at a time. We don't have 360 vision. We have blind spots! We don't even know the exact speed of our car without looking away from the road momentarily! Vision based cars obviously don't have these issues. Just because some cars are 100% vision doesn't mean that it has to share all of the faults we have when driving.
That's not me in favour of one vs the other. I'm ambivalent and don't actually care. They can clearly both work.
Most of them cannot drive a car. People have crashes for so many reasons.
They do, but the rate is extremely low compared to the volume of drivers.
In 2024 in the US there were about 240 million licensed drivers and an estimated 39,345 fatalities, which is 0.016% of licensed drivers. Every single fatality is awful but the inverse of that number means that 99.984% of drivers were relatively safe in 2024.
Tesla provided statistics on the improvements from their safety features compared to the active population (https://www.tesla.com/fsd/safety) and the numbers are pretty dramatic.
Miles driven before a major collision
699,000 - US Average
972,000 - Tesla average (no safety features enabled)
2.3 million - Tesla (active safety features, manually driven)
5.1 million - Tesla FSD (supervised)
It's taking something that's already relatively safe and making it approximately 5-7 times safer using visual processing alone.
Maybe lidar can make it even better, but there's every reason to tout the success of what's in place so far.
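The "5-7 times safer" claim can be sanity-checked directly from the figures quoted above (taking Tesla's published numbers at face value, which the replies below dispute):

```python
# Ratios implied by the miles-per-major-collision figures quoted above.
us_average = 699_000
tesla_no_features = 972_000
fsd_supervised = 5_100_000

print(round(fsd_supervised / us_average, 1))         # FSD vs US average
print(round(fsd_supervised / tesla_no_features, 1))  # FSD vs Tesla baseline
```

That gives roughly 7.3x versus the US average and 5.2x versus Teslas with no safety features, hence "5-7 times".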
Comparing the subset of "driving on only the roads where FSD is available, active, and has not turned itself off because of weather, road, traffic or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions"?
Or the accident stats that don't count as an accident any collision without airbag deployment, regardless of injuries? Including accidents that were sufficiently serious that the airbags were unable to deploy?
I have no doubt that there are ways to take issue with the stats. I'm sure we could look at accidents from 11pm - 6am compared to the volume of drivers on the road as well.
In aggregate, the stats are the stats though.
Sure, and using lidar means you can use it anywhere a person can go in any other technology too.
The absolute genius made sure that he can't back out without making it bleedingly obvious that old cars can never be upgraded for a LIDAR-based stack. Right now he's avoiding a company-killing class action suit by stalling, hoping people will get rid of HW3 cars, (and you can add HW4 cars soon too) and pretending that those cars will be updated, but if you also need to have LIDAR sensors, you're massively screwed.
Tesla specifically decided not to use the taxi-first approach, which does make sense since they want to sell cars. One of the first major failures of their approach was to start selling pre-orders for self driving. If they hadn’t, they would not have needed to promise it would work everywhere, and could have pivoted to single city taxi services like the other companies, or added lidar.
But certainly it all came from Musk’s hubris, first to set out to solve the self driving in all conditions using only vision, and then to start selling it before it was done, making it difficult to change paths once so much had been promised.
This has happened a load of times with him. It seemed to ramp up around the paedo-sub incident, and I wonder what went on with him at that time.
History is replete with smart people making bad decisions. Someone can be exceptionally smart (in some domains) and have made a bad decision.
> He seemed to have drank his own koolaid back then.
Indeed; but he was on a run of success, based on repeatedly succeeding deliberately against established expertise, so I imagine that Koolaid was pretty compelling.
I hate Elon's personality and political activity as much as anyone, but it is clear from a technical PoV that he did logical things. Actually, the fact that he was mistaken and still managed to not bankrupt Tesla says something about his skills.
Robot arms are neither a low-volume unique/high-cost market (SpaceX), nor a high-volume/high-margin business (Tesla). On top of that it's already a quite crowded space.
Even ignoring various current issues with Lidar systems that aren’t fundamental limitations, large amounts of road infrastructure is just designed around vision and will continue to be for at least another few decades. Lidar just fundamentally can’t read signs, traffic lights or road markings in a reliable way.
Personally I don’t buy the argument that it has to be one or the other as Tesla have claimed, but between the two, vision is the only one that captures all the data sufficient to drive a car.
But if you use "roomba" as a generic term for robot vacuum then yes, Chinese Ecovacs and Xiaomi introduced lidar-based robot vacuums in 2015 [2].
[1] https://www.theverge.com/news/627751/irobot-launches-eight-n...
[2] https://english.cw.com.tw/article/article.action?id=4542
My ex got a Roomba in the early 2010s and it gave me an irrational but everlasting disdain for the company.
They kept mentioning their "proprietary algorithm" like it was some amazing futuristic thing but watching that thing just bump into something and turn, bump into something else and turn, bump into something again and turn again, etc ... it made me hate that thing.
Now when my dog can't find her ball and starts senselessly roaming in all the wrong directions in a panic, I call it Roomba mode.
My neighbour used to park like that; "thats what the bumpers are for - bumping"
I think FSD should use both at minimum, though. No reason to skimp on a now-inexpensive sensor that sees things vision alone doesn't.
> Lidar just fundamentally can’t read signs, traffic lights or road markings in a reliable way.
Actually, given that basically every meaningful LIDAR on the market gives an "intensity" value for each return, in surprisingly many cases you could get this kind of imaging behavior from LIDAR so long as the point density is sufficient for the features you wish to capture (and point density, particularly in terms of points/sec/$, continues to improve at a pretty good rate). A lot of the features that go into making road signage visible to drivers (e.g. reflective lettering on signs, cats eye reflectors, etc) also result in good contrast in LIDAR intensity values.
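As a toy illustration of the intensity point: retroreflective sheeting returns far higher intensity than asphalt or vegetation, so even a naive threshold pulls out sign candidates. The point format, field values, and threshold here are all assumptions for illustration, not a real sensor API.

```python
# Illustrative only: find retroreflective-sign candidates in a lidar
# sweep using the per-point intensity channel.
points = [
    # (x, y, z, intensity 0-255)
    (12.0, 0.5, 1.1, 30),    # road surface
    (25.0, -1.2, 2.8, 240),  # reflective sign face
    (25.1, -1.3, 2.9, 235),  # reflective sign face
    (18.0, 2.0, 0.2, 45),    # curb
]

def sign_candidates(pts, intensity_threshold=200):
    """High-intensity returns are likely retroreflective material."""
    return [p for p in pts if p[3] >= intensity_threshold]

print(len(sign_candidates(points)))
```

A real pipeline would then cluster those points and need enough density to resolve lettering, which is where the points/sec/$ trend mentioned above matters.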
It's like having 2 pilots instead of 1 pilot. If one pilot is unexpectedly defective (has a heart attack mid-flight), you still have the other pilot. Some errors between the 2 pilots aren't uncorrelated of course, but many of them are. So the chance of an at-fault crash goes from p and approaches p^2 in the best case. That's an unintuitively large improvement. Many laypeople's gut instinct would be more like p -> p/2 improvement from having 2 pilots (or 2 data streams in the case of camera+LIDAR).
In the camera+LIDAR case, you conceptually require AND(x.ok for all x) before you accelerate. If only one of those systems says there's a white truck in front of you, then you hit the brakes, instead of requiring both of them to flag it. False negatives are what you're trying to avoid because the confusion matrix shouldn't be equally weighted given the additional downside of a catastrophic crash. That's where two somewhat independent data streams becomes so powerful at reducing crashes, you really benefit from those ~uncorrelated errors.
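A toy calculation shows why p -> p^2 is so much bigger than the intuitive p -> p/2. Assuming (optimistically) fully independent miss probabilities for the two sensors and a brake-if-either-flags policy:

```python
# Toy model of the redundancy argument: two sensors with independent
# per-event miss probabilities; the car brakes if EITHER one flags
# an obstacle, so a missed obstacle requires BOTH to miss.
p_camera_miss = 0.01
p_lidar_miss = 0.01

single = p_camera_miss                  # camera-only miss rate
combined = p_camera_miss * p_lidar_miss # both-miss rate

print(single / combined)
```

With these made-up numbers that's a 100x reduction in missed obstacles, not 2x. Correlated failure modes (e.g. both confused by the same scene) eat into this in practice, which is why the comment above says "approaches p^2 in the best case".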
This is not going to be true for a very long time, at least so long as one's definition of "vision" is something like "low-cost passive planar high-resolution imaging sensors sensitive to the visual and IR spectrum" (I include "low-cost" on the basis that while SWIR, MWIR, and LWIR sensors do provide useful capabilities for self-driving applications, they are often equally expensive, if not much more so, than LIDARs). Camera sensors have gotten quite good, but they are still fundamentally much less capable than the human eyes plus visual cortex in terms of useful dynamic range, motion sensitivity, and depth cues - and human eyes regularly encounter driving conditions which interfere or prohibit safe driving (e.g. mist/ fog, heavy rain/snow, blowing sand/dust, low-angle sunlight at sunrise/sunset/winter). One of the best features of LIDAR is that it is either immune or much less sensitive to these phenomena at the ranges we care about for driving.
Of course, LIDAR is not without its own failings, and the ideal system really is one that combines cameras, LIDARs, and RADARs. The problem there is that building automotive RADAR with sufficient spatial resolution to reliably discriminate between stationary obstacles (e.g. a car stalled ahead) and nearby clutter (e.g. a bridge above the road) is something of an unsolved problem.
By the way, Tesla engineers secretly trained their vision systems using LIDAR data because that's how you get training data. When Elon Musk found out, he fired them.
Finally, your premise is nonsensical. Using end to end learning for self driving sounds batshit crazy to me. Traffic rules are very rigid and differ depending on the location. Tesla's self driving solution gets you ticketed for traffic violations in China. Machine learning is generally used to "parse" the sensor output into a machine representation and then classical algorithms do most of the work.
The rationale for being against LIDAR seems to be "Elon Musk said LIDAR is bad" and is not based on any deficiency in LIDAR technology.
Imagine if the watch simply tells you if it is safe to jump into the pool (depending on the time it may or may not have water). If watches conflict, you still win by not jumping.
So in a way, yes.
The main difference is that "don't know the time" is a trivial consequence, but "crash into a white truck at 70mph" is non-trivial.
But it's the same statistical reasoning.
I know there are theoretical and semi-practical ways of reading those indicators with features that are correlated with the visual data, for example thermoplastic line markings create a small bump that sufficiently advanced lidar can detect. However, while I'm not a lidar expert, I don't believe using a completely different physical mechanism to read that data will be reliable. It will surely inevitably lead to situations where a human detects something that a lidar doesn't, and vice versa, just due to fundamental differences in how the two mechanisms work.
For example, you could imagine a situation where the white lane divider thermoplastic markings on a road has been masked over with black paint and new lane markings have been painted on - but lidar will still detect the bump as a stronger signal than the new paint markings.
Ideally while humans and self driving coexist on the same roads, we need to do our best to keep the behaviour of the sensors to be as close to how a human would interpret the conditions. Where human driving is no longer a concern, lidar could potentially be a better option for the primary sensor.
Conflicting lane marking due to road work/changes is already a major problem for visual sensors and human drivers, and something that fairly regularly confuses ADAS implementations. Any useful self-driving system will already have to consider the totality of the situation (apparent lane markings, road geometry, other cars, etc) to decide what "lane" to follow. Arguably a "geometry-first" approach with LIDAR-only would be more robust to this sort of visual confusion.
The focus shouldn't be on which sensor to use. If you are going to use humans as examples, just take the time to think about how a human drives. We can drive with one eye. We can drive with a screen instead of a windshield. We can drive with a wiremesh representation of the world. We also use audio signals quite a bit when driving.
The way to build a self driving suite is start with the software that builds your representation of the world first. Then any sensor you add in is a fairly trivial problem of sensor fusion + Kalman filtering. That way, as certain tech gets cheaper or better or more expensive and worse, you can just easily swap in what you need to achieve x degree of accuracy.
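In the spirit of that "swap sensors in and out" argument, here's a minimal 1D fusion sketch: combining a fuzzy camera range estimate with a precise lidar range via inverse-variance weighting, which is exactly the measurement-update step of a Kalman filter. All numbers are made up for illustration.

```python
# Minimal 1D sensor fusion: Kalman-style update combining two
# independent range estimates by their uncertainties.
def fuse(est_a, var_a, est_b, var_b):
    """Optimal linear fusion of two independent Gaussian estimates."""
    k = var_a / (var_a + var_b)       # gain: how much to trust estimate b
    est = est_a + k * (est_b - est_a)
    var = (1 - k) * var_a             # fused variance is always smaller
    return est, var

camera_range, camera_var = 52.0, 25.0  # metres; fuzzy monocular depth
lidar_range, lidar_var = 50.1, 0.04    # metres; direct measurement

est, var = fuse(camera_range, camera_var, lidar_range, lidar_var)
print(round(est, 2), round(var, 4))
```

The fused estimate lands almost exactly on the lidar reading because its variance dominates; drop the lidar and the same code degrades gracefully to camera-only. The reply below is right that real sensor fusion is rarely this tidy, but this is the core mechanism.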
We truly have no understanding of how the human brain really models the world around us and reasons over motion, and frankly anyone claiming to is lying and trying to sell something. "But humans can do X with just Y and Z..." is a very seductive idea, but the reality is "humans can do X with just Y, Z, and an extremely complex and almost entirely unknown brain" and thus trying to do X with just Y and Z is basically a fool's errand.
> ...builds your representation of the world first...
So far, I would say that one of the very few representations that can be meaningfully decoupled from the sensors in use is world geometry, and even that is a very weak decoupling because the ways you performantly represent geometry are deeply coupled with the capabilities of your sensors (e.g. LIDAR gives you relatively sparse points with limited spatial consistency, cameras give you dense points with higher spatial consistency, RADAR gives you very sparse targets with velocity). Beyond that, the capabilities of your sensors really define how you represent the world.
The alternative is that you do not "represent" the world but instead have that representation emerge implicitly inside some huge neural net model. But those models and their training end up even more tightly coupled to the type of data and capabilities of your sensors and are basically impossible to move to new sensor types without significant retraining.
> Then any sensor you add in is a fairly trivial problem of sensor fusion + Kalman filtering
"Sensor fusion" means everything and nothing; there are subjects where "sensor fusion" is practically solved (e.g. IMU/AHRS/INS accelerometer+gyro+magnetometer fusion is basically accepted as solved with EKF) and there are other areas where every "fusion" of multiple sensors is entirely bespoke.
Yes, humans don’t have built in lidar. But humans do use tools to augment their capabilities. The car itself is one example. Birds don’t have jet engines, props, or rotors… should we not use those?
I mean, you have to have vision to drive. What are you getting at? You can't have a lidar only autonomous vehicle.
>Lidars come down in price ~40x.
Is that really true? Extraordinary claims require extraordinary proof. Ars cites this China Daily article[0], which gives no specifics and simply states:
>A LiDAR unit, for instance, used to cost 30,000 yuan (about $4,100), but now it costs only around 1,000 yuan (about $138) — a dramatic decrease, said Li.
How good are these $138 LiDARs? Who knows, because this article gives no information. This article[1] from around the same time gives more specifics, listing under "1000 yuan LiDARs" the RoboSense MX, Hesai Technology ATX, Zvision Technologies ZVISION EZ5, and the VanJee Technology WLR-760.
The RoboSense MX is selling for $2,000-3,000, so it's not exactly $138. It was going to be added to XPENG cars, before they switched away from LiDAR. Yikes.
The ATX is $1400, the EZ5 isn't available, and the WLR-760 is $3500. So the press release claims of sub-$200 never really materialized.
Furthermore, all of these are low beam count LiDARs with a limited FOV. These are 120°x20°, whereas Waymo sensors cover 360°x95° (and it still needs 4 of them).
It seems my initial skepticism was well placed.
>if you had to choose one or the other for cost you’d be insane to choose vision over lidar
Good luck with that. LiDAR can't read signs.

[0] https://global.chinadaily.com.cn/a/202503/06/WS67c92b5ca310c...
[1] https://finance.yahoo.com/news/china-beijing-international-a...
The argument is that humans provide a proof-of-concept that vision + neural net can drive a car, because (for some reason) some people doubt this is possible. However there's no need for a "proof of concept" that mechanical legs can be built, because everyone already know that building mechanical legs is possible.
Waymo has certainly had its share of issues lately on the "practicality" axis. The cost of (actually good enough) LiDAR hardware doesn't help practicality either.
They surely have, as Waymo has a much better safety record than Tesla, and that's even with Tesla hiding and obscuring their crash data, probably outright lying about most crashes if Musk has any input to their processes.
The big advantage of Lidar is that it can easily pick out obstructions and that's a good ability for a driver to have, whereas Teslas can get fooled by shadows etc. It seems obvious to me that having cameras and Lidar allows for much better analysis of traffic and pedestrians.
>They surely have as Waymo have a much better safety record than Tesla
Tesla is already closing in on Waymo to within a factor of 2[0], and this is less than 6 months in, with Waymo having 125 million autonomous miles and Tesla having under a million. Bet on whichever horse you want, but this trend is not looking good for Waymo.

>It seems obvious to me

Oh thank goodness! Fortunately this style of argument has never been wrong before. ;)

[0] https://mashable.com/article/tesla-robotaxis-with-human-safe...
LIDAR is also straight up worthless without an unholy machine learning pipeline to massage its raw data into actual decisions.
Self-driving is an AI problem, not a sensor problem - you aren't getting away from AI no matter what you do.
The US car manufacturers are cooked.
Edit: Holden Spark.
For the model 3 it’s USD$8000 cheaper like for like.
[0]: https://en.wikipedia.org/wiki/Chevrolet_Spark#Discontinuatio...
Biden put a 100% tariff on Chinese cars and then Trump added tariffs on inputs.
Americans are getting screwed!
Once FSD arrives, we will make rules about the software that will have the effect of excluding Chinese companies. I seriously doubt that I'll see Chinese cars here in my lifetime.
And somehow US consumers feel comfortable paying more for worse cars.
We saw that during the '80s, with the Japanese cars.
I wouldn't want to own it in a very dense city, but there are only a couple of those in the US. Most US cities even at their densest locations are fine with a half ton.
What car has that? Please do not spread misinformation.
The Lightning taillights are expensive, a couple grand directly from Ford, primarily because of the integrated blind spot radar. That is the part that needs to be re-paired to the truck if you replace it, the taillights themselves are same as they ever were. Most of the time when someone breaks a taillight they just grab one from eBay and swap over the BLIS because it wasn't damaged.
Also, expensive taillights and headlights are 1) not unique to the Lightning, and 2) not unique to Ford.
I'm not entirely convinced Ford would have discontinued the BEV if the F-150 aluminum supplier's plant hadn't caught fire a few times over the space of a month or so. Ford really needs to go for maximum-margin trucks when it cannot produce all that it wants, so it made sense to put the Lightning BEV on indefinite hold.
BYD Dolphin is right on the edge of being a CUV. They can trivially scale it up a bit. It'll be more expensive, but not by much.
It's baffling and a complete own goal.
The GMC dealership near me is spilling full-size++ pick-ups and enormous Suburban/Tahoe/whatevers out of its lot and onto the grass. The average sticker is ~$48K/~$750 per month and, depending on driving habits, it can cost hundreds of dollars per week to run these vehicles. That's to say nothing of insurance, maintenance and the cost of replacing those monster truck tires every 2-3 years.
Compare all that to a BYD you could realistically buy outright for $10-15K and charge in your driveway every night.
Tariffs alone can't keep out cheap foreign products.
So a better way to put it is "protects US automakers in the US." And that assumes NA manufacturers would be unaffected by declining sales abroad.
I don't know what the real barrier to success will be, but I don't think it will be blindness. It may be difficulty competing on labor cost, but that's a good case for carefully applied tariffs to keep competition fair.
If these cars are to be sold in western markets, there needs to be strong regulation. Absolutely no digital data connections, for starters.
If the tech industry has taught us anything, it's that big money is still as irresponsible and greedy as ever.
I suppose that one small bit of hope is that one of the most obvious bad actors in general happened to be opposed to Lidar, and might like to screw competitors with a scandal. So the news might come out, after much tragic damage is done.
Everyone is accustomed to cars malfunctioning, in numerous ways.
An intuition from an analogy that should be recognizable to HN...
Everyone is accustomed to data breaches of everything, and thinks it's just something you have to live with. But the engineers in a position to warn that a given system is almost guaranteed to have data breaches... don't warn. And don't even think that it's something to warn about. And if they did warn, they'd be fired or suppressed. And their coworkers would wonder what was wrong with them, torpedoing their career over something that's SOP, and that other engineers will make happen anyway. Any security effort is on reactive mitigation, theatre, CYA, and regulatory capture to escape liability.
I'd like to think that automotive engineers are much more ethical than the tech industry, but two things are going on:
(1) we're seeing a lot of sketchy tech in cars, like surveillance and unsafe use of touchscreens;
(2) anything "AI" in a car is presumably getting cultural influence from the tech industry.
So I wouldn't trust automakers on anything intersecting with the tech industry.
Laser eye safety risk is very measurable and well classified.
For SUVs, maybe it could be blended in with a roof air scoop, like on some off-road trucks. Or a light bar.
Where is the LiDAR on the Atto 1? In the grille? How much worse is the field of view?
American product design is obsessed with appearance and finish. Products end up costing 3 times more and functionality is degraded.
Under that model, LIDAR training data is easy to generate. Create situations in a lab or take recordings from real drives, label them with the high-level information contained in them and train your models to extract it. Making use of that information is the next step but doesn't fundamentally change with your sensor choice, apart from the amount of information available at different speeds, distances and driving conditions.
put the car in a video game and raytrace what the lidar would see
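At its simplest, that kind of simulation is just raycasting against scene geometry. Here is a toy 2D sketch under an assumed setup (a single vertical wall in front of the sensor; a real simulator would raytrace full meshes with noise and reflectance models):

```python
import math

def simulate_lidar_scan(wall_x, n_beams=8, fov_deg=90.0, max_range=100.0):
    """Toy 2D lidar: cast n_beams rays toward a vertical wall at x=wall_x
    and return the range each beam would measure (max_range if no hit)."""
    ranges = []
    for i in range(n_beams):
        # spread beams evenly across the field of view, centered on +x
        angle = math.radians(-fov_deg / 2 + i * fov_deg / (n_beams - 1))
        dx = math.cos(angle)
        if dx <= 0:
            ranges.append(max_range)   # beam points away from the wall
            continue
        t = wall_x / dx                # distance along the ray to x = wall_x
        ranges.append(min(t, max_range))
    return ranges

scan = simulate_lidar_scan(wall_x=10.0)
print([round(r, 2) for r in scan])
```

The center-most beams read close to the true 10 m distance, while the 45° edge beams read ~14.14 m, exactly as the geometry predicts.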
https://www.carscoops.com/2025/11/volvo-says-sayonara-to-lid...
> In a statement, a Volvo Cars USA spokesperson added the decision was made “to limit the company’s supply chain risk exposure, and it is a direct result of Luminar’s failure to meet its contractual obligations to Volvo Cars.”
Joking aside, this BYD Seagull, sold as the Atto 1 in Australia (AUD$24K) and the Dolphin Surf in Europe (£18K in the UK), is one of the cheapest EVs in the world, selling at around £6K in China. It's priced double in Australia and triple in the UK compared to its original price in China. It's also one of China's best-selling EVs, with 60K units sold per month on average.
Many countries are scrambling to block its sales to protect their own car industries, or are raising tariffs considerably.
It's a game-changing car and it really deserves a place in the EV Hall of Fame as one of the legendary cars, like the Austin 7, the father of modern ICE cars including the BMW Dixi and Datsun Type 11.
[1] BYD_Seagull:
https://en.wikipedia.org/wiki/BYD_Seagull
[2] Austin 7:
The Austin 7 and its derivatives (notably the Dixi, which kickstarted the highly successful BMW car business) dictated and popularized the modern car's architecture, interfaces, and control layout as we know them today. To drive cars that predate the Austin 7 you would probably need a manual, with the exception of the Cadillac Type 53, the original car that heavily inspired the Austin 7.
The Austin 7 was the lightest and cheapest proper car of its generation, even by today's standards and adjusted for inflation. As crazy as it sounds, you can even drive one on UK roads today without any modification [2].
It became the template for the modern car, made popular in the UK, Germany, and Japan, and then the rest of the world, since these three countries are major manufacturers of modern cars.
The light weight and low price of the baby Seagull (the smallest BYD) closely match the innovation criteria of the Baby Austin (the popular UK name for the Austin 7).
[1] Jeremy Clarkson and James May Find the First Car [video]:
https://news.ycombinator.com/item?id=46409075
[2] Everyone should try this! 1924 Austin Seven - no synchromesh, uncoupled brakes, in the rain! [video]:
Last week I saw some old Rolls Royce that I absolutely could not guess even the decade of. The body looked 1930s but the interior looked 1950s, until I noticed what might have been spark-advance levers on the steering wheel. It's a super luxury vehicle with super conservative styling, so I really don't know if it had a luxury interior for its time or a classic exterior for its time.
Aww
The number of times I need to do this in daily driving is approximately zero.
Maybe you don't 0-60 often, but 0-30 has to be a bit more common?
https://www.amnesty.org/en/latest/news/2024/10/human-rights-...
> No demonstration of alignment (0-22 points)
What does "no demonstration of alignment" mean in this context?
Eastern companies often don't proactively demonstrate compliance beyond what's legally required, especially to Western NGOs. Does this lack of demonstration actually prove they're violating human rights?
We're going to look so backwards and "soviet" after a while.
But assistive devices are well embedded: reversing tones, rear-vision cameras.
So, adding something which can do side knock, pavement risk, sideswipe, blind spot, or 'pace to car in front' type stuff is a bit obvious if you ask me, and if it's optional, then all I want is the minimal wiring harness cost amortized out so retrofit isn't too hard.
I hope BYD also continues to offer "real switches" and "smaller TV dashboard" choices, because I'm not a fan of touchscreens and large screens.
And, I should say, I’m a terrible owner. This car had (at most) 10 maintenance checks (and oil changes) in its life. Emphasis on “at most”.
I intend to buy a new one in about 3 years and there’s no chance in hell I’m going for something shiny that breaks after 5 years like this fully made in China stuff (even Teslas are cumbersome to maintain according to statistics).
I want a car to last at least 15 years with very little servicing, not some disposable tech gadget that I can’t be sure will work next month without some shop time.
P.S. The car is a Mazda 2.
That's just not true. They absolutely don't. There's no chance in hell most (or any for that matter) Chinese carmaker has better quality than a Volvo, or a BMW, or a Mercedes or an Audi, or, etc, etc
cadamsdotcom•3w ago
senti_sentient•3w ago
iknowstuff•3w ago
refulgentis•3w ago
Just too much real world data.
(i.e. scaled paid service, no drivers, multiple cities, for 1 year+)
DustinBrett•3w ago
refulgentis•3w ago
DustinBrett•3w ago
refulgentis•3w ago
You’re smart, Darren, and so are other people; you should assume I knew the cars have remote backup operators. Again, you’re smart, so you also know why that doesn’t mitigate having a scaled robotaxi service vs. nothing.
I doubt you’ll chill out but here’s a little behind the scenes peek for you that also directly address what you’re saying: a big turning point for me was getting a job at Google and realizing Elon was talking big game but there’s 100,000 things you gotta do to run a robot taxi service and Tesla was doing none of them. The epiphany came when I saw the windshield wipers for cameras and lidar.
You might note even a platonically ideal robotaxi service would always have capacity for remote operation.
DustinBrett•3w ago
My replies are at the same level as that which I respond to, never aggressive IMO.
And if you "knew" something about the relevant topic and leave it out, that in itself is part of the dishonesty.
So once you got a job at Google then you felt Waymo was better, hmmm.
Tesla has a robot taxi service that in some cases has nobody in the car. Also, everyone that owns a Tesla has experienced FSD, in which it goes from A to B without being touched, which is the same as it driving by itself. A person just went cross country and back with this. So to say Tesla is doing none of the 100,000 things you think are required, I think that says more about what someone at Google thinks is needed vs what is happening on the ground.
I am not against remote operation in some cases, but those suggesting Waymo has solved this need to admit that it relies heavily on it for basic decisions, like what to do when the power goes out at intersections.
imtringued•3w ago
DustinBrett•3w ago
Mawr•3w ago
No, some edge case that made the cars fail safely during a power outage doesn't compare. If that's the best you can come up with, you've got nothing.
DustinBrett•3w ago
iknowstuff•3w ago
Robotaxi is a separate product. They are fantastic at driving but until they remove supervisors it’s a moot comparison
refulgentis•3w ago
terminalshort•3w ago
Rebelgecko•3w ago
DustinBrett•3w ago
iknowstuff•3w ago
And your stats comparing it to Waymo are made up, and debunked in the very Reddit thread they came from.
cyberax•3w ago
So Tesla is in a weird state right now. Tesla's highway assist is shit; it's worse than Mercedes' previous-generation assist, ever since Tesla switched to end-to-end neural networks. The new MB.Drive Assist Pro is apparently even better.
FSD attempts to work in cities, but it's ridiculously bad, worse than useless even in simple city conditions. If I try to turn it on, it attempts to kill me at least once on my route from my office to my home. So other car makers quite sensibly avoided city driving until they had perfected the technology.
ronnier•3w ago
terminalshort•3w ago
qwerpy•3w ago
FireBeyond•3w ago
That's why Tesla's stats are BS. "All drivers, all conditions, all vehicles, all roads" versus "Where FSD is even functional".
iknowstuff•3w ago
https://m.youtube.com/watch?v=2MTmSgVYTTQ
https://m.youtube.com/results?search_query=Fsd+14+blizzard
Any more questions?
cyberax•3w ago
In cities, it's just shit. If you're using it without paying attention, your driving license has to be revoked and you should never be allowed to drive.
durandal1•3w ago
qwerpy•3w ago
cyberax•3w ago
Tesla FSD gives up with the red-hands-of-death panic at this spot: https://maps.app.goo.gl/Cfe9LBzaCLpGSAr99 (edit: fixed the location)
It also misinterprets this signal: https://maps.app.goo.gl/fhZsQtN5LKy59Mpv6 It doesn't have enough resolution to resolve the red left arrow, especially when it's even mildly rainy.
At this intersection, it just gets confused and I have to take over to finish the turn: https://maps.app.goo.gl/DHeBmwpe3pfD6AXc6
You're welcome to try these locations.
dham•3w ago
cyberax•3w ago
> Red hands of death would be sunglare due to your windshield not being clean. I haven't had red hands since 14 came out.
My windshield is completely normal. Not unusually dirty or anything. It's also Seattle. What is the "sunglare"?
dham•3w ago
These are influencers who have a stake in Tesla. The general consensus from regular users is that it is really good starting with FSD 14. It's the first version that finally feels complete. I have 5,000 miles on FSD 14 with no disengagements. 99% of my driving is FSD. I couldn't say that for any other version. Even my wife has 85% of her driving on FSD, and she hated it before. She just tends to drive herself on short drives and in parking lots, whereas I don't. So your take just doesn't line up with what people are saying on social media and with my personal experience.
> My windshield is completely normal
If it's never been cleaned from the inside, there's a good chance it's not. The off-gassing from new cars causes fog on the inside of the windshield in front of the camera. It might behave okay (or weird), but when sun hits it you get the red hands of death.
You need to clean it yourself or have Tesla do it. They offer it for free. I did mine following this video and it wasn't bad if you have the right tool. After I did this things were completely fine in low direct sun.
https://www.youtube.com/watch?v=PwiMCIxYFXM
cyberax•3w ago
I've seen it on multiple forums. Just like a broken record.
> If it's never been cleaned from the inside, it's a good chance it's not.
The camera is clean. I can see that in the dashcam recordings. And if the system is so fragile that a bit of dust kills it, then it's not good.
The issue with the red-hands-of-death is caused by the forward collision warning, the road there curves and slopes up, so the car gets confused and interprets the car in front as if it's on a collision course. This happens even during manual driving, btw. False FCWs are a common problem, if you check forums, and people are annoyed because it affects their safety score used for Tesla Insurance.
FSD got better than it was 4 years ago. But it's still _nowhere_ near Waymo. You absolutely can not just sit back and snooze while it's driving, you constantly have to be on guard.
dham•3w ago
You won't see it unless you shine light into it.
> And if the system is so fragile that a bit of dust kills it, then it's not good.
It's not dust, it's fog on the inside of the windshield from offgassing.
> The issue with the red-hands-of-death is caused by the forward collision warning, the road there curves and slopes up, so the car gets confused and interprets the car in front as if it's on a collision course
Oh, fair enough. I've never seen this, and I used FSD (14) all through the Appalachian mountains.
> FSD got better than it was 4 years ago. But it's still _nowhere_ near Waymo
Fair enough, but FSD is still years ahead of any other system you can buy as a consumer.
JumpCrisscross•3w ago
I used the latest FSD and Waymo in December. FSD still needs to be supervised. It’s impressive and better than what my Subaru’s lane-keeping software can do. But I can confidently nap in a Waymo. These are totally different products and technology stacks.
iknowstuff•3w ago
dham•3w ago
cyberax•3w ago
The most recent one is: https://media.mbusa.com/releases/release-4889b1d1c66cddc7120...
imtringued•3w ago
I'm not talking about some Tesla style last second bullshit where you're supposed to compensate for the deficiencies of the system that supposedly can do the full journey. I mean a route like L2->L3->L2 where L2 is human supervised autonomous driving and L3 is autonomous driving with zero intervention. You can't tell people they're allowed to drink a coffee and then one minute later tell them to supervise the driving.
dham•3w ago
Interesting, because that's just not my experience at all, nor that of a lot of other users.
dham•3w ago
This isn't even close to being right.
Rebelgecko•3w ago
If you're just getting me mixed up with another poster, I got my stats from an electrek article supplemented by Waymo's releases: https://waymo.com/safety/impact/
Tesla's tech is also marketed as a full self driving autopilot, not just basic driver assistance like adaptive cruise control.
That's how they're doing the autonomous robotaxis and the cross country drives without anyone touching the steering wheel.
shaklee3•3w ago
iknowstuff•3w ago
cr125rider•3w ago
iknowstuff•3w ago
kelnos•3w ago
iknowstuff•3w ago
DustinBrett•3w ago
jeltz•3w ago
DustinBrett•3w ago
jatins•3w ago
They might get better but how is that not evidence enough that currently Robotaxis are behind Waymos in self driving capabilities?
DustinBrett•3w ago
kcb•3w ago
FSD is here, it wasn't 3 or 4 years ago when I first bought a Tesla, but today it's incredible.
jandrewrogers•3w ago
For better or worse, passive optical is much more robust against these types of risks. This doesn't matter much when LIDAR is relatively rare but that can't be assumed to remain the case forever.
consumer451•3w ago
What's crazy to me is that anyone would think that anything short of ASI could take image based world understanding to true FSD. Tesla tried to replicate human response, ~"because humans only have eyes" but largely without even stereoscopic vision, ffs.
[0] https://www.youtube.com/watch?v=IQJL3htsDyQ
wongarsu•3w ago
Sure, someone can put up a wall painted to look like a road, but we have about a century of experience that people will generally not do that. And if they do it's easy to understand why that was an issue, and both fixing the issue (removing the mural) and punishing any malicious attempt at doing this would be swift
consumer451•3w ago
Is this a joke? Graffiti is now punishable and enforced by whom exactly? Who decides what constitutes an illegal image? How do you catch them? What if vision-only FSD sees a city-sanctioned brick building's mural as an actual sunset?
So you agree that all we need is AGI and human-equal sensors for Tesla-style FSD, but wait... plus some "swift" enforcement force for illegal murals? I love this. I have had health issues recently, and I have not laughed this hard for a while. Thank you.
Hell, at the last "Tesla AI Day," Musk himself said ~"FSD basically requires AGI" - so he is well aware.
wongarsu•3w ago
consumer451•3w ago
But what if your city hired you to paint a sunset mural on a wall, and then a vision-only system killed a family of four by driving into it, during some "edge case" lighting situation?
I would like to think that we would apply "security is an onion" to our physical safety as well. Stereo vision + lidar + radar + ultrasonic? Would that not be the least that we could do as technologists?
wongarsu•3w ago
dham•3w ago
https://www.youtube.com/watch?v=TzZhIsGFL6g
solumunus•3w ago
ycui1986•3w ago
jandrewrogers•3w ago
LIDAR has much more in common with ordinary radar (it is in the name, after all) and is similarly susceptible to interference.
CamperBob2•3w ago
Like GPS, LIDAR can be jammed or spoofed by intentional actors, of course. That part's not so easy to hand-wave away, but someone who wants to screw with road traffic will certainly have easier ways to do it.
addaon•3w ago
For rotating pulsed lidar, this really isn't the case. It's possible, but certainly not trivial. The challenge is that eye safety is determined by the energy in a pulse, but detection range is determined by the power of a pulse, driving towards minimum pulse width for a given lens size. This width is under 10 ns, and leaning closer to 2-4 ns for more modern systems. With laser diode currents in the tens of amps range, producing a gaussian pulse this width is already a challenging inductance-minimization problem -- think GaN, thin PCBs, wire-bonded LDs etc to get loop area down. And an inductance-limited pulse is inherently gaussian. To play any anti-interference games means being able to modulate the pulse more finely than that, without increasing the effective pulse width enough to make you uncompetitive on range. This is hard.
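The energy-vs-power tradeoff described above falls out of simple arithmetic: eye safety caps the energy per pulse, while detection range depends on peak power, which is energy divided by width. A quick illustration (the 100 nJ energy budget below is an assumed number for illustration, not a real spec):

```python
# Eye safety limits the energy in a pulse; range depends on its peak power.
# Shrinking the pulse width raises peak power for the same energy budget.
PULSE_ENERGY_J = 100e-9  # assumed per-pulse eye-safe energy budget

peaks = {}
for width_ns in (10, 4, 2):
    peak_w = PULSE_ENERGY_J / (width_ns * 1e-9)  # peak power = E / width
    peaks[width_ns] = peak_w
    print(f"{width_ns} ns pulse -> {peak_w:.0f} W peak power")
```

Halving the pulse width doubles the peak power at the same eye-safety budget, which is why designs push toward the 2-4 ns pulses mentioned above, and why there is no slack left for modulating structure onto individual pulses.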
CamperBob2•3w ago
Large numbers of bits per unit of time are what it takes to make two sequences correlate (or not), and large numbers of bits per unit of time are not a problem in this business. Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.
addaon•3w ago
I haven't seen a system that does anti-interference across multiple pulses, as opposed to by shaping individual pulses. (I've seen systems that introduce random jitter across multiple pulses to de-correlate interference, but that's a bit different.) The issue is you really do get a hell of a lot of data out of a single pulse, and for interesting objects (thin poles, power lines) there's not a lot of correlation between adjacent pulses -- you can't always assume properties across multiple pulses without having to throw away data from single data-carrying pulses.
Edit: Another way of saying this -- your revisit rate to a specific point of interference is around 20 Hz. That's just not a lot of bits per unit time.
> Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.
I can believe this is true for FMCW lidar, but I know it to be untrue for pulsed lidar. Perhaps we're discussing different systems?
CamperBob2•3w ago
My naive assumption would be that they would do exactly that. In fact, offhand, I don't know how else I'd go about it. When emitting pulses every X ns, I might envision using a long LFSR whose low-order bit specifies whether to skip the next X-ns time slot or not. Every car gets its own lidar seed, just like it gets its own key fob seed now.
Then, when listening for returned pulses, the receiver would correlate against the same sequence. Echoes from fixed objects would be represented by a constant lag, while those from moving ones would be "Doppler-shifted" in time and show up at varying lags.
So yes, you'd lose some energy due to dead time that you'd otherwise fill with a constant pulse train, but the processing gain from the correlator would presumably make up for that and then some. Why wouldn't existing systems do something like this?
I've never designed a lidar, but I can't believe there's anything to the multiple-access problem that wasn't already well-known in the 1970s. What else needs to be invented, other than implementation and integration details?
Edit re: the 20 Hz constraint, that's one area where our assumptions probably diverge. The output might be 20 Hz but internally, why wouldn't you be working with millions of individual pulses per frame? Lasers are freaking fast and so are photodiodes, given synchronous detection.
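A toy sketch of the scheme being proposed here: each car gets a pseudo-random on/off pulse pattern from its own seed, and the receiver recovers its echo by correlating at candidate lags. This is purely illustrative of the CDMA-style idea; as the sibling comments explain, real pulsed lidars don't work this way.

```python
import random

def make_pulse_code(seed, n_slots):
    """Per-car pseudo-random fire/skip pattern (hypothetical scheme)."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_slots)]

def correlate(code, received, max_lag):
    """Correlation score of our code against the received train at each lag."""
    return [sum(code[i] * received[i + lag] for i in range(len(code)))
            for lag in range(max_lag)]

n = 256
own = make_pulse_code(seed=42, n_slots=n)
other = make_pulse_code(seed=7, n_slots=n)   # interfering car's code

true_lag = 17                                # our echo's round-trip delay, in slots
received = [0] * (n + 64)
for i, bit in enumerate(own):
    received[i + true_lag] += bit            # our delayed echo
for i, bit in enumerate(other):
    received[i] += bit                       # overlapping interference

scores = correlate(own, received, max_lag=64)
print(scores.index(max(scores)))             # correlation peak recovers the lag
```

The correlation peak lands at the true lag despite the interfering train, which is the processing gain being argued for; the counterargument above is that the per-point revisit rate leaves too few slots to build such codes without sacrificing range or density.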
addaon•3w ago
A typical long range rotating pulsed lidar rotates at ~20 Hz, has 32 - 64 vertical channels (with spacing not necessarily uniform), and fires each channel's laser at around 20 kHz. This gives vertical channel spacing on the order of 1°, and horizontal channel spacing on the order of 0.3°. The perception folks assure me that having horizontal data orders of magnitude denser than vertical data doesn't really add value to them; and going to a higher pulse rate runs into the issue of self-interference between channels, which is much more annoying to deal with than interference from other lidars.
If you want to take that 20 kHz to 200 kHz, you first run into the fact that there can now be 10 pulses in flight at the same time... and that you're trying to detect low-photon-count events with an APD or SPAD outputting nanoamps within a few inches of a laser driver generating nanosecond pulses at tens of amps. That's a lot of additional noise! And even then, you have an 0.03° spacing between pulses, which means that successive pulses don't even overlap at max range with a typical spot diameter of 1" - 2" -- so depending on the surfaces you're hitting, on their continuity as seen by you, you still can't really say anything about the expected time alignment of adjacent pulses. Taking this to 2 MHz would let you guarantee some overlap for a handful of pulses, but only some... and that's still not a lot of samples to correlate. And of course your laser power usage and thermal challenges just went up two orders of magnitude...
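The angular-spacing arithmetic above checks out on the back of an envelope (rotation and fire rates are the figures from the comment; the max range is an assumed value):

```python
import math

ROT_HZ = 20            # rotation rate (from the comment)
FIRE_HZ = 20_000       # per-channel pulse rate (from the comment)
MAX_RANGE_M = 200.0    # assumed maximum range

pulses_per_rev = FIRE_HZ / ROT_HZ           # 1000 pulses per rotation
h_spacing_deg = 360.0 / pulses_per_rev      # horizontal angular spacing
print(h_spacing_deg)                        # 0.36 -- "on the order of 0.3 degrees"

# arc length between adjacent pulses at max range, vs. a ~1"-2" spot
sep_m = MAX_RANGE_M * math.radians(h_spacing_deg)
print(round(sep_m, 2))                      # ~1.26 m: adjacent spots don't overlap
```

Even at a 10x higher fire rate the spacing is ~0.13 m at this assumed range, still wider than the spot, which supports the point that adjacent pulses can't be assumed to hit the same surface.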
tim333•3w ago
vachina•3w ago
catgirlinspace•3w ago
leobg•3w ago