The reason AR glasses are chonky and not sexy is that they have a bunch of hardware and batteries and whatnot that require them to be that shape and size.
Assuming they're fraudulent, they can make it look like anything they want, because the product doesn't actually do what it purports to do. I'm sure Ray-Ban and Meta want their glasses to look better, but it's simply not possible with the technology they have.
This guy is serious.
"absolutely no latency" -> only apple manages anything close to this, and that -- with custom silicon that can feed data from the camera to screen while it is still being read out from the camera. A no-name startup doing this ain't happening.
Not sure why you think we have off-the-shelf miniaturized sonar hardware at scale, or shape-detection tech that could beat out mobile cameras and computer vision software.
Uhm, I didn't say that. What I'm asking is exactly the opposite, in fact. And the Power Glove thing was hardly capitalized; I wouldn't consider that a serious attempt.
And in the real world, people just do not care about cameras on glasses as much as the HN crowd trotting out the glasshole articles from a decade ago would suggest. Both smart glasses and phones that are actively recording are everywhere already.
I'd explicitly want one without a camera to avoid the 'glasshole effect'.
And yes, people do care, at least here in Europe. The Meta glasses are banned at a lot of events now.
The cameras are not what makes the glasses bulky and people find a lot of utility in taking and sharing pictures and videos from their glasses. So you'll probably always want to have at least one camera on the product for that use case.
The ratios of image resolution and viewing distance to physical sensor size are veeeeeeery bad with sound compared to cameras though (rough numbers in the sketch below). Cameras are also completely passive sensors that don't require an attached emitter in most circumstances.
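A back-of-envelope diffraction comparison makes the gap concrete. Every number below is an illustrative assumption (a 40 kHz transducer with a generous 10 mm aperture versus a 2 mm camera pupil at 550 nm), not a spec of any real device:

```python
# Back-of-envelope: diffraction-limited angular resolution ~ wavelength / aperture.
# All numbers below are illustrative assumptions, not specs of any real device.

SPEED_OF_SOUND_AIR = 343.0   # m/s
ULTRASOUND_FREQ = 40_000.0   # Hz, a common ultrasonic transducer frequency
SONAR_APERTURE = 0.010       # m, a generous 10 mm emitter/receiver

LIGHT_WAVELENGTH = 550e-9    # m, green light
CAMERA_APERTURE = 0.002      # m, a small 2 mm phone-camera pupil

sonar_wavelength = SPEED_OF_SOUND_AIR / ULTRASOUND_FREQ  # ~8.6 mm
sonar_res_rad = sonar_wavelength / SONAR_APERTURE        # ~0.86 rad (~49 degrees)
camera_res_rad = LIGHT_WAVELENGTH / CAMERA_APERTURE      # ~2.8e-4 rad (~0.016 degrees)

print(f"sonar angular resolution:  ~{sonar_res_rad:.2f} rad")
print(f"camera angular resolution: ~{camera_res_rad:.1e} rad")
print(f"camera is ~{sonar_res_rad / camera_res_rad:,.0f}x finer per unit aperture")
```

Roughly three orders of magnitude worse angular resolution for sound, before you even get to the attached-emitter problem.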
I was thinking of backing it and I'm so glad I didn't. Immersed has a great app so I don't think this was a blatant con but I do think they bit off more than they could chew.
> tracking blah blah 6DoF blah blah IMU
This whole section is just wildly false. Tracking like what's shown in the video is easily done with just a camera, 1980s-era sparse optical flow, and basic fucking geometry (minimal sketch below). No IMU needed. People have been doing far more complex and stable motion tracking from nothing more than single-camera video for literally decades. And this device doesn't just have a camera; it has two HD stereoscopic cameras, so they also get a depth map. You can absolutely do what they show with the hardware that Pickle claims is in the glasses.
(If you want a fantastic example, see the intro sequence to the movie Stranger Than Fiction from 2006.)
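For the skeptical, here's a minimal sketch of that class of tracking using stock OpenCV: Shi-Tomasi corners fed into pyramidal Lucas-Kanade, the textbook sparse-optical-flow combo. This is not Pickle's actual pipeline, and the video filename is a made-up placeholder:

```python
# Minimal sparse optical-flow tracker: corner features + pyramidal Lucas-Kanade.
# No IMU, no SLAM, no depth; single-camera video in, frame-to-frame motion out.
import cv2
import numpy as np

cap = cv2.VideoCapture("head_cam.mp4")  # placeholder filename
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Seed the tracker with strong corners (Shi-Tomasi).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) < 4:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track each corner from the previous frame into the current one.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.ravel() == 1].reshape(-1, 2)
    good_old = pts[status.ravel() == 1].reshape(-1, 2)
    if len(good_new) == 0:
        break

    # Median displacement is a crude but stable camera-motion estimate;
    # counter-shifting an overlay by this keeps it visually "world locked".
    shift = np.median(good_new - good_old, axis=0)
    print("frame-to-frame shift (px):", shift)

    prev_gray = gray
    pts = good_new.reshape(-1, 1, 2)

cap.release()
```

You'd want more machinery for a full 6DoF pose, but nothing shown in that clip needs more than this class of technique.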
> It would take time to affix an open source SLAM pipeline and even more for them to build their own.
And this is a complete non sequitur, as SLAM is also not needed for what they show in the video. Nothing shown requires mapping the area. It's also a joke to say that it would "take time to affix an open source SLAM pipeline" unless by "time" he means a few minutes.
> This would indicate either the software is using real-time depth tracking blah blah
The glasses have fucking binocular cameras in them! What the fuck else would they be for?
> But in the photos of Pickle 1, there is no sign of any spot to charge the device.
There is zero reason whatsoever to believe that those images are photos of the final product and not renders or props. It's like he's never seen marketing material before.
I can't even with this.
This guy's LinkedIn bio says "Aug 2022 - Mar 2023: Attended UVA as a first year studying economics and commerce before dropping out to build in VR full time." So it seems he's a self-important child with zero background. That explains a lot tbh.
>1980s-era corner feature detection, and basic fucking geometry
Which are pieces of how SLAM works.
>You can absolutely do what they show with the hardware that Pickle claims is in the glasses.
World-locked content is not novel. Existing glasses can do it today. The claim is that Pickle didn't build it. The obvious answer would be that they are using what Qualcomm or someone else built, as opposed to Pickle building all of this within a month.
It absolutely is not. Tracking is needed for mapping, not the other way around.
And it's definitely not needed for what they show in the video that this kid is complaining about. It's not even needed for associating things that go out of view and then come back, though it can help there.
> Which are pieces of how SLAM works.
Screws are pieces of how automobiles work, but it would be foolish to suggest that one needs a Honda Pilot to hang a painting on their wall.
> The claim is that Pickle didn't build it. The obvious answer would be that they are using what Qualcomm or someone else built
Please don't shift goalposts. The claim is that they're lying about capability. And the evidence given for that claim is flat out wrong.
The term "tracking" is being used in two different senses here. The tracking data that OpenXR exposes comes from SLAM, and SLAM is done via sensor fusion, including signals that come from tracked feature points.
>And it's definitely not needed for what they show in the video
The video shows 6DoF tracking, which a production implementation would do via SLAM.
>for associating things that go out of view and then come back
Having memory of what existed before implies you have a form of a map. You also want a map to be able to match together the views of the multiple cameras.
>Please don't shift goalposts.
The claim I am referring to is, "6DoF with spatial anchoring on a device this small and compute constrained is hard for any company to build, let alone Pickle."
SLAM is not required to do what is shown in the video. Neither is an IMU. And an IMU is also not required for SLAM. Everything about the blog post is factually wrong.
> Having memory of what existed before implies you have a form of a map
Once again, you're just wrong here. Image feature correspondence works even without any spatial mapping; you're getting things backwards. You need to find correspondences before you can begin to make a map, not the other way around! (See the sketch below.)
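To make that concrete, here's a minimal sketch of map-free correspondence: ORB descriptors matched between two views with brute-force Hamming matching. The frame filenames are placeholders; note that there's no map anywhere in the loop:

```python
# Map-free re-association: ORB descriptors matched across two frames.
# No SLAM, no IMU, no spatial map; descriptor matching alone pairs up features.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Placeholder filenames: e.g. before an object leaves view and after it returns.
frame_a = cv2.imread("frame_before.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_after.png", cv2.IMREAD_GRAYSCALE)

kp_a, desc_a = orb.detectAndCompute(frame_a, None)
kp_b, desc_b = orb.detectAndCompute(frame_b, None)

if desc_a is not None and desc_b is not None:
    # Each match pairs a point in A with its look-alike in B, purely by
    # descriptor similarity. These correspondences are the *input* that a
    # mapping stage would consume, not something a map produces.
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
    for m in matches[:10]:
        print(f"A{kp_a[m.queryIdx].pt} <-> B{kp_b[m.trainIdx].pt}  "
              f"(Hamming dist {m.distance:.0f})")
```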
Anyway, I don't have the energy to argue more with someone who confidently doesn't actually know what they're talking about. So, good luck, have fun.
Digital camera sensors are all inherently extremely sensitive to infrared anyway and can see quite well in the dark with nothing more than an IR LED if you don't add a physical filter over the sensor, soooo...
Not with imperceptible latency
First of all, did you watch the video? (The whole thing is kinda annoying and long, but the part in question here is only about 3 seconds, so it's worth a look.) Two points about the video: 1) The positioning of the overlay is noticeably unstable relative to the apparent camera motion, so it doesn't even show what the OP claims it does. 2) Because of that, you have no way of knowing what the latency actually is.
Anyway, yes, even with imperceptible latency, and even in this form factor, if you optimize for the right things. The kind of simple feature tracking that can accomplish what's shown in the video was running in real time circa 2005, and there have been significant hardware and algorithmic advances in the past 20 years.
But I'd be interested in your examples that can achieve what Pickle is offering in a single pair of glasses.
It could be the case here. What would explain the accelerated development timeline? Possibly that it isn't their timeline at all, but someone else's, someone who started a long time ago. And they may be talking about their supplier's two-year roadmap or something similar.
PS. One of the companies (or more specifically, its owners) that was doing this was eventually charged with fraud.
Probably 99% of the electronics industry these days is like that. Laptops are one of the most commonly OEM'd products.
I think that's the case, but I also think it will not look or function anything like the mock they showed.