The video on Reddit: https://www.reddit.com/r/3Dprinting/comments/1olyzn6/i_made_...
> Do you mean the refresh rate should be higher? There's two things limiting that:
> - The sensor isn't optimized for actually reading out images, normally it just does internal processing and spits out motion data (which is at high speed). You can only read images at about 90Hz
> - Writing to the screen is slow because it doesn't support super high clock speeds. Drawing a 3x scale image (90x90 pixels) plus reading from the sensor, I can get about 20Hz, and a 1x scale image (30x30 pixels) I can get 50Hz.
I figured there would be limitations around the second, but I was hoping the former wasn't such a big limit.
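For intuition on how those two bottlenecks stack up, here's a rough back-of-envelope sketch (all constants are assumptions for illustration, not the project's actual figures; real overheads like per-pixel sensor register reads would pull the numbers further down toward the quoted 50Hz/20Hz):

```python
# Rough budget for why a bigger on-screen image costs frame rate.
# Every constant below is an assumption for illustration, not measured from the project.
SENSOR_FRAME_HZ = 90        # quoted ceiling for reading full images out of the sensor
SPI_HZ = 4_000_000          # assumed display SPI clock
BITS_PER_PIXEL = 16         # assumed RGB565 panel

def display_write_s(side_px: int) -> float:
    """Time to push one square image of side_px x side_px pixels to the display."""
    return side_px * side_px * BITS_PER_PIXEL / SPI_HZ

def effective_fps(scale: int, native: int = 30) -> float:
    """Sensor readout and display write happen back to back, so their times add."""
    return 1.0 / (1.0 / SENSOR_FRAME_HZ + display_write_s(native * scale))

print(f"1x (30x30): ~{effective_fps(1):.0f} fps")   # roughly 68 fps under these assumptions
print(f"3x (90x90): ~{effective_fps(3):.0f} fps")   # roughly 23 fps under these assumptions
```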
https://www.youtube.com/watch?v=EE9AETSoPHw&t=44
https://www.instructables.com/Single-Pixel-Camera-Using-an-L...
(Okay not the same guy, but I wanted to share this somewhat related "extreme" camera project)
Sincerely, thanks a lot.
HN rarely has content like this. You're better off going straight to the subreddit instead of waiting for the random once-in-a-blue-moon post here that's actually somewhat related to bona fide news for hackers. Reddit has them in massive quantities.
Just my 2c. And yes, thanks dang/tomhow, I know I broke the guidelines; I'll see myself out, no need to remind me.
https://old.reddit.com/r/electronics/comments/1olyu7r/i_made...
> Optical computer mice work by detecting movement with a photoelectric cell (or sensor) and a light. The light is emitted downward, striking a desk or mousepad, and then reflecting to the sensor. The sensor has a lens to help direct the reflected light, enabling the mouse to convert precise physical movement into an input for the computer’s on-screen cursor. The way the reflected changes in response to movement is translated into cursor movement values.
I can't tell if this grammatical error is the result of nonchalant editing and a lack of proofreading, or of a person touching up LLM content.
> It’s a clever solution for a fundamental computer problem: how to control the cursor. For most computer users, that’s fine, and they can happily use their mouse and go about their day. But when Dycus came across a PCB from an old optical mouse, which they had saved because they knew it was possible to read images from an optical mouse sensor, the itch to build a mouse-based camera was too much to ignore.
Ah, it's an LLM. Dogshit grifter article. Honestly, the HN link should be changed to the reddit post.
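For what it's worth, the mechanism the article is fumbling is straightforward: the sensor takes tiny consecutive snapshots of the illuminated surface and finds the shift that best aligns them, and that shift is the cursor movement. A toy sketch of the idea (numpy; the texture and the shift here are made up):

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=4):
    """Brute-force search for the (dy, dx) that best aligns curr with prev."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = (prev * np.roll(curr, (dy, dx), axis=(0, 1))).sum()
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(1)
surface = rng.random((200, 200))            # stand-in for the illuminated desk texture
prev = surface[100:130, 100:130]            # one 30x30 frame, like the sensor in the post
curr = surface[103:133, 102:132]            # the view shifted by (3, 2) pixels
print(estimate_shift(prev, curr))           # -> (3, 2): motion recovered from the images
```

The real sensors do this in dedicated silicon at thousands of frames per second, which is why (as quoted above) they normally expose only the motion deltas and not the images themselves.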
https://old.reddit.com/r/electronics/comments/1olyu7r/i_made...
I wonder why so many shades of grey? Fancy!
(Yeah, the U.K. spelling of "grey" looks more "gray" to these American eyes.)
Hilarious too that this article is on Petapixel. (Centipixel?)
A camera the size of a grain of rice with 320x320 resolution:
https://ams-osram.com/products/sensor-solutions/cmos-image-s...
https://www.mouser.com/datasheet/3/5912/1/NanEyeC_DS000503_5...
MarkusWandel•2mo ago
lillecarl•2mo ago
16x16 sounds really shit to me, who still has perfect vision, but I bet it's life-changing to be able to identify the presence or absence of stuff around you and such! Yay for technology!
ACCount37•2mo ago
By now, we have smartphones with camera systems that beat human eyes, and SoCs powerful enough to perform whatever image processing you want them to, in real time.
But our best neural interfaces have throughput close to that of a dial-up modem, and questionable longevity. Other technological blockers have advanced in leaps and bounds, but the SOTA in BCI today is not that far from where it was 20 years ago. Because medicine is where innovation goes to die.
It's why I'm excited for the new generation of BCIs like Neuralink. For now, they're mostly replicating the old capabilities, but with better fundamentals. But once the fundamentals - interface longevity, ease of installation, ease of adaptation - are there? We might actually get more capable, more scalable BCIs.
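To put "dial-up modem" in rough numbers (an explicitly made-up calculation, with the electrode count borrowed from the 16x16 grid mentioned upthread):

```python
# Back-of-envelope only: every constant here is an assumption for illustration.
electrodes = 16 * 16          # a 16x16 stimulation grid, as discussed upthread
frame_rate = 30               # assumed update rate, Hz
bits_per_electrode = 4        # assumed per-electrode amplitude resolution
bci_bps = electrodes * frame_rate * bits_per_electrode
dialup_bps = 56_000
print(f"BCI: ~{bci_bps / 1000:.1f} kbit/s vs dial-up: {dialup_bps / 1000:.0f} kbit/s")
# -> ~30.7 kbit/s: the same order of magnitude as a 56k modem
```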
arcanemachiner•2mo ago
BCI == Brain-computer interface
https://en.wikipedia.org/wiki/Brain–computer_interface
Lapsa•2mo ago
ACCount37•2mo ago
Lapsa•2mo ago
SiempreViernes•2mo ago
Fixed the typo for you.
ACCount37•2mo ago
Inaction has a price, you know.
omnicognate•2mo ago
jama211•2mo ago
rogerrogerr•2mo ago
The “no harm, ever” crowd does not have a monopoly on ethics.
jama211•2mo ago
chmod775•2mo ago
We didn't come up with these rules around medical treatments out of nowhere; humanity learned them through painful lessons.
The medical field used to operate very differently and I do not want to go back to those times.
metalman•2mo ago
AI is the final failure of "intermittent" wipers, which, like on my latest car, are irrevocably enabled to smear the road grime and imperceptible "rain" into a goo, blocking my ability to see
immibis•2mo ago
metalman•2mo ago
who's working for who here anyway?
already?
makeitdouble•2mo ago
That's what we're seeing with VR: we reached a point where increasing DPI on laptops and phones seemed to make no sense, but that was also the point where VR started to be within reach, and there a 300 DPI screen is crude and we'd really want 3x that pixel density.
rogerrogerr•2mo ago
MarkusWandel•1mo ago
But the cruise control: you cannot disable the adaptive part. It's either adaptive cruise control or nothing. And at night, the camera-based system sometimes gets confused in the mess of over-bright headlights coming the other way and brakes for no reason. So you end up not being able to use cruise control at all.
I guess the reason they make it that way is exactly the habit problem. You don't want people crashing into the car in front because they forgot that the cruise is not in adaptive mode.
Automatic wipers would be great (if optional to enable!). After all, that same camera that makes all the other wizardry work can see raindrops on the windshield right in front of it. So why don't they make that feature?
SwtCyber•2mo ago
MarkusWandel•2mo ago
Where this becomes relevant is when you consider depixellation. True blur can't be undone, but pixellation without appropriate antialiasing filtering...
https://www.youtube.com/watch?v=acKYYwcxpGk
So if your 30x30 camera has sharp square pixels with no antialiasing filter in front of the sensor, I'll bet the brain would soon learn to "run that depixellation algorithm" and, just by natural motion of the camera, learn to recognize finer detail. Of course that still means training the brain to recognize 900 electrodes, which is beyond the current state of the art (but 16x16 pixels aren't, and the same principle can apply there).
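Here's a tiny 1-D sketch of why that can work: with sharp square pixels and no AA filter, the pixel straddling an edge takes a fractional value, and as the camera shifts, that value pins down the edge position far more precisely than the pixel pitch (all numbers are made up for illustration):

```python
import numpy as np

PIXEL = 10                      # fine samples per coarse sensor pixel
fine = np.zeros(300)
fine[137:] = 1.0                # a step edge at fine position 137 (coarse position 13.7)

def coarse_read(signal, shift):
    """Box-average a shifted copy of the scene into coarse pixels (no AA filter)."""
    return np.roll(signal, -shift).reshape(-1, PIXEL).mean(axis=1)

# The pixel straddling the edge reads a fractional value; combined with the known
# shift, that value locates the edge to well below one coarse pixel.
for shift in range(0, PIXEL, 3):
    frame = coarse_read(fine, shift)
    edge_pixel = int(np.argmax((frame > 0) & (frame < 1)))
    estimate = edge_pixel * PIXEL + (1 - frame[edge_pixel]) * PIXEL + shift
    print(f"shift={shift}: estimated edge at fine position {estimate:.1f}")   # 137.0 each time
```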
jacquesm•2mo ago
dehrmann•2mo ago
https://en.wikipedia.org/wiki/Direct_Stream_Digital
dhosek•2mo ago
I also remember a lot of experimenting with timing to try to get a simulation of polyphonic sound by trying to toggle the speaker at the zeros of sin aθ + sin bθ.
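That trick is easy to simulate: toggling at the zero crossings of the sum is the same as outputting the sign of the mix, and both tones survive in the square wave's spectrum. A sketch with arbitrary example frequencies:

```python
import numpy as np

rate = 44_100
t = np.arange(rate) / rate                   # one second of samples
f1, f2 = 440.0, 660.0                        # two simultaneous notes (example values)
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

square = np.where(mix >= 0, 1.0, -1.0)       # 1-bit speaker: toggles at the mix's zeros

spectrum = np.abs(np.fft.rfft(square))
freqs = np.fft.rfftfreq(len(square), 1 / rate)
print(f"median bin magnitude: {np.median(spectrum):.0f}")
for f in (f1, f2):
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"{f:.0f} Hz bin magnitude: {spectrum[k]:.0f}")   # both far above the median
```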