16x16 sounds really shit to me, someone who still has perfect vision, but I bet it's life-changing to be able to identify the presence or absence of stuff around you and such! Yay for technology!
The video on Reddit: https://www.reddit.com/r/3Dprinting/comments/1olyzn6/i_made_...
https://www.youtube.com/watch?v=EE9AETSoPHw&t=44
https://www.instructables.com/Single-Pixel-Camera-Using-an-L...
(Okay not the same guy, but I wanted to share this somewhat related "extreme" camera project)
By now, we have smartphones with camera systems that beat human eyes, and SoCs powerful enough to perform whatever image processing you want them to, in real time.
But our best neural interfaces have throughput close to that of a dial-up modem, and questionable longevity. Other technologies have advanced by leaps and bounds, but the state of the art in BCIs today is not that far from where it was 20 years ago. Because medicine is where innovation goes to die.
It's why I'm excited for the new generation of BCIs like Neuralink. For now, they're mostly replicating the old capabilities, but with better fundamentals. But once the fundamentals - interface longevity, ease of installation, ease of adaptation - are there? We might actually get more capable, more scalable BCIs.
AI is the final failure of "intermittent" wipers, which, like my latest car's, are irrevocably enabled to smear the road grime and imperceptible "rain" into a goo, blocking my ability to see
Sincerely, many thanks.
Fixed the typo for you.
That's what's happening with VR: we reached a point where increasing DPI on a laptop or phone seemed to make no sense, but that was also the point where VR started to become reachable, and there a 300 DPI screen is crude and we'd really want 3x that pixel density.
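For anyone who wants the arithmetic behind that "3x", here's a quick pixels-per-degree estimate; the per-eye resolution, field of view, and the ~60 ppd acuity target below are my own rough assumptions, not numbers from the comment above:

    # Back-of-envelope: angular resolution of a VR headset vs. the eye.
    # Both headset numbers below are assumptions for illustration only.
    panel_px_horizontal = 2000   # assumed per-eye horizontal resolution
    fov_degrees = 100            # assumed horizontal field of view

    headset_ppd = panel_px_horizontal / fov_degrees   # pixels per degree
    acuity_ppd = 60              # ~1 arcminute per pixel, "retina" target

    print(f"headset: {headset_ppd:.0f} ppd, eye-limited target: {acuity_ppd} ppd")
    print(f"linear density shortfall: {acuity_ppd / headset_ppd:.1f}x")
    # -> about 20 ppd vs 60 ppd, i.e. roughly the 3x gap mentioned above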
Inaction has a price, you know.
> Do you mean the refresh rate should be higher? There's two things limiting that:
> - The sensor isn't optimized for actually reading out images, normally it just does internal processing and spits out motion data (which is at high speed). You can only read images at about 90Hz
> - Writing to the screen is slow because it doesn't support super high clock speeds. Drawing a 3x scale image (90x90 pixels) plus reading from the sensor, I can get about 20Hz, and a 1x scale image (30x30 pixels) I can get 50Hz.
I figured there would be limitations around the second, but I was hoping the former wasn't such a big limit.
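Out of curiosity, a two-point fit on the figures quoted above; the linear model (fixed per-frame overhead plus a per-pixel cost) is my assumption, only the frame rates come from the quote:

    # Two data points from above: 90x90 at ~20 Hz, 30x30 at ~50 Hz.
    # Assumed model: frame_time = fixed_overhead + n_pixels * per_pixel_cost
    fps_large, px_large = 20, 90 * 90
    fps_small, px_small = 50, 30 * 30

    per_pixel = (1 / fps_large - 1 / fps_small) / (px_large - px_small)
    overhead = 1 / fps_small - px_small * per_pixel

    print(f"per-pixel cost:     {per_pixel * 1e6:.1f} us")
    print(f"per-frame overhead: {overhead * 1e3:.1f} ms")
    # -> roughly 4 us per pixel plus ~16 ms of fixed work per frame, which is
    # why the 3x-scaled image (9x the pixels) lands near 20 Hz well before
    # the sensor's ~90 Hz readout limit becomes the bottleneck.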
> Optical computer mice work by detecting movement with a photoelectric cell (or sensor) and a light. The light is emitted downward, striking a desk or mousepad, and then reflecting to the sensor. The sensor has a lens to help direct the reflected light, enabling the mouse to convert precise physical movement into an input for the computer’s on-screen cursor. The way the reflected changes in response to movement is translated into cursor movement values.
I can't tell if this grammatical error is the result of nonchalant editing and a lack of proofreading, or of a person touching up LLM content.
> It’s a clever solution for a fundamental computer problem: how to control the cursor. For most computer users, that’s fine, and they can happily use their mouse and go about their day. But when Dycus came across a PCB from an old optical mouse, which they had saved because they knew it was possible to read images from an optical mouse sensor, the itch to build a mouse-based camera was too much to ignore.
Ah, it's an LLM. Dogshit grifter article. Honestly, the HN link should be changed to the reddit post.
https://old.reddit.com/r/electronics/comments/1olyu7r/i_made...
I wonder why so many shades of grey? Fancy!
(Yeah, the U.K. spelling of "grey" looks more "gray" to these American eyes.)
Hilarious too that this article is on Petapixel. (Centipixel?)
who's working for who here anyway?
already?
Where this becomes relevant is when you consider depixellation. True blur can't be undone, but pixellation without appropriate antialiasing filtering...
https://www.youtube.com/watch?v=acKYYwcxpGk
So if your 30x30 camera has sharp square pixels with no antialiasing filter in front of the sensor, I'll bet the brain would soon learn to "run that depixellation algorithm" and, just from natural motion of the camera, learn to recognize finer detail. Of course that still means training the brain to recognize 900 electrodes, which is beyond the current state of the art (but 16x16 pixels aren't, and the same principle can apply there).
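To make that concrete, here's a toy shift-and-add sketch (NumPy, my own illustrative setup; it also cheats by knowing the shifts, which a brain would have to infer from the motion): many slightly offset box-averaged frames, recombined on the fine grid, pin down a feature that any single frame can only place within one coarse pixel.

    import itertools
    import numpy as np

    # Toy shift-and-add "depixellation": a single bright point seen through a
    # 30x30 box-averaging sensor. One frame only says which coarse pixel the
    # point is in; stacking frames taken at known sub-pixel offsets (the
    # "camera jitter") localizes it on the fine grid again.
    scale = 4                                # hi-res pixels per low-res pixel
    hi = np.zeros((120, 120))
    hi[62, 62] = 1.0                         # true feature position: (62, 62)

    def low_res_frame(img, dy, dx):
        """Shift by (dy, dx) hi-res pixels, then box-average down to 30x30."""
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return shifted.reshape(30, scale, 30, scale).mean(axis=(1, 3))

    def upsample(lo):
        return np.repeat(np.repeat(lo, scale, axis=0), scale, axis=1)

    # Single frame: the point smears over one 4x4 block, position is ambiguous.
    single = upsample(low_res_frame(hi, 0, 0))
    print("single-frame guess:", np.unravel_index(np.argmax(single), single.shape))

    # Shift-and-add: take a frame at every sub-pixel offset, undo the shift,
    # and sum. The stack peaks at the true hi-res position.
    recon = np.zeros_like(hi)
    for dy, dx in itertools.product(range(scale), repeat=2):
        frame = upsample(low_res_frame(hi, dy, dx))
        recon += np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)

    print("shift-and-add guess:", np.unravel_index(np.argmax(recon), recon.shape))
    # -> (60, 60) from one frame vs (62, 62) from the stack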
A camera the size of a grain of rice with 320x320 resolution:
https://ams-osram.com/products/sensor-solutions/cmos-image-s...
https://www.mouser.com/datasheet/3/5912/1/NanEyeC_DS000503_5...
The “no harm, ever” crowd does not have a monopoly on ethics.
We didn't come up with these rules around medical treatments out of nowhere; humanity learned them through painful lessons.
The medical field used to operate very differently and I do not want to go back to those times.
I also remember a lot of experimenting with timing, trying to simulate polyphonic sound by toggling the speaker at the zeros of sin(aθ) + sin(bθ).
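Roughly this idea, I think, in modern terms; a Python reconstruction of the trick, not anyone's original code, with freq_a and freq_b standing in for a and b:

    import math

    # Rough sketch: flip a 1-bit speaker at every zero crossing of the summed
    # sines, so the square-ish output carries both frequencies at once.
    # Frequencies and sample rate here are arbitrary example values.
    def one_bit_polyphony(freq_a, freq_b, sample_rate=44100, seconds=0.01):
        samples = []
        state = 1                     # current speaker level, +1 or -1
        prev_positive = True
        for n in range(int(sample_rate * seconds)):
            t = n / sample_rate
            s = math.sin(2 * math.pi * freq_a * t) + math.sin(2 * math.pi * freq_b * t)
            positive = s > 0
            if positive != prev_positive:     # zero crossing: toggle the speaker
                state = -state
            prev_positive = positive
            samples.append(state)
        return samples

    # Example: cram A4 (440 Hz) and E5 (~659 Hz) into one 1-bit waveform.
    wave = one_bit_polyphony(440, 659)
    print(wave[:40])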