It probably _is_ using OpenCV under the hood.
I still haven't seen a compelling use case for AR.
It’s still kinda clunky now, but the tech will get better. That’s a money saver (and a big improvement), with the main barriers being tech and familiarity, and those just come with time. Very bullish on that.
Even if they do manage to create mind-blowing hardware, they aren't exactly cornering a market here.
The background is dark, the occlusions are bad, the hardware is large, and the FOV is poor.
Magic Leap really burned a lot of good will imo by sucking up enormous amounts of AR funding while putting out 'demo' marketing that was at best intentionally misleading, if not outright fraudulent.
I'm still bullish on AR being the next platform when the hardware is ready, but I'd bet on Apple or Oculus pulling that off; I wouldn't go near anything from Magic Leap.
This about sums it up: https://twitter.com/fernandojsg/status/1017411969169555457
It's a little reminiscent of General Magic - something like the AR they want is likely to exist in the future, but I'd be surprised if it comes from them.
Can you imagine Steve Jobs shipping something at the quality level of that video?
Note: Magic Leap specs are from a quick Google search and may be out of date. Even improved, they'll have the same issues to a slightly lesser degree.
First - field of view: The horizontal field of view of the Magic Leap is 40 degrees. My primary monitor, a 16:9 32" monitor about 3 feet from my eyes, spans 42 degrees. So the headset can't even show me 100% of that one monitor, and definitely can't show me a second monitor in my peripheral vision.
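If you want to sanity-check that number, the geometry works out; here's a rough sketch in Python (the 36-inch viewing distance is my reading of "about 3 feet", not a measured value):

    # Rough check of the monitor FOV claim: 32" 16:9 panel viewed from ~3 feet.
    import math

    diagonal_in = 32.0
    distance_in = 36.0  # assumed "about 3 feet from my eyes"

    # Width of a 16:9 panel, derived from its diagonal.
    width_in = diagonal_in * 16 / math.hypot(16, 9)

    # Horizontal angle the panel subtends at the eye.
    hfov_deg = math.degrees(2 * math.atan((width_in / 2) / distance_in))

    print(f"width: {width_in:.1f} in, horizontal FOV: {hfov_deg:.1f} deg")
    # -> width: 27.9 in, horizontal FOV: 42.3 deg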
Field of view is hard to improve: the optics sit very close to your eyes, and a head-worn device has hard limits on size and weight.
Second - Resolution: The Magic Leap resolution is apparently 1280x960, significantly less than 1080p. That's not even close to the 4K monitor I'm typing this on, and that low resolution has to cover the entire area of my monitor - more if I want to stretch the field of view wider.
Picture yourself programming on a 1280x960 32" monitor. Just to see I set my system that way for a minute. PIXELS EVERYWHERE! Also, now I need to reset all my carefully curated windows.
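To put the resolution argument in the same angular terms, here's a rough comparison (same assumed geometry as above; the ~60 pixels-per-degree figure is the usual rule of thumb for where individual pixels stop being visible, not anything from the spec sheet):

    # Rough pixels-per-degree comparison (assumed: 32" 16:9 monitor at 36" viewing distance).
    import math

    width_in = 32.0 * 16 / math.hypot(16, 9)                        # ~27.9 in
    monitor_fov = math.degrees(2 * math.atan(width_in / 2 / 36.0))  # ~42.3 deg

    for name, h_pixels, fov_deg in [
        ("Magic Leap (reported)", 1280, 40.0),
        ("1280 px spread across my monitor", 1280, monitor_fov),
        ("4K monitor", 3840, monitor_fov),
    ]:
        print(f"{name}: {h_pixels / fov_deg:.0f} px/deg")

    # Roughly 32, 30, and 91 px/deg; ~60 px/deg is the usual "can't see the pixels" threshold.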
It's hard to improve resolution. The displays are kept very small to hold down size and weight, and HMD displays are already about the highest-DPI panels that can be built.
Third - Brightness: You can't draw black on a see-through HMD; all you can do is make the existing world brighter. The lenses are too close to the eye to do any kind of masking or blocking of the ambient light.
So your display system won't be able to show much of an image over a bright area; text is either white over world color or background-colored in a white field. It's not good for reading text and almost illegible at typical sizes in office lighting.
You can't improve brightness easily. These tiny displays make a lot of heat right near your head. Making them brighter means bigger heatsinks, which add weight and size, and more power, which requires bigger batteries or shorter run time.
You can kinda cheat this a little with dark sunglass lenses that make the whole world darker. Or you can go to VR and just block the whole world, drawing your interface over a video stream. The second option isn't really compelling for the kind of AR demos Magic Leap shows.
Things like pulling up the addresses of buildings you look at, the names of people you've met, a line on the ground for GPS navigation, playing board games with people without needing a board or dealing with the rule book (software assisted), seeing meta information floating around devices (battery level, year, serial number), etc. etc.
The UX of phones is pretty good, but it suffers from its form factor. If you could have a UX for the world, you could enable a lot more human abilities in a really intuitive way and get closer to something that feels like telepathy.
Took me less than 5 minutes to think of the following:
1. Educational aspects, such as being able to copy choreography by watching a virtual expert do it while still seeing your own body mimicking the actions, which you would not be able to do in VR (this could include juggling patterns, martial arts, any kind of complex motion)
2. Overlaying any number of AR layers on top of physical hardware - think of looking at a complex circuit board and immediately getting tooltip pop-ups over each integrated circuit explaining how it works
3. Building things in the real world located at absolute GPS coordinates and having them persist, so that other people on the same shared AR "layer" see them. You could create buildings, wondrous castles, creatures - effectively new layers of existence - and these layers could stack and be as deep as you ever wanted them to be
4. Being able to do virtual reality in much larger spaces, so you could take your AR glasses, walk out onto a soccer field, and project a game - say, fighting a bunch of stormtroopers - while moving around physically in a huge field
Use case 1 seems to be a minor improvement over a video call on a decent monitor today, and that's assuming AR and the surrounding tech advance hugely from where they are now, to the point of doing real-time filming and rendering with high precision, perhaps even in 3D, to get some real advantage.
Use case 2 seems more realistic, but will be limited by eye-tracking precision, component-identification precision, and occlusion issues. Input will also be an issue (choosing which tooltips to see).
Use case 3 seems worse than building things in VR, other than some fancy art installations. Why would I want a virtual object that I can't view from my own home? Also, interaction would be fantastically limited, making the whole thing disappointing.
Use case 4 suffers even worse from interaction issues, and it also seems like a downgrade from current technology, which allows me to play in huge virtual environments without even getting off my chair.