Here's my video flying inside my house:
The jaw-dropping aspects:
- it correctly knows when to mask for the column
- it does lighting effects from the plane's headlights
- it does particle collisions with my furniture when it crashes
- it detects when I hit the wall and crashes
It really is amazing tech, but very unpolished. Still, I'm hopeful they keep pushing, it gets cheaper, and more people can experience it and develop for it.
This is like the amiga. We are at the infancy of AR.
Problem is it was marketed like a supercomputer and what we got was an Amiga.
Which is fine for the hobbyist nerds who can see the potential. Not so good for literally anyone else.
They are kinda doing the Tesla approach now (or as of 2019) where they are trying to sell a higher end experience.
Their medical demos / apps were really awesome.
You could place a virtual cadaver on the physical table.
When they work out view sharing that is gonna be amazing for collaboration.
They have a dream of "5G cities" where everyone walks around with these things fully connected.
I think that is the wrong direction. Will be curious to see what comes of it.
It probably _is_ using OpenCV under the hood.
I still haven't seen a compelling use case for AR.
The ML seems to scan rooms better and "lock" things in space better but the HL seemed to have a better resolution (_not_ field of view) when using "normal" apps.
They are both competing on field of view now with the next generation of devices but the ML beats the first gen HL hands down on the larger FOV.
As someone who followed the public hype but never actually saw what they released: this is absolutely terrible, given the promises they made when they raised half a billion dollars 7 years ago as a stealth company backed by tech geniuses.
If this is what they have today, what were they showing investors 7 years ago? And why did investors think this was going to revolutionize the world?
Wasn't the real breakthrough supposed to be the variable-focus lightfield voodoo, meaning that the rendered components looked like they 'fitted in' with the scene and could occlude as well as be occluded, rather than being additive CGI floating in front of it?
It’s still kinda clunky now, but the tech will get better. That’s a money saver (and a big improvement) with the main barriers being tech and familiarity, and those just come with time. Very bullish on that
Even if they do manage to create mind-blowing hardware, they aren't exactly cornering a market here.
That's a nightmare. See "Hyperreality"[1], if you haven't. That may be the future of AR. Especially if Facebook is involved.
The background is dark, the occlusions are bad, the hardware is large, and the FOV is poor.
Magic Leap really burned a lot of goodwill, IMO, by sucking up enormous amounts of AR funding with 'demo' marketing that was at best intentionally misleading, if not outright fraudulent.
I'm still bullish on AR being the next platform when the hardware is ready, but I'd bet on Apple or Oculus pulling that off; I wouldn't go near anything from Magic Leap.
This about sums it up: https://twitter.com/fernandojsg/status/1017411969169555457
It's a little reminiscent of General Magic: something like the AR they want is likely to exist in the future, but I'd be surprised if it's from them.
Can you imagine Steve Jobs shipping something at the quality level of that video?
Such an unusual experience for it to go behind something.
Note: Magic Leap specs are from a quick google search and may be out of date. Even improved they'll have the same issues to a slightly lesser degree.
First - field of view: The horizontal field of Magic Leap is 40 degrees. My primary monitor, a 16x9 32" monitor at about 3 feet from my eyes, is 42 degrees. So this can't even show me 100% of that, and definitely can't show me a second monitor in my peripheral vision.
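That comparison is just a bit of trigonometry. A minimal sketch, using the monitor size and distance from this comment (the `horizontal_fov_deg` helper name is mine, not from any library):

```python
import math

def horizontal_fov_deg(diagonal_in, aspect_w, aspect_h, distance_in):
    """Horizontal field of view, in degrees, of a flat screen viewed head-on."""
    # Screen width from its diagonal and aspect ratio.
    width = diagonal_in * aspect_w / math.hypot(aspect_w, aspect_h)
    # Half-angle to each edge of the screen, doubled for the full field.
    return 2 * math.degrees(math.atan((width / 2) / distance_in))

# A 32" 16:9 monitor at about 3 feet (36") subtends roughly 42 degrees,
# slightly wider than Magic Leap's reported ~40-degree horizontal FOV.
print(round(horizontal_fov_deg(32, 16, 9, 36), 1))
```

So the headset's entire horizontal field is a shade narrower than one ordinary desk monitor, before you even think about a second screen.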
Field of view is hard to improve as the optics are really close to your eyes and being head worn have limits of size and weight.
Second - Resolution: The Magic Leap resolution is apparently 1280x960, significantly less than 1080p. That's not even close to the 4K monitor I'm typing this on. That low resolution has to cover the entire area of my monitor, and more if I want to stretch the field of view wider.
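To put that in monitor terms, here's a rough sketch assuming the 1280x960 figure above and the same 32" 16:9 panel (`effective_ppi` is a made-up helper name for illustration):

```python
import math

def effective_ppi(h_pixels, diagonal_in, aspect_w, aspect_h):
    """Pixels per inch if h_pixels are stretched across the screen's width."""
    width = diagonal_in * aspect_w / math.hypot(aspect_w, aspect_h)
    return h_pixels / width

# 1280 pixels across a 32" 16:9 panel works out to roughly 46 PPI,
# versus roughly 138 PPI for 4K (3840 pixels) on the same panel.
print(round(effective_ppi(1280, 32, 16, 9)),
      round(effective_ppi(3840, 32, 16, 9)))
```

That's roughly a third of the pixel density of the 4K monitor, which is why text looks so coarse.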
Picture yourself programming on a 1280x960 32" monitor. Just to see, I set my system that way for a minute. PIXELS EVERYWHERE! Also, now I need to reset all my carefully curated windows.
It's hard to improve resolution. The displays are very small to keep size and weight down, and HMD displays are generally about the highest DPI that can be built.
Third - Brightness: You can't draw black on a see-through HMD, all you can do is make the existing world brighter. The lenses are too close to the eye to be able to do any kind of masking or blocking of the ambient light.
So your display system won't be able to show much of an image over a bright area; the text is either white over the world's colors, or background-colored in a white field. It's not good for reading text and almost illegible at typical sizes in office lighting.
You can't improve brightness easily. These tiny displays make a lot of heat right near your head. Making them brighter means bigger heatsinks, adding weight and size, and more power, which requires bigger batteries or shorter run time.
You can kinda cheat a little with dark sunglass lenses to make the whole world darker. Or you can go to VR and just block out the whole world and draw your interface over a video stream. The second option isn't really compelling for AR demos like the ones Magic Leap shows.
Nothing else took its place, and, while still being popular, it quickly faded from the status of global phenomenon.
It's a nice money making game, but not a hardware seller.
Things like: pulling up addresses of buildings you look at, names of people you've met, a line on the ground for GPS navigation, playing board games with people without needing a board or dealing with the rule book (software-assisted), seeing meta information floating around devices (battery level, year, serial number), etc.
The UX of phones is pretty good, but it suffers from its form factor. If you could have a UX for the world, you could enable a lot more human abilities in a really intuitive way and get closer to something that feels like telepathy.
Took me less than 5 minutes to think of the following:
1. Educational aspects, such as being able to copy choreography by watching a virtual expert do it while still seeing your own body mimicking the actions, which you could not do in VR (this could include juggling patterns, martial arts, any kind of complex motion)
2. Overlaying any number of AR layers on top of physical hardware: think of looking at a complex circuit board and immediately getting tooltip pop-ups over each integrated circuit explaining how it works
3. Building things in the real world located at absolute GPS coordinates and having them persist, so that other people on the same shared AR "layer" see them. You could create buildings, wondrous castles, and creatures, effectively creating new layers of existence, and these layers could stack as deep as you ever wanted them to be
4. Being able to do virtual reality in much larger spaces, so you could take your AR glasses out onto a soccer field and project a game, such as you fighting a bunch of stormtroopers while physically moving around a huge field
Thank you for sharing it in this discussion!
I tried my friend's oculus rift around the same time, and it felt immersive and fun.
Use case 1 seems to be a minor improvement over a video call on a decent monitor today, and this assumes that AR and other tech would advance hugely from where it is now, to actually do realtime filming and rendering with high precision, perhaps even in 3D, to get some real advantage.
Use case 2 seems more realistic, but will be limited by eye-tracking precision, component-identification precision, and occlusion issues. Input will also be an issue (choosing which tooltips to see).
Use case 3 seems worse than building things in VR, other than some fancy art installations. Why would I want a virtual object that I can't view from my own home? Also, interaction would be fantastically limited, making the whole thing disappointing.
Use case 4 suffers even worse from interaction issues, and it also seems like a downgrade from current technology, which allows me to play in huge virtual environments without even getting off my chair.
So if you can't reproduce the experience on a 2D screen, then fake it and lie, you're saying? That IS the whole point.
It's not a bad idea if you have the right tradeoffs. The tech to make that worth it is going to be at least a decade away, IMO.
I'm sure it'll get there someday but for now it is all just rose-tinted glasses.