zlacker

[parent] [thread] 10 comments
1. sydthr+(OP)[view] [source] 2021-10-12 13:17:06
Are you serious? This looks like an undergraduate project hacked up with OpenCV. I'd still give it an A, though.
replies(2): >>leetro+z5 >>snek_c+76
2. leetro+z5[view] [source] 2021-10-12 13:48:12
>>sydthr+(OP)
As I said, the video does not do it justice. If you walk around while flying, everything stays very well locked in place.

It probably _is_ using OpenCV under the hood.
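
(Pure speculation about the implementation, but a minimal marker-based world-locking step in OpenCV would look roughly like the sketch below. Every name and value here is illustrative, not taken from the actual app.)

    import cv2
    import numpy as np

    # Assumed camera intrinsics and a known 10 cm square marker (hypothetical values).
    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)
    marker_3d = np.array([[-0.05, -0.05, 0], [0.05, -0.05, 0],
                          [0.05, 0.05, 0], [-0.05, 0.05, 0]], dtype=np.float32)

    def world_lock(marker_corners_2d, virtual_points_3d):
        # Estimate the camera pose relative to the marker, then reproject the
        # virtual geometry so it stays pinned to the same real-world spot.
        ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_corners_2d,
                                      camera_matrix, dist_coeffs)
        if not ok:
            return None
        projected, _ = cv2.projectPoints(virtual_points_3d, rvec, tvec,
                                         camera_matrix, dist_coeffs)
        return projected.reshape(-1, 2)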

replies(1): >>goneho+X81
3. snek_c+76[view] [source] 2021-10-12 13:50:53
>>sydthr+(OP)
Yeah. I'm kind of left feeling like I'd rather fly the plane in a better-looking simulated environment, with a first-person perspective. It's more flexible and fun than being tied to flying it in the environment around you.

I still haven't seen a compelling use case for AR.

replies(3): >>tjs8rj+h71 >>goneho+qJ1 >>throwa+yR1
4. tjs8rj+h71[view] [source] [discussion] 2021-10-12 18:51:25
>>snek_c+76
Coolest one I’ve seen is VR workspaces. Buying a $5k setup per head means people can work from space or the Grand Canyon with unlimited monitors, sit/stand/lie down, get software upgrades (only a few pieces of hardware to replace), hold virtual meetings, build and model anything they want, etc.

It’s still kinda clunky now, but the tech will get better. That’s a money saver (and a big improvement); the main barriers are tech and familiarity, and those just come with time. Very bullish on that.

replies(2): >>smolde+W71 >>Tarrag+Nd1
5. smolde+W71[view] [source] [discussion] 2021-10-12 18:54:01
>>tjs8rj+h71
I dunno, I still can't quite see the potential of that. And even if they did get it working perfectly, they'd be following in the footsteps of people who've already done this on headsets that cost a fraction of the price (like Immersed or Virtual Desktop).

Even if they do manage to create mind-blowing hardware, they aren't exactly cornering a market here.

replies(1): >>snek_c+YF1
6. goneho+X81[view] [source] [discussion] 2021-10-12 18:58:21
>>leetro+z5
I normally don't like to pile on with negativity, but I'd argue the video does do it justice. I played with one myself (the only one I ever saw in the wild, and the owner only had one because her SO was one of the original investors) and it was similar to what the video shows. My first experience with VR was impressive; Magic Leap was a disappointment.

The background is dark, the occlusions are bad, the hardware is large, and the FOV is poor.

Magic Leap really burned a lot of goodwill imo by sucking up enormous amounts of AR funding while running 'demo' marketing that was at best intentionally misleading, if not outright fraudulent.

I'm still bullish on AR being the next platform when the hardware is ready, but I'd bet on Apple or Oculus pulling that off; I wouldn't go near anything from Magic Leap.

This about sums it up: https://twitter.com/fernandojsg/status/1017411969169555457

It's a little reminiscent of General Magic: something like the AR they want is likely to exist in the future, but I'd be surprised if it comes from them.

Can you imagine Steve Jobs shipping something at the quality level of that video?

7. Tarrag+Nd1[view] [source] [discussion] 2021-10-12 19:25:14
>>tjs8rj+h71
There are hardware limits that will make this less compelling than you imagine. I'll give 3 reasons.

Note: the Magic Leap specs below are from a quick Google search and may be out of date. Even improved, the hardware will have the same issues to a slightly lesser degree.

First - Field of view: The horizontal field of view of the Magic Leap is 40 degrees. My primary monitor, a 16:9 32" panel about 3 feet from my eyes, spans 42 degrees. So the headset can't even show me 100% of that one monitor, and it definitely can't show me a second monitor in my peripheral vision.

Field of view is hard to improve: the optics sit really close to your eyes, and a head-worn device has hard limits on size and weight.
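
A quick back-of-the-envelope check, assuming the 32" 16:9 panel at about 36" I described above (illustrative Python, nothing rigorous):

    import math

    diag_in = 32.0                   # monitor diagonal
    aspect_w, aspect_h = 16.0, 9.0   # 16:9 aspect ratio
    view_dist_in = 36.0              # about 3 feet from my eyes

    # horizontal width of the panel, then its angular width at the eye
    width_in = diag_in * aspect_w / math.hypot(aspect_w, aspect_h)
    hfov_deg = 2 * math.degrees(math.atan(width_in / (2 * view_dist_in)))
    print(f"width: {width_in:.1f} in, horizontal FOV: {hfov_deg:.1f} deg")
    # -> about 27.9 in and 42 deg, so a 40-degree display can't quite cover it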

Second - Resolution: The Magic Leap resolution is apparently 1280x960, significantly less than 1080p. That's not even close to the 4K monitor I'm typing this on. That low resolution has to cover the entire area of my monitor, and more if I want to stretch the field of view wider.

Picture yourself programming on a 1280x960 32" monitor. Just to see, I set my system that way for a minute. PIXELS EVERYWHERE! Also, now I need to reset all my carefully curated windows.

It's hard to improve resolution. The displays are kept very small to hold down size and weight, and HMD panels are already about the highest-DPI displays that can be built.
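
Put another way, in pixels per degree (using the specs quoted above as rough numbers):

    ml_px, ml_fov = 1280, 40.0    # Magic Leap horizontal pixels over ~40 deg
    mon_px, mon_fov = 3840, 42.0  # 4K monitor spanning ~42 deg at my desk

    print(f"Magic Leap: {ml_px / ml_fov:.0f} px/deg")   # ~32 px/deg
    print(f"4K monitor: {mon_px / mon_fov:.0f} px/deg") # ~91 px/deg
    # ~32 px/deg is basically the 1280-wide-on-a-32"-monitor experiment above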

Third - Brightness: You can't draw black on a see-through HMD; all you can do is make the existing world brighter. The lenses are too close to the eye to do any kind of masking or blocking of the ambient light.

So the display can't show much of an image over a bright area; text is either white over the world's colors, or world-colored against a white field. That's not good for reading, and it's almost illegible at typical text sizes in office lighting.
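
To make that concrete, a toy contrast calculation (the luminance numbers are assumptions for illustration, not measured specs):

    ambient_nits = 250.0   # rough luminance of a white wall under office lighting
    display_nits = 200.0   # rough added luminance from the HMD at the eye

    # an additive see-through display can only add light on top of the world
    white_contrast = (ambient_nits + display_nits) / ambient_nits
    print(f"'white' pixel vs. background: {white_contrast:.1f}:1")  # ~1.8:1
    # 'black' pixels are just the unmodified world at 1:1, so dark UI vanishes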

You can't improve brightness easily. These tiny displays make a lot of heat right near your head. Making them brighter means bigger heatsinks, which add weight and size, and more power, which requires bigger batteries or shorter run time.

You can kinda cheat a little with dark sunglass lenses to make the whole world darker. Or you can go to VR, just block out the whole world, and draw your interface over a video stream. That second option doesn't really work for AR demos like the ones Magic Leap shows.

8. snek_c+YF1[view] [source] [discussion] 2021-10-12 22:13:47
>>smolde+W71
AR seems to me like one of those technologies that people keep claiming is definitely going to be "the future" even though the use cases are dubious.
9. goneho+qJ1[view] [source] [discussion] 2021-10-12 22:31:35
>>snek_c+76
The use case is a virtual meta-layer for the real world where you can interact with stuff without needing to look at a small glass handheld display.

Things like pulling up addresses of buildings you look at, names of people you've met, a line on the ground for GPS directions, playing board games with people without needing a board or dealing with the rule book (software-assisted), seeing meta information floating around devices (battery level, year, serial number), etc. etc.

The UX of phones is pretty good, but it suffers from the form factor. If you could have a UX for the world, you could enable a lot more human abilities in a really intuitive way and get closer to something that feels like telepathy.

10. throwa+yR1[view] [source] [discussion] 2021-10-12 23:34:19
>>snek_c+76
You can't envision a use case for A.R.???

Took me less than 5 minutes to think of the following:

1. Educational aspects, such as copying choreography by watching a virtual expert do it while still being able to see your own body mimicking the actions, which would not be possible in VR (this could include juggling patterns, martial arts, any kind of complex motion)

2. Overlaying any number of AR layers on top of physical hardware: think of looking at a complex circuit board and immediately getting tooltip pop-ups over each integrated circuit explaining what it does

3. Building things in the real world located at absolute GPS coordinates and having them persist so that other people on the same shared AR "layer" see them. You could create buildings, wondrous castles, creatures, and effectively new layers of existence, and these layers could stack and be as deep as you ever wanted them to be

4. Being able to do virtual reality in much larger spaces, so you could take your AR glasses, walk out onto a soccer field, and project a game, such as you fighting a bunch of stormtroopers, while moving around physically in a huge field

replies(1): >>tsimio+1q2
11. tsimio+1q2[view] [source] [discussion] 2021-10-13 05:32:02
>>throwa+yR1
These are all nifty niche cases, not compelling use cases for an expensive piece of hardware which everyone would own.

Use case 1 seems to be a minor improvement over a video call on a decent monitor today, and that assumes AR and the related tech advance hugely from where they are now, to the point of doing real-time capture and rendering with high precision, perhaps even in 3D, to get any real advantage.

Use case 2 seems more realistic, but will be limited by eye-tracking precision, component-identification precision, and occlusion issues. Input will also be an issue (choosing which tooltips to see).

Use case 3 seems worse than building things in VR, other than some fancy art installations. Why would I want a virtual object that I can't view from my own home? Also, interaction would be fantastically limited, making the whole thing disappointing.

Use case 4 suffers even worse from interaction issues, and it also seems like a downgrade from current technology, which allows me to play in huge virtual environments without even getting off my chair.
