I can easily see the future of personal computing being a mobile device with peripherals that use its compute, and the cloud for anything serious: AirPods, glasses, watches, or just hooking that device up to a larger screen.
There's not a great reason for an individual to own processing power in a desktop, a laptop, a phone, and glasses when most of them sit idle while the others are in use.
Really, phone compute has been good enough for at least a decade, once we got USB-C. We're still largely doing the same things on our phones and laptops that we were doing in 2005. I'm surprised it took this long.
I'm happy this is becoming a real thing. I hope they'll also allow the phone's screen to be used like a trackpad. It wouldn't be ideal, but there's no reason the touchscreen can't be a fully featured input device.
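In principle it's just remapping absolute touch coordinates to relative pointer motion. A minimal sketch of that idea on a Linux host with python-evdev follows; the device path is hypothetical, and a real implementation would also handle scaling, taps, and multitouch:

    # Sketch: treat a touchscreen as a trackpad by converting absolute
    # touch positions into relative pointer motion via a virtual mouse.
    # Assumes Linux, the python-evdev package, and a touchscreen at
    # /dev/input/event5 (hypothetical path; list your devices to find it).
    from evdev import InputDevice, UInput, ecodes as e

    touch = InputDevice('/dev/input/event5')

    # Virtual pointer device exposing relative motion and a left button.
    ui = UInput({e.EV_REL: [e.REL_X, e.REL_Y],
                 e.EV_KEY: [e.BTN_LEFT]},
                name='phone-trackpad')

    last = {e.ABS_X: None, e.ABS_Y: None}
    for event in touch.read_loop():
        if event.type == e.EV_ABS and event.code in last:
            prev = last[event.code]
            if prev is not None:
                rel = e.REL_X if event.code == e.ABS_X else e.REL_Y
                ui.write(e.EV_REL, rel, event.value - prev)
                ui.syn()
            last[event.code] = event.value
        elif event.type == e.EV_KEY and event.code == e.BTN_TOUCH and event.value == 0:
            # Finger lifted: forget the last position so the next
            # touch doesn't make the pointer jump.
            last = {e.ABS_X: None, e.ABS_Y: None}

Permissions aside (creating a uinput device usually needs root), that's the whole trick.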
I fully agree with you on the wasted processing power. I think we'll eventually head toward a model of one computing device with a number of locally connected thin clients.
Approximately no one was watching 4K feature-length videos on their phones in 2005, or playing ray-traced 3D games on their laptops.
Sending plain text messages is pretty much the same as back then, yes. But these days I'm also taking high-resolution photos and videos and sharing them with others via my phone.
> I hope they'll also allow the phone's screen to be used like a trackpad.
Samsung's DeX already does that.
> I fully agree with you on the wasted processing power. I think we'll eventually head toward a model of one computing device with a number of locally connected thin clients.
Your own 'good enough' logic already suggests otherwise? Processors keep getting cheaper and better, so why not just duplicate them? Instead of a dumb large screen (and keyboard) that you plug your phone into, it's not much extra cost to add some processing power to that screen and make it a full desktop PC.
If we do get to a 'thin client' world, it'll be because of the cloud, not because of connecting to our phones. Even today, most of what people do on their desktops can be done in the browser, so we'll likely see more of that.
The thin-client scenario anticipates a world with fewer resources to make all these excess chips. It's just speculation about what things will look like when we can't sustain what is unsustainable.
The video comment was about phones; the ray-tracing comment was about laptops.
Yes, laptops could play DVDs in 2005. (But they couldn't play much YouTube, because YouTube only launched later that year. Streaming video was in its infancy.)
> It's just speculation about what things will look like when we can't sustain what is unsustainable.
Huh? We are sitting on a giant ball of matter, and much of what's available in the crust is silicates. You mostly just need energy to turn rocks into computer chips, and we get lots and lots of energy from the sun.
How is any of this unsustainable?
(And a few computer chips are all you save with the proposed approach. You'd still need to make just as many screens, batteries, etc.)
You don't need to imagine a total economic collapse. Take any resource that goes into a chip and posit any reason we'd have to consume significantly less of it. How do you solve that?
Well, we have highly redundant compute per person. I personally have nine pretty capable computer chips to my person, just in the building I'm in. That's a lot, and it represents an excess in resource consumption. A phone-as-motherboard laptop eliminates one of those chips. If we made the same games we make today but went back a decade or two in graphics, we could get by with fewer consoles and gaming PCs, too.
I'm not saying "one chip for many devices" is a panacea. There are other things we might do. Maybe laptops and phones could be given display inputs, for example.