Unfortunately with today's SPA apps we don't even get the HTML directly, but with the recent resurgence of server-side rendering we may soon be able to get rendered HTML with one HTTP request. And then the only hurdles will be legal.
1. SPA that you can run on your phone or desktop
2. Centralized user management, with some way to block known bad actors
3. Signing posts / comments
4. Distribution of posts and comments over DHT?
5. Hosting images, videos and lengthy text posts on torrents
6. A whack ton of content moderation software to somehow make decentralized moderation work.
7. Image recognition for gore / CP that inevitably will get spammed
This would enable people to help host the subreddits they are subscribed to, but murder battery life on mobile unfortunately.
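Item 4's DHT distribution is usually done via content addressing: the DHT key is a hash of the post itself, so any peer that fetches a record can verify it got what it asked for. A minimal sketch, assuming SHA-256 keys over a canonical JSON encoding (the record shape here is illustrative, not a real protocol):

```python
import hashlib
import json

def dht_key(post: dict) -> str:
    """Derive a DHT key by hashing a canonical JSON encoding of the post."""
    canonical = json.dumps(post, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(key: str, post: dict) -> bool:
    """Any peer can check that fetched content matches the key it requested."""
    return dht_key(post) == key

# Hypothetical post record, just to show the round trip:
post = {"author": "alice", "body": "hello world", "ts": 1700000000}
key = dht_key(post)
assert verify(key, post)
assert not verify(key, {**post, "body": "tampered"})
```

Because the key is derived from the content, hosting peers can't silently alter posts; signatures (item 3) would additionally bind the content to an author.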
It works the other way: with today's SPAs the API (that powers the frontend) is exposed for us to use directly, without going through the HTML - just use your browser's devtools to inspect the network/fetch/XHR requests and build your own client.
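For Reddit specifically you often don't even need devtools: appending `.json` to most URLs returns the listing as JSON. A sketch of the client side, with a sample response embedded instead of a live request (the nesting shown matches Reddit's public listing shape; the titles and scores are made up):

```python
import json

# Sample of the listing shape served at e.g. reddit.com/r/programming.json
sample = json.loads("""
{"data": {"children": [
    {"data": {"title": "Show HN: ...", "score": 42}},
    {"data": {"title": "Ask HN: ...", "score": 7}}
]}}
""")

def titles(listing: dict) -> list[str]:
    """Pull post titles out of a Reddit-style listing."""
    return [child["data"]["title"] for child in listing["data"]["children"]]

print(titles(sample))  # ['Show HN: ...', 'Ask HN: ...']
```

A real client would fetch the URL with a proper User-Agent and paginate via the listing's `after` cursor, but the parsing is this simple.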
-----
On a related-but-unrelated note: I don't know why so many website companies won't let users pay to use their own client. It's win-win-win: the service operator gets new revenue to make up for the lack of ads in third-party clients; it costs the operator nothing (because their web services and APIs are already going to be well-documented, right?); and it makes the user/consumer base happy because they can use a specialized client.
Where would Twitter be today if we could continue to use Tweetbot and other clients with our own single-user API-key or so?
So like OAuth? IIRC Twitter used that with all the 3rd-party clients. I think the problem is that 3rd-party clients filter out ad posts one way or another. Your other point still stands though: just charge the user for API access.
The purpose of an API is the agreement, more than the access. You can always reverse engineer something, but your users won't be too happy when things randomly stop working, whenever reddit chooses.
On the user side you need to:
- pay the service a recurring fee
- pay the client, probably also a recurring fee (x2 or x3 if you use multiple clients on different platforms)
- mix and match the above and manage when it falls out of sync
It's totally possible, but how many users are willing to go that route? Weather apps, with their pluggable data sources, could be an example of that, but to me that's a vanishingly small niche.
You can see how the end game of this is HTML no longer being free, right?
- It's a very niche thing to charge for, and merely charging for something means having to support it, so you can be underwater on support costs alone
- Users on third-party clients are resistant to enshittification
The business model of any Internet platform is to reintermediate: find a transaction that is being done direct-to-consumer, create a platform for that transaction, and get everyone on both ends of the transaction to use your platform. You get people hooked on your platform by shifting your surpluses around, until everyone's hooked and you can skim 30% for yourself. But you can't really do this if a good chunk of your users are on third-party clients.
This is usually phrased as "third-party clients don't show ads", but it extends way broader than that. If it was just ads, you could just charge $x.99/mo and make it profitable. But there's plenty of other ways to make money off users that isn't ads. For example, you might want to open a new vertical on your site to attract new creators. Think like Facebook's "pivot to video", how every social network added Stories, or YouTube Shorts. Those sorts of strategic moves are very unlikely to be properly supported by third-party clients, because nobody actually wants Twitter to become Snapchat. So your most valuable power users would be paying you money in order to... become less valuable users!
If social media businesses worked how they said they worked, then yes, this would actually be a good idea. But it isn't. Platform capitalism is entirely a game of butting yourself in to every transaction and extracting a few pennies off the top of everything.
If you do that, I'm going to make a client that uses a rotating set of accounts and masquerades as a different client. I am then going to make content available through my client for free, and I'm going to put ads on it so that I can make money. With some small number of accounts, I will serve perhaps 1000x as many users, and you can't do anything about it.
In time, perhaps I will lock the users into my platform. They will talk about how the community on Reddit doesn't understand Reneit and how all the memes come from Reneit. If I win, I'll be Reddit over Digg. If I lose I'll be Imgur.
So go ahead. You'll be Invision to Tapatalk and you will die.
Fundamentally you're advocating for a web that doesn't rely on ad money. I'm totally with you, but the discussion should probably expand beyond the web and to why our societies generate so much ad money in the first place.
What should we do to free our societies from ad money?
This is not a useful comparison. A failure of an ad blocker means you see an ad while using the service. Big deal. A failure of a reverse-engineered, glorified web scraper means the app stops working, completely, for all users of the client, at once, until someone fixes it.
Yes, it could be democratized, but most users wouldn't understand any of this, and say "ugh, this app never works". It would be a user experience that reddit could make as terrible as they wanted.
What I'm talking about already exists by the way. Stuff like nitter, teddit, youtube downloaders. I once wrote one for my school's shitty website.
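The scraping half of such a tool is small. A toy version using only the standard library's `html.parser` — roughly what alternative front-ends do, minus the fetching, caching, and templating (the sample page below is made up):

```python
from html.parser import HTMLParser

class LinkTextScraper(HTMLParser):
    """Collect the text of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.links.append(data.strip())

# Made-up page, standing in for whatever HTML the site serves:
page = '<html><body><a href="/1">First post</a><p>x</p><a href="/2">Second</a></body></html>'
scraper = LinkTextScraper()
scraper.feed(page)
print(scraper.links)  # ['First post', 'Second']
```

The fragility the parent comment describes lives exactly here: the moment the site renames a tag or randomizes its markup, `handle_starttag` matches nothing and the client goes dark.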
Honestly I don't really care about "most users". To me they're only relevant as entries in the anonymity set. As long as we have access to such powerful software, I'm happy. I'm not out to save everyone.
Now the only way to access site Y is by a) routing all your data through some third party server, or b) installing a native application which has way more access to your machine than the web app would.
Some days you gotta wonder if anyone on the web committees has any interest in end-users.
Or installing a browser extension that allows rewriting CORS headers.
> Some days you gotta wonder if anyone on the web committees has any interest in end-users.
Oh, they do. The defaults are much safer for end-users than they used to be. Who they mostly leave out is a narrow slice of power users with use cases where bypassing makes sense, and the extension facilities available address some of that.
I do agree that CORS is being hijacked/abused for this purpose, but at the same time it's an important security feature. It prevents the scenario where you visit some website and some malicious JavaScript starts making calls to some-internal-site/api/... and exfiltrating data.
The slice is only narrow because it’s practically impossible. If there were an option presented to end users — “let X.com read data from Y.com?” — there would be a rich ecosystem of alternative UIs for any website you could think of.
These alt-UIs would be likely to have better security practices than the original, or at the very least introduce competition that drives privacy/security/accessibility standards up for everyone. Whereas currently, if the Origin has the data, it has full ability to impose whatever draconian practices it wants on people who want to access that data.
Content and advertising can't be separated by IP address, and the site content is basically an application that is difficult to parse.
If a third-party webapp wanted to access Reddit, an auth flow that gets API tokens from it and then stores those for later use would get this working (in the universe in which Reddit wants this to happen, of course). You still get CORS protection from the general drive-by issues, and you'd need an explicit auth step on the third-party site (which is why OAuth sends you to the data provider's website and then redirects you back).
That's a nice dream but the reality is that HTML would be a really bad API, even worse than SOAP.
I’m talking about the case when the User wants origin A to render data origin B has, but origin B doesn’t want that. You’d expect the User Agent to act on the User’s behalf and hand B’s data to A after confirming with the User that is their intention.
But instead the User Agent totally disregards the User and exclusively listens to origin B. This prevents the User from rendering the data in the more accessible/secure/privacy-preserving/intuitive way that origin A would have provided.
Strange to see all the comments arguing that in fact the browser ought to be an Origin Agent.
Funny
One universe I could see is the browser allowing a user to grant cross origin cookies when wanted. Though even then a site B that really doesn’t want this can stick CSRF tokens in the right spots and that just falls apart immediately
I imagine you understand the security questions at play here right? Since a user going to origin A might not know what other origins that origin A wants to reach out to.
CSRF mitigations mean that origins could still block things off even without CORS, but it’s an interesting thought experiment
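What those CSRF mitigations look like in miniature — a sketch only; real frameworks bind the token to a server-side session and rotate keys:

```python
import hmac
import hashlib
import secrets

SECRET = secrets.token_bytes(32)  # server-side key, never sent to clients

def issue_token(session_id: str) -> str:
    """Embedded in the site's own pages/forms; derived from the session."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_token(session_id: str, token: str) -> bool:
    """A cross-origin caller gets the cookie attached by the browser, but
    can't read the page to learn the matching token -- so this fails."""
    return hmac.compare_digest(issue_token(session_id), token)

tok = issue_token("session-abc")
assert check_token("session-abc", tok)
assert not check_token("session-other", tok)
```

This is why granting cross-origin cookies wouldn't be enough: the token lives in the page body, which the other origin still can't read.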
Worth noting this model would introduce no new holes: everything I ask for is already possible when running a native application.
> introduces no new holes - everything I ask for is already possible when running a native application.
A native application involves downloading a binary and installing it on your machine. Those involve a higher degree of trust than, say, clicking on a random URL. "I will read this person's blog" vs "I will download a binary to read this person's blog" are acts with different trust requirements. At least for most people.
I suppose in a somewhat ironic way the iOS sandbox makes me feel more comfortable downloading random native apps, but it probably really shouldn't! The OS is good about isolating cookie access for exactly the sort of things you're talking about (the prompt is something like "this app wants to access your data for website.com"), but I should definitely be careful.
I understand what you're saying, but I think this is the key to my point:
> It would be a user experience that reddit could make as terrible as they wanted.
It's an unfair cat-and-mouse game. Yes, effort could be made to fix it each time, but if reddit chose, they could force everyone into the "most users" group: when the only app only works for 5 minutes a day because reddit decided to randomize page elements, people get bored.