zlacker

[parent] [thread] 36 comments
1. danije+(OP)[view] [source] 2023-05-31 20:27:25
The web went in the wrong direction when we abandoned the initial concepts of user agents, which was that the browser has the ultimate choice of what to render and how. That concept, transferred to today's world of apps would simply mean that any client like Apollo is essentially a browser locked on Reddit's website, parsing HTML (which has the role of an API) and rendering the content in a native interface. As long as the user can access the HTML for free, they should be able to use any application (a browser or a special app) and render the content however they wish.

Unfortunately with today's SPA apps we don't even get the HTML directly, but with the recent resurgence of server-side rendering we may soon be able to get rendered HTML with one HTTP request. And then the only hurdles will be legal.

replies(9): >>leros+E5 >>numpad+H9 >>DaiPlu+9c >>teej+zj >>paulco+Nm >>renewi+0u >>makeit+KD >>codeth+K91 >>interl+8h1
2. leros+E5[view] [source] 2023-05-31 20:55:01
>>danije+(OP)
Seems like you could still build a meta UI that drives the underlying SPA in a hidden browser, but it would be a pain. Maybe a framework for that will be built one day
replies(1): >>bearja+Pb
3. numpad+H9[view] [source] 2023-05-31 21:17:43
>>danije+(OP)
App Store. It’s the App Store and iPhone that killed the web.
4. bearja+Pb[view] [source] [discussion] 2023-05-31 21:29:29
>>leros+E5
Seems like we're always missing a fusion of:

1. SPA that you can run on your phone or desktop

2. Centralized User Management, need some way to block known bad actors

3. Signing posts / comments

4. Distribution of posts and comments over DHT?

5. Hosting images, videos and lengthy text posts on torrents

6. A whack ton of content moderation software to somehow make decentralized moderation work.

7. Image recognition for gore / CP that inevitably will get spammed

This would enable people to help host the subreddits they are subscribed to, but murder battery life on mobile unfortunately.

5. DaiPlu+9c[view] [source] 2023-05-31 21:31:12
>>danije+(OP)
> Unfortunately with today's SPA apps we don't even get the HTML directly

It works the other way: with today's SPAs the API (that powers the frontend) is exposed for us to use directly, without going through the HTML - just use your browser's devtools to inspect the network/fetch/XHR requests and build your own client.
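As a concrete sketch of what building on that observation looks like: the listing endpoints you find in devtools return JSON, and a minimal client just pulls the fields it wants to render. The field names below match the shape of Reddit's public listing JSON, but treat the exact structure as an assumption — it's undocumented from the client's side and can change without notice.

```python
import json

# A trimmed sample of what a listing endpoint returns
# (e.g. https://www.reddit.com/r/programming.json).
sample = json.loads("""
{
  "kind": "Listing",
  "data": {
    "children": [
      {"kind": "t3",
       "data": {"title": "Example post", "score": 42, "author": "someone"}}
    ]
  }
}
""")

def extract_posts(listing: dict) -> list[dict]:
    """Pull out just the fields a minimal third-party client would render."""
    return [
        {"title": c["data"]["title"],
         "score": c["data"]["score"],
         "author": c["data"]["author"]}
        for c in listing["data"]["children"]
    ]

for post in extract_posts(sample):
    print(f'{post["score"]:>5}  {post["title"]}  (by {post["author"]})')
```

The whole "client" is a fetch plus this kind of projection — which is also why it breaks the moment the operator reshuffles the response.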

-----

On a related-but-unrelated note: I don't know why so many website companies aren't allowing users to pay to use their own client: it's win-win-win: the service operator gets new revenue to make up for the lack of ads in third-party clients, it doesn't cost the operator anything (because their web-services and APIs are already going to be well-documented, right?), and makes the user/consumer-base happy because they can use a specialized client.

Where would Twitter be today if we could continue to use Tweetbot and other clients with our own single-user API-key or so?

replies(8): >>poyu+Uf >>nomel+Ug >>makeit+hl >>kmeist+9o >>renewi+Go >>drozyc+ws >>jakear+zS >>theage+fu1
6. poyu+Uf[view] [source] [discussion] 2023-05-31 21:51:05
>>DaiPlu+9c
> Where would Twitter be today if we could continue to use Tweetbot and other clients with our own single-user API-key or so?

So like OAuth? IIRC Twitter used that with all the 3rd party clients. I think the problem is that 3rd party clients filter out ad posts one way or another. Your other point still stands though: just charge the user for API access.

7. nomel+Ug[view] [source] [discussion] 2023-05-31 21:55:53
>>DaiPlu+9c
> inspect the network/fetch/XHR requests and build your own client

The purpose of an API is the agreement, more than the access. You can always reverse engineer something, but your users won't be too happy when things randomly stop working, whenever reddit chooses.

replies(1): >>matheu+vv
8. teej+zj[view] [source] 2023-05-31 22:10:49
>>danije+(OP)
There was a 15 year period where many websites were only compatible with Internet Explorer. The dream of clients in control is worth fighting for, but it’s never been reality.
9. makeit+hl[view] [source] [discussion] 2023-05-31 22:19:50
>>DaiPlu+9c
> allowing users to pay to use their own client

On the user side you need to:

- pay the service a recurring fee

- pay the client probably a recurring fee (x2 or x3 if you use multiple clients on different platforms)

- mix and match the above and manage when it falls out of sync

It's totally possible, but how many users are willing to go that route? Weather apps, with their pluggable data sources, could be an example of that, but to me that's a crazy small niche.

10. paulco+Nm[view] [source] 2023-05-31 22:30:21
>>danije+(OP)
> As long as the user can access the HTML for free, they should be able to use any application (a browser or a special app) and render the content however they wish.

You can see how the end game of this is HTML no longer being free, right?

replies(1): >>NovaDu+WY
11. kmeist+9o[view] [source] [discussion] 2023-05-31 22:38:11
>>DaiPlu+9c
There's two reasons why they don't want third-party clients as a pro feature:

- It's a very niche thing to charge for, and merely charging for something means having to support it, so you can be underwater on support costs alone

- Users on third-party clients are resistant to enshittification

The business model of any Internet platform is to reintermediate: find a transaction that is being done direct-to-consumer, create a platform for that transaction, and get everyone on both ends of the transaction to use your platform. You get people hooked to your platform by shifting your surpluses around, until everyone's hooked and you can skim 30% for yourself. But you can't really do this if a good chunk of your users have third-party clients.

This is usually phrased as "third-party clients don't show ads", but it extends way broader than that. If it was just ads, you could just charge $x.99/mo and make it profitable. But there's plenty of other ways to make money off users that isn't ads. For example, you might want to open a new vertical on your site to attract new creators. Think like Facebook's "pivot to video", how every social network added Stories, or YouTube Shorts. Those sorts of strategic moves are very unlikely to be properly supported by third-party clients, because nobody actually wants Twitter to become Snapchat. So your most valuable power users would be paying you money in order to... become less valuable users!

If social media businesses worked how they said they worked, then yes, this would actually be a good idea. But it isn't. Platform capitalism is entirely a game of butting yourself in to every transaction and extracting a few pennies off the top of everything.

12. renewi+Go[view] [source] [discussion] 2023-05-31 22:40:18
>>DaiPlu+9c
> I don't know why so many website companies aren't allowing users to pay to use their own client...

If you do that, I'm going to make a client that uses a rotating set of accounts and masquerades as a different client. I am then going to make content available through my client for free, and I'm going to put ads on it so that I can make money. With some small number of accounts, I will serve perhaps 1000x as many users and you can't do anything about it.

In time, perhaps I will lock the users into my platform. They will talk about how the community on Reddit doesn't understand Reneit and how all the memes come from Reneit. If I win, I'll be Reddit over Digg. If I lose I'll be Imgur.

So go ahead. You'll be Invision to Tapatalk and you will die.

13. drozyc+ws[view] [source] [discussion] 2023-05-31 23:01:58
>>DaiPlu+9c
They sort of are allowing users to pay to use their own client by charging for API access. It will be interesting to see how Apollo adapts to this new reality.
14. renewi+0u[view] [source] 2023-05-31 23:12:08
>>danije+(OP)
There's free API access with a client of your own. You just can't distribute a single client that intermediates the site: thereby not being a user agent so much as its own site. If you use your own client_id and OAuth2, you get 100 req/min which is enough to browse.
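Staying under a per-minute cap like that is easiest with a small client-side throttle. The 100 req/min figure comes from the comment above; the sliding-window limiter itself is a generic sketch, not anything Reddit-specific.

```python
from collections import deque

class RateLimiter:
    """Client-side throttle to stay under a requests-per-window cap."""

    def __init__(self, max_requests: int = 100, window_s: float = 60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.timestamps: deque = deque()  # send times of recent requests

    def wait_time(self, now: float) -> float:
        """Seconds to wait before the next request is allowed at time `now`."""
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            return 0.0
        # Window is full: wait until the oldest request ages out.
        return self.window_s - (now - self.timestamps[0])

    def record(self, now: float) -> None:
        """Call after actually sending a request."""
        self.timestamps.append(now)
```

A browsing client would call `wait_time()` before each fetch and sleep for that long, which keeps a single user comfortably inside the quota.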
15. matheu+vv[view] [source] [discussion] 2023-05-31 23:22:05
>>nomel+Ug
Total non-issue. If it breaks, people will fix it. There's people out there maintaining immense ad filter lists and executable countermeasures against ad blocker detection. Someone somewhere will care enough to fix it.
replies(2): >>chromo+GA >>nomel+gE
16. chromo+GA[view] [source] [discussion] 2023-06-01 00:03:21
>>matheu+vv
There are only so many programmers willing to fix the client per user. That fraction, when inverted, gives a rough threshold for how large the client's audience has to be for the fixes to keep coming.
replies(1): >>matheu+TB
17. matheu+TB[view] [source] [discussion] 2023-06-01 00:14:49
>>chromo+GA
And yet these people somehow maintain immense amounts of ad-blocking filters and code, including active countermeasures which require reverse engineering websites' JavaScript. I gotta wonder what would happen if they started making custom clients for each website instead.
replies(1): >>chromo+2D
18. chromo+2D[view] [source] [discussion] 2023-06-01 00:26:42
>>matheu+TB
Adblockers' audience is huge, much more than any single site's audience, and they probably wouldn't care about most single sites (to care, you have to be in the audience, and most sites have small audiences).
replies(1): >>matheu+TG
19. makeit+KD[view] [source] 2023-06-01 00:33:41
>>danije+(OP)
> the browser has the ultimate choice of what to render and how

Fundamentally you're advocating for a web that doesn't rely on ad money. I'm totally with you, but the discussion should probably expand beyond the web and to why our society generates so much ad money in the first place.

What should we do to free our societies from ad money?

20. nomel+gE[view] [source] [discussion] 2023-06-01 00:38:45
>>matheu+vv
> There's people out there maintaining immense ad filter lists and executable countermeasures against ad blocker detection.

This is not a useful comparison. A failure of an ad blocker means you don't see an ad while using the service. Big deal. A failure of a reverse engineered glorified web scraper is that the app stops working, completely, for all users of the client, at once, until someone fixes it.

Yes, it could be democratized, but most users wouldn't understand any of this, and say "ugh, this app never works". It would be a user experience that reddit could make as terrible as they wanted.

replies(1): >>matheu+yH
21. matheu+TG[view] [source] [discussion] 2023-06-01 01:07:45
>>chromo+2D
Someone cared enough to defeat sites' annoying ad-blocker blockers. If they care just a little bit more, they could replace the web developer's code with their own minimal version. Chances are the site doesn't actually need most of the code it includes anyway.

What I'm talking about already exists by the way. Stuff like nitter, teddit, youtube downloaders. I once wrote one for my school's shitty website.

22. matheu+yH[view] [source] [discussion] 2023-06-01 01:14:03
>>nomel+gE
It absolutely is a useful comparison. It's obvious that this software depends on unstable interfaces that will eventually break. I wasn't talking about that, I was talking about the sheer effort it takes to create such things. Such efforts are absolutely in the realm of existence today. Projects like nitter and teddit exist. Teddit is on the frontpage of HN right now no doubt in reaction to this thread. There's probably one for HN too, I just haven't found HN to be hostile enough to search for it.

Honestly I don't really care about "most users". To me they're only relevant as entries in the anonymity set. As long as we have access to such powerful software, I'm happy. I'm not out to save everyone.

replies(1): >>nomel+vf3
23. jakear+zS[view] [source] [discussion] 2023-06-01 03:21:18
>>DaiPlu+9c
CORS ruined this pipe dream. Ideally you’d be able to tell your browser that website X loading content from site Y was a-okay and exactly what you want to happen because site Y is user-hostile and site X addresses all those issues, but alas.

Now the only way to access site Y is by a) routing all your data through some third party server, or b) installing a native application which has way more access to your machine than the web app would.

Some days you gotta wonder if anyone on the web committees has any interest in end-users.
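The browser-side decision being complained about boils down to something like the following. This is a deliberately simplified model (real CORS also has preflights, credential modes, and wildcard restrictions); the `user_grants` parameter is the hypothetical "let X read from Y?" prompt, which does not exist in real browsers.

```python
def cors_read_allowed(requesting_origin: str,
                      target_origin: str,
                      allow_origin_header,  # value of Access-Control-Allow-Origin, or None
                      user_grants: set) -> bool:
    """Simplified model of a browser's cross-origin read decision.

    Today's rule: the target site must opt in via its
    Access-Control-Allow-Origin response header. `user_grants` models
    the hypothetical user-approved (requester, target) allowlist
    discussed above -- it is NOT part of any real browser.
    """
    if requesting_origin == target_origin:
        return True  # same-origin: CORS doesn't apply
    if allow_origin_header in ("*", requesting_origin):
        return True  # target opted in
    # Hypothetical extension: the user explicitly granted X -> Y access.
    return (requesting_origin, target_origin) in user_grants
```

Note who holds the power in each branch: the first two are controlled by the sites, and only the invented third one by the user.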

replies(3): >>dragon+DT >>minhaz+cU >>rtpg+j31
24. dragon+DT[view] [source] [discussion] 2023-06-01 03:33:10
>>jakear+zS
> Now the only way to access site Y is by a) routing all your data through some third party server, or b) installing a native application which has way more access to your machine than the web app would.

Or installing a browser extension that allows rewriting CORS headers.

> Some days you gotta wonder if anyone on the web committees has any interest in end-users.

Oh, they do. The defaults are much safer for end-users than they used to be. Who they mostly leave out is a narrow slice of power users with use cases where bypassing make sense, and the extension facilities available address some of that.

replies(1): >>jakear+dY
25. minhaz+cU[view] [source] [discussion] 2023-06-01 03:38:38
>>jakear+zS
Technically you can still do that by launching chrome with some special flags or with a chrome extension.

But I do agree that CORS is being hijacked/abused for this purpose. But at the same time it's an important security feature. It prevents the scenario where you visit some website and some malicious javascript starts making calls to some-internal-site/api/... and exfiltrating data.

replies(1): >>jakear+oY
26. jakear+dY[view] [source] [discussion] 2023-06-01 04:31:47
>>dragon+DT
From what I can tell there’s no such extension on iOS. I think it should be part of the standard, not a hole left for extensions to fill in.

The slice is only narrow because it’s practically impossible. If there were an option presented to end users “let X.com read data from Y.com?” there would be a rich ecosystem of alternative UI’s for any website you could think of.

These alt-UI’s would be likely to have better security practices than the original, or at the very least introduce competition to drive privacy/security/accessibility standards up for everyone. Whereas currently if the Origin has the data, they have full ability to impose whatever draconian practices they want on people who desire to access that data.

27. jakear+oY[view] [source] [discussion] 2023-06-01 04:33:41
>>minhaz+cU
The chrome flag disables CORS entirely, which presents a major security risk as you point out. What I’m asking for is an option to let specific origins read from specific other origins. Extensions might be able to do this but they aren’t available in all contexts (iOS, for instance)
28. NovaDu+WY[view] [source] [discussion] 2023-06-01 04:41:39
>>paulco+Nm
The worst-case vision I have of the future internet is one in which content and advertising are hosted by the advertising companies and rendered via a WebAssembly system.

Content and advertising cannot be separated by IP and the site content is basically an application that is difficult to parse.

29. rtpg+j31[view] [source] [discussion] 2023-06-01 05:37:20
>>jakear+zS
I understand what you're saying, but plenty of websites resolve this by having an in-browser OAuth flow, and then working off of an API. It's not like APIs are asking for CORS stuff in general, just cookie auth to the third party server requires CORS.

If a third-party webapp wanted to access Reddit, an auth flow that gets API tokens from it and then stores those for use gets this working (in the universe in which Reddit wants this to happen, of course). You still get CORS protection from the general drive-by issues, and you'll need an explicit auth step on a third party site (but that's why OAuth sends you to the data provider's website to then be redirected)
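The first leg of that flow is just building the URL that sends the user to the data provider's own site to approve access. A minimal sketch of the authorization-code request, with the endpoint path and parameter values as illustrative assumptions:

```python
import secrets
from urllib.parse import urlencode

def build_authorize_url(base: str, client_id: str, redirect_uri: str,
                        scopes: list) -> tuple:
    """Build the authorization-code request URL.

    This is the "explicit auth step on the provider's website": the user
    approves access there, and the provider redirects back to
    `redirect_uri` with a one-time code to exchange for API tokens.
    """
    state = secrets.token_urlsafe(16)  # anti-CSRF value, verified on return
    query = urlencode({
        "client_id": client_id,
        "response_type": "code",
        "state": state,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    })
    return f"{base}?{query}", state

# Illustrative values -- the endpoint and scopes are assumptions, not a real app:
url, state = build_authorize_url(
    "https://example.com/api/v1/authorize",
    "my-client-id",
    "https://myapp.example/callback",
    ["read"],
)
```

The second leg (POSTing the returned code to a token endpoint) is a single authenticated request and is omitted here.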

replies(1): >>jakear+dg1
30. codeth+K91[view] [source] 2023-06-01 07:00:14
>>danije+(OP)
> parsing HTML (which has the role of an API) and rendering the content in a native interface

That's a nice dream but the reality is that HTML would be a really bad API, even worse than SOAP.

31. jakear+dg1[view] [source] [discussion] 2023-06-01 08:23:11
>>rtpg+j31
I don’t think you do get what I’m saying. If an Origin wants to be accessed by other Origins there are plenty of ways to do that, that much should be obvious.

I’m talking about the case when the User wants origin A to render data origin B has, but origin B doesn’t want that. You’d expect the User Agent to act on the User’s behalf and hand B’s data to A after confirming with the User that is their intention.

But instead the User Agent totally disregards the User and exclusively listens to origin B. This prevents the User from rendering the data in the more accessible/secure/privacy-preserving/intuitive way that origin A would have provided.

Strange to see all the comments arguing that in fact the browser ought to be an Origin Agent.

replies(1): >>rtpg+0j1
32. interl+8h1[view] [source] 2023-06-01 08:35:26
>>danije+(OP)
Why can't these apps just use the API that reddit.com uses? How could the servers differentiate between reddit.com and the Apollo app pretending to be reddit.com?
33. rtpg+0j1[view] [source] [discussion] 2023-06-01 09:04:29
>>jakear+dg1
> Strange to see all the comments arguing that in fact the browser ought to be an Origin Agent

Funny

One universe I could see is the browser allowing a user to grant cross origin cookies when wanted. Though even then a site B that really doesn’t want this can stick CSRF tokens in the right spots and that just falls apart immediately

I imagine you understand the security questions at play here right? Since a user going to origin A might not know what other origins that origin A wants to reach out to.

CSRF mitigations mean that origins could still block things off even without CORS, but it’s an interesting thought experiment

replies(1): >>jakear+yC1
34. theage+fu1[view] [source] [discussion] 2023-06-01 11:01:49
>>DaiPlu+9c
The reason there will always be ads: average consumers are never willing to pay as much to keep their eyes clean as others are willing to pay to dirty them.
35. jakear+yC1[view] [source] [discussion] 2023-06-01 12:16:20
>>rtpg+0j1
Can they stick CSRF tokens in the right spot under this model? The typical CSRF mitigations require other origins to not be able to access the HTML of the page (as they just inject a hidden form field or similar). If the cross-origin has full access to the page’s resources they ought to be able to emulate the environment of the page as viewed in-origin quite accurately.

Worth noting this model would introduce no new holes - everything I ask for is already possible when running a native application.
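The "bog-standard" mitigation under discussion is roughly this (a generic sketch, not any particular framework's implementation): the server derives a token from the session with a server-side secret, embeds it in a hidden form field, and rejects submissions whose token doesn't match. It only works because other origins can't read the page HTML to copy the token out, which is exactly the property the proposed model would remove.

```python
import hashlib
import hmac

SERVER_SECRET = b"keep-this-out-of-the-page"  # illustrative value only

def csrf_token(session_id: str) -> str:
    """Token the server embeds in a hidden <input> on each form."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid_post(session_id: str, submitted_token: str) -> bool:
    """Server-side check on form submission (constant-time compare)."""
    return hmac.compare_digest(csrf_token(session_id), submitted_token)
```

A cross-origin attacker who can't read the page can't produce a matching token; one who can read the page (as in the model above) just copies it.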

replies(1): >>rtpg+JJ1
36. rtpg+JJ1[view] [source] [discussion] 2023-06-01 13:07:19
>>jakear+yC1
I get what you're saying w/r/t CSRF. While every app could be different, in practice most websites do real bog-standard CSRF tokens, and I could see a user agent be able to get things working with like 95% of websites. Though I could think of many schemes to obfuscate things dynamically if you are motivated enough! But I like the idea of a user agent that is built around making it easier for you to just get "your" data in these ways.

> introduces no new holes - everything I ask for is already possible when running a native application.

A native application involves downloading a binary and installing it on your machine. Those involve a higher degree of trust than, say, clicking on a random URL. "I will read this person's blog" vs "I will download a binary to read this person's blog" are acts with different trust requirements. At least for most people.

I suppose in a somewhat ironic way the iOS sandbox makes me feel more comfortable downloading random native apps but it probably really shouldn't! The OS is good about isolating cookie access for exactly the sort of things you're talking about (the prompt is like "this app wants to access your data for website.com"), but I should definitely be careful

37. nomel+vf3[view] [source] [discussion] 2023-06-01 19:43:55
>>matheu+yH
> I was talking about the sheer effort it takes to create such things

I understand what you're saying, but I think this is the key to my point:

> It would be a user experience that reddit could make as terrible as they wanted.

It's an unfair cat and mouse game. Yes, effort could be made to fix it each time, but, if reddit chose, they could force everyone into the "most users" group: when the only app works for 5 minutes a day because reddit decided to randomize page elements, people get bored.
