zlacker

[parent] [thread] 52 comments
1. mwcamp+(OP)[view] [source] 2022-09-10 14:44:59
> However, contemporary applications rarely run on a single machine. They increasingly use remote procedure calls (RPC), HTTP and REST APIs, distributed key-value stores, and databases,

I'm seeing an increasing trend of pushback against this norm. An early example was David Crawshaw's one-process programming notes [1]. Running the database in the same process as the application server, using SQLite, is getting more popular with the rise of Litestream [2]. Earlier this year, I found the post "One machine can go pretty far if you build things properly" [3] quite refreshing.

Most of us can ignore FAANG-scale problems and keep right on using POSIX on a handful of machines.

[1]: https://crawshaw.io/blog/one-process-programming-notes

[2]: https://litestream.io/

[3]: https://rachelbythebay.com/w/2022/01/27/scale/

replies(5): >>bryanl+Z5 >>mike_h+da >>oefrha+cq >>jayd16+7b1 >>nsm+oms
2. bryanl+Z5[view] [source] 2022-09-10 15:24:20
>>mwcamp+(OP)
Crawshaw mentions "tens of hours of downtime" as a drawback. Given that downtime sometimes corresponds with times of high load, aka the really important times, that's going to be a deal killer for most.

But his architecture does seem to be consistent with a "minutes of downtime" model. He's using AWS, and has his database on a separate EBS volume with a sane backup strategy. So he's not manually fixing servers, and has reasonable migration routes for most disaster scenarios.

Except for PEBKAC, which is what really kills most servers. And HA servers are more vulnerable to that, since they're more complicated.

replies(1): >>mattar+le
3. mike_h+da[view] [source] 2022-09-10 15:52:40
>>mwcamp+(OP)
If you have an application server then you still have RPCs coming from your user interface, even if you run the whole DB in process. And indeed POSIX has nothing to say about this. Instead people tend to abuse HTTP as a pseudo-RPC mechanism because that's what the browser understands, it tends to be unblocked by firewalls etc.

One trend in OS research (what little exists) is the idea of the database OS. Taking that as an inspiration I think there's a better way to structure things to get that same simplicity and in fact even more, but without many of the downsides. I'm planning to write about it more at some point on my company blog (https://hydraulic.software/blog.html) but here's a quick summary. See what you think.

---

In a traditional 3-tier CRUD web app you have the RDBMS, then stateless web servers, then JavaScript and HTML in the browser running a pseudo-stateless app. Because browsers don't understand load balancing you probably also have an LB in there so you can scale and upgrade the web server layer without user-visible downtime. The JS/HTML speaks an app-specific ad-hoc RPC protocol that represents RPCs as document fetches, and your web server (mostly) translates back and forth between this protocol and whatever protocol your RDBMS speaks, layering access control on top (because the RDBMS doesn't know who is logged in).

This approach is standard and lets people use web browsers which have some advantages, but creates numerous problems. It's complex, expensive, limiting for the end user, every app requires large amounts of boilerplate glue code, and it's extremely error prone. XSS, XSRF and SQL injection are all bugs that are created by this choice of architecture.

These problems can be fixed by using "two tier architecture". In two tier architecture you have your RDBMS cluster directly exposed to end users, and users log in directly to their RDBMS account using an app. The app ships the full database driver and uses it to obtain RPC services. Ordinary CRUD/ACL logic can be done with common SQL features like views, stored procedures and row level security [1][2][3]. Any server-side code that isn't neatly expressible with SQL is implemented as RDBMS server plugins.
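
To make the RLS part concrete, here's a minimal PostgreSQL sketch (table, role and policy names are invented for illustration; each end user is assumed to map to a real database role belonging to an app_users group):

    -- Users connect as their own database role, a member of app_users.
    CREATE TABLE orders (
        id    bigserial PRIMARY KEY,
        owner text NOT NULL DEFAULT current_user,
        total numeric NOT NULL
    );

    ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

    -- The database, not a privileged web tier, decides who sees what.
    CREATE POLICY orders_owner ON orders
        USING (owner = current_user)
        WITH CHECK (owner = current_user);

    GRANT SELECT, INSERT, UPDATE, DELETE ON orders TO app_users;
    GRANT USAGE, SELECT ON SEQUENCE orders_id_seq TO app_users;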

At a stroke this architecture solves the following problems:

1. SQL injection bugs disappear by design because the RDBMS enforces security, not a highly privileged web app. By implication you can happily give power users like business analysts direct SQL query access to do obscure/one-off things that might otherwise turn into abandoned backlog items.

2. XSS, XSRF and all the other escaping bugs go away, because you're not writing a web app anymore - data is pulled straight from the database's binary protocol into your UI toolkit's data structures. Buffer lengths are signalled OOB across the entire stack.

3. You don't need a hardware/DNS load balancer anymore because good DB drivers can do client-side load balancing.

4. You don't need to design ad-hoc JSON/REST protocols that e.g. frequently suck at pagination, because you can just invoke server-side procedures directly (rough sketch after this list). The DB takes care of serialization, result streaming, type safety, access control, error reporting and more.

5. The protocol gives you batching for free, so if you have some server logic written in e.g. JavaScript, Python, Kotlin, Java etc then it can easily use query results as input or output and you can control latency costs. With some databases like PostgreSQL you get server push/notifications.

6. You can use whatever libraries and programming languages you want.
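
As a rough sketch of points 4 and 5 (assuming a notes table exists; all names invented), the "endpoint" is just a function, and push arrives over the same connection:

    -- Clients call this directly: SELECT add_note('hello');
    CREATE FUNCTION add_note(p_body text) RETURNS bigint
    LANGUAGE plpgsql AS $$
    DECLARE
        new_id bigint;
    BEGIN
        INSERT INTO notes (body, author)
        VALUES (p_body, current_user)
        RETURNING id INTO new_id;
        -- Server push: any client that ran LISTEN notes_changed is told.
        PERFORM pg_notify('notes_changed', new_id::text);
        RETURN new_id;
    END;
    $$;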

This architecture lacks popularity today because to make it viable you need a few things that weren't available until very recently (and a few useful things still aren't yet). At minimum:

1. You need a way to distribute and update GUI desktop apps that isn't incredibly painful, ideally one that works well with JVM apps because JDBC drivers tend to have lots of features. Enter my new company, stage left (yes! that's right! this whole comment is a giant ad for our product). Hydraulic Conveyor was launched in July and makes distributing and updating desktop apps as easy as with a web app [4].

2. You're more dependent on having a good RDBMS. PostgreSQL only got RLS recently and needs extra software to scale client connections well. MS SQL Server is better but some devs would feel "weird" buying a database (it's not that expensive though). Hosted DBs usually don't let you install arbitrary extensions.

3. You need solid UI toolkits with modern themes. JetBrains has ported the new Android UI toolkit to the desktop [5] allowing lots of code sharing. It's reactive and thus has a Kotlin language dependency. JavaFX is a more traditional OOP toolkit with CSS support, good business widgets and is accessible from more languages for those who prefer that; it also now has a modern GitHub-inspired SASS based style pack that looks great [6] (grab the sampler app here [7]). For Lispers there's a reactive layer over the top [8].

4. There are some smaller tools that would be useful, e.g. for letting you log into your DB with OAuth, or for ensuring DB traffic can get through proxies.

Downsides?

1. Migrating between DB vendors may be harder. Though the moment you have >1 web server you have the problem of doing a 'live' migration anyway, so the issues aren't fundamentally different; it'd just take longer.

2. Users have to install your app. That's not hard, and in a managed IT environment the apps can be pushed out centrally. Developers often get hung up on this point, but the success of the installed-app model on mobile, the popularity of Electron and the whole video game industry show that users don't actually care much, as long as they plan to use the app regularly.

3. To do mobile/tablet you'd want to ship the DB driver as part of your app. There might be oddities involved, though in theory JDBC drivers could run on Android and be compiled to native for iOS using GraalVM.

4. Skills, hiring, etc. You'd want more senior devs to trailblaze this first before asking juniors to learn it.

[1] https://www.postgresql.org/docs/current/ddl-rowsecurity.html

[2] https://docs.microsoft.com/en-us/sql/relational-databases/se...

[3] https://docs.oracle.com/database/121/TDPSG/GUID-72D524FF-5A8...

[4] https://hydraulic.software/

[5] https://www.jetbrains.com/lp/compose-mpp/

[6] https://github.com/mkpaz/atlantafx

[7] https://downloads.hydraulic.dev/atlantafx/sampler/download.h...

[8] https://github.com/cljfx/cljfx

replies(7): >>cgh+ng >>pjmlp+Og >>mwcamp+zB >>whartu+NN >>agumon+D31 >>kubanc+ZP1 >>sicp-e+Jhp
4. mattar+le[view] [source] [discussion] 2022-09-10 16:20:13
>>bryanl+Z5
> Crawshaw mentions "tens of hours of downtime" as a drawback. Given that downtime sometimes corresponds with times of high load, aka the really important times, that's going to be a deal killer for most.

I posit that this kind of "deal killer" is most often a wish list item and not a true need. I think most teams without a working product think these kinds of theoretical reliability issues are "deal killers" as a form of premature optimization.

I worked at a FANG on a product where we thought availability issues caused by sessions being "owned" by a single server were a deal killer. I.e., one machine could crash at any time and people would notice, we thought. We spent a lot of time designing a fancy fully distributed system where sessions could migrate seamlessly, etc., and spent the better part of a year designing and implementing it.

Then, before we finished, a PM orchestrated the purchase of a startup that had a launched product with similar functionality. Its design held per-user session state on a single server and was thus much simpler. It was almost laughably simple compared to what we were attempting. The kind of design you'd write on a napkin over a burrito lunch as minimally viable, and quickly code up -- just what you'd do in a startup.

After the acquisition we had big arguments between our team and those at the startup about which core technology the FANG should go forward with. We'd point at math and theory about availability and failure rates. They'd point at happy users and a working product. It ended with a VP pointing at the startup's launched product saying "we're going with what is working now." Within months the product was working within the FANG's production infrastructure, and it has run almost unchanged architecturally for over a decade. Is the system theoretically less reliable than our fancier would-be system? Yes. Does anybody actually notice or care? No.

replies(2): >>thayne+3q >>hiptob+3g2
5. cgh+ng[view] [source] [discussion] 2022-09-10 16:32:29
>>mike_h+da
Desktop apps that query/update the db directly were a thing back in the '90s. They were an example of what we called "client/server". They were swiftly superseded by the web, which sort of hijacked the client/server architecture. As you noted, the basic reason is desktop app distribution and updating is hard. If your company can beat this problem, then great, because removing the intermediate web layer makes a lot of sense in certain cases (eg, enterprise deployments).
replies(2): >>mike_h+nq >>mccull+qs
6. pjmlp+Og[view] [source] [discussion] 2022-09-10 16:34:52
>>mike_h+da
Basically back to the VB/Delphi glory days with stored procedures, or even better Oracle Forms.
replies(1): >>mike_h+8x
7. thayne+3q[view] [source] [discussion] 2022-09-10 17:32:46
>>mattar+le
It is a deal killer for anyone who has SLAs specified in contracts, which is pretty common in B2B.
replies(1): >>macint+4w
8. oefrha+cq[view] [source] 2022-09-10 17:33:48
>>mwcamp+(OP)
> Running the database in the same process as the application server, using SQLite, is getting more popular with the rise of Litestream.

As someone who uses SQLite a lot, I'm suspicious of this claim. Litestream is strictly a backup tool, or, as its author puts it, disaster recovery tool. It gives you a bit more peace of mind than good old periodic snapshots, but it does not give you actual usable replication,* so I doubt it meaningfully increased SQLite adoption in the RDBMS space (compared to the application data format space where it has always done well).

* There was a live read replica beta which has since been dropped. Author did mention a separate tool they're working on which will include live replication. https://github.com/benbjohnson/litestream/issues/8#issuecomm...

replies(1): >>tekacs+AR
9. mike_h+nq[view] [source] [discussion] 2022-09-10 17:34:49
>>cgh+ng
Yep. OS vendors dropped the ball on distribution, browsers picked up the slack. But browsers also dropped the ball in a lot of ways. E.g. browsers do a lot of useful stuff but none of that is available for servers or CLI apps. There's lots of scope to do better. Also, on the desktop you have POSIX or POSIX-ish features at least :)

BTW, Conveyor is free for open source projects, and currently free for commercial use too. The current versions focus on solving the basics really well. Deployment/update works like a static website generated from Markdown. You can build fully signed-or-self-signed and self-updating packages from cross-platform artifacts on whatever OS you happen to use, with one command. So, you can package up Electron and JVM apps given just inputs like JS or JAR files, from your dev laptop or Linux CI box, and you get packages for every OS. You also get a simple download HTML page that detects the user's OS and CPU.

To do a new release you just re-build and re-upload the site. Clients will start updating immediately. On macOS it uses Sparkle, on Windows the OS will do the updates in the background and Linux users get packages.

It does native apps too but then of course you need to compile the binaries for each OS yourself.

One possibility we're researching is to page code in from the database on demand. That way you only have to push updates to the client occasionally, like when refreshing the core runtimes. For changing business logic the client would use SQL queries to speculatively load code based on what other clients have been requesting. If it works it means you can get rid of minification, bundling, all the other hacks web devs do to reduce requests and round-tripping, whilst keeping the "instant" deployment browsers give you.

10. mccull+qs[view] [source] [discussion] 2022-09-10 17:49:24
>>cgh+ng
I have built a proprietary Swing desktop app that does this; we use it inside the tugboat company I run. The wrinkle is that instances of this app are only intermittently connected to the Internet and our central instance of PostgreSQL. The Swing app uses a local SQLite instance and synchronizes it in the background when a connection is available. The users never experience any latency.

Careful schema design to support synchronization without collisions is the only real difference between this kind of app and CRUD apps that expect to always be able to reach the Internet.

replies(1): >>chriss+St1
11. macint+4w[view] [source] [discussion] 2022-09-10 18:13:46
>>thayne+3q
Maybe. In that example, if the service has run for over a decade, it seems plausible that whatever contractual penalties they would have had to pay out for occasional downtimes would be far less than the initial and ongoing development time required to implement a far more complex solution, not to mention the additional hardware/cloud costs.
replies(1): >>thayne+oU2
12. mike_h+8x[view] [source] [discussion] 2022-09-10 18:19:28
>>pjmlp+Og
Yeah. Never used Oracle Forms but did use Delphi a lot. Borland never tried to solve distribution any better than Microsoft did. These firms were born in the era when "shipping" was meant literally and showed no interest in the problem of how to change software more than once every few years. Then people realized you could iterate a web app every day if you wanted to, that the way web apps worked gave you total insight into what users were actually doing, that you could use scripting languages better than VB, and more. Businesses wanted the agility, devs wanted to use UNIX and Perl instead of VB/Delphi, and the web was off to the races.

There were other issues of course, it wasn't just about distribution. Too bad so many downsides came along with the upsides. The real goal for OS research should be IMHO to find ways to combine what people like about web dev with what people like about desktop/mobile dev. All the action is above the POSIX layer.

replies(2): >>mwcamp+GC >>mwcamp+bE
13. mwcamp+zB[view] [source] [discussion] 2022-09-10 18:46:57
>>mike_h+da
Interesting approach.

Web applications can come close to directly accessing the database by using GraphQL with something like Hasura or PostGraphile. PostGraphile even uses Postgres's row-level security. A colleague and I once did a project using Hasura with a SPA-style JavaScript front-end and a separate back-end service driven by Hasura webhooks for doing the actual computation, and we ended up being unhappy with that approach. Some of our problems were related to the SPA architecture, but some were related to our use of GraphQL and Hasura.

We ended up starting over with a typical server-rendered web application, where the server itself accesses the database and communicates with the computation-heavy back-end service over gRPC, using a minimum of client-side JavaScript. I remain happy with that architecture, though I continue to explore different ways of integrating modest amounts of client-side JavaScript for interactivity and real-time updates. And to bring it back to the topic of my previous comment, if we assume that there has to be a server rendering HTML anyway, then I think it often makes sense to reduce complexity by bringing the database into that server process. I haven't yet tried that in production, though.

I think my preference is to use HTTP primarily as it was originally intended, for fetching HTML, as well as for form submissions. For finer-grained interactivity, I think it's better to use WebSockets as opposed to REST-ish requests and responses. I'm not dogmatic on that, though.

On web apps versus packaged desktop apps, I'm still inclined to go with web whenever feasible, and only develop a packaged desktop app if a web app just won't work. Being able to use an application without an installation step is really powerful for pre-sales demos or trials, for onboarding new users, and for applications that may only be used occasionally by any given user. Even for an application that you use all the time, a web app can be fine, as the popularity of Google Docs demonstrates. For example, if you just want to get the browser chrome out of the way, desktop browsers support "installing" a web app as an OS-level app with no browser chrome. IMO, Hydraulic's Eton demo app could just as well be a web app.

I look forward to your blog post, though even your preliminary HN comment offers a lot to think about.

replies(1): >>mike_h+5Q
14. mwcamp+GC[view] [source] [discussion] 2022-09-10 18:53:32
>>mike_h+8x
> the way they worked gave you total insight into what users were actually doing

How do you suggest achieving this in desktop apps? Some kind of log streaming?

replies(1): >>mike_h+jO
15. mwcamp+bE[view] [source] [discussion] 2022-09-10 19:01:57
>>mike_h+8x
> devs wanted to use UNIX and Perl instead of VB/Delphi

What do you think drove this? Presumably plenty of people in the dark mass of 9-to-5 devs were happy with VB/Delphi. Jonathan Edwards has written [1] that VB came from "a more civilized age. Before the dark times… before the web." Did nerdy devs like me, with our adolescent anti-Microsoft attitude (speaking for myself anyway; I was born in 1980), ruin everything?

[1]: https://alarmingdevelopment.org/?p=865

replies(2): >>mike_h+3O >>AtlasB+dP
16. whartu+NN[view] [source] [discussion] 2022-09-10 20:06:00
>>mike_h+da
I only have a couple of points regarding this.

First, simply, I don't know anyone that puts their DB connections "on the internet". That live, raw, database "SQL" socket to be poked, prodded, hammered, and cajoled by complete strangers (or their automatronic minions).

Second is DB latency. Sending "coarse"-grained service requests across the planet is one thing; the potential volume of random SQL commands that folks issue is quite another.

Mind, much of that can be mitigated if you build a stored procedure service layer. But classic "2 tier" "client/server" work didn't do that exclusively; it just threw out SQL willy-nilly as the need dictated.

As old school as I am, even I tend to shy away from the "monolithic DB". You think your app was a monolith before, wait until it's all baked into Oracle. I've always found the DB to be a "bad citizen" when it comes to things like versioning, source code control, etc.

Even if I were doing a desktop app, I still think I would prefer a middle tier ("application server") managing service endpoints than cram it all into the DB, especially today.

replies(2): >>mike_h+zS >>sicp-e+nip
17. mike_h+3O[view] [source] [discussion] 2022-09-10 20:08:54
>>mwcamp+bE
The language and libraries situation on Windows wasn't great during this time.

Delphi was good but a compiled language with manual memory management. It was very easy to write code that crashed, which would nuke the user's state leaving little evidence of what happened. It also had a lot of legacy quirks, like not allowing circular dependencies between compilation units, and networking support was poor (IIRC TCP, HTTP and other classes required you to buy a third-party library). The VCL was a wrapper around Win32, which had some great strengths but also really frustrating weaknesses, e.g. extremely poor/non-existent layout management, poor support for typography and no support for styling or branding. There were many positive aspects of course.

Microsoft gave you VB or C++, both with Win32 again. The C++ developer experience was horrid. VB was at least a scripting language with garbage collection, but, it was also constrained by the fact that B stood for "Beginners" so Microsoft were very reluctant to fix any of its legacy or add more powerful features.

Compared to that situation, scripting languages and especially Perl had some massive advantages:

1. Ran on UNIX/Big Iron which is where all the best hardware and databases could be found. Lots of devs liked UNIX because it was CLI and developer oriented.

2. Unashamedly designed for experts with tons of powerful features, quality of life stuff like integrated regex, garbage collection, proper stack traces, error logs you could view via telnet within seconds etc.

3. CPAN provided an ever growing repository of open source libraries, instantly accessible, for free! On Windows there were very few libraries, they were mostly quite expensive and closed source, no easy way to discover them (pre-Google) and only C++/Delphi devs could write them. VB was sort of a consumer-only language. Open source culture started with RMS at the MIT AI lab and so was very UNIX centric for a long time. Arguably it still is.

Really, it's hard to overstate how revolutionary proper garbage collection + CPAN was. GC is a force multiplier and CPAN is the granddaddy of all the open source library repositories we take for granted today. Imagine how unproductive you'd be without them.

The big downside was that Perl had no UI libraries and didn't really run on Windows. So how do you use it to write apps for normal people? Then Netscape started adding interactivity features to the web and it was all totally text based. Text was Perl's forte! Add the <form> tag, CGI, HTTP and now you're cooking with gas. Bye bye hateful 0xC00005 Access Violation errors and useless bug reports like "I clicked a button and the app disappeared".

The web was a huge step back for users, who went from having pretty sophisticated GUIs with fast table views, menus, shortcut keys, context menus, Office integration, working copy/paste, instant UI response etc to ... well, the web. But often users will suffer through that if it makes their devs more productive because all they really care about are features and competitive advantage. The web was crude but it let people escape the Windows ecosystem to one with open source + GC + pro languages + big iron UNIX.

18. mike_h+jO[view] [source] [discussion] 2022-09-10 20:10:37
>>mwcamp+GC
Most RDBMS have auditing features these days. Unless there's a lot of complex logic on the client, that's sufficient. Otherwise yes, there are plenty of libraries and services for collecting and analyzing logs these days. I've looked at streaming Java Flight Recorder logs too, but it's probably overkill.
19. AtlasB+dP[view] [source] [discussion] 2022-09-10 20:18:19
>>mwcamp+bE
"a more civilized age. Before the dark times… before the web."

There is revisionist history, and then there is that statement. That statement ... is almost offensive.

... Why did the "golden age" end? Because Microsoft sucked in so so so so so so so many ways. That statement is a bald faced lie, a whitewashing attempt to forgive Microsoft from inflicting millions of man years in damage to the IT industry over three decades of anticompetitive practices to keep their substandard software in a position of market dominance.

But anyone beyond the MCSE factory programmer (aka NOT the thought leaders of the industry), aside from those profiting handsomely from the Microsoft evil empire, did not like Microsoft.

In the days of DOS, you had DOS or UNIX. Which was better?

In the days of Windows 3.1, and even pretty much in the Windows 95 era, you didn't have preemptive multitasking (something non-Windows systems had had for 20 years at that point). It crashed CONSTANTLY, had no security, and required restarts for practically anything that was installed.

Meanwhile the UNIX people would brag about machines not being rebooted for years. This was before the constant patch cycle was a thing.

Microsoft's apis were slapdash and disorganized, and frequently went out of favor.

During this time Microsoft would constantly crush and screw over competitors and engage in constant anticompetitive behavior. If you didn't suck on the teat of Microsoft, your platform was under unrelenting assault, not by pure technical achievement, but by the full smorgasbord of corporate dirty tricks: FUD, bribery/kickbacks, lobbying, lies, secret APIs, golf schmoozing with nontechnical CIOs to close deals, no or undermined standards, etc.

The graveyard is long: Sybase, Lotus 1-2-3, Netscape, Novell.

The Microsoft times were a time of being stuck with one option: an OS that crashed constantly and was utterly porous to attackers. A browser with incompatible web APIs and a disbanded developer team (IE 6) that got corporate-mandated and was a thorn in the side of the entire IT stack for two decades. Databases stolen from companies (Sybase signed SUCH a bad deal, it astonishes me to this day) running on platforms that can't stay up. Office software with inaccessible file formats and byzantine, closed programmatic APIs for accessing it. A substandard desktop UI.

If you used Microsoft software with an ounce of historical knowledge or awareness, you could see the prison. You had no practical choices. All the executives in your company were bought and paid for. Microsoft had sales forces that tracked non-Microsoft systems and targeted them within companies by any means necessary. Every new piece of software written at Microsoft had to pay the "sociopath management tax" and go through extensive reviews on how it could be used to further or maintain Microsoft's empire and control.

Their software was deliberately dumped in beta form onto the market to crowd out the competitors.

None of this is an "adolescent" attitude. I'll repeat myself: MILLIONS OF MAN HOURS OF DAMAGE. You know, probably billions. Decades x tens of millions of workers.

This isn't just IT programmer frustration. This is bad applications forced on non-programmer users. This is better companies, better software, better IT industry denied proper funding and profits. Instead, Microsoft took trillions of dollars in revenues from them. This is undermining a free market, free ideas, and freedom for Microsoft's profit.

replies(3): >>mike_h+oR >>mek680+IU >>pjmlp+TA1
20. mike_h+5Q[view] [source] [discussion] 2022-09-10 20:26:04
>>mwcamp+zB
The Eton Notes app is just a way to show what the download/install UX looks like when combined with a Compose app. It's mostly just a mockup with no real functionality.

Yes, for transient apps that are only used occasionally or where the user isn't committed e.g. social networks, the combination of sandboxing + no integration/unintegration steps, automatic deletion from the cache etc is really useful. Of course there's no rule that says only web browsers can supply these features. Other kinds of browser-like thing can do so too.

It's also worth thinking a bit outside the box. Although we claim web apps don't have installation steps, that's not really true most of the time. The lack of any explicit integration step ("install") means the browser doesn't know if the user values anything they do on the site. So you have to provide data persistence, and that in turn means you need a signup / account creation flow. It also means you're on the hook to store, replicate and back up any data any user ever creates, even if they only used it once and then never return.

Well, a lot of users really hate creating yet another account, especially if they aren't committed yet. It's tedious, they think you'll spam them with marketing emails and they're probably right, plus they don't want to make another password. Or they make one, abandon for a year, then can't remember how to log in.

You might think that's so fundamental it can't be any other way, but it's really just a side effect of how browsers have evolved. Think about how you might do demos outside the browser. You could just have a trial mode where the app spins up a local in-process RDBMS like H2 that writes the data into the app's private data area (on Windows) or home directory on macOS/Linux. No accounts necessary - just one or two clicks to download+trigger app install, and you're done. If/when the user decides to graduate, then they create an account and the app uploads the data from the local DB to the real remote DB. If they abandon it and don't care, it's not your problem, it costs you nothing. If they run low on disk space the OS will suggest they delete old unused apps at that point and you'll get tossed along with the rest.

Mostly though, this is about making developers more productive. If the primary determinant of your success is feature throughput and not shaving a few seconds off your onboarding, e.g. you're making specialised apps, internal apps, enterprise software, then optimizing for better dev environments can make sense. Installation just isn't that bad.

21. mike_h+oR[view] [source] [discussion] 2022-09-10 20:38:29
>>AtlasB+dP
Both stances are too extreme. Yes, the web didn't bring "dark times", it was adopted for sound reasons. But your view of Windows vs UNIX is equally far out. Until the web got good Windows vs UNIX was no contest for anything meant for ordinary users. To the extent MS was anti-competitive it was hurting Netscape and Apple, not UNIX:

• Until Linux, UNIX didn't even run on PCs at all, only expensive proprietary hardware. Windows ran on cheap machines made by a competitive market. Businesses wanted PCs for lots of reasons (e.g. Office).

• UNIX wasn't a single OS, it was many forks that were not compatible. You couldn't just write an app for UNIX, put it on CDs and sell it.

• Anything GUI related was terrible compared to Win32. Motif was the standard but cave man tech compared to what you could do on the client with Windows.

• UNIX was wildly more expensive.

• PCs had a thriving gaming culture. It meant people used the same tech at home as at work, creating a self-training workforce. UNIX vendors didn't care about games, they were too 'serious' for that. Their loss.

• Windows 95 did do pre-emptive multi-tasking, by the way. Only Win16 apps that hadn't been ported were cooperatively multi-tasked. Those apps didn't stick around very long because porting to Win32 was easy and Windows 95 was an absolute monster success. People queued up at midnight around the blocks of PC stores to buy it, it was so popular.

• Windows apps crashed a lot compared to UNIX apps because the average Windows machine ran way more apps than the average UNIX machine, ran on way lower quality hardware that was often flaky, had way more diversity of peripherals and configurations, and apps were deployed in places where it was hard to get crash logs (no networks).

• Windows machines didn't have uptimes of years because nobody cared. They were workstations. You turned them off at the end of the day when you went home because otherwise they'd burn their CRTs in and their lifetime was reduced. The culture of leaving non-server machines on all the time and never rebooting them only began once ACPI and other power management tech started to become common (something non-Apple UNIX struggles with to this day). And once it was useful to do so, Microsoft had a version of Windows that could have uptimes of months or years, no big deal.

replies(1): >>nevera+Le1
22. tekacs+AR[view] [source] [discussion] 2022-09-10 20:40:00
>>oefrha+cq
For folks' context, the new tool being discussed in the thread the parent mentions is litefs [0]. You can also look at rqlite [1] and dqlite [2]; they all provide different trade-offs (e.g. rqlite is 'more strongly consistent' than litefs).

[0]: https://github.com/superfly/litefs

[1]: https://github.com/rqlite/rqlite

[2]: https://github.com/canonical/dqlite

23. mike_h+zS[view] [source] [discussion] 2022-09-10 20:49:15
>>whartu+NN
Shodan knew of at least 600,000 PostgreSQL instances listening on the open internet when I last looked. Presumably quite a few are mistakes, of course. But people do it and the sky doesn't fall. Same for SSH or many other types of server. Of course the web ecosystem has 30 years of accumulated work, so yes, you'd be missing stuff like Cloudflare, reCAPTCHA etc. Better for more controlled contexts than something like HN.

Latency is easy to screw up whether you do web apps or direct SQL connections. You have to be conscious of what a request costs, and you can easily batch SQL queries. Yes, you have to watch out for frameworks that spam the DB but those are bad news anyway, and of course there are lots of web frameworks that generate inefficient code. Not sure it's so different.

Your app will have to deal with DB versioning whether it's a web app or not. Tools like Flyway help a lot with linking your DB to version control and CI.
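
For example, a Flyway migration is just a versioned SQL file checked into the repo, named per Flyway's V<n>__<description>.sql convention (the table here is made up):

    -- V7__add_audit_log.sql, applied by `flyway migrate` in CI
    CREATE TABLE audit_log (
        id       bigserial PRIMARY KEY,
        at       timestamptz NOT NULL DEFAULT now(),
        username text NOT NULL DEFAULT current_user,
        action   text NOT NULL
    );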

Nonetheless, I totally understand where you're coming from. Thanks for the thoughts.

replies(1): >>coldte+3V
24. mek680+IU[view] [source] [discussion] 2022-09-10 21:10:55
>>AtlasB+dP
I agree with you about the massive damage caused by Microsoft, but ... it was there. In the DOS times, there was no choice between DOS and Unix on PCs -- there was effectively just DOS. Aside from niche things like Coherent (IIRC), Unix was only available on workstations and up, too expensive for small businesses and consumers.

Also, VMS and other OSes were the ones that ran for years without rebooting. Unix at the time was not so stable. Before Tcl, John Ousterhout wrote a log-structured file system for Unix because their Unix systems crashed often enough, and took so long to boot, that possible data loss with a fast boot-up was deemed better than the lengthier downtimes with the existing file system.

So the PC market went with Microsoft and its encompassing environment, much to everyone's detriment. Fortunately, we've all moved on to the web and JavaScript and everything is now sunshine and roses. :-)

25. coldte+3V[view] [source] [discussion] 2022-09-10 21:15:02
>>mike_h+zS
> Shodan knew of at least 600,000 PostgreSQL instances listening on the open internet when I last looked. Presumably quite a few are mistakes, of course.

A few? I'd say most are accidental and the rest are just bad ideas...

>But people do it and the sky doesn't fall.

Well, the same is true for playing Russian roulette too. Most of the time you're winning!

replies(1): >>mike_h+e11
26. mike_h+e11[view] [source] [discussion] 2022-09-10 22:18:36
>>coldte+3V
We don't know either way, but a standard Postgres install doesn't let remote connections do much. You still have to authenticate before anything is allowed. It's not much different to sshd in this regard. A typical web server is far more promiscuous, with a massive surface area exposed to unauthenticated connections. There have been way more disasters from buggy web frameworks/apps that get systematically popped by crawlers than from people running an RDBMS.
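
You can also tighten the defaults further, so even an authenticated low-privilege role sees almost nothing (illustrative database/role names):

    -- By default any role may connect to a database and use the public
    -- schema; revoke both, then grant back only to the app's group role.
    REVOKE CONNECT ON DATABASE app FROM PUBLIC;
    REVOKE ALL ON SCHEMA public FROM PUBLIC;
    GRANT CONNECT ON DATABASE app TO app_users;
    GRANT USAGE ON SCHEMA public TO app_users;
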
27. agumon+D31[view] [source] [discussion] 2022-09-10 22:45:15
>>mike_h+da
What about the UX side of things? I love everything you mention about perf, but I never ran into a single '90s-era app that had interesting ergonomics. The web can be shitty or worse, but there are some valuable UX aspects now.
replies(1): >>pjmlp+uB1
28. jayd16+7b1[view] [source] 2022-09-11 00:08:00
>>mwcamp+(OP)
I just don't see it. If it's an internal tool that has no scale at all, fine. No SLA, no problem. Good enough is good enough in a lot of cases.

But what about global customers? Most of the planet just eats the latency? What about single node failure? You usually need to scale past n=1 for a public facing service. It's not just about Google scale.

replies(1): >>hiptob+hg2
29. nevera+Le1[view] [source] [discussion] 2022-09-11 00:48:21
>>mike_h+oR
"Until Linux, UNIX didn't even run on PCs at all, only expensive proprietary hardware"

Not true. There were Xenix, SCO, and Coherent, as three examples off the top of my head.

replies(3): >>icedch+Mi1 >>pjmlp+XA1 >>mike_h+ZT1
30. icedch+Mi1[view] [source] [discussion] 2022-09-11 01:37:34
>>nevera+Le1
Yes! I ran Coherent 4.0 on a 386SX laptop when I was in high school (before moving to Linux.) Coherent had incredible documentation, something that is very rare today. I still remember that book with the shell on it, and learned a ton about systems administration and POSIX programming from it.

Here it is: https://archive.org/details/CoherentMan

31. chriss+St1[view] [source] [discussion] 2022-09-11 04:19:12
>>mccull+qs
I’d love to hear more about how you solved the synchronization process, if you’d be willing to share. How do you handle conflicting changes, like local and remote both changing the same field, or one deleting a resource while another modified it?

I’m trying to understand more real world examples of syncing offline with online.

Thanks!

replies(1): >>mccull+eY2
32. pjmlp+TA1[view] [source] [discussion] 2022-09-11 06:07:59
>>AtlasB+dP
The only thing the anti-Microsoft speech always misses, like the anti-FAANG speech nowadays, is that the competition also has itself to blame.

Bad management, bad products, not willing to reduce prices.

You see it nowadays on the Linux desktop: instead of uniting, everyone goes off to do their own little thing.

No wonder it doesn't work out.

Linux won in the server room thanks to being a cheap UNIX clone, and now, with cloud computing and managed language runtimes, it hardly matters whether it is there or they are running on top of a type-1 hypervisor.

33. pjmlp+XA1[view] [source] [discussion] 2022-09-11 06:09:16
>>nevera+Le1
It did, but for their prices and hardware requirements I would rather use OS/2 instead.
replies(1): >>icedch+qu2
34. pjmlp+uB1[view] [source] [discussion] 2022-09-11 06:16:01
>>agumon+D31
You mean the UX where everything is flat and we don't know where to click any longer?
replies(1): >>agumon+wE1
35. agumon+wE1[view] [source] [discussion] 2022-09-11 07:03:43
>>pjmlp+uB1
:) please

I mean smooth async, fuzzy matching and reactive sub trees. I'm not a modern fanatic, I actually enjoy the good old as400 or win311 model a lot, and use old emacs daily. But there was a big chunk of the 90s where GUIs were really really subpar.

replies(1): >>mike_h+GW1
36. kubanc+ZP1[view] [source] [discussion] 2022-09-11 09:25:38
>>mike_h+da
Since I've seen a similar thing in the 90s, I have a practical point to make.

If a two-tier app sends out emails, PL/SQL or a DB plugin does it. Now every ops task for sending emails involves the DB, and by extension your data is at stake. Launching a new parallel process, rolling out a new version, spreading to a different location, killing a frozen process, or measuring how much RAM a new feature has eaten: these are all DB tasks, despite the fact that the task was just for a send-email feature.

Anything happening server-side (i.e. not on a user's device) needs to pass through the DBA middlepersons.

To put it back on its feet, the architecture might be: the DB is one of the services. A frontend can talk to a database, and the two can work out the protocol, the authn/authz, the load balancing. They don't need any CRUD "backend" that is not really a "back" "end" but just a glorified boilerplate SQL-to-JSON converter.

The tradeoff is that you lose a lot of implicit trust. An email service cannot trust the frontend with the business rules. If a user is allowed to send only to a set of recipients, it's the email service that needs to query that set from the DB.

replies(1): >>mike_h+3V1
37. mike_h+ZT1[view] [source] [discussion] 2022-09-11 10:19:06
>>nevera+Le1
You're right, I'd forgotten about Xenix. Never heard of Coherent. SCO I thought was big iron but I'll take your word for it that I'm wrong about that. There sure were a lot of UNIX vendors back then!
38. mike_h+3V1[view] [source] [discussion] 2022-09-11 10:32:24
>>kubanc+ZP1
Yes, you can go for a mixed approach. As you observe, it might not change that much because most of the issues aren't dependent on how many tiers you have. If you have middlemen between you and prod they're probably there anyway, regardless of architecture. And something will have to query the DB to find out who the user can email. Whether that's a DB plugin written in Python, a web server or whether it's an email microservice that connects to the DB over the network, it's going to boil down to how much you care about service isolation vs distributed systems complexity.

If you wanted isolation of an email service in this design, you'd use the DB as a task queue. The app triggers a procedure (written in SQL, Python, Java or whatever) which verifies the business logic and then does an insert to a tasks table. The email microservice wakes up and processes the queued tasks. That's a pretty common design already.
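
A minimal PostgreSQL sketch of that queue (names invented, including the allowed_recipients rule table; SKIP LOCKED is what lets several workers poll safely):

    CREATE TABLE email_tasks (
        id        bigserial PRIMARY KEY,
        recipient text NOT NULL,
        body      text NOT NULL,
        done      boolean NOT NULL DEFAULT false
    );

    -- Called by the client app; verifies the business rule, then enqueues.
    CREATE FUNCTION send_email(p_to text, p_body text) RETURNS void
    LANGUAGE sql SECURITY DEFINER AS $$
        INSERT INTO email_tasks (recipient, body)
        SELECT p_to, p_body
          FROM allowed_recipients
         WHERE owner = current_user AND address = p_to;
    $$;

    -- The email service claims one pending task at a time; concurrent
    -- workers skip rows that are already locked instead of blocking.
    UPDATE email_tasks
       SET done = true
     WHERE id = (SELECT id FROM email_tasks
                  WHERE NOT done
                  ORDER BY id
                  LIMIT 1
                  FOR UPDATE SKIP LOCKED)
    RETURNING recipient, body;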

replies(1): >>kubanc+ws4
39. mike_h+GW1[view] [source] [discussion] 2022-09-11 10:54:27
>>agumon+wE1
Could you elaborate? What do "smooth async" and "reactive subtrees" mean in the context of UX? That sounds more like developer experience than user experience.

Generally if you can do it on mobile you can do it elsewhere, right? If you want something like ReactJS and coroutines/async/await, look at Jetpack Compose. It's inspired by ReactJS but for Android/Desktop: https://developer.android.com/jetpack/compose

You don't need any particular UI toolkit though. Many years ago I did a tutorial on "functional programming in Kotlin":

https://www.youtube.com/watch?v=AhA-Q7MOre0

It uses JavaFX with a library called ReactFX that adds functional utilities on top of the UI framework. It shows how to do async fuzzy matching of user input against a large set of ngrams. I guess that's in the region of what you mean too.

replies(1): >>agumon+R63
40. hiptob+3g2[view] [source] [discussion] 2022-09-11 14:13:21
>>mattar+le
So many examples of this across Google it's not even funny.
41. hiptob+hg2[view] [source] [discussion] 2022-09-11 14:15:29
>>jayd16+7b1
Depending on the product, the latency difference might not even be visible compared to the everyday latency of the backend itself.

If your ui maintains state via some kind of async layer then the latency might not be observable at all.

42. icedch+qu2[view] [source] [discussion] 2022-09-11 15:51:34
>>pjmlp+XA1
Coherent was relatively cheap if you wanted a PC unix clone. $100 in 1992: https://techmonitor.ai/technology/coherent_unixalike_for_int...
replies(1): >>pjmlp+OZ3
43. thayne+oU2[view] [source] [discussion] 2022-09-11 18:59:13
>>macint+4w
I would consider it dishonest to promise your customers a certain uptime knowing you likely won't meet it. And some customers, particularly more lucrative ones, want to see historical uptime and/or evidence that you have a resilient architecture.

That is not at all to say that it is a deal breaker for everyone, but it certainly will be for some companies.

44. mccull+eY2[view] [source] [discussion] 2022-09-11 19:30:14
>>chriss+St1
I regret to report that I am not doing anything especially clever (e.g., CRDT). In some cases, I am doing things that are expensive in terms of storage, bandwidth, or local computation to facilitate synchronization.

Basically, my schema design prohibits use of UPDATE and requires that every row have a timestamp. The clients maintain a synchronization log to ensure they have fetched every available row. They keep track of which rows have not yet been sent up to the server.

This means that finding the current state of anything that can change is a "SELECT column FROM table ORDER BY timestamp DESC LIMIT 1" to see the latest row, and that state is always changed with an INSERT instead of an UPDATE.

In some cases, I am storing a delta in a row instead of a complete state representation. This means that some views have to replay the changes to show the current state. I cache the result of these.

I do some general high level caching on the client side to make all of this as fast as possible. I have watchdogs set on the local GUI to warn me when latency of the GUI event loop is over 200 milliseconds. I use these warnings to focus effort on caching and other optimizations.
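
In sketch form, on the SQLite side (simplified; the real tables carry more bookkeeping than this):

    -- Append-only: a state change is a new row, never an UPDATE.
    CREATE TABLE vessel_status (
        vessel_id INTEGER NOT NULL,
        status    TEXT    NOT NULL,
        recorded  TEXT    NOT NULL,           -- ISO-8601 timestamp
        synced    INTEGER NOT NULL DEFAULT 0  -- 0 = not yet pushed upstream
    );

    -- The current state is simply the newest row.
    SELECT status
      FROM vessel_status
     WHERE vessel_id = ?
     ORDER BY recorded DESC
     LIMIT 1;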

45. agumon+R63[view] [source] [discussion] 2022-09-11 20:37:07
>>mike_h+GW1
Smooth async is basically mastered Ajax with lean client/server events and nice loading widgets.

Nothing I said was web-only, but the web has focused on it and made it basic lingo; I admit not being aware of the JavaFX world's evolution. When I used it (circa 2010) it was crude.

Reactive subtrees ~= two-way data binding over DOM-like trees; I don't recall running into such UI rendering models before.

replies(1): >>mike_h+yd3
46. mike_h+yd3[view] [source] [discussion] 2022-09-11 21:29:49
>>agumon+R63
I think if you compare like with like it probably wasn't so bad. Web stuff was a lot cruder in 2010 too. JavaFX has two-way data binding into the scene graph (a.k.a. the DOM) and has had it from the start:

https://openjfx.io/javadoc/18/javafx.base/javafx/beans/bindi...

47. pjmlp+OZ3[view] [source] [discussion] 2022-09-12 05:39:00
>>icedch+qu2
That is the price for the software + the hardware to actually run it at an acceptable speed.

And all things being equal you could still get OS/2 for as low as $49:

> The suggested introductory price of OS/2 2.0 is $139. However, the cost falls to $99 for users upgrading from DOS, which includes just about anyone, and to $49 for users who are willing to turn in their Microsoft version of Windows.

https://www.nytimes.com/1992/04/21/science/personal-computer...

replies(1): >>icedch+lP4
48. kubanc+ws4[view] [source] [discussion] 2022-09-12 10:45:31
>>mike_h+3V1
Ah, so you are saying to just bundle the DB with business logic and let it call other components (if any exist).

I thought about the whole idea over the weekend a bit and I'd say it is worth a try.

If you say you have the distribution problem figured out, that makes it viable; it was the biggest obstacle in the '90s. What I'd expect that to mean is that to roll out a significant DB change, the frontend can self-update without losing an hour's worth of users' unsaved work.

Also, I think when selling this you don't need to avoid the Delphi nostalgia that much. Everyone old who sees "remove the middle tier" will instantly go into the mental mode of "uh-oh, those who do not learn from history are bound to repeat it". You are seeing a lot of it around this subthread; if you acknowledge upfront that you know you are building on that past experience, it adds credibility.

replies(1): >>mike_h+vsb
49. icedch+lP4[view] [source] [discussion] 2022-09-12 13:15:14
>>pjmlp+OZ3
True, OS/2 was much cheaper. Coherent was relatively cheap for a Unix clone, which is basically what I was getting at. SCO Xenix / Unix was in the $500+ range. A C compiler wasn't even included, if I recall.
50. mike_h+vsb[view] [source] [discussion] 2022-09-14 10:09:04
>>kubanc+ws4
Yes, exactly. Glad to hear that! Thanks for the words of advice, it's helpful.

People always have different thresholds for what "solved" means. Today Conveyor gives you a Chrome-style update experience on Windows where the app updates in the background even if it's being used. The user isn't disrupted. Ditto on macOS if the user states they want silent background updates at first update (we'll probably tweak this so the user isn't asked and must opt-out explicitly). The user won't lose any unsaved work or be surprised by sudden changes.

So to make a DB change, you need to do it in a backwards compatible way until the clients have had a chance to fully update. Probably that means being compatible for a few days, if you have users who aren't always online. This is probably solved enough for the general case.

The developer experience in that mode is exactly the same as compiling Markdown to HTML using Jekyll or Hugo.

The next step is to go further, so code changes can take effect much faster than a background update cycle. It requires the client to have some notion of a page, screen, activity etc - some point between interactions where you can make changes without disrupting the user. And it requires the client to be informed by the server if code has changed, even whilst the user is running the app. This takes more work, but it's on the roadmap. Mostly we think the async model is OK. You have to change your schemas in backwards compatible ways even in the web case to avoid site outages or different web servers getting confused during a rolling upgrade.

51. sicp-e+Jhp[view] [source] [discussion] 2022-09-18 15:23:04
>>mike_h+da
Thanks for this detailed comment. I would like to add that I have seen solutions which facilitate this architecture but still have your application be on the web. One example is PostgREST, which generates a REST API from your database. Using a separate schema and views you can tightly control what gets exposed and how (rough sketch below), but all security and logic still happens only in the database. Do you have any opinions on similar solutions?
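
To illustrate (names invented; PostgREST is pointed at the api schema and only turns its contents into endpoints):

    CREATE SCHEMA api;

    -- Only this view, not the underlying table, becomes a REST resource.
    CREATE VIEW api.orders AS
        SELECT id, total
          FROM internal.orders;

    GRANT USAGE ON SCHEMA api TO web_anon;
    GRANT SELECT ON api.orders TO web_anon;
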
52. sicp-e+nip[view] [source] [discussion] 2022-09-18 15:27:13
>>whartu+NN
What specifically are you concerned about? An HTTP connection to NGINX is just a TCP connection anybody can open, a database connection to postgres is the same. Both applications need to be extremely careful with those connections to protect against attacks, but database security is already a huge priority, so they both have invested heavily in that.

One concern I can think of is NGINX might have better DoS protection. What else do you have in mind?

53. nsm+oms[view] [source] 2022-09-19 16:00:22
>>mwcamp+(OP)
A lot of these are FAANG-scale problems, primarily when it comes to server/backend infrastructure.

For a lot of "embedded" use cases like robotics, where you are trying to squeeze out maximum performance from a single machine (the hardware you have on the robot), POSIX is and will remain a hindrance.

[go to top]