zlacker

[parent] [thread] 4 comments
1. cgh+(OP)[view] [source] 2022-09-10 16:32:29
Desktop apps that query/update the db directly were a thing back in the '90s. They were an example of what we called "client/server". They were swiftly superseded by the web, which sort of hijacked the client/server architecture. As you noted, the basic reason is that desktop app distribution and updating is hard. If your company can beat this problem, then great, because removing the intermediate web layer makes a lot of sense in certain cases (e.g., enterprise deployments).
replies(2): >>mike_h+0a >>mccull+3c
2. mike_h+0a[view] [source] 2022-09-10 17:34:49
>>cgh+(OP)
Yep. OS vendors dropped the ball on distribution, browsers picked up the slack. But browsers also dropped the ball in a lot of ways. E.g. browsers do a lot of useful stuff but none of that is available for servers or CLI apps. There's lots of scope to do better. Also, on the desktop you have POSIX or POSIX-ish features at least :)

BTW, Conveyor is free for open source projects, and currently free for commercial use too. The current versions focus on solving the basics really well. Deployment/update works like deploying a static website generated from Markdown. You can build fully signed (or self-signed) and self-updating packages from cross-platform artifacts, on whatever OS you happen to use, with one command. So you can package up Electron and JVM apps given just inputs like JS or JAR files, from your dev laptop or Linux CI box, and you get packages for every OS. You also get a simple download HTML page that detects the user's OS and CPU.

To do a new release you just re-build and re-upload the site. Clients will start updating immediately. On macOS it uses Sparkle, on Windows the OS will do the updates in the background and Linux users get packages.

It does native apps too but then of course you need to compile the binaries for each OS yourself.

One possibility we're researching is to page code in from the database on demand. That way you only have to push updates to the client occasionally, like when refreshing the core runtimes. For changing business logic the client would use SQL queries to speculatively load code based on what other clients have been requesting. If it works it means you can get rid of minification, bundling, all the other hacks web devs do to reduce requests and round-tripping, whilst keeping the "instant" deployment browsers give you.

3. mccull+3c[view] [source] 2022-09-10 17:49:24
>>cgh+(OP)
I have built a proprietary Swing desktop app that does this; we use it inside the tugboat company I run. The wrinkle is that instances of this app are only intermittently connected to the Internet and our central instance of PostgreSQL. The Swing app uses a local SQLite instance and synchronizes it in the background when a connection is available. The users never experience any latency.

Careful schema design to support synchronization without collisions is the only real difference between this kind of app and CRUD apps that expect to always be able to reach the Internet.
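A minimal sketch of that shape, assuming (the post doesn't specify these details) UUID primary keys so offline clients can insert without colliding, and a `synced` flag for bookkeeping; a second in-memory SQLite database stands in for the central PostgreSQL instance:

```python
import sqlite3
import uuid

# "server" stands in for the central PostgreSQL instance; "local" is the
# on-device SQLite database the app writes to whether or not it's online.
local = sqlite3.connect(":memory:")
server = sqlite3.connect(":memory:")
for db in (local, server):
    db.execute("""CREATE TABLE log (
        id      TEXT PRIMARY KEY,   -- UUID: no collisions across clients
        payload TEXT NOT NULL,
        ts      TEXT NOT NULL)""")
local.execute("ALTER TABLE log ADD COLUMN synced INTEGER DEFAULT 0")

def write_locally(payload, ts):
    # Always succeeds immediately, connected or not.
    local.execute("INSERT INTO log (id, payload, ts) VALUES (?, ?, ?)",
                  (str(uuid.uuid4()), payload, ts))

def sync_up():
    # Called from a background thread whenever a connection is available.
    rows = local.execute(
        "SELECT id, payload, ts FROM log WHERE synced = 0").fetchall()
    server.executemany("INSERT OR IGNORE INTO log VALUES (?, ?, ?)", rows)
    local.executemany("UPDATE log SET synced = 1 WHERE id = ?",
                      [(r[0],) for r in rows])

write_locally("departed berth 3", "2022-09-11T09:15:00Z")
sync_up()
```

UUID keys are one common way to get "no collisions" with many intermittently connected writers; the post's actual key scheme isn't stated.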

replies(1): >>chriss+vd1
4. chriss+vd1[view] [source] [discussion] 2022-09-11 04:19:12
>>mccull+3c
I’d love to hear more about how you solved the synchronization process, if you’d be willing to share. How do you handle conflicting changes, like local and remote both changing the same field, or one deleting a resource while another modified it?

I’m trying to understand more real world examples of syncing offline with online.

Thanks!

replies(1): >>mccull+RH2
5. mccull+RH2[view] [source] [discussion] 2022-09-11 19:30:14
>>chriss+vd1
I regret to report that I am not doing anything especially clever (e.g., CRDT). In some cases, I am doing things that are expensive in terms of storage, bandwidth, or local computation to facilitate synchronization.

Basically, my schema design prohibits use of UPDATE and requires that every row have a timestamp. The clients maintain a synchronization log to ensure they have fetched every available row. They keep track of which rows have not yet been sent up to the server.

This means that finding the current state of anything that can change requires a "SELECT column ORDER BY timestamp DESC LIMIT 1" to see the latest value, and changing state is always an INSERT rather than an UPDATE.

In some cases, I am storing a delta in a row instead of a complete state representation. This means that some views have to replay the changes to show the current state. I cache the result of these.

I do some general high level caching on the client side to make all of this as fast as possible. I have watchdogs set on the local GUI to warn me when latency of the GUI event loop is over 200 milliseconds. I use these warnings to focus effort on caching and other optimizations.
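The watchdog idea reduces to timing each event dispatch against a budget. A Swing version would instrument the EDT; this plain-Python sketch (names and structure are mine) just times a batch of callables against the 200 ms budget from the post:

```python
import time

LATENCY_BUDGET = 0.200  # 200 ms, as in the post

def watched_dispatch(handlers):
    """Run event handlers, collecting a warning for any that blow the budget."""
    warnings = []
    for handler in handlers:
        start = time.monotonic()
        handler()
        elapsed = time.monotonic() - start
        if elapsed > LATENCY_BUDGET:
            warnings.append((handler.__name__, elapsed))
    return warnings

def fast_event():
    pass

def slow_event():
    time.sleep(0.25)  # simulates an uncached query blocking the GUI

print(watched_dispatch([fast_event, slow_event]))  # flags only slow_event
```

Each warning names the offending handler, which is what lets the author direct caching effort at the actual hot spots.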
