zlacker

1. sologo+(OP) 2021-06-12 09:47:47
You’d probably already have indices by user then. There are also other ways to store the data, for example not as a time series of user actions, but as another data structure. It just depends on your requirements and data model.

The gist of the original article is asking whether you could reduce tech stack complexity and use a single set of technologies for more use cases, allowing you to understand the tools you use better.

Also, note that a traditional database may or may not be the right tool for the job - there are different storage/behavior needs for writing lots of data, processing lots of data and serving the results.

replies(1): >>squigg+ld
2. squigg+ld 2021-06-12 12:25:02
>>sologo+(OP)
> You’d probably already have indices by user then.

Do you mean "you'd probably already have indices by user then, so you won't be able to take advantage of quick drops"?
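
For context, by "quick drops" I mean the usual partitioning trick: range-partition by time so that expiring old data is a DROP rather than a big DELETE. A rough sketch, with made-up table and column names:

    -- declarative range partitioning on the event time (Postgres 11+)
    CREATE TABLE user_actions (
        user_id    bigint      NOT NULL,
        action     text        NOT NULL,
        created_at timestamptz NOT NULL
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE user_actions_2021_06 PARTITION OF user_actions
        FOR VALUES FROM ('2021-06-01') TO ('2021-07-01');

    -- a per-user index still works; it is created on each partition
    CREATE INDEX ON user_actions (user_id, created_at);

    -- expiring a month of data is then a metadata operation, not a scan
    DROP TABLE user_actions_2021_06;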

> There are also other ways to store the data, for example not as a time series of user actions, but as another data structure. It just depends on your requirements and data model.

I suppose I want to be a bit more specific. Till now, if I wanted to deal with this issue, I would just have used a cron job and some indexes and taken on the load. But what is an example of a nice way of dealing with this? You get to make up plausible requirements and data model details, perhaps drawing on particular requirements you had when you faced a similar issue.
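
To make that concrete, the blunt version I have in mind is roughly this (invented names again; assume a plain, unpartitioned user_actions table here):

    -- index so the expiry scan doesn't read the whole table
    CREATE INDEX user_actions_created_at_idx ON user_actions (created_at);

    -- run from cron, in batches to keep locks and WAL churn bounded
    DELETE FROM user_actions
    WHERE ctid IN (
        SELECT ctid
        FROM user_actions
        WHERE created_at < now() - interval '90 days'
        LIMIT 10000
    );

That works, but you pay for it in dead tuples, vacuum work and index maintenance - that's the load I meant.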

> The gist of the original article is asking whether you could reduce tech stack complexity and use a single set of technologies for more use cases, allowing you to understand the tools you use better.

The legitimacy of a question in a free-form conversation doesn't depend on its original inspiration several iterations ago. But even if it did, the question here is exactly about understanding the tools better and whether we can use one tool instead of two, so by your summary it's perfectly on topic.

> Also, note that a traditional database may or may not be the right tool for the job - there are different storage/behavior needs for writing lots of data, processing lots of data and serving the results.

The subquestion here is precisely whether we can get away with just using Postgres for data with mandatory lifespan requirements that vary over its lifetime.
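
By "vary over the lifetime of the data" I mean something like a per-row expiry that can move after the row is written - say a user closes their account and their history now has to be gone within 30 days. A sketch, invented names again:

    -- per-row expiry instead of a single global retention window
    ALTER TABLE user_actions ADD COLUMN expires_at timestamptz;

    -- a retention requirement changes later for one user
    UPDATE user_actions
    SET expires_at = now() + interval '30 days'
    WHERE user_id = 12345;

    -- the expiry job then keys off expires_at rather than created_at
    CREATE INDEX user_actions_expires_at_idx ON user_actions (expires_at);
    DELETE FROM user_actions WHERE expires_at < now();

Time-based partition drops don't obviously line up with that, which is exactly why I'm asking what the nicer structure would be.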

Extra tools come at a cost - that is the presumption of this article and the thread it has spawned.

If Postgres is the tool we already have, we need to assess the cost of doing this in Postgres before we can decide whether or not to pay the cost of other tools. Waving in the general direction of other tools isn't helpful; it probably takes as much work to enumerate candidates as it does to calculate the cost of doing it in Postgres.
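
For what it's worth, the Postgres half of that calculation is at least cheap to start on - something like (invented names again):

    -- see what one round of the expiry delete actually costs, then throw it away
    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
        DELETE FROM user_actions WHERE expires_at < now();
    ROLLBACK;

    -- and keep an eye on the bloat the deletes leave behind
    SELECT relname, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'user_actions';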
