zlacker

[return to "Understanding Kafka with Factorio (2019)"]
1. margin+lu 2023-07-13 16:44:10
>>pul+(OP)
> Vertical scaling — a bigger, exponentially more expensive server

In practice this isn't true at all. Vertical scaling is typically a sublinear cost increase (up to a point, but that point is a ridiculous beast of a machine), since you're usually upgrading just the CPU, just the RAM, or just the storage, not all of them at once.

There are instances where you can get nearly 10x the machine for 2x the cost.

2. teawre+gC 2023-07-13 17:14:48
>>margin+lu
For small consumer products, sure, but we're talking about the extreme end of performance and physical capability. Sure, you can get a 2 GHz CPU for ~2x the price of a 200 MHz CPU, but how much are you going to pay for a 6.0 GHz CPU vs a 5.0 GHz one? 6.1 GHz vs 6.0 GHz?
3. margin+CM 2023-07-13 17:52:43
>>teawre+gC
You can go from an 8C/16T Epyc 7xxx series CPU to a 32C/64T one and not even double the cost.
4. fluori+uR 2023-07-13 18:08:32
>>margin+CM
That's more like horizontal scaling, though. You get more throughput (transactions per second) but not lower latency (seconds per transaction). That said, it may be more cost-effective to have a single 32-core machine than two 16-core machines.
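
A rough sketch of that distinction (toy Python with made-up numbers, not anything from the article): each simulated transaction needs a fixed 0.1 s of work that extra workers cannot shorten, so adding workers raises transactions per second while seconds per transaction stays put.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def transaction():
        # a fixed 0.1 s of work stands in for the serial cost of one transaction
        start = time.perf_counter()
        time.sleep(0.1)
        return time.perf_counter() - start

    def run(workers, n_tx=32):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            latencies = list(pool.map(lambda _: transaction(), range(n_tx)))
        elapsed = time.perf_counter() - start
        print(f"{workers:2d} workers: {n_tx / elapsed:6.1f} tx/s, "
              f"avg latency {1000 * sum(latencies) / n_tx:5.1f} ms")

    for w in (1, 2, 4, 8):
        run(w)  # throughput grows with workers; per-transaction latency stays ~100 ms
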
5. margin+MV 2023-07-13 18:25:55
>>fluori+uR
I disagree with this definition of horizontal scaling. If you're moving to a bigger computer rather than more computers, then you're scaling vertically and not horizontally.

(and fwiw, wikipedia agrees with this definition: https://en.wikipedia.org/wiki/Scalability#Horizontal_(scale_... )

6. fluori+oZ 2023-07-13 18:40:49
>>margin+MV
Then it sounds like you have a terminology disagreement with FTA, since the article uses the terms the way I do. Vertical scaling means increasing the serial performance of the system, and horizontal scaling means increasing the parallel performance of the system. In this sense, vertical scaling past a certain point does indeed get exponentially more expensive, while horizontal scaling almost always scales linearly in cost, or better.
7. margin+K81 2023-07-13 19:20:24
>>fluori+oZ
What I'm commenting on is this phrasing from the article

> Vertical scaling — a bigger, exponentially more expensive server

> Horizontal scaling — distribute the load over more servers

8. teawre+wo1 2023-07-13 20:36:29
>>margin+K81
Ok, I see where a layperson would get confused on this. In the context of this article, every core is what Wikipedia calls a "node". There is no difference between a single 32C CPU and 4x 8C CPUs except for their ability to share memory faster. Both count as horizontal scaling in the context of this article. You're not going to finish a single workload any faster, but you're going to increase the throughput of finishing multiple workloads in parallel.

The fact that AMD chooses to package the "nodes" together on one die vs multiple doesn't change that.
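
One way to put numbers on "not finishing a single workload any faster" is Amdahl's law (my framing, not something the comment cites): with n nodes and a fraction p of the work that can run in parallel, the speedup is 1 / ((1 - p) + p / n).

    # Hedged sketch: Amdahl's law for n nodes when a fraction p of the work parallelizes.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    print(speedup(0.0, 64))   # 1.0   -> one fully serial workload: 64 cores don't help
    print(speedup(0.95, 64))  # ~15.4 -> mostly parallel work: large but sublinear gain
    print(speedup(1.0, 64))   # 64.0  -> many independent workloads: throughput scales with cores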

9. margin+Yq1 2023-07-13 20:49:35
>>teawre+wo1
The Wikipedia article qualifies what it means by vertical scaling:

> typically involving the addition of CPUs, memory or storage to a single computer.

10. teawre+dL1 2023-07-13 22:31:47
>>margin+Yq1
This is one of those times when I feel like you just didn't read anything I typed. So... I'm just gonna let you be confidently incorrect.
11. margin+gj3 2023-07-14 12:35:23
>>teawre+dL1
I'm reading what you're typing, but I just don't agree with it. It's also contradicted by both the article we're discussing and the Wikipedia article; further, it's an interpretation of vertical scaling that effectively doesn't exist in practice.

The distinction between horizontal and vertical scaling becomes nonsense if we accept your definitions, because literally nobody does that sort of vertical scaling.

12. fluori+JM3 2023-07-14 14:51:18
>>margin+gj3
Wrong. If you do any of these you're scaling vertically, even by that definition:

* Replace the CPU with a faster one, but with the same number of cores. Or simply run the same one at a higher clock rate.

* Add memory, or use faster memory.

* Add storage, or use faster storage.

These are all forms of vertical scaling because they reduce the time it takes to process a single transaction, either by reducing waits or by increasing computation speed.

> It's also contradicted by both the article we're discussing and the wikipedia article

The article agrees with this definition. Transaction latency decreases iff vertical scale increases. Transaction throughput increases with either form of scaling. Without this interpretation, the analogy to conveyor belts makes no sense.
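
A toy model of that claim (my numbers, not from the thread or the article): treat a transaction as a fixed amount of serial work, and extra servers as multiplying only parallel capacity.

    def model(serial_ms, servers):
        latency_ms = serial_ms                      # horizontal scaling can't shrink this
        throughput = servers * 1000.0 / serial_ms   # transactions per second across all servers
        return latency_ms, throughput

    print(model(10, 1))  # baseline:      (10, 100.0)
    print(model(10, 4))  # horizontal x4: (10, 400.0)  same latency, 4x the throughput
    print(model(5, 1))   # vertical x2:   (5, 200.0)   lower latency and more throughput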

13. surema+yh4 2023-07-14 17:05:59
>>fluori+JM3
Think of it this way, instead. Building a multi-belt system is a pain in the ass that complicates the design of your factory. Conveyor belt highways, multiplexers, tunnels, and a bunch of stuff related to the physical routing of your belts suddenly becomes relevant. But you can still increase throughput while keeping a single belt, if your bottleneck is not belt speed but processing speed (in the industrial sense). I can have several factories sharing the same belt, which increases throughput but not latency.

Also, it's worth pointing out that increasing the number of processing units often _does_ decrease latency. In Factorio you need 3 advanced circuits for the chemical science pack. If your science lab can produce 1 science pack every 24 seconds but your pipeline takes 16 seconds to produce one advanced circuit, your whole pipeline is going to have a latency of 48 seconds from start to finish due to being bottlenecked by the advanced circuit pipeline. Doubling the number of processing units in each step of the circuit pipeline will double your throughput and bring your latency down to 24 seconds, as it should be. And if you have room for those extra processing units, you can do that without adding more belts.
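
A quick rate check of the doubling claim, using the same figures as the paragraph above (the game numbers are the commenter's; the arithmetic is just mine):

    import math

    lab_demand = 3 / 24       # advanced circuits consumed per second (3 per 24 s science pack)
    assembler_rate = 1 / 16   # advanced circuits produced per second by one machine

    print(math.ceil(lab_demand / assembler_rate))  # 2 -> doubling the single machine keeps the lab fed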

The idea that serial speed is equivalent to latency breaks down when you consider what your computer's hardware is really doing under the hood, too. Your CPU is constantly doing all manner of things in parallel: prefetching data from memory, reordering instructions and running them in parallel, speculatively executing branches, et cetera. None of these things decrease the fundamental latency of reading a single byte from memory with a cold cache, but it doesn't really matter, because at the end of the day we're measuring some application-specific metric like transaction latency.
