zlacker

[parent] [thread] 39 comments
1. gianca+(OP)[view] [source] 2018-05-24 23:38:58
I found it amusing that Twitter was Rails' biggest advertisement. Everyone wanted to use Rails, but Twitter turned into a Franken-app of different stacks just to keep it running.
replies(3): >>phaedr+91 >>scarfa+Zd >>evanwe+Cx
2. phaedr+91[view] [source] 2018-05-24 23:51:53
>>gianca+(OP)
Twitter was Rails' worst advertisement. They used Rails as a scapegoat to hide their bad tech. I still hear things like "Rails can't scale; remember Twitter?"
replies(4): >>joerin+r1 >>jensvd+x7 >>fanpun+ok >>lfxyz+Yr
◧◩
3. joerin+r1[view] [source] [discussion] 2018-05-24 23:55:23
>>phaedr+91
At least 3 large companies where friends of mine work as developers have switched from Rails to PHP over the last 5 years. All told me the same story: "after Twitter, no one wants to work with or touch Rails anymore."
replies(2): >>bdcrav+n2 >>dasil0+2i
◧◩◪
4. bdcrav+n2[view] [source] [discussion] 2018-05-25 00:09:11
>>joerin+r1
Meanwhile all 3 large companies probably host their code on GitHub :-)
replies(1): >>treaha+i5
◧◩◪◨
5. treaha+i5[view] [source] [discussion] 2018-05-25 00:47:47
>>bdcrav+n2
And even if they don’t, many if not most of their dependencies do.
◧◩
6. jensvd+x7[view] [source] [discussion] 2018-05-25 01:25:27
>>phaedr+91
Airbnb is still on Rails... probably the largest Rails app out there for now.
replies(1): >>Jagat+89
◧◩◪
7. Jagat+89[view] [source] [discussion] 2018-05-25 01:47:46
>>jensvd+x7
Except that I'd bet Airbnb's read qps requirement is less than 1% of Twitter's. Write qps would be even smaller.
replies(2): >>MBCook+vb >>myth_d+Vb
◧◩◪◨
8. MBCook+vb[view] [source] [discussion] 2018-05-25 02:22:03
>>Jagat+89
True, but on an absolute scale AirBNB is still very big.
replies(2): >>ksec+st >>segmon+rc1
◧◩◪◨
9. myth_d+Vb[view] [source] [discussion] 2018-05-25 02:28:28
>>Jagat+89
I would imagine Shopify on Black Friday is under heavier load.
10. scarfa+Zd[view] [source] 2018-05-25 03:04:15
>>gianca+(OP)
From 2008:

Scaling is fundamentally about the ability of a system to easily support many servers. So something is scalable if you can easily start with one server and go easily to 100, 1000, or 10,000 servers and get performance improvement commensurate with the increase in resources.

When people talk about languages scaling, this is silly, because it is really the architecture that determines the scalability. One language may be slower than another, but this will not affect the ability of the system to add more servers.

Typically one language could be two or three, or even ten times slower. But all this would mean in a highly scalable system is that you would need two or three or ten times the number of servers to handle a given load. Servers aren't free (just ask Facebook), but a well-capitalized company can certainly afford them.

http://www.businessinsider.com/2008/5/why-can-t-twitter-scal...
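
To make that arithmetic concrete, here's a toy sketch of the argument (all numbers invented for illustration):

    # Toy arithmetic for "a slower language just means more servers".
    peak_rps          = 20_000   # hypothetical peak load
    fast_lang_rps_box = 1_000    # requests/sec one server handles on the "fast" stack
    slowdown_factor   = 10       # the "slow" language is 10x slower

    fast_servers = (peak_rps.to_f / fast_lang_rps_box).ceil                      # => 20
    slow_servers = (peak_rps.to_f / (fast_lang_rps_box / slowdown_factor)).ceil  # => 200

    puts "fast stack: #{fast_servers} servers, slow stack: #{slow_servers} servers"
    # Scalability is whether adding those extra 180 boxes actually buys you the
    # throughput; the language only changes how many boxes end up on the bill.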

replies(1): >>sulam+Uh
◧◩
11. sulam+Uh[view] [source] [discussion] 2018-05-25 04:16:30
>>scarfa+Zd
Yes, well, that's a nice idea in theory. In practice, you could get over 10x (sometimes 100x) the RPS off a box running the new JVM-based services vs. their Rails equivalents. Orders of magnitude probably matter a little less when you're well-funded and have hundreds of servers, but when you're thinking about trying to go public, your bottom line is being scrutinized, and you have tens of thousands of servers, it starts to matter.
replies(4): >>scarfa+4k >>realus+cl >>ksec+Sv >>segmon+ed1
◧◩◪
12. dasil0+2i[view] [source] [discussion] 2018-05-25 04:17:54
>>joerin+r1
What will they do when they find out phpBB and WordPress are written in PHP?
◧◩◪
13. scarfa+4k[view] [source] [discussion] 2018-05-25 04:47:40
>>sulam+Uh
That's exactly what he said.

but a well-capitalized company can certainly afford them

But these days, when you don't have to buy servers or make a long-term capital commitment and can use something like AWS, then if you have a scalable but inefficient architecture and the faith of your investors, you can get enough servers to get you over the hump temporarily, slowly start replacing the most performance-sensitive parts of your architecture, and then scale down.

Look at what HN darling Dropbox did: they bootstrapped on AWS, got big, and then, when the time was right, moved to a cheaper architecture by getting off AWS and building their own infrastructure.

replies(1): >>sulam+CA1
◧◩
14. fanpun+ok[view] [source] [discussion] 2018-05-25 04:52:44
>>phaedr+91
I had the same thought when I read this. I wasn't into Rails back then (or development) so I don't have a sense of context for what the framework was like at that point in time, but the more articles I read about Twitter in the early days, the more of a sense I get that maybe they didn't write the best code.
◧◩◪
15. realus+cl[view] [source] [discussion] 2018-05-25 05:06:57
>>sulam+Uh
Rails' strength isn't the speed of its code but its speed of development. The JVM-based service equivalent you are comparing it with probably needs 5x the engineering time to build what you get for free with Rails (which also needs to be included in the engineering costs).
replies(2): >>scarfa+um >>segmon+pd1
◧◩◪◨
16. scarfa+um[view] [source] [discussion] 2018-05-25 05:26:47
>>realus+cl
I doubt that any language gives you 5x the productivity of another language. I don't think I would be 5x slower writing back-end web code in C than I would in C#, and C is probably the worst language to write an API in.
replies(1): >>realus+0u
◧◩
17. lfxyz+Yr[view] [source] [discussion] 2018-05-25 06:44:03
>>phaedr+91
But Rails is nice to work with, and (whispers) 99% of websites will never need to reach Twitter scale.
◧◩◪◨⬒
18. ksec+st[view] [source] [discussion] 2018-05-25 07:02:14
>>MBCook+vb
By RPS, I imagine Shopify is the largest. But from what I understand, Shopify operates in such a way that every shop is its own app, i.e. there are hundreds of thousands of Basecamp/Airbnb-sized apps running on the same code base, and every shop is somewhat isolated.

Purely in terms of a single app, I think the largest would be Cookpad. Airbnb never shared their numbers, so I don't know.

https://speakerdeck.com/a_matsuda/the-recipe-for-the-worlds-...

◧◩◪◨⬒
19. realus+0u[view] [source] [discussion] 2018-05-25 07:10:42
>>scarfa+um
It's not about the language, it's about the batteries-included mindset of Rails.

You have your nice web framework in language X, and then when you quickly need something else, in Rails it's either already built in, or it's a powerful feature you get by just installing a gem and calling it a day.

I'm working with Express/Node now, I've worked with Symfony/Laravel in PHP, and I've worked with Django in Python. I like them all, but there's nothing that can truly replace the speed of coding with Rails.
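
A minimal sketch of the "install a gem and call it a day" workflow; pagination is just one example of the pattern, and the controller below is hypothetical:

    # Gemfile: one line pulls in a full pagination library
    gem "kaminari"

    # app/controllers/posts_controller.rb (hypothetical controller)
    class PostsController < ApplicationController
      def index
        # .page and .per come from the gem; no hand-rolled LIMIT/OFFSET code
        @posts = Post.order(created_at: :desc).page(params[:page]).per(25)
      end
    end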

◧◩◪
20. ksec+Sv[view] [source] [discussion] 2018-05-25 07:37:04
>>sulam+Uh
10x? Sure, possible. 100x?

Most of the problem lies in the database. Rails may not be the best architecture for scale, but I doubt you could get a 100x difference if the bottleneck is the database.

I can't think of a large JVM-based website off the top of my head, but I consider Stack Overflow, written in ASP.NET, to be one of the best and most optimised sites: nearly 700M pageviews per month with 10 front-end servers. At peak it does close to 5,000 RPS; Cookpad does 15,000 RPS with 300 Rails servers. But the SO servers are at least twice as powerful, so that works out to roughly 500 RPS/server vs. 100 RPS/server: a 5x difference.
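
Roughly, the per-server arithmetic (using the peak figures above, with the 2x hardware difference as a hand-wave):

    so_rps_per_server      = 5_000.0 / 10     # => 500.0
    cookpad_rps_per_server = 15_000.0 / 300   # => 50.0
    # normalise Cookpad up to SO-class hardware (assumed ~2x as powerful)
    normalised_cookpad     = cookpad_rps_per_server * 2   # => 100.0
    puts so_rps_per_server / normalised_cookpad            # => 5.0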

replies(2): >>sulam+rA1 >>tmd83+yG3
21. evanwe+Cx[view] [source] 2018-05-25 08:03:43
>>gianca+(OP)
Rails, and the way it used MySQL/Postgres, was designed for building CMSs. Very amenable to CDN caching to scale.

Twitter was a realtime messaging platform (fundamentally not edge-cacheable) that evolved from that CMS foundation. So the reason for the difficult evolution should be clear.
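
A hypothetical Rails-flavored illustration of the difference (not Twitter's or anyone's actual code; current_user and timeline are invented helpers):

    # CMS-style content: identical for every reader, so a CDN can serve it
    class ArticlesController < ApplicationController
      def show
        expires_in 10.minutes, public: true   # Cache-Control: public, max-age=600
        @article = Article.find(params[:id])
      end
    end

    # Realtime, per-user content: every request has to go past the edge cache
    class TimelineController < ApplicationController
      def show
        expires_now                           # Cache-Control: no-cache
        @posts = current_user.timeline.limit(50)   # hypothetical helpers
      end
    end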

It's not really a coincidence that before Twitter I worked at CNET on Urbanbaby.com, which was also a realtime, threaded, short-message web chat implemented in Rails.

Anyway the point is: use my new project/company https://fauna.com/ to scale your operational data. :-)

replies(1): >>Operyl+Ny
◧◩
22. Operyl+Ny[view] [source] [discussion] 2018-05-25 08:21:20
>>evanwe+Cx
I take it from a quick glance that you own/work for Fauna and it's a completely proprietary product? If so... I'll pass; I like building things atop stuff I can control. :)
replies(1): >>evanwe+jz
◧◩◪
23. evanwe+jz[view] [source] [discussion] 2018-05-25 08:26:52
>>Operyl+Ny
Yes, it's not open source (for now). Managed cloud and on-premises edition. We will have a free download soon.
replies(1): >>Operyl+sz
◧◩◪◨
24. Operyl+sz[view] [source] [discussion] 2018-05-25 08:28:50
>>evanwe+jz
Might I suggest disclosing it when you post about it? Clicking through your profile revealed it, but it seems like it would come off better for you in the future. :)

EDIT: either it was edited above, or I'm blind. I think it was the former?

replies(1): >>evanwe+WA
◧◩◪◨⬒
25. evanwe+WA[view] [source] [discussion] 2018-05-25 08:50:44
>>Operyl+sz
Yeah I fixed it. Thanks.
◧◩◪◨⬒
26. segmon+rc1[view] [source] [discussion] 2018-05-25 14:33:30
>>MBCook+vb
I don't think so. Seriously, think about it: how many active users does AirBnB have? How many put their home up for rent, and how many rent in a year? I reckon there are more tweets in a day than AirBnB rentals in a year. That's how big Twitter is, or how small AirBnB is. :D
replies(1): >>MBCook+dE2
◧◩◪
27. segmon+ed1[view] [source] [discussion] 2018-05-25 14:39:05
>>sulam+Uh
Only if you look at those ridiculous benchmarks serving "hello world" and JSON encoding/decoding a basic dataset. The real limit to scale is I/O: CPU I/O, storage I/O, network I/O. The JVM doesn't give you an edge there. Once your app starts doing useful, non-localized work, the advantage of the JVM is 2-3x at best.
replies(1): >>sulam+mA1
◧◩◪◨
28. segmon+pd1[view] [source] [discussion] 2018-05-25 14:41:02
>>realus+cl
Rails' speed is prototyping speed. When you have a sufficiently large system in Rails, your development slows down. This is not just true of Rails, but also of languages like PHP, Python, and JavaScript.
◧◩◪◨
29. sulam+mA1[view] [source] [discussion] 2018-05-25 16:53:20
>>segmon+ed1
No, I'm looking at Twitter's numbers in production. Before and after very much tells the tale -- and it was mostly the same engineers, so it wasn't "oh well you had the B team doing Rails and the A team doing JVM code" or whatever other excuse you want to look for. Microbenchmarks aren't the end-all-be-all, but you see the same story there.
◧◩◪◨
30. sulam+rA1[view] [source] [discussion] 2018-05-25 16:54:02
>>ksec+Sv
You're making the assumption that there is a database in the mix for the 100X case. There wasn't, except in-memory. It wasn't a 100X improvement across the board, it was 10X to 100X.
replies(1): >>jashma+8D6
◧◩◪◨
31. sulam+CA1[view] [source] [discussion] 2018-05-25 16:55:25
>>scarfa+4k
And this is exactly what Twitter did, and how Twitter replaced Ruby and Rails with the JVM.
replies(1): >>scarfa+XD1
◧◩◪◨⬒
32. scarfa+XD1[view] [source] [discussion] 2018-05-25 17:13:07
>>sulam+CA1
I doubt they just "replaced" Ruby on Rails with the JVM without making any architectural changes based on the lessons they learned from their first implementations.
replies(1): >>sulam+Vq2
◧◩◪◨⬒⬓
33. sulam+Vq2[view] [source] [discussion] 2018-05-25 22:52:44
>>scarfa+XD1
Did I say that? We (I spent 4.5 years there, starting with the writing of the first service extracted from the monorail and left shortly before the 'twitter' repo was deleted) absolutely went through a huge architectural transition, arguably multiple transitions. The biggest was the breakup of the monolithic Rails-based application into microservices running on the JVM. These services generally scaled ten to one hundred times better than the code they replaced. (By "scaled" here I specifically mean that RPS was 10-100X higher per physical machine for services on the JVM as compared to the RPS the Rails stack could handle before falling over).
replies(1): >>scarfa+Yv2
◧◩◪◨⬒⬓⬔
34. scarfa+Yv2[view] [source] [discussion] 2018-05-25 23:46:00
>>sulam+Vq2
I was replying to this:

And this is exactly what Twitter did, and how Twitter replaced Ruby and Rails with the JVM

In the context of my original post, the contention was that languages don't scale, architectures do. Your post said that's exactly what you did: replaced Ruby with Java. Not that you replaced Ruby with Java and rearchitected the entire stack, which is exactly what my original post said was the problem with Twitter: the architecture.

replies(1): >>sulam+VB2
◧◩◪◨⬒⬓⬔⧯
35. sulam+VB2[view] [source] [discussion] 2018-05-26 01:09:51
>>scarfa+Yv2
Well, that's true to a degree, but only to a degree. If I wrote my database in Ruby, it would be slow compared to the same database written in C++ (assuming equivalent competency in the developers and equivalent architecture). Even a database written in Java benchmarks slower than the same database with the same architecture written in C/C++. Of course architectural changes can make further improvements.

To the point of Twitter, what we _didn't_ do, despite a lot of Ruby expertise on the team, is write a lot of microservices in Ruby. The reason for that is that I don't think you can get the same RPS out of a Ruby service that you can out of a JVM service, all else being equal. In fact HTTP benchmarks for various platforms show this, if you bother to look.

replies(1): >>scarfa+xG2
◧◩◪◨⬒⬓
36. MBCook+dE2[view] [source] [discussion] 2018-05-26 01:42:31
>>segmon+rc1
Twitter was bigger, but they don't run on Rails anymore, whereas I gather AirBNB does.
◧◩◪◨⬒⬓⬔⧯▣
37. scarfa+xG2[view] [source] [discussion] 2018-05-26 02:14:05
>>sulam+VB2
I'm not disagreeing with you. Looking at it from the outside, Twitter had two issues.

1. Twitter wasn't built on a scalable architecture

2. Ruby didn't use resources efficiently -- it was slower than other stacks.

If Twitter had been scalable, even if it were 10x slower than Java, you could have thrown 10x the number of servers at it until you optimized the stack, then reduced the number of servers needed, and the customers would have been none the wiser. Of course the investors wouldn't have been happy. Well, at least that's how it would work in today's world; I don't know what the state of cloud services was in 2008. Then you could focus on efficiency.

But since Twitter wasn't scalable, you had to fix the stack while the customers were affected. I'm almost sure that even in 2008, with Twitter's growth, they could have gotten the capital to invest in more servers if they needed them.

It's not completely analogous, but Dropbox is a good counterexample. Dropbox was hosted on AWS at first. Dropbox never had to worry about running out of storage space no matter how big it grew (it had a scalable architecture), but for their use case they weren't as efficient (i.e. in cost, not compute resources). Their customers were never affected by that lack of efficiency because Dropbox could operate at scale, so they had breathing room to re-architect a more efficient solution.

replies(1): >>sulam+2Q2
◧◩◪◨⬒⬓⬔⧯▣▦
38. sulam+2Q2[view] [source] [discussion] 2018-05-26 05:20:35
>>scarfa+xG2
These are totally different problems, though. Dropbox is a trivial scaling exercise compared to Twitter. (Some Dropbox engineers are going to be all up on me now, but it's simpler by far to shard than Twitter was -- and yes some functionality in Dropbox is probably harder to shard, but the core use case of Twitter generated hot spots by definition).
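
A toy sketch of why that is (not Twitter's or Dropbox's real design, just the shape of the two problems; shard_for and the timeline calls are invented):

    # Dropbox-ish: data partitions cleanly per user, one shard touched per request
    def store_file(user_id, blob)
      shard_for(user_id).write(blob)
    end

    # Twitter-ish fan-out on write: one tweet from a huge account hits many shards at once
    def fan_out(tweet_id, follower_ids)
      follower_ids.each do |fid|   # could be tens of millions of ids
        shard_for(fid).append_to_timeline(tweet_id)
      end
    end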

FWIW, Twitter did what you're describing: we had 4 or 5 thousand hosts running the Ruby stack at its peak. Unicorns and Rainbows, oh my. Then it started shrinking until it shrank to nothing. That period was actually the relatively stable period. The crazy period, the one I wasn't there for, was probably impossible to architect your way out of, because it was simply a crazy amount of growth in a really short amount of time, and there were a number of ways in which unpredictable events could bring it to its knees. You needed the existing architecture to stay functional for more than a week at a time for a solid 6 months to be able to start taking load off the system and putting it onto more scalable software.

Any startup would be making a mistake to architect for Twitter scale. Some startups have "embarrassingly parallel" problems -- Salesforce had one of these, although they had growing pains that customers mostly didn't notice in the 2004 timeframe. Dropbox is another one. If you're lucky enough to be able to horizontally scale forever, then great, throw money at the problem. Twitter, at certain points in its evolution (remember AWS was not a thing), was literally out of room/power. That happened twice with two different providers.

◧◩◪◨
39. tmd83+yG3[view] [source] [discussion] 2018-05-26 18:45:21
>>ksec+Sv
SO probably has a lot more cacheable content, and their core content size is small: < 2 TB full DB size, with DB memory at 768 GB or so. Not that it doesn't require good engineering. I do love their stats.
◧◩◪◨⬒
40. jashma+8D6[view] [source] [discussion] 2018-05-28 21:26:45
>>sulam+rA1
Did Twitter ever upgrade past Ruby 1.8? If not, these numbers won't be particularly relevant to modern Ruby. (For anyone else reading: 1.8 was a simple interpreter, while 1.9+ is a modern, high-performance bytecode VM.)