zlacker

1. onepla+(OP) 2020-05-28 03:10:07
On the other hand, WSL2 is based on virtualisation rather than NT kernel personalities. Apparently building it 'on top of' or 'inside' NT ends up not being good enough.
2. yjftsj+N4 2020-05-28 04:04:50
>>onepla+(OP)
I don't think that's a failure of the NT subsystem approach; I think it's just that Linux turned out to have a massive and changing ABI surface, and Microsoft didn't want to recreate the whole thing through clean-room reimplementation. Yes, there were some difficulties because of different underlying primitives, but in my outsider's opinion, they could have made it work if they'd been willing to spend the time and effort.
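
To make the ABI point concrete, here is a minimal sketch of my own (assuming an x86-64 Linux host; the syscall numbers below are specific to that ABI): programs hit the kernel's numbered syscall interface directly, so an emulation layer like WSL1's has to reproduce every call's semantics, not just what libc happens to expose.

    import ctypes

    # Load the process's libc so we can reach its raw syscall(2) wrapper.
    libc = ctypes.CDLL(None, use_errno=True)

    # x86-64 Linux syscall numbers (ABI-specific; shown for illustration).
    SYS_getpid = 39
    SYS_gettid = 186

    # Invoke the syscalls by number, bypassing the usual libc wrappers --
    # exactly the interface a clean-room layer would have to reimplement.
    pid = libc.syscall(SYS_getpid)
    tid = libc.syscall(SYS_gettid)
    print(f"pid={pid} tid={tid}")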
3. wvenab+t6 2020-05-28 04:24:20
>>yjftsj+N4
The problem they couldn't solve is file system performance -- there's just too much of a conceptual difference between files in Windows and files in Linux to make it perform reasonably well for the sorts of jobs people were using it for.

In the end, it just makes more sense to pull in the actual Linux kernel than to try to achieve the same semantics at the same performance.
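
One concrete example of the conceptual gap (a minimal sketch of my own, assuming a Linux host): POSIX lets you unlink a file while it is still open, and the data stays readable until the last descriptor closes. Windows by default refuses to delete a file with open handles unless FILE_SHARE_DELETE was requested, so an emulation layer has to paper over mismatches like this on every path.

    import os
    import tempfile

    # Create a temp file, keep the descriptor, then remove its name.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"still here")
    os.unlink(path)               # succeeds on Linux despite the open fd
    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 16))        # b'still here' -- data outlives the name
    os.close(fd)                  # storage is reclaimed only now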

4. throwa+na 2020-05-28 05:15:03
>>wvenab+t6
Windows file system performance in general is abysmal -- we are talking about Linux being 10x-100x faster on mass operations on small files, for instance.

Because of this, lots of Linux tooling is built around huge masses of tiny files (build processes, VCS, Docker, etc.), and there was just no chance the Windows kernel was ever going to come remotely close performance-wise.
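
A rough way to see this for yourself (a sketch of mine; absolute numbers depend heavily on hardware, filesystem, and any antivirus/filter drivers in the I/O path): time a create/stat/delete pass over thousands of tiny files, the access pattern that builds, VCS checkouts, and container layers all share.

    import os
    import tempfile
    import time

    N = 10_000  # number of tiny files; adjust to taste

    with tempfile.TemporaryDirectory() as root:
        start = time.perf_counter()
        for i in range(N):                      # create
            with open(os.path.join(root, f"f{i}"), "wb") as f:
                f.write(b"x")
        for i in range(N):                      # stat
            os.stat(os.path.join(root, f"f{i}"))
        for i in range(N):                      # delete
            os.remove(os.path.join(root, f"f{i}"))
        elapsed = time.perf_counter() - start

    print(f"{3 * N} file operations in {elapsed:.2f}s "
          f"({3 * N / elapsed:,.0f} ops/s)")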
