zlacker

1. levkk (OP) 2022-10-20 16:35:55
- We compared MessagePack as well; that's your typical binary format. It ended up being slower, which is what I've seen before when storing small floats (a typical ML feature). There's a whole section in the article dedicated to why optimizing serializers won't help.

- I don't think doing one less `memcpy` will make Redis faster over the network.

- We didn't use Pandas during inference, only a Python list. You'd have to get pretty creative to do less work than that.

- That will certainly use less CPU, but I don't think it'll be faster: we still have to wait on a network resource to serve a prediction, and on the GIL to deserialize the response.

- Tuning XGBoost is fun, but I don't think that's where the bottleneck is.
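For the first point above, if you want to probe this kind of claim yourself, here's a minimal stdlib-only harness (a hypothetical sketch, not the article's benchmark; `struct` stands in for a binary serializer like MessagePack, and the feature values are made up):

```python
import json
import struct
import timeit

# 100 small floats, a stand-in for a typical ML feature vector
features = [0.1 * i for i in range(100)]

def encode_json(values):
    # Text encoding: simple, and often competitive for small payloads
    return json.dumps(values).encode()

def encode_binary(values):
    # Fixed-width little-endian float64 encoding, similar in spirit
    # to what a binary serializer produces for a float array
    return struct.pack(f"<{len(values)}d", *values)

json_time = timeit.timeit(lambda: encode_json(features), number=10_000)
bin_time = timeit.timeit(lambda: encode_binary(features), number=10_000)
print(f"json:   {json_time:.3f}s")
print(f"binary: {bin_time:.3f}s")
```

Relative timings will depend heavily on payload shape and the serializer's per-element overhead, which is exactly why measuring your own feature vectors matters more than the format's reputation.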
