zlacker

1. chaps+(OP)[view] [source] 2022-10-20 02:29:55
I get the point of the post, but as someone interested in using PostgresML for my own tooling, I still don't see how these benchmarks are remotely useful for understanding its performance. Maybe I don't spend enough time in the ML space to know how often HTTP/Redis actually shows up in these workflows. Most of my stuff is just data on-disk, where adding two additional services would be embarrassingly overkill.

Don't you think it would be incredibly useful as a baseline if they included a third test: FDWs against Redis and the HTTP server?

replies(1): >>theamk+u5
2. theamk+u5[view] [source] 2022-10-20 03:31:05
>>chaps+(OP)
Are there any other FDWs that do ML inference?

Remember, this is not plain file serving -- this is actually invoking the XGBoost library, which performs complex mathematical operations. The user doesn't get data from disk; they get inference results.

Unless you know of any other solution which can invoke XGBoost (or some other inference library), I don't see anything "embarrassingly overkill" there.

replies(1): >>chaps+48
3. chaps+48[view] [source] [discussion] 2022-10-20 03:58:14
>>theamk+u5
My issue isn't with the inference step or even the reading step, it's the fetching step.
replies(1): >>montan+39
4. montan+39[view] [source] [discussion] 2022-10-20 04:12:47
>>chaps+48
How are you doing online ML inference without fetching data?