zlacker

[parent] [thread] 4 comments
1. tomjen+(OP)[view] [source] 2017-12-09 17:42:50
Why would you have that clause? If you believe in your product you want me to benchmark it, and if you don't believe in your own product, why should I?
replies(3): >>mmirat+s1 >>DenisM+d2 >>stonem+99
2. mmirat+s1[view] [source] 2017-12-09 17:57:46
>>tomjen+(OP)
Because they don't believe in their own product, but hope that their salespeople convince you otherwise.
3. DenisM+d2[view] [source] 2017-12-09 18:07:17
>>tomjen+(OP)
Benchmarks can be slanted by the person doing them; perhaps they want to present themselves in a better light. There are official benchmarks that are supposedly on even ground: see TPC-C, TPC-E, and so on.
4. stonem+99[view] [source] 2017-12-09 19:16:51
>>tomjen+(OP)
They don't want MS to pay an "independent" 3rd party to poorly configure their database and then publish benchmarks that show them in a negative light. Therefore, if you want to publish benchmarks, you have to (pay to) keep Oracle in the loop so that they can make sure you don't screw it up.

You see this all the time with MySQL and PostgreSQL benchmarks. Typically, when I see the two compared, it is by someone with a decade of experience with one of the systems and none with the other. They use a workload that is optimized for their usual system, and they have no clue how to configure the competitor, let alone build a high-performance setup for it.

replies(1): >>user59+ke
5. user59+ke[view] [source] [discussion] 2017-12-09 20:10:18
>>stonem+99
Most of these benchmarks are done by competing companies with competing products, not by a random engineer. They make sure the benchmark puts their own products in a good light.

It's also true that the testers usually don't configure all the products equivalently. That's usually not (only) incompetence; they are pushing an agenda for their own product or consulting services.
