And no surprise, apartheid apologetics: https://grokipedia.com/page/Apartheid#debunking-prevailing-n...
Hilarious factual errors in https://grokipedia.com/page/Green_Line_(CTA)
Many of the most glaring errors are linked to references which either directly contradict Grokipedia's assertion or don't mention the supposed fact one way or the other.
I guess this is down to LLM hallucinations? I've not used Grok before, but the problems I spotted in 15 minutes of casual browsing made it feel like the output of SOTA models from 2-3 years ago.
Has this been done on the cheap? xAI should probably have prioritised quality over quantity for the initial launch.
I mean, I don't think this is _for_ people who care about quality, tbh. For those, there is Wikipedia. This is more of a safe space for Musk.
Wikipedia isn't for those who care about quality, either. It's still quantity over quality, just not as badly as this LLM garbage.