And no surprise, apartheid apologetics: https://grokipedia.com/page/Apartheid#debunking-prevailing-n...
Hilarious factual errors in https://grokipedia.com/page/Green_Line_(CTA)
Many of the most glaring errors are linked to references which either directly contradict Grokipedia's assertion or don't mention the supposed fact one way or the other.
I guess this is down to LLM hallucinations? I've not used Grok before, but the problems I spotted in 15 minutes of casual browsing made it feel like the output of state-of-the-art models from 2-3 years ago.
Has this been done on the cheap? I suspect xAI should have prioritised quality over quantity for the initial launch.
I find this to be the most annoying aspect of AI. The initial Google AI results were especially bad. It's getting better, but it still spouts info I know is false without any warning.
Like, I find blowhards tiring enough in real life. I don't really want to deal with artificial blowhards when I'm trying to solve a problem.