Seems like a great platform, here's to hoping it costs a lot to run and doesn't influence too many humans to drink bleach.
"Following the public launch of Grokipedia, it was criticised for publishing false information. Wired reported that 'The new AI-powered Wikipedia competitor falsely claims that pornography worsened the AIDS epidemic and that social media may be fueling a rise in transgender people.'"
So, it's a way of Musk using AI to propagandize on a large scale.
Since "The Algorithm" at Twitter was supposed to be open sourced, surely that wouldn't be controversial.
And I genuinely do find it absolutely fascinating and somewhat shocking how LLMs can follow such long and complex prompts and respond so well.
However, it also seemed less eurocentric, mentioning the non-Greek, non-Roman origins of fields where relevant, when the corresponding Wikipedia article doesn't. Wikipedia is generally pretty bad at this, but I had expected "Grokipedia" to be worse, not better, in this regard!
1. an LLM is a representation of language, not knowledge. The two may be highly correlated, but they are probably not coterminous and they are certainly not equivalent.
2. the final "product" is still the written word
3. whether or not LLMs are the most powerful new form of knowledge representation, their output is so consistently inconsistent in its accuracy that this power is difficult to utilize, at best.
So one part of the Musk empire is fueling a thing that another part of the Musk empire doesn't like.
Seems like the problem is in one hand, and the solution is in the other.
I think that's a common thread with what Musk does. On one hand, his companies rely on money from the US government; with the other, he helped fire a lot of people in the government, supposedly to save money.
On one hand, he's racing toward AGI and running an LLM company that consumes vast amounts of resources. On the other hand, he's trying to sell electric vehicles on the grounds that vast amounts of resources are being consumed.
I guess it kind of makes sense in some way, but also he could probably better help those efforts by just stopping doing the other thing, but that probably conflicts with his other more important goals.
So, absolutely the opposite thing.
> This marked the onset of what would become a devastating crisis disproportionately affecting gay male communities, where behaviors idealized in pornography—such as unprotected receptive anal intercourse and multiple anonymous partners—aligned directly with primary transmission routes, leading to rapid seroconversion rates.
This sounds plausible. Is it factually incorrect?
It's an ideal way to poison the web with arbitrarily distorted texts that mix facts and lies, and that will be picked up by others, from AI training data to Zoomers' school essays.
There is no point except for manipulation. Right now, you have to be pretty inept to think that a language AI could contribute anything valuable to an encyclopedia.
But maybe this will change; the group of people who consider chatbot output insightful about the real world seems to be growing.
[this is the point at which you swear up, down and sideways that you've never ever in your whole life had a HN account, this is your first account ever, how dare i, etc. etc. etc.]
Edit to answer your totally-asked-in-good-faith question: Causation != correlation.
"Musk founded SpaceX in 2002 as CEO and chief engineer, Tesla in 2003 where he became CEO in 2008..."
and later on the same page,
"...the company [Tesla] had been founded in 2003 by engineers Martin Eberhard and Marc Tarpenning with a focus on high-performance EVs."
Grok can't seem to keep its story straight.
Now you have two unsubstantiated opinions contradicting each other.
Not least because it bumps the topic up one heading level, as it were, which means more possible uses of mediawiki formatting to break it up than if it were a section of another page.
Regarding the Wikipedia-trained Ouroboros models, you can argue that the Wikipedia training is mostly there to teach summarizing sources and translating; once you have the original sources, the LLM might do a better job than humans.
I often come across out-of-place or clearly ideologically driven content on Wikipedia and normally just leave it alone - I have better things to do with my limited time than fight edit wars with activist editors. Having said that, I did a number of experiments some 5 years ago with editing Wikipedia, removing clearly ideologically driven sections from articles where those sections really had no place.

One of these experiments consisted of removing sections about 'queer politics and queer viewpoints' from articles about popular cartoon characters. These sections - often spanning several paragraphs - had been inserted relatively recently and were nothing more than attempts to use those articles to push a 'queer' viewpoint on the subject matter, and as such not relevant for a general-purpose encyclopedia. I commented my edits with a reference to the NPOV rules.

My edits were reversed without comment. I reversed the reversion with the remark to either explain the reversion or leave the edits in place, and was reversed again, no comments. I reversed again with an invitation to discuss the edits on the Talk pages, which was not accepted while my edits were reversed again. This continued for a while, with different editors reversing my edits and accusing me of vandalism.

Looking through the 'contribs' section for the users responsible for adding the irrelevant content showed they were doing this to hundreds of articles. I just checked and noticed the same individuals are still actively adding their 'queer perspectives' to articles where such perspectives are not relevant for a general-purpose encyclopedia.
Of course, this depends on you opening up your research to some peer review.