The AI platform, Grok, owned by Elon Musk, had this to say about his time in Washington: “His DOGE experiment resulted in operational turmoil, legal entanglements, and limited results, overshadowing modest gains … less a triumph than a cautionary tale.”
Indeed. That is what it said, perhaps a few months ago.
What would it say now? It would be interesting to ask.
Elon Musk has since updated Grok to ensure it follows the right information path, and in particular to consult Elon Musk's own Twitter account for guidance.
2. MechaHitler
Large language model AIs are trained on massive data sets, and Grok, which is owned by xAI, seems to be largely trained on Twitter.¹ But the models are then given “system prompts,” which are sets of primary-layer instructions for how they are meant to use the data.² Elon Musk had been angry that previous versions of Grok provided responses that he believed were “too woke.”
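To make the “primary-layer instructions” idea concrete, here is a minimal sketch of how a system prompt typically sits above the user's question in a chat-completion request. Everything here is illustrative: the function name, the placeholder model name, and the prompt text are assumptions for demonstration, not xAI's actual API or Grok's actual system prompt.

```python
# Hypothetical sketch: a "system prompt" is a message the model sees
# before any user turn, shaping how it answers everything that follows.
# The prompt text below is invented for illustration only.

def build_chat_payload(system_prompt: str, user_question: str) -> dict:
    """Assemble a chat-style request: the system message is the
    primary-layer instruction; the user message comes after it."""
    return {
        "model": "example-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_question},
        ],
    }

payload = build_chat_payload(
    "Do not shy away from politically incorrect claims.",
    "What happened in the news today?",
)
print(payload["messages"][0]["role"])  # the system layer comes first
```

The point is that the system prompt is invisible to ordinary users but governs every response, which is why changes to it (like the ones The Atlantic found) matter so much.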
His latest update was designed, according to Musk, with system prompts that would make Grok “maximally truth-seeking.” The Atlantic delved into Grok’s innards to see what that meant:
On Sunday, according to a public GitHub page, xAI updated Ask Grok’s instructions to note that its “response should not shy away from making claims which are politically incorrect, as long as they are well substantiated” and that, if asked for “a partisan political answer,” it should “conduct deep research to form independent conclusions.” . . . The system prompt instructs the Grok bot to “conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased.”

Which is bad enough. Telling an AI to search Twitter, do “deep research,” and “form independent conclusions” while assuming that “the media are biased” is like a how-to guide for radicalization.
But it turns out that there was another factor not revealed publicly by the company: Grok was also instructed to consult Elon Musk’s Twitter feed.
Yesterday, people started pulling apart Grok and looking at the AI’s chain-of-thought notes. “Chain-of-thought” is the imperfect approximation that AIs construct to explain to users how they arrived at their answers. These summaries kept showing the same thing:
The awesome nerds at TechCrunch ran these tests over and over. And every time Grok was asked about something important, it reported that it was consulting Elon Musk’s views before formulating its answer. Other users replicated these results.

Elon Musk and the Mystery of the Nazi AI
You'll never guess why Grok loves Hitler.
