Elon Musk’s ‘TruthGPT’ may have a very narrow definition of truth


One thing that has happened exactly as Musk said it would: Twitter, which he dubbed the public square of the internet, has become less moderated, with reductions in safety staff, a reduced ability to report issues, an increase in hate speech and, most recently, the explicit removal of protections for transgender users from its conduct policy.

Twitter has also undermined its verification system so that it merely promotes the tweets of those paying monthly for a Twitter Blue subscription, rather than indicating a user's identity or authority. New badges have also been introduced to label news sources, and in some cases they appear designed to discredit them, such as the "government-funded media" badge applied to the accounts of the non-government ABC News and NPR, among others. The New York Times, often singled out by Musk, lost its verification marks entirely, opening it up to impersonation.

Speaking on Fox News, Musk said his TruthGPT would be a "maximum truth-seeking AI that tries to understand the nature of the universe," and that he was a big fan of regulating the development of AI to stop it from ruining society. But given his concerns about Twitter pre-acquisition, could his idea of a truth-seeking AI merely be one that's more politically conservative? One that aligns more closely with his idea of truth?

When OpenAI launched ChatGPT, Musk praised its performance and complained that the news media was not covering it because it was “not a far-left cause”. However, he has since pivoted to claiming the technology is being trained to be politically correct, is silencing conservative views and will become a tool of progressive censors. Making a chatbot politically correct is “simply another way of saying untruthful things,” Musk told Fox. He has also previously accused OpenAI of teaching its chatbots to lie.

Developers of chatbots such as ChatGPT — and vendors that use the technology for services like Microsoft's Bing — do need to put in a lot of work to correct biases and problematic behaviour that result from the training data. These range from gender and racial biases, such as assuming "a doctor" will be male and white, to repeating or applying hate speech given the right prompt. It's possible that when he talks about the dangers of politically correct AI, Musk is referring to these efforts, which keep chatbots from reflecting the worst aspects of the internet (and human) discourse on which they're trained.


But accounting for biases is not the same as lying. And a chatbot that repeats everything in its training data, without being taught to account for biases, would not represent a "truth-seeking" agent, any more than a wholly unmoderated and anonymous public chatroom represents an equitable free speech platform.

Musk has said that unchecked AI development would lead to the singularity, a hypothetical scenario in which accelerating machine intelligence makes humans insignificant. But he has also said Neuralink will provide a solution to this scenario: chips people can have implanted in their brains, so we can gain our own superintelligence.

He has made casual reference to Roko's Basilisk — a thought experiment which became a meme, predicated on the idea that a malevolent machine god could create simulated universes in which people who stood in the way of AI development are tortured for eternity — but speaking to Fox News, he claimed that his own vision for a machine that thinks would be "unlikely to annihilate humans".

Musk may be sincere in his concerns that continued development of conversational AI is dangerous for humankind. Or, given his previous actions and statements, he may just naturally think he is the one best placed to decide its appropriate direction.
