ChatGPT uses misinformation and a not-so-subtle left-leaning bias that is built into the system
OpenAI is a research organization co-founded in 2015 by Elon Musk and Sam Altman as a challenger to Google. The venture's original mission was to create artificial intelligence for the benefit of humanity as a whole.
The most notable part of OpenAI is a product called ChatGPT. It’s a chatbot like you’ve never seen before. Within a few days of launching, it hit one million users despite zero publicity and almost no media coverage.
It now has over 100 million sign-ups. But there’s another, darker side to ChatGPT that has become increasingly obvious to those who have been studying it: the notable use of intentional misinformation and a not-so-subtle left-leaning political bias built into the system.
Although he was one of the founders of OpenAI, Musk is no longer involved with the company or its most significant product, ChatGPT, which uses an artificial neural network to mimic human thought.
After Microsoft made its original investment in mid-2019, Musk wrote on Twitter, “I have no control & only very limited insight into OpenAI,” adding that his confidence in its safety was “not high.”
Following Microsoft’s latest $10 billion investment in OpenAI last month, Musk wrote that “OpenAI was created as an open source, non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.” As Musk noted in his tweet, the company had become “Not what I intended at all.”
Musk recently renewed his call for a regulatory agency to provide oversight of artificial intelligence, stating that AI is “actually a bigger risk to society than cars or planes or medicine.”
Musk continued: “What are the biggest risks to the future of civilization? A.I. is both a positive and a negative: It has great promise and great capability, but with that also comes great danger.”
By Jeff Carlson