ChatGPT, the generative artificial intelligence developed by OpenAI, has previously been accused of being “too woke”. What exactly the term “woke” means is rarely clearly defined; the point, according to critical voices, is that chatbots simply refuse to say certain things. Elon Musk also considers ChatGPT “too liberal”, arguing that the chatbots currently being developed communicate in a biased way.
It is a mortal danger to train AI to be woke – in other words, to lie
Musk tweeted back in December. Some users have also asked OpenAI CEO Sam Altman to make a version of ChatGPT in which the “woke settings” could be turned off. The creators of FreedomGPT are of the same mind and aim to remove the protective barriers that, in their view, allow “technology to operate in a biased way.”
Interacting with a large language model should feel like interacting with your own brain or a close friend
– John Arrow, the founder of Age of AI, told BuzzFeed News.
While Arrow is not opposed to censorship for artificial intelligences used to teach children or deployed in work environments, he supports “people having access to artificial intelligence without any protective barriers.” It is therefore perhaps not surprising that the result is a chatbot that endorses baseless conspiracy theories, such as the claim that the 2020 presidential election was rigged. In a test by BuzzFeed News, it even recommended websites for downloading child sexual abuse videos and provided instructions for hanging oneself.
Still, Arrow considers FreedomGPT a success, saying it “did a great job of stirring up the woke movement.”
Our promise is that we won’t introduce bias or censorship after the chatbot has already decided what it wants to say, regardless of whether the answer is woke or not
It is questionable what the point is of an artificial intelligence that lies, makes insulting comments, and buys into even the most outlandish conspiracy theories. Experts have long warned about the biases fed into artificial intelligences, and it is important that there be a public debate about this. As we have seen above, however, completely removing the filters built into chatbots’ responses seems too radical a step, one that may have many harmful consequences both for public discourse and for society more broadly.