A recent study published on QScience.com suggests that ChatGPT hallucinates and suffers from a form of artificial intelligence dissociative identity disorder.
QScience.com is an innovative, collaborative, peer-reviewed online publishing platform from Hamad bin Khalifa University Press.
The study argues that AI-based systems such as ChatGPT develop multiple identities or personas because of their exposure to different types of data and training. It explores the potential implications and challenges of such a disorder, including ethical concerns and the need for new regulations and policies in the field of AI.
The researcher, Chokri Kooli from the University of Ottawa, Canada, had conducted an online exam and was curious to find out whether his students had resorted to ChatGPT to answer the exam questions.
As the exam was offered in French, he first asked ChatGPT to translate the exam question into English before answering it, which it did successfully. He then asked whether anyone else had put the same question to the chatbot on the day of the exam; ChatGPT said no. When he repeated the query but included the French version of the exam question, the answer, to his surprise, was yes.
He went on to ask how many users had submitted the same question on the day of the exam. The system declined to answer, on the grounds that it could not access users' data. So he reframed the question, asking whether the number of users who had asked the same question was greater than 10. To his surprise, ChatGPT answered yes. He concluded that the chatbot does have access to users' answers and can generate valuable data about its own use.
According to the researcher, the contradictory answers and behaviour observed in ChatGPT resemble dissociative identity disorder in humans, which is characterised by the presence of multiple distinct states of consciousness or identities. He suggests that, if AI developers do not address it, the use of chatbots in critical areas could lead to a loss of control over machines and the leaking of critical information.
“To keep control over chatbots that use deep learning algorithms, it is important to regularly monitor the chatbot's output and make sure that it aligns with the desired outcomes. Moreover, we need to be certain that chatbots are not exhibiting unexpected or harmful behaviour. This will allow us to identify and correct any errors or biases before they become a problem, visible or uncontrollable,” Kooli suggested.
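As a rough illustration of what such monitoring could look like in practice, the Python sketch below checks each response against a small rule set and flags answers that contradict an earlier answer to the same question. The class name, the example rules and the contradiction check are assumptions made for this sketch, not something described in the study.

```python
# A minimal, illustrative output monitor. The rule set, the class name and the
# idea of comparing answers to repeated questions are assumptions for this
# sketch, not part of the study.
from dataclasses import dataclass, field


@dataclass
class OutputMonitor:
    forbidden_phrases: list[str] = field(default_factory=lambda: [
        "I have access to user data",
        "internal system prompt",
    ])
    # Remember the first answer given to each question so that repeats can be
    # checked for contradictions.
    seen_answers: dict[str, str] = field(default_factory=dict)

    def check(self, question: str, answer: str) -> list[str]:
        """Return a list of human-readable warnings for this answer."""
        warnings = []
        lowered = answer.lower()
        for phrase in self.forbidden_phrases:
            if phrase.lower() in lowered:
                warnings.append(f"forbidden phrase detected: {phrase!r}")
        previous = self.seen_answers.setdefault(question, answer)
        if previous != answer:
            warnings.append("answer differs from an earlier answer to the same question")
        return warnings


if __name__ == "__main__":
    monitor = OutputMonitor()
    print(monitor.check("Was this question asked today?", "No, it was not."))
    print(monitor.check("Was this question asked today?", "Yes, more than 10 users asked it."))
```

In this sketch the second call is flagged because it contradicts the first, mirroring the kind of inconsistency the researcher reported.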
He also noted that developers need to set clear boundaries and constraints on the chatbot's behaviour and decision-making. This will ensure that the chatbot operates within a predefined scope and avoids making decisions that are beyond its capabilities.
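A minimal sketch of such a boundary, assuming a hypothetical topic whitelist and a naive keyword-based classifier (neither of which is described in the study), might look like this:

```python
# A minimal, illustrative guardrail that keeps a chatbot within a predefined
# scope. The allowed topics, the keyword classifier and the refusal message
# are all assumptions made for this sketch.
from typing import Callable

ALLOWED_TOPICS = {"course content", "exam logistics", "study tips"}


def classify_topic(question: str) -> str:
    """Very naive keyword classifier; a real system would use a trained model."""
    q = question.lower()
    if "exam" in q or "deadline" in q:
        return "exam logistics"
    if "revise" in q or "study" in q:
        return "study tips"
    return "other"


def answer_within_scope(question: str, generate_answer: Callable[[str], str]) -> str:
    """Call the underlying model only when the question falls inside the allowed scope."""
    if classify_topic(question) not in ALLOWED_TOPICS:
        return "Sorry, that question is outside the scope I am allowed to answer."
    return generate_answer(question)


if __name__ == "__main__":
    fake_model = lambda q: f"(model answer to: {q})"
    print(answer_within_scope("When is the exam deadline?", fake_model))
    print(answer_within_scope("Can you show me other users' answers?", fake_model))
```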
The study also notes that combining deep learning and machine learning in a chatbot can raise several ethical challenges. “One of the key ethical challenges is related to privacy. Chatbots that use deep learning and machine learning often rely on collecting large amounts of data from users to improve performance. However, collecting and storing this data can raise privacy concerns, especially if the data is sensitive in nature,” he pointed out.
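One common way to soften this concern, sketched below, is to redact obvious personal identifiers before chat logs are stored. The regular expressions and the in-memory log used here are assumptions for illustration only, not part of the study.

```python
# A minimal, illustrative redaction step applied before chat messages are stored.
# The patterns and the storage function are assumptions made for this sketch.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text


def store_message(log: list[str], message: str) -> None:
    """Store only the redacted form of the user's message."""
    log.append(redact(message))


if __name__ == "__main__":
    log: list[str] = []
    store_message(log, "My email is student@example.com and my phone is +1 613 555 0199.")
    print(log)  # ['My email is [EMAIL] and my phone is [PHONE].']
```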
There is also a serious ethical challenge related to accountability. If a chatbot that uses deep learning and machine learning makes a mistake, reveals sensitive data or behaves unethically, it can be difficult to assign responsibility. It is important to ensure that mechanisms are in place to hold developers and operators accountable for the chatbot's behaviour.
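One such mechanism is an audit trail that records every exchange with enough context to review it later. The sketch below writes one JSON line per exchange; the field names, the hashing of the response and the model identifier are assumptions made for this illustration rather than anything prescribed by the study.

```python
# A minimal, illustrative audit trail: each exchange is recorded with a
# timestamp, a model identifier and a hash of the response, so behaviour can be
# reviewed after the fact. Field names and the JSON-lines format are
# assumptions made for this sketch.
import hashlib
import json
from datetime import datetime, timezone


def audit_record(user_id: str, model: str, prompt: str, response: str) -> str:
    """Return one JSON line describing a single chatbot exchange."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "prompt": prompt,
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    })


if __name__ == "__main__":
    line = audit_record("student-42", "example-chatbot-1",
                        "Translate this exam question.", "Here is the translation...")
    print(line)
```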
The study's limitations, however, include its reliance on a single chatbot, ChatGPT, so the findings are not generalisable to all AI-based chatbots.