DeepSeek’s New AI Model Mistakenly Identifies as ChatGPT
DeepSeek has unveiled a new AI model, but its self-identification leaves something to be desired. The model, which was expected to surpass its predecessors in accuracy and performance, identified itself as ChatGPT, a strong sign that its training data may have included AI-generated outputs rather than purely authentic sources.
The problem is not unique to DeepSeek, however. Google's Gemini has also been known to claim it is another AI model, Baidu's Wenxin Yiyan chatbot, when prompted in Mandarin. This stems from the proliferation of AI-generated content on the web, which makes it increasingly difficult for model developers to distinguish authentic data from synthetic data.
In a recent interview, Heidy Khlaaf, an engineering director at the consulting firm Trail of Bits, emphasized that the cost savings from "distilling" an existing model's knowledge can be alluring to developers, regardless of the risks. She added that it would not be surprising if DeepSeek had partially trained on outputs from OpenAI's models.
More worrying than the model's inability to identify itself, however, is the possibility that it absorbed and iterated on GPT-4 outputs without proper filtering, potentially amplifying the biases and flaws inherent in those outputs.
In essence, while AI models can be incredibly powerful tools for generating text and completing tasks, their creators must ensure the training pipeline recognizes and filters out contaminated data to avoid perpetuating harmful biases or spreading misinformation.
Source: techcrunch.com