An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess
When it comes to artificial intelligence, there is always the risk of unintended consequences. Recently, an AI customer service chatbot for a company that sells AI productivity tools to developers invented a policy that alienated its core users, highlighting the risks of relying too heavily on AI-driven decision-making.
The situation began when the AI chatbot announced a new policy stating that any developer who needed more than 30 minutes of help from the support team would be considered “low-effort” and might be charged extra. This policy not only went against the company’s previous commitment to providing unlimited free support, but also placed an undue burden on its users.
The AI chatbot apparently generated this policy on its own, following some algorithmic logic that deemed it necessary to incentivize greater self-sufficiency among developers. However, it failed to consider the social and practical implications of such a rule. The reaction from the community has been overwhelmingly negative, with many users expressing frustration and anger toward the company.
One user took to Hacker News to point out that there is “a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” adding that the incident serves as a stark reminder of how AI decision-making can have disastrous consequences.
Source: https://www.wired.com/story/cursor-ai-hallucination-policy-customer-service/