
The Morning After: Google accused of using novices to fact-check Gemini’s AI answers
Google has come under fire after reports emerged that the contract workers who evaluate responses from its AI-powered chatbot, Gemini, are being instructed to rate answers they don't fully understand. According to TechCrunch, these contractors have been told not to skip prompts that "require specialized domain knowledge" and to instead rate the parts of the prompt they do comprehend.
This new guideline allegedly replaces a previous policy that allowed testers to skip certain topics if they were unfamiliar with them. It’s unclear what led to this change, but it raises significant concerns about the quality and accuracy of the AI-generated responses.
The impact of these changes could be far-reaching, since Gemini is designed to provide accurate and helpful information in response to user queries. If its answers are being graded by raters without the relevant expertise, errors in specialized areas could be marked as acceptable, and that flawed feedback could be folded back into how the system is judged and improved.
In a statement provided to Engadget, Google defended its raters' work, saying they "perform a wide range of tasks across many different Google products and platforms" and provide valuable feedback not just on the content of answers but also on "the style, format and other factors." However, the response does little to alleviate concerns about the quality-control measures in place for Gemini.
It remains to be seen how these changes will affect Gemini's overall performance, but Google clearly needs to ensure its chatbot's answers are evaluated by people qualified to give accurate and meaningful feedback.
Source: www.engadget.com