Amazon employees warn that the company's new AI chatbot is suffering 'severe hallucinations' and leaking confidential data
Amazon employees are raising concerns about accuracy and privacy issues in the newly announced Q chatbot
Some Amazon employees have raised concerns about the accuracy and privacy of Amazon's AI chatbot Q, just three days after its announcement. Leaked documents reportedly describe Q as "experiencing severe hallucinations and leaking confidential data," including sensitive information such as the location of AWS data centers, internal discount programs, and unreleased features. The incident was marked as "sev 2," a severity level indicating a significant issue that requires engineers to work around the clock to resolve. The reports come as Amazon strives to compete with other tech giants such as Microsoft and Google in generative artificial intelligence.
Amazon has downplayed the significance of these employee discussions, stating that sharing feedback through internal channels is standard practice and that no security issue was identified. Despite the initial reports, the company maintains that Q has not leaked confidential information. Q, currently available in a free preview, is positioned as an enterprise-friendly version of ChatGPT, designed to be more secure and private than consumer-grade chatbots and to address the security and privacy concerns associated with such AI assistants. However, an internal document acknowledges that Q may still generate harmful or outdated responses, a common challenge with large language models.