(Natural News)—Amazon’s fledgling generative AI assistant, Q, has been struggling with factual inaccuracies and privacy issues, according to leaked internal communications.
The chatbot was recently announced by Amazon’s cloud computing division and will be aimed at businesses. A company blog post says it was built to help employees write emails, troubleshoot, code, research and summarize reports and will provide users with helpful answers that relate only to the content that “each user is permitted to access.”
It was promoted as a safer and more secure offering than ChatGPT. However, leaked documents show that it is not performing up to standards, experiencing “severe hallucinations” and leaking confidential data.
According to Platformer, which obtained the leaked documents, one incident was flagged as “sev 2,” a designation reserved for events serious enough to page Amazon engineers overnight and keep them working through the weekend to fix them. The publication revealed that the tool leaked unreleased features and shared the locations of Amazon Web Services data centers.
One employee wrote in the company’s Slack channel that Q could provide advice that is so bad that it could “potentially induce cardiac incidents in Legal.”
An internal document referring to the wrong answers and hallucinations of the AI assistant noted: “Amazon Q can hallucinate and return harmful or inappropriate responses. For example, Amazon Q might return out of date security information that could put customer accounts at risk.”
These are worrying problems for a chatbot the company is gearing toward businesses, which will likely have data protection and compliance concerns. It also doesn’t bode well for Amazon’s quest to prove it is not falling behind competitors in the AI sphere, such as OpenAI and Microsoft.
Amazon has denied that Q leaked confidential information. A spokesperson for the company noted: “Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon. No security issue was identified as a result of that feedback.”
The company said it became interested in developing Q after many businesses banned AI assistants from workplace use over privacy and security concerns. Q was essentially built to serve as a more private and secure alternative, and these leaks indicate that Amazon is failing to meet that objective.
AI chatbots are prone to hallucinations
Q is far from the only generative AI chatbot to encounter major issues like hallucinations, the term given to the tendency for AI models to present inaccurate information as facts. However, experts suggest this characterization is not accurate as these language models are trained to provide plausible-sounding answers to prompts from users rather than correct ones. As far as the models are concerned, any answer that sounds plausible is acceptable, whether it is factual or not.
Although some companies have taken steps to keep these hallucinations under control to some extent, some computer scientists believe that this is a problem that simply cannot be solved.
When Google unveiled its ChatGPT competitor Bard, it provided a wrong answer to a question about the James Webb Space Telescope during a public demo. In another high-profile incident, the tech news site CNET had to issue corrections after an article written using an AI tool provided highly inaccurate financial advice to readers. On another occasion, a New York lawyer got in trouble after using ChatGPT to conduct legal research and submitting a brief citing a series of cases the chatbot had invented.
There are so many ways that relying on this technology can go wrong, particularly when people use answers from chatbots to make decisions about their health, finances, who to vote for and other sensitive topics.