WhatsApp AI Sparks Privacy Uproar

WHATSAPP’S latest feature, a built-in artificial intelligence chatbot known as Meta AI, is drawing sharp criticism from users and privacy experts alike, despite Meta’s insistence that the tool is ‘entirely optional’. The chatbot, prominently displayed as a permanent blue circle in the corner of the Chats screen, cannot be removed, prompting frustration across social media platforms.

According to a report by the BBC, users in Europe and beyond have voiced their annoyance at being unable to disable the new tool, which is powered by Meta’s own Llama 4 large language model. The AI assistant is designed to answer questions, provide information, and spark creative ideas, but critics argue its presence is intrusive and raises serious data privacy concerns.

AI feature here to stay—for now

The Meta AI logo appears as a colourful blue, pink, and green circle anchored to the bottom right of the WhatsApp interface. Tapping on it opens the chatbot interface, which also features a search bar labelled ‘Ask Meta AI or Search’. The feature mirrors similar implementations on Facebook Messenger and Instagram—both also owned by Meta.

Meta has stressed that the rollout is still limited to select countries, advising users that availability may vary even within the same region. Those with access are met with a lengthy disclaimer about Meta AI’s capabilities and limitations. On its official site, WhatsApp says the tool ‘can answer your questions, teach you something, or help come up with new ideas.’

The BBC tested the chatbot by asking for the weather in Glasgow. It quickly provided a detailed report, but one of its suggested links directed the user to information about Charing Cross railway station in London—not the similarly named area in Glasgow. The error fuelled further questions about the AI’s reliability and relevance.

Critics warn of privacy and consent issues

Privacy advocates have condemned the lack of an opt-out option. Dr Kris Shrishak, an AI and privacy adviser, told the BBC that Meta is ‘using people as test subjects for AI’ and accused the tech giant of exploiting its existing user base. ‘No one should be forced to use AI,’ he said. ‘Its AI models are a privacy violation by design.’

Dr Shrishak’s warning comes amid broader legal scrutiny over how Meta trains its AI systems. An investigation by The Atlantic revealed that Meta may have used millions of pirated books and academic papers sourced through Library Genesis (LibGen) to train its Llama models. Meta is currently facing lawsuits from authors’ groups over the alleged misuse of their copyrighted work in the training process. The company declined to comment on that investigation.

The UK’s Information Commissioner’s Office (ICO) told the BBC that it is monitoring Meta AI’s implementation within WhatsApp closely. ‘Organisations who want to use people’s personal details to train or use generative AI models need to comply with all their data protection obligations,’ it said, adding that this applies especially when children’s data is being processed.

End-to-end encryption still intact—but not for AI chats

Meta has emphasised that regular chats remain end-to-end encrypted and that Meta AI ‘can only read messages people share with it’. However, experts stress that any message sent to the AI is not protected in the same way.

‘When you message your friend, end-to-end encryption ensures privacy,’ said Dr Shrishak. ‘But when you message Meta AI, one of the ends is Meta.’
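The distinction is easy to see in miniature. WhatsApp’s real messaging layer is built on the Signal protocol, but the toy Python sketch below, written with the PyNaCl library, illustrates the same endpoint principle using the simpler NaCl ‘box’ construction; the key names and messages are purely illustrative assumptions, not WhatsApp’s actual implementation.

# A toy sketch only: this is NOT WhatsApp's real protocol (WhatsApp uses
# the Signal protocol). It uses PyNaCl (pip install pynacl) to show the
# endpoint principle behind Dr Shrishak's remark.
from nacl.public import PrivateKey, Box

# Each endpoint holds its own private key; only public keys are shared.
alice = PrivateKey.generate()       # you
friend = PrivateKey.generate()      # your contact
assistant = PrivateKey.generate()   # hypothetical key held by the AI's operator

# Messaging a friend: encrypted to the friend's public key, so only the
# friend's private key can open it. A relaying server sees only ciphertext.
to_friend = Box(alice, friend.public_key).encrypt(b"see you at 8")
assert Box(friend, alice.public_key).decrypt(to_friend) == b"see you at 8"

# Messaging the assistant: the transport is still encrypted, but the far
# endpoint's private key belongs to the service operator, who can read it.
to_ai = Box(alice, assistant.public_key).encrypt(b"weather in Glasgow?")
print(Box(assistant, alice.public_key).decrypt(to_ai))

In both cases the wire traffic is encrypted; what changes is who holds the decrypting key at the far end, which is precisely the point critics are making about AI chats.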

The company has advised users not to share sensitive information with the chatbot, reiterating that any data shared could be retained and potentially reused.

For many, the controversy mirrors the recent backlash against Microsoft’s Recall feature, another always-on tool that was ultimately scaled back following widespread criticism. While Meta maintains that it is listening to user feedback, the lack of a disable option has only deepened the scepticism around the company’s handling of user consent and transparency.