WhatsApp is under pressure from users over a new artificial intelligence feature it describes as optional, even though the tool cannot be turned off or removed from the app.
The feature, marked by a distinctive blue circle at the bottom right corner of the Chats screen, links to a chatbot designed to answer questions, offer suggestions, and help users with everyday tasks. However, the tool has caused frustration among users across Europe and beyond who want the option to disable it.
Meta, which owns WhatsApp, insists that the AI assistant is being rolled out gradually and is “optional,” though users currently have no way to hide or remove it. Some users have reported they do not yet have access, which Meta attributes to its ongoing limited release. The same AI feature is also being integrated into Facebook Messenger and Instagram, signaling a broader rollout across Meta platforms.
The chatbot, powered by Meta’s Llama 4 large language model, is designed to answer questions quickly. In one test, the assistant gave an accurate weather report for Glasgow within seconds but also included a confusing link to Charing Cross in London. The inconsistency highlights one of the growing concerns about generative AI systems: they can provide useful responses but also make mistakes that may mislead users.
Across social media platforms like X, Reddit, and Bluesky, users have voiced dissatisfaction, with some calling the feature intrusive. Critics argue that users should have the right to choose whether they want AI in their messaging apps at all. Guardian columnist Polly Hudson is among those who have criticized the decision to embed the tool by default, without an opt-out option.
Privacy experts have also raised alarms. Dr. Kris Shrishak, an adviser on AI and privacy, accused Meta of using WhatsApp users as unwitting participants in AI development. He said the feature could serve as another avenue for data collection, noting that the company has already faced legal action for allegedly training its AI models on pirated content.
Concerns about the ethics of Meta’s AI practices have been growing. A report by The Atlantic revealed that the company may have used millions of pirated books and academic articles to train its models—an allegation Meta has not commented on. Author groups are now mobilizing to pressure governments to regulate how personal and copyrighted content is used in AI development.
Meta states that only messages sent to the AI assistant are visible to the system, while other personal chats remain end-to-end encrypted. Despite these assurances, experts remain cautious. Dr. Shrishak emphasized that while encryption protects user-to-user messages, anything shared with the AI effectively opens a door for Meta to process that data.
The UK’s Information Commissioner’s Office has said it is keeping a close watch on how Meta AI is being used within WhatsApp, particularly in relation to personal data and the rights of children. The agency reminded companies that using personal information for AI must comply with strict data protection laws, and extra safeguards are necessary when dealing with minors.
Although Meta continues to promote the chatbot as a helpful assistant, the controversy reveals a deeper struggle between innovation and user autonomy. With AI tools becoming standard in digital platforms, users are asking whether they will still have control over what technologies become part of their everyday communication.