Google has just unveiled a game-changing AI upgrade for Android. But it has a darker side: Google’s AI will read and analyze your private messages, going back forever. So what does this mean for you, how do you maintain privacy, and when does it begin?
1/29 update below; this article was originally published on 1/27.
There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”
But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”
And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.
There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training and maybe seen by humans, albeit anonymized. This data will be stored for 18 months and will persist for a few days even if you disable the AI, though manual deletion is available.
Such requests fall outside Google Messages’ newly default end-to-end encryption: you’re literally messaging Google itself. While this is non-contentious, it’s worth bearing in mind. Just as with all generative AI chatbots, including ChatGPT, you need to assume anything you ask is non-private and could come back to haunt you.
But message analysis is different. This is content that does (now) fall inside that end-to-end encryption shield, in a world where such private messaging is the new normal. Here the ideal would be on-device AI processing, with data never leaving your phone, rather than content uploaded to the cloud, where more processing can be put to work.
This is where the Android vs iPhone battlefield may well come into play. Historically, Apple has been much stronger when it comes to on-device analysis than Google, which has defaulted to the cloud to analyze user content.
“Apple is quietly increasing its capabilities,” The FT reported this week, “to bring AI to its next generation of iPhones… Apple’s goal appears to be operating generative AI through mobile devices, to allow AI chatbots and apps to run on the phone’s own hardware and software rather than be powered by cloud services in data centres.”
1/29 update:
The latest update on Apple’s own efforts to introduce generative AI into iOS suggests its intent to keep everything on the device might not be as firm as expected.
Code just discovered by 9to5Mac in the new iOS 17.4 beta paints a picture of the progress being made. “Apple is continuing to work on a new version of Siri powered by large language model technology, with a little help from other sources.”
As Bloomberg’s Mark Gurman pointed out last fall “Apple largely sat on the sidelines when OpenAI’s ChatGPT took off like a rocket… It watched as Google and Microsoft rolled out generative AI versions of their search engines… The only noteworthy AI release from Apple was an improved auto-correct system in iOS 17.”
Unsurprisingly then, one of the “sources” helping out Apple, according to 9to5Mac, is ChatGPT, “Apple appears to be using OpenAI’s ChatGPT API for internal testing to help the development of its own AI models.”
According to 9to5Mac, “iOS 17.4 code suggests Apple is testing four different AI models. This includes two versions of [Apple’s internal model] AjaxGPT, including one that is processed on-device and one that is not.” Apple is seemingly checking results from its own AI models against ChatGPT, is including an iMessage interface as part of this, and it’s not all on-device.
The issue for Apple is that Google’s setup lends itself to the more performant edge/cloud architecture that drives generative AI like ChatGPT. Assurances have been given that message content analysis will be on-device only, but the reality is that users in their millions will push for new features, and will happily trade away privacy protections that are opaque and not easily understood in the process.
This puts Apple in a bind. The “what happens on your iPhone stays on your iPhone” philosophy is deep-rooted. The reality, though, is that there will be limits as to the AI processing available on the edge vs in the cloud, which will be driven by processing advances, hardware costs, battery life, heat and general operating limitations.
Gurman says iOS 18 “is seen within the company as one of the biggest iOS updates — if not the biggest — in the company’s history.” He points to that “debate going on [within Apple] on how to deploy generative AI: as a completely on-device experience, a cloud-based setup or something in between. An on-device approach would work faster and help safeguard privacy, but deploying Apple’s LLMs via the cloud would allow for more advanced operations.”
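The “on-device, cloud-based, or something in between” trade-off Gurman describes can be sketched as a simple routing decision: keep anything touching private content on the phone, and send only heavy, non-sensitive requests to a larger cloud model. The function names and the complexity threshold below are purely hypothetical illustrations of that hybrid architecture; no real Apple or Google API is implied.

```python
# Hypothetical sketch of a hybrid edge/cloud AI router.
# All names and thresholds are illustrative, not real APIs.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_private_content: bool  # e.g. draws on message history
    complexity: int                 # 1 (simple) .. 10 (heavy reasoning)

# Assumed ceiling for what a phone-sized model can handle well.
ON_DEVICE_COMPLEXITY_LIMIT = 5

def route(req: Request) -> str:
    # Private content never leaves the device, whatever the cost in capability.
    if req.contains_private_content:
        return "on-device"
    # Otherwise, heavy requests go to the larger cloud model.
    if req.complexity > ON_DEVICE_COMPLEXITY_LIMIT:
        return "cloud"
    return "on-device"

print(route(Request("summarize my chat with Alex", True, 8)))      # on-device
print(route(Request("draft a limerick about rockets", False, 7)))  # cloud
```

The tension the article describes is exactly the first branch: once users demand features that only the bigger cloud model can deliver, the pressure is to weaken that rule rather than hold the privacy line.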
That decision on where AI processing takes place could be the biggest shake-up to Apple’s privacy principles. Apple controls its full ecosystem and so can bend physics in the way others cannot. But this is an entirely new level of complexity.
When Apple does begin to integrate AI into iMessage, we will see how the more vocal, privacy-focused amongst its user base respond. The heated backlash to the proposed CSAM analysis illustrated how contentious this could become if Apple does replicate Bard’s content analysis in some form. And so I don’t think we’ll see that anytime soon, with the initial focus being Siri-style requests and support.
For its part, Bard says “Google has assured that all Bard analysis would happen on your device, meaning your messages wouldn’t be sent to any servers. Additionally, you would have complete control over what data Bard analyzes and how it uses it.”
But I suspect we’ll see that on-device assurance watered down in practice. It will make sense to provide a more seamless interface between a smartphone and the cloud, and where the analysis takes place, or whether actual content leaves the device, may well get lost in the mix. We will need some means of explaining those risks to a user group eager for newness.
You will have to judge whether whatever assurances are given provide you comfort enough to let Bard loose on your private content.
A word of caution. There’s a difference between what can’t be done, such as breaching end-to-end encryption, and what isn’t being done, such as policies as to where content analysis takes place. I would urge strong caution on opening up your content too freely, unless and until we have seen proper safeguards.
Bard agrees. “While Google assures on-device analysis,” it says, “any data accessed by Bard is technically collected, even temporarily. Concerns arise about potential leaks, misuse, or hidden data sharing practices. The extent of Bard’s analysis and how it uses your data should be transparent. Users deserve granular control over what data is analyzed, for what purposes, and how long it’s stored.”
What happens this year will define the landscape much more than anything we’ve seen thus far. That Google and Apple are both looking to their messaging apps as primary UIs for generative AI capabilities suggests this really will be the game-changer.
This integration of generative AI chat and messaging will transform texting platforms forever, and it will quickly open up a new competitive angle between Google, Apple and Meta, whose smartphone ecosystems and apps run our lives.
“While an exact date is still unknown,” Bard says, “all signs point towards Bard’s arrival in Google Messages sometime in 2024. It could be a matter of weeks or months, but it’s definitely coming.” Meanwhile, what we’ve seen thus far remains buried deep inside a beta release and subject to change before release.
When it is live, think carefully before you unlock your Messages privacy settings. “Ultimately,” says Bard, “the decision of whether to use message analysis rests with you. Carefully weigh the potential benefits against the privacy concerns and make an informed choice based on your own comfort level and expectations.”
The analysis of your message history isn’t the only privacy debate here. Google’s deployment of Bard is just part of the shift from browser-based to directed search, and you will need to be increasingly cautious as to the quality of the results you’re being given. Bard isn’t a chat with a friend. It’s a UI sitting across the world’s most powerful and valuable advertising and tracking machine.
On which note, Bard left me with a final thought that might be better directed at its creators than its users: “Remember, you have the right to demand clarity, control, and responsible AI development from the companies you trust with your data.”