For decades, digital privacy advocates have been warning the public to be more careful about what we share online. And for the most part, the public has cheerfully ignored them.
I am certainly guilty of this myself. I usually click "accept all" on every cookie pop-up a website puts in front of my face, because I don't want to deal with figuring out which permissions are actually necessary. I've had a Gmail account for 20 years, so I'm well aware that on some level this means Google knows every imaginable detail of my life.
I've never lost much sleep over the idea that Facebook would target me with ads based on my internet presence. I figure that if I have to look at ads, they might as well be for products I actually want to buy.
But even for people indifferent to digital privacy like myself, AI is going to change the game in a way I find pretty scary.
Here is a photo of my son on the beach. Which beach? OpenAI's o3 pinpoints it from this photo alone: Marina State Beach in Monterey Bay, where my family went on vacation.

To my merely human eye, this image doesn't seem to contain enough information to guess where my family was vacationing. It's a beach! With sand! And waves! How could you narrow it down beyond that?
But surfing enthusiasts tell me there's far more information in this image than I thought. The pattern of the waves, the sky, the slope, and the sand are all information, and in this case enough information to venture a correct guess about where my family went on vacation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. One of Anthropic's early investors is James McClave, whose BEMC Foundation helps fund Future Perfect.)
ChatGPT doesn't always get it on the first try, but it's more than sufficient for gathering information about us if someone were determined to stalk us. And since AI will only get more powerful, that should worry all of us.
When AI comes for digital privacy
For most of us who don't painstakingly guard our digital footprint, it has always been possible for people to learn a terrifying amount of information about us: where we live, where we shop, our daily routine, who we talk to. But it would take an extraordinary amount of work.
For the most part, we enjoy what is known as security through obscurity; it's hardly worth assigning a large team of people to study my movements intently just to learn where I went on vacation. Even the most autocratic surveillance states, like Stasi-era East Germany, were limited by manpower in what they could track.
But AI makes tasks that would previously have required serious effort by a large team trivial. And that means far fewer clues are needed to pin down someone's location and life.
It was already the case that Google knows basically everything about me, but I (perhaps complacently) didn't really care, because the most Google can do with that information is serve me ads and keep a record of my data trail. Now that degree of information about me could be available to anyone, including those with far more malign intentions.
And while Google has incentives not to have a major privacy-related incident — users would get angry with them, regulators would investigate them, and they have a lot of business to lose — the AI companies proliferating today are less established and less constrained. (If they were more concerned about public opinion, they would need a significantly different business model, since the public sort of hates AI.)
Be careful what you tell ChatGPT
So AI has enormous implications for privacy. These were only driven home when Anthropic recently reported it had discovered that, under the right circumstances (with the right prompt, placed in a scenario where the AI is asked to participate in pharmaceutical data fraud), Claude will try to report the fraud to the authorities, much as a human might in the same circumstances.
Some people took this as a reason to avoid Claude. But it almost immediately became clear that it isn't just Claude: users quickly reproduced the same behavior in other models like OpenAI's o3 and Grok. We live in a world where AIs not only know everything about us, but in some circumstances might even call the police on us.
For now, they're only likely to do so in sufficiently extreme circumstances. But scenarios like "the AI threatens to report you to the government unless you follow its instructions" no longer seem like science fiction so much as an inevitable headline later this year or next.
What should we do about it? The old advice from digital privacy advocates — be thoughtful about what you post, don't grant apps permissions they don't need — is still good, but seems radically insufficient. No one is going to solve this at the level of individual action.
New York is considering a law that, among other transparency and testing requirements, would regulate AIs that act independently when they take actions that would be a crime if done by humans "recklessly" or "negligently." Whether or not you like New York's exact approach, it seems clear to me that our existing laws are inadequate for this strange new world. Until we have a better plan, be careful with your vacation photos — and with what you tell your chatbot!