Why do EU countries ban social media only NOW?
Recently, there has been a surge in countries banning the use of social media: Australia enforced a ban in December 2025, Spain announced one in February 2026, France in September 2025, Norway raised its age limit from 13 to 15 in January 2026, and Denmark and Portugal plan implementations in 2026. Most have decided to ban social media for teens under 15 or 16 years of age.


While it makes sense that social media poses a risk to teenagers, the risks have long been known. Jonathan Haidt, a leading psychologist, explored the relationship between teenage depression and social media use in his book "The Anxious Generation" and found that girls aged 13 to 16 in particular reported greater levels of depression and higher rates of self-harm from the moment Facebook rose to popularity. Suspicions about the harmful effects of social media were emphasized once again in 2021, when Frances Haugen released confidential internal Facebook documents showing the harmful effects of its algorithms and the conflict between societal goals and Facebook's business goals.



There are several dynamics in social media that go hand in hand with increased levels of anxiety.


  • Addictive elements, introduced by design. Notification icons trigger the dopamine system, leading to elevated and unbalanced baseline dopamine levels, which in turn increases smartphone usage.

  • Influencer role models set unrealistic standards of beauty and perfection. "Underperforming" compared to those role models creates feelings of inadequacy.

  • Social media posts that get more likes are often the ones that communicate extreme or radical views. These posts trigger responses and engagement, increasing hostility and forcing users to "pick sides".

  • The very business model of the tech companies behind social media platforms monetizes user attention: they want users to spend more of their day on the platforms (in order to show more ads).


But one question remains: why only now, when the risks have been known for years?


One reason is certainly that social media itself has become scarier. There is an increasing prevalence of AI-generated content, including but not limited to:


  • AI-generated influencers trying to get users to purchase products. These are known as AI UGC and have become a serious branch of business.

  • When you upload an image or video of yourself, other people may use apps to remove your clothes, and may later use these images to blackmail you. This has lately been prevalent with the Grok LLM, and teenage girls are especially at risk of being exploited.

  • When you upload voice recordings, other users may clone your voice with AI and use it to scam your relatives by pretending to be you.

Technology independence from the US

Currently, all the major social media platforms used in Europe, apart from TikTok, are made in the US: Meta (Facebook, Instagram, WhatsApp), X (formerly Twitter), Snapchat, Google (YouTube Shorts), and also smaller networks like LinkedIn, Reddit, Pinterest, etc. This poses a risk for non-US countries. Not only do these platforms monetize and draw payments from users in other countries, they also store their users' data on American servers. This data can potentially turn into a powerful weapon.


Even when the information is stored on European servers, the US can still demand access to data owned by US companies through the CLOUD Act (Clarifying Lawful Overseas Use of Data Act). In essence, this law allows the US to:


  • Employ foreign surveillance: bypassing European courts to access the private data of foreign minors.

  • Use European citizens' information to train AI algorithms.

  • Limit non-US citizens: in December 2025, US Customs and Border Protection announced that it would begin checking five years of social media history when travelers enter the country.


Europeans are actively searching for ways to reduce their dependence on the US and to find alternative solutions. Reducing the usage of US-based social media helps increase digital independence.

Increased risks of AI


AI is an amazing technology, but in the context of social media it acts as an immense risk multiplier. By making social media dramatically riskier, AI creates an incentive for countries to re-evaluate the age at which teenagers should be allowed to use it.

AI sycophancy

One unambiguous risk of AI is called sycophancy: the tendency of an AI to agree with the user rather than provide factual information, which reinforces wrong beliefs and skews the user's worldview in a wrong direction. In the future, I will share a personal story on this blog detailing my own experience with AI sycophancy.



The chart above is from a survey done by Microsoft (n = 16,000) across 17 countries in July 2023. It shows that one use case of AI that excites people is asking advice on personal topics such as relationships, careers and health; another is using AI as a friend or companion. These use cases in particular are extremely concerning for teenagers with mental health problems. Where a psychologist has a clear aim of improving the condition of a patient, an AI may prefer to give the answer the teenager likes, and may further deepen issues such as a teen's desire for isolation, self-harm or anorexia. Many such cases came to light, in particular after the ChatGPT-4o update in April 2024. While AI companies have been trying to tackle the issue, these problems still arise by design (reinforcement learning from human feedback and next-token prediction).



Data sensitivity

Tying back to the topic of technology independence: while technologies such as tracking cookies have always posed a certain degree of risk, and are therefore strictly regulated by laws such as the GDPR, the risk from such data has historically been relatively limited. It sounds bad that a foreign company may have access to your browser history, geolocation or personal information, but pre-AI the possibilities were limited. There were few ways to mass-synthesize and analyze the data, and piecing together such data points into an in-depth understanding of a person was an incredibly time-intensive operation.


With LLM models, this changed completely. 


Firstly, the fact that people speak with an LLM the same way they speak with a human opens up a far more encompassing view of the user, because the user explicitly states personal information. Instead of merely knowing your geolocation, the LLM can know that you are on your way to the hamburger kiosk Staap, because you just asked it where to get the sauciest burger and for directions to get there. Depending on the intensity of your usage, the model can infer which books you want to read (and which ones), which political party you prefer, which pimple on your neck concerns you, and so on.


Secondly, the LLM is also able to digest this information at scale, using a technique called RAG (Retrieval-Augmented Generation). RAG allows the system to efficiently search databases of information in the retrieval phase, add the retrieved information to the prompt in the augmentation phase, and produce new output in the generation phase. Using the attention mechanism, LLMs can quickly determine which parts of the absorbed information are the most relevant and important to pay attention to.
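To make the retrieve-augment-generate loop concrete, here is a minimal sketch in Python. It uses a toy bag-of-words similarity instead of a real vector database, and stops before the generation step (a real system would send the augmented prompt to an LLM); the stored documents and function names are illustrative, not a real API.

```python
# Minimal RAG sketch: toy retriever + prompt augmentation.
# DOCUMENTS stands in for a database of stored user interactions.
from collections import Counter
import math

DOCUMENTS = [
    "User asked for directions to the hamburger kiosk Staap.",
    "User mentioned wanting to read more science fiction books.",
    "User discussed a skin concern on their neck.",
]

def _vec(text):
    # Bag-of-words vector: word -> count.
    return Counter(text.lower().split())

def _cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=1):
    """Retrieval phase: rank stored documents by similarity to the query."""
    q = _vec(query)
    return sorted(DOCUMENTS, key=lambda d: _cosine(q, _vec(d)), reverse=True)[:k]

def augment(query, passages):
    """Augmentation phase: prepend the retrieved passages to the prompt."""
    return "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {query}"

# The augmented prompt now carries the most relevant stored passage;
# the generation phase would pass it to an LLM.
prompt = augment("Where was the user going for food?",
                 retrieve("user going food directions hamburger"))
print(prompt)
```

Real systems replace the bag-of-words step with dense embeddings and a vector index, but the three-phase structure is the same.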


Now an LLM can actually provide the owner of the technology with in-depth information sourced from its users. The owner of the tool could ask the LLM things like:


  • What percentage of users support Trump? Divide into yes, no and unknown, and send me the user IDs if you have them.

  • Which users are dealing with mental health problems? Which users are most easily influenced?

  • Which people have shown, through their interactions, that they are prone to start a protest against their current government? Which politicians have shown openness to bribes?

  • Who is radicalizing now?


Even when not used for such nefarious purposes, it is undeniable that much more information is being collected and processed far more efficiently, making it extremely important to protect the population against excessive information sharing.


In conclusion

It very much makes sense that, given this accumulation of reasons, nations have decided to raise the minimum age for using social media:


  • Existing concerns about social media, documented since Facebook's rise to popularity around 2010

  • Recent hostility of the US towards its EU allies, following the Greenland threats and the trade war

  • Upcoming social media “features” that are driven by AI, such as deepfakes

  • New threats such as “nudification” apps and voice cloning

  • Platform owners receiving excessive and explicit user information

  • New possibilities of data owners to synthesize and analyze mass data points


My personal conclusion is that, regardless of what the regulation in Estonia says, I will not expose my own children to social media at least until the age of 16, and longer if possible.