In 2025, Italy stands at a critical juncture in the intersection of artificial intelligence (AI) and data protection.
As AI technologies become increasingly embedded in everyday life — particularly in healthcare — Italian regulators, led by the Garante per la protezione dei dati personali (Italy's data protection and privacy authority), have taken decisive steps to regulate their use, communicate risks, and safeguard personal data.
A recent memo underscores the risks of using generative AI tools to reason over medical data without proper oversight or safeguards. This blog explores the Garante's recent actions, notable healthcare data breaches, and the evolving AI cybersecurity regulatory landscape in Italy and Europe.
The Garante’s latest stance on AI
Italy’s Garante has emerged as one of Europe’s most vigilant data protection authorities. In 2025, it issued several high-profile decisions that reflect its commitment to transparency, accountability, and data security in the age of AI.
In January 2025, the Garante blocked DeepSeek, an AI platform accused of processing vast amounts of personal data without adequate safeguards. The decision was based on the potential risks posed to millions of Italians, and it underscored the need for robust data governance in AI systems.
Perhaps the most significant action came against OpenAI. The Garante fined the company €15 million for violations related to ChatGPT, citing a lack of legal basis for data processing, inadequate transparency, and failure to implement proper age verification.
A bug that exposed user conversation histories further exacerbated the situation. As part of the resolution, OpenAI was required to launch a six-month public awareness campaign across Italian media to educate users about AI and data protection.
These actions signal a clear message: AI companies operating in Italy must prioritize user rights and data security or face serious consequences.
Healthcare cybersecurity: A sector under pressure
The Garante's recent press release focuses on the healthcare sector, which remains a critical area of concern due to the sensitivity of healthcare data and its susceptibility to cyberattacks.
Italy has experienced several significant healthcare data breaches in recent years. One of the most disruptive incidents occurred in 2021, when a ransomware attack targeted the Lazio Region’s COVID-19 vaccine scheduling systems. This attack exposed the fragility of public health infrastructure and highlighted the urgent need for stronger cybersecurity measures.
In 2023, the ASL Napoli 3 Sud health authority suffered a breach that exposed thousands of patients’ personal health data. Investigations revealed poor cybersecurity hygiene and outdated systems as contributing factors.
Smaller regional health authorities continue to report phishing attacks, ransomware incidents, and unauthorized access to medical records, indicating that the threat landscape remains active and evolving.
These breaches have prompted calls for better cybersecurity protocols, increased investment in data security measures, and comprehensive digital hygiene training for healthcare staff.

Regulatory shifts in Italy and Europe
Italy’s regulatory landscape is evolving rapidly, influenced by both domestic priorities and broader European initiatives.
The Garante has issued guidance on AI and web scraping, warning companies against indiscriminate data collection from public sources. Even publicly available data must be processed in compliance with GDPR, reinforcing the principle that data protection applies universally.
At the European level, the EU AI Act, finalized in 2024, introduces a tiered risk-based framework for AI regulation. High-risk AI systems — such as those used in healthcare diagnostics or patient monitoring — must meet stringent requirements for transparency, data governance, and human oversight.
Neighboring countries are also stepping up their efforts. France’s CNIL has launched audits on AI systems that may be used in hospitals, focusing on algorithmic bias and data protection.
Germany’s BfDI advocates for stronger encryption standards in medical data exchanges and emphasizes the importance of patient consent in AI-driven diagnostics. Even in the United States, HIPAA is undergoing changes to accommodate AI security.
These regional trends reflect a growing consensus in Europe: AI must be regulated to protect citizens’ rights, especially in sensitive sectors like healthcare.
AI in healthcare: Promise and peril
AI is transforming Italian healthcare, offering new tools for diagnostics, treatment planning, and patient engagement. Hospitals are adopting commercially available AI tools such as Microsoft Copilot and ChatGPT for business operations and corporate teams.
In more technical realms like radiology, AI algorithms assist in detecting anomalies in imaging scans, while in pathology, machine learning models help identify cancerous cells with high accuracy. Telemedicine platforms are integrating chatbots and virtual assistants to support remote consultations and improve patient access.
However, these innovations come with risks. No-code and low-code AI applications often access existing corporate systems or electronic health record (EHR) systems, or require large datasets hosted in cloud infrastructure to train models. If training data is skewed or poisoned, AI models may produce biased outcomes, potentially affecting patient care.
AI systems can also inadvertently expose sensitive data if misconfigured or granted excessive access, increasing the risk of data leakage through unintended outputs or interactions. And when LLMs are trained on proprietary or regulated information without proper safeguards, they may unintentionally reveal confidential data, leading to potential data loss or compliance violations.
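One common control against this kind of leakage is to redact detectable identifiers before a prompt ever reaches an external model. The sketch below is illustrative only, assuming regex-detectable identifiers (an Italian codice fiscale, email addresses, phone numbers); real deployments typically pair pattern matching with NER-based detection, since free-text names and clinical details cannot be caught by regexes alone.

```python
import re

# Illustrative patterns for identifiers commonly found in Italian health records.
# These are assumptions for the sketch, not an exhaustive PHI detector.
PATTERNS = {
    "[CODICE_FISCALE]": re.compile(r"\b[A-Z]{6}\d{2}[A-Z]\d{2}[A-Z]\d{3}[A-Z]\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\+?\d[\d \-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace detectable identifiers with placeholder tokens before
    the text is sent to an external LLM service."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Patient RSSMRA85M01H501Z, contact mario.rossi@example.com, reports chest pain."
print(redact(prompt))
# → Patient [CODICE_FISCALE], contact [EMAIL], reports chest pain.
```

The placeholder tokens preserve enough context for the model to reason over the case while keeping direct identifiers out of third-party infrastructure and logs.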
The Garante has urged healthcare providers to conduct Data Protection Impact Assessments (DPIAs) before deploying AI tools and to ensure that patients are informed about how their data is used. These assessments help identify potential risks and ensure that appropriate safeguards are in place.
Looking ahead: What stakeholders should do
As Italy tightens its grip on AI and cybersecurity, stakeholders across sectors must adapt to the changing landscape. Mature healthcare organizations in Italy and elsewhere in the EU are developing a comprehensive AI security strategy around several key elements:
- Implement continuous AI risk monitoring: Establish real-time oversight of AI systems and third-party tools to detect and remediate risks as they evolve, ensuring sensitive data remains protected throughout its lifecycle.
- Classify and label AI-generated data: Develop systems to identify and categorize both human-created and AI-generated content, applying sensitivity labels to ensure proper handling and compliance with healthcare regulations.
- Control AI access to sensitive data: Use access intelligence to monitor which AI systems and users can interact with protected health information, and automatically revoke excessive or outdated permissions.
- Detect abnormal AI behavior: Monitor AI prompts and responses for policy violations, set behavioral baselines, and generate alerts for unusual activity that could signal insider threats or compromised accounts.
- Prepare for regulatory compliance: Stay informed about emerging AI regulations such as the EU AI Act, document AI usage and data flows, and implement controls that demonstrate adherence to healthcare-specific compliance standards.
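The behavioral-baseline idea above can be sketched simply: track each user's normal AI-prompt volume and alert when today's activity deviates sharply from it. The function, sample data, and three-sigma threshold below are illustrative assumptions, not any specific product's detection logic.

```python
from statistics import mean, stdev

def flag_anomaly(daily_counts: list[int], today: int, threshold: float = 3.0) -> bool:
    """Return True when today's AI-prompt count deviates from the user's
    historical baseline by more than `threshold` standard deviations."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        # Perfectly flat baseline: any change at all is unusual.
        return today != mu
    return abs(today - mu) / sigma > threshold

baseline = [12, 15, 11, 14, 13, 12, 16]  # prompts per day over the past week
print(flag_anomaly(baseline, today=140))  # sudden spike → True
print(flag_anomaly(baseline, today=13))   # normal day → False
```

In practice the baseline would be maintained per user and per AI tool, and a flagged spike would feed an alerting pipeline for review rather than trigger automatic action.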
Healthcare security in the AI era
Italy’s crackdown on AI and cybersecurity in 2025 reflects a deep commitment to protecting personal data, especially in healthcare.
The Garante's assertive actions, combined with broader European regulatory shifts, are shaping a future where AI can thrive responsibly — but they place the onus on security leaders to engage early and often with their organization's AI rollouts.
As AI continues to evolve, Italy’s posture offers valuable lessons for other nations: AI security is data security, and data security is patient security.
What should I do now?
Below are three ways you can continue your journey to reduce data risk at your company:
- Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.
- See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.
- Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.
