The EU Digital Rules Simplification: What Does It Mean for Business?
- 16.12.2025
- Business, Data Privacy, Top articles
Over the past few years, the world has been working to understand and implement GDPR, prepare for the AI Act, figure out the Data Act — and just when legal and technical teams finally began establishing processes, the rules are changing again. On November 19, the EU plans to release the Digital Simplification Package, which could become the most significant overhaul of digital regulation since 2016.
In this article, we’ll break down what exactly the Commission is proposing, why these changes have emerged, and most importantly: what they mean in practice for companies, DPOs, AI teams, and lawyers.
Why Did the EU Initiate These Reforms?
Regulators have accumulated too many complaints about their own regulation:
- GDPR is too complex, inconsistent between countries, and scales poorly.
- ePrivacy never solved the cookie chaos it was meant to address.
- The AI Act enters a world where companies have already learned that training models on real data is not just a legal question but a question of survival.
- The Data Act turned out to be too burdensome, especially for cross-border services and SMEs.
The Commission decided to take a pragmatic approach: reduce duplication, give businesses predictability, and ease compliance requirements, without abandoning privacy standards or AI safety.
GDPR: Less Bureaucracy, More Focus on Risks
Single Incident Portal
What cybersecurity teams have been waiting for: instead of four different regulators and multiple forms, one portal and a “report once, share many” model (it will likely work something like this: report once, and the system distributes the report where needed).
In practice, this means:
- large SaaS platforms will be able to reduce response time and eliminate parallel reports for NIS2, DORA, and GDPR;
- DPOs will be able to assess incidents faster rather than spending time on bureaucracy;
- large tech companies will be able to avoid situations where different regulators require different data sets.
Until now, large organizations ran parallel processes in which the same data had to be reported differently to each regulator.
Raising the Notification Threshold + 96 Hours Instead of 72
The EU effectively acknowledges that many companies notified “just to be safe,” even when risks were minimal.
Now notification is required only if the incident poses a high risk to the data subject.
Example: if you accidentally sent an email to one customer instead of another, but it contains no data that can uniquely identify a person — most likely, notification is no longer required.
The breach notification deadline increases from 72 hours to 96: a compromise between business realities and oversight. This is especially important for companies operating across multiple time zones.
Limiting DSAR Abuse
The EU is acknowledging for the first time what lawyers have been saying for years: DSARs are often used not to protect rights, but as a pressure tool in employment and civil disputes.
Now, if a DPO sees that a request was clearly submitted for a different purpose (for example, an employee demands all their data right in the middle of legal proceedings to complicate the employer’s work), the company will be able to:
- refuse the request;
- or charge a fee for processing it.
This saves hundreds of hours of legal department work annually.
Privacy Notice Not Required?
Yes — if the user already knows who is processing the data and why.
But there are clear exceptions:
- cross-border transfer,
- automated decisions,
- any high-risk processing.
So the logic here is simple: remove obvious bureaucracy without losing transparency.
ePrivacy Being Rewritten Almost from Scratch
The EU is effectively acknowledging: the current cookie consent model has failed. Now the main idea is to stop forcing users to click “Accept All” when tracking isn’t even involved.
Consent No Longer Needed for Analytics and Technical Security
The EU proposes a very clear criterion:
If a cookie or tracking technology is used only for:
- aggregated analytics (when data does not allow tracking an individual person),
- site protection (for example, DDoS prevention, fraud detection),
— consent is not required.
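The “aggregated analytics” criterion can be illustrated with a minimal sketch: counting page views without retaining anything that could identify a visitor. The function name and data shape below are hypothetical, chosen purely for illustration.

```python
from collections import Counter

def aggregate_pageviews(requests):
    """Count hits per path while discarding anything that could identify
    a visitor: IPs, cookies, and user agents are never stored, so the
    resulting data cannot be used to track an individual person."""
    return Counter(r["path"] for r in requests)

hits = aggregate_pageviews([
    {"path": "/pricing", "ip": "203.0.113.7"},   # IP is read but never kept
    {"path": "/pricing", "ip": "198.51.100.2"},
    {"path": "/blog",    "ip": "203.0.113.7"},
])
```

Because only totals per page survive, no cookie or consent banner would be needed for this kind of measurement under the proposed criterion.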
GDPR Becomes the “Primary Law” for Tracking
This is one of the most important changes. Currently, ePrivacy recognizes only consent, which led to endless banners popping up on every site. Now the logic is: if a tracking technology affects personal data, then GDPR applies, not ePrivacy.
This means: the controller can choose another legal basis, including legitimate interest, if the balance of interests is met.
But for advertising network cookies, this still won’t work.
Transition to Universal Preference Mechanisms
The EU wants to replace the chaos of 20 types of banners with a single mechanism: the browser sends a machine-readable signal (similar to Global Privacy Control), and sites must honor it. After 6 months of adaptation, this will become mandatory.
Only media companies will receive an exception, allowed to continue using their monetization models without “hard” restrictions.
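Honoring a machine-readable opt-out could be as simple as a server-side header check. The sketch below assumes the `Sec-GPC: 1` header used by today’s Global Privacy Control proposal; the function names and the media-exception flag are hypothetical illustrations of the rules described above, not part of any official text.

```python
def gpc_opt_out(headers: dict) -> bool:
    """True if the request carries a Global Privacy Control opt-out.
    The GPC proposal signals it with the HTTP header `Sec-GPC: 1`."""
    return headers.get("Sec-GPC", "").strip() == "1"

def tracking_allowed(headers: dict, is_media_site: bool = False) -> bool:
    # Hypothetical flag reflecting the proposed media exception,
    # which would let media sites keep their monetization models.
    if is_media_site:
        return True
    return not gpc_opt_out(headers)
```

In this model the user sets the preference once in the browser, and every site receives the same unambiguous signal instead of presenting its own banner.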
Learn What Experts Think About AI Act Changes
Three trainers from our leading AI compliance training program — Artificial Intelligence Compliance Professional for Europe — shared their insights on the EU digital law changes in a webinar. Watch the recording on our LinkedIn profile.
DPIA and Special Categories of Data: Consolidation and Pragmatism
Common DPIA Template
The EDPB will receive a mandate to create EU-wide lists of situations when a DPIA is mandatory, and a unified template. This means the end of the “patchwork quilt”: companies won’t have to guess what the local regulator in France or Poland meant.
What will this give businesses?
- global companies no longer need to adapt DPIAs for each country;
- the procedure becomes predictable and cheaper;
- DPOs no longer spend time on interpreting national requirements.
Narrow Definition of Special Category Data
Today the logic is: if data can be used to infer a sensitive attribute, it’s already a special category. The EU proposes moving to a model: special category — only what directly reveals the attribute.
This is a serious simplification for AI developers working with large text and behavioral datasets.
Exception for “Residual” Sensitive Data in AI
If a company tries to clean a dataset of sensitive data but some such data still remains (inevitable at large data volumes), the EU proposes allowing such processing with strong protection safeguards.
This allows:
- training LLMs,
- working with historical datasets,
- using previously collected data.
On-Device Biometrics Explicitly Permitted
If biometric data does not leave the device, but is used inside a phone, laptop, or IoT — it is not considered high risk. This codifies the already established practice of Apple, Android, and biometric lock manufacturers.
The AI Act Is Changing, Despite Its Recent Entry into Force
Training Models on Personal Data as Legitimate Interest
The most discussed part of the reform — recognition that training models on personal data can be a legitimate interest.
This doesn’t mean carte blanche, but makes possible:
- training internal models for search, summarization, and classification;
- processing user interaction logs for fine-tuning;
- developing embedded AI functions.
However, the company must:
- conduct a Legitimate Interest Assessment,
- consider the impact on users,
- implement safeguards (anonymization, differential privacy, access restrictions, output data filtering, etc.).
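One of the listed safeguards, differential privacy, can be sketched with the classic Laplace mechanism for a count query. This is a textbook illustration of the technique, not anything prescribed by the proposal itself.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.
    A count query has sensitivity 1, so the noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)
```

With epsilon around 1.0 the released count typically deviates from the true count by only a few units, which is usually acceptable for aggregate reporting while masking any single individual’s contribution.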
Amendments to the General Data Protection Regulation (GDPR): AI Bias Detection
Proposals to amend EU digital legislation include introducing new exceptions to the prohibition on processing special categories of personal data (Article 9 GDPR). These changes are aimed at allowing processing of such data for purposes of bias detection and mitigation in artificial intelligence systems.
Organizations using this exception must comply with strict conditions:
- They must confirm that the objectives of bias detection and mitigation cannot be achieved by using synthetic or anonymized data.
- It is necessary to implement adequate safeguards and use state-of-the-art standards.
- Organizations will need to update their Records of Processing Activities (RoPA), including clear justification for why bias detection objectives could not be achieved using another type of personal data.
Extension of Deadlines for High-Risk AI Systems (AI Act)
Because EU member states were unable to designate competent authorities for enforcement of the AI Act on time, changes have been proposed to the effective dates of obligations for high-risk systems.
Originally, the compliance deadline for high-risk systems was set for August 2026.
New proposed compliance deadlines:
- For high-risk AI systems listed in Annex III, the proposed new deadline is December 2, 2027.
- For high-risk AI systems listed in Annex I (related to safety), the proposed new deadline is August 2028.
After the European Commission confirms the availability of necessary standards and support tools for implementation, a countdown will begin for the obligations to take effect: six months for systems from Annex III and twelve months for systems from Annex I.
Supervision and Enforcement Under the AI Regulation
In order to centralize supervision and reduce governance fragmentation, the AI Act will be amended.
- Centralized enforcement: the AI Office of the European Commission will have exclusive competence for supervision and enforcement of obligations regarding AI systems that are integrated into or are part of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs).
- Absence of a “one-stop shop” mechanism: despite the push for centralization, the AI Act does not provide for a “one-stop shop” mechanism, unlike GDPR.
Simplifications for Small and Medium Enterprises (SMEs)
To reduce administrative costs and stimulate innovation, especially for small and medium enterprises (SMEs), it is proposed to expand simplified requirements.
- Lightened requirements: Small businesses face less stringent rules regarding technical documentation and quality management systems.
- Proportionality to size: Compliance obligations, including quality management system requirements, must be implemented in a manner that is proportionate to the size of the organization.
- Although some requirements are simplified, organizations remain fully bound by the Regulation.
Don't wait for the EU AI Act to completely change the regulatory landscape.
Learn the essential requirements for AI systems in our Artificial Intelligence Compliance Professional for Europe course.
Data Act and Repeal of Old Regulations: Bringing the Ecosystem into Order
This block of reforms concerns laws that regulate data exchange between companies, state access to data, and rules for cloud service operations.
Too many data laws have accumulated in the EU, causing confusion.
The Data Governance Act (DGA) is one of them. It regulated the role of data intermediaries, data exchange conditions, and state access. Separately, there was another law, the Free Flow of Non-Personal Data Regulation (FFNDR), which was meant to facilitate the storage and movement of non-personal data across the EU.
Both documents existed in parallel with the Data Act. The result was a complex system in which:
- the same issues were regulated by three different laws,
- requirements overlapped,
- it was difficult for companies to understand which law to apply.
The EU is doing a “spring cleaning”: the DGA and FFNDR are being repealed, and their useful parts are being transferred to the Data Act.
What does this mean in practice?
- less duplication of regulations,
- fewer legal disputes,
- easier to build products that work with data across the entire EU.
The State Will Be Able to Demand Data Only in Truly Emergency Situations
Previously, laws allowed state authorities to demand access to company data in cases of “exceptional necessity”. The wording was too vague: many feared abuse, especially technology companies and B2B data owners.
Now the rule is being clarified: access is permitted only in cases of a real emergency situation.
What is an “emergency situation”?
- natural disasters,
- epidemics and threats to public health,
- serious threats to public safety,
- large-scale cyberattacks on infrastructure.
This does not apply to routine requests from state bodies, tax audits, interest in customer data, etc.
For businesses, this reduces the fear that:
- customer data could fall into the hands of state bodies without serious grounds;
- or that companies will be required to change their data storage architecture due to uncertain requirements.
Companies Are Not Required to Disclose Trade Secrets if There Is a Risk of Leakage Outside the EU
Previously, the Data Act could require disclosure of certain data to another business or the state as part of access regulation. The problem: that data could include trade secrets. Now an important protection is being introduced: if there is a risk that such data could end up in third countries with low protection (for example, through courts, subcontractors, national legislation), the company can refuse.
This protects:
- algorithms,
- technical specifications,
- internal models,
- sensitive business data.
In other words: companies will no longer be forced to share key know-how under the pretext of the Data Act.
Clarification of “Cloud Switching” Rules — Transitioning Between Cloud Providers
The Data Act established strict requirements aimed at enabling companies to easily switch cloud providers (for example, AWS → Azure). But in reality, many services cannot be transferred by simply copying data, especially:
- custom solutions,
- AI models trained on proprietary data,
- infrastructure with unique architecture,
- SaaS platforms built on proprietary technologies.
The EU recognizes this problem and is softening the rules:
- for custom solutions, switching may not be required,
- for small and medium companies — additional exceptions,
- requirements will only apply to new contracts after 2025.
The idea is simple: not to force businesses to do the impossible or economically senseless.
Pros and Cons of the New Digital Regulation
Pros
Less bureaucracy, more clarity.
Companies no longer need to guess which body to send notifications to and in what form. One portal, unified rules, less risk of making mistakes.
GDPR works again on the principle of “assess risk, not check boxes”.
The regulator is returning to the original logic: formal requirements are not what matters, but how genuinely dangerous data processing is.
Cookie banners stop being an endless annoyance.
If the technology is needed for the site to function or for honest analytics without profiling, consent is no longer required. Interfaces will finally become cleaner.
AI teams receive a transparent “green light” for training models.
Data processing for AI training is officially recognized as a legitimate interest when safeguards are observed. This removes a huge practical barrier.
DPIA becomes a unified process for all of Europe.
Companies no longer need to adapt to 27 different national lists. One template, one set of criteria: simpler, faster, and cheaper for companies operating in multiple countries.
Cons
EDPB receives much more power.
When one body determines rules for all of Europe, it simplifies the system, but creates a risk of excessive centralization and stricter control.
The concept of personal data will become more “blurred”.
If what now matters is whether your specific organization can identify subjects, new gray areas of interpretation and disputes with regulators will emerge.
The media exception may cause dissatisfaction.
Less will be required from media: they will be allowed to ignore universal tracking opt-out signals. On one hand, this is support for independent journalism. On the other — it looks like a privilege.
Narrowing of special category data poses risks for vulnerable groups.
If user behavior data is no longer considered sensitive, even when much can be inferred indirectly from it, this could lead to increased risk of discrimination in AI models.
Absence of a “one-stop shop” mechanism for the AI Act.
This absence could lead to potential duplication of jurisdiction and powers between various national authorities. As a result, the same company could face multiple simultaneous enforcement actions in different member states.
What Does All This Mean for Business?
The EU is gradually moving toward more realistic digital regulation: less symbolism, more applicability. In practice, we will see:
- simplification of compliance processes, reduction in the cost of meeting requirements;
- growth in the investment attractiveness of AI projects without the constant fear that “data processing is unlawful”;
- shift in focus from paper reporting to real risk management;
- need to review privacy notices, DSAR databases, cookie processing policies and tracking mechanisms;
- a new wave of guidance from EDPB that will determine how flexible these rules will be.
To summarize: The EU is not canceling the AI Act, GDPR, and Data Act — it is making them more suitable for a reality in which data, AI, and risks are evolving faster than legislation.
We help companies reach AI Regulation Compliance
Understanding the legislation is the first step. But you also need to quickly align your systems with compliance requirements. We’re here to support you through every phase of this process.
Our experts will conduct a comprehensive audit of your AI systems, identify risks and non-compliance issues, and develop a personalized roadmap to bring your business into compliance. This will help you avoid fines and protect your company’s reputation.
We’ll teach the fundamentals of artificial intelligence and its regulatory principles in Europe based on the EU AI Act. We’ll explain the connection between privacy and AI systems and how to minimize risks to personal data during their development.
A practical course that will give you and your team clear knowledge about the Regulation, its risks, and methods for safe AI usage. You’ll learn how to properly assess AI systems and implement compliance requirements in practice.
Reach Data Privacy & AI Compliance
Fill in the form and get a free consultation.
- Implementation of 7+ legal frameworks.
- Individual and corporate training on the GDPR and international standards.
- Development of personal data protection systems within organizations.
- Custom services upon request.