Maciej Maciejowski · 9 min read

EU AI Act for IT & Telecommunications

What is the EU AI Act?

The EU AI Act is the European Union's landmark regulatory framework governing the development, deployment, and use of artificial intelligence systems across all member states. Adopted in 2024 and entering into force in August of that year, with its obligations applying in stages through 2027, it establishes a risk-based classification system that places different compliance obligations on AI systems depending on the potential harm they pose to individuals and society. It is the world's first comprehensive legal framework for AI and sets a global precedent for how governments regulate intelligent systems.

EU AI Act and the IT & Telecommunications Industry

The IT and Telecommunications sector sits at the very center of the EU AI Act's scope. Telecom operators, cloud service providers, software vendors, and hardware manufacturers are simultaneously providers of AI systems, deployers of AI in their own operations, and the infrastructure layer upon which other industries' AI solutions run. This triple exposure makes compliance uniquely complex and far-reaching for the sector.

Network operators, for example, deploy AI-powered traffic management and quality-of-service optimization tools that make real-time decisions affecting millions of users. These systems must now be assessed for their risk classification under the Act. Similarly, telecommunications companies use AI-driven fraud detection engines that analyze call patterns and billing data — systems that touch on individual financial and behavioral data and therefore attract specific transparency and accuracy requirements.

Cloud and infrastructure providers such as hyperscalers and regional data center operators face obligations as they host and distribute AI models used by third parties. If a telecom company provides an AI-as-a-service platform, it qualifies as a provider under the Act and must ensure conformity assessments, technical documentation, and post-market monitoring systems are in place before making those services available in the EU market. Customer service chatbots deployed by telecom operators to handle billing inquiries, plan upgrades, or technical support fall under limited-risk AI provisions, requiring specific transparency disclosures to end users. Predictive maintenance AI running across fiber or cellular infrastructure, while lower risk, still demands auditability and data governance.
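As a concrete illustration, the limited-risk transparency duty for chatbots can be met by making an AI disclosure the first message of every session. This is a minimal sketch with hypothetical names, not code from any particular chatbot framework:

```python
# The Act requires users to be told they are interacting with an AI system.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "You can ask to be transferred to a human agent at any time."
)

def start_chat_session(greeting: str) -> list[str]:
    """Open a support session with the AI disclosure as the very first
    message the customer sees, before any substantive reply."""
    return [AI_DISCLOSURE, greeting]

messages = start_chat_session("Hi! How can I help with your plan today?")
```

Placing the disclosure in the session-opening logic, rather than in individual reply templates, makes it harder for later interface changes to accidentally drop it.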

Key Requirements

  • Risk classification and conformity assessment: IT and telecom companies must classify every AI system they develop or deploy according to the Act's four-tier risk pyramid — unacceptable, high, limited, and minimal risk. High-risk systems, such as AI used in critical telecom infrastructure management or employment screening tools used by HR departments, require a formal conformity assessment before deployment.
  • Technical documentation and transparency: Providers of high-risk AI systems must prepare and maintain comprehensive technical documentation covering the system's design, training data, architecture, intended purpose, and performance metrics. This documentation must be available to national competent authorities on request.
  • Human oversight mechanisms: High-risk AI systems must be designed to allow human operators to monitor, intervene, override, or shut down the system at any point. For telecom network management AI, this means human engineers must retain the ability to override automated routing or capacity decisions.
  • Accuracy, robustness, and cybersecurity: AI systems classified as high-risk must meet defined standards of accuracy and must be resilient against attempts to manipulate their outputs. Telecom companies relying on AI for intrusion detection or network anomaly identification must ensure these systems are themselves hardened against adversarial attacks.
  • Data governance and training data quality: Training, validation, and testing datasets used for high-risk AI must be relevant, sufficiently representative, and free from errors that could lead to discriminatory or harmful outcomes. This is directly relevant to telecom providers training fraud detection models on customer behavioral data.
  • Registration in the EU AI database: High-risk AI systems must be registered in a publicly accessible EU database before being placed on the market or put into service. IT vendors supplying AI tools to telecom operators must complete this registration prior to commercial deployment.
  • Transparency obligations for limited-risk AI: Chatbots and AI-generated content tools used in customer interactions must clearly disclose to users that they are interacting with an AI system. This applies directly to the virtual assistants deployed across telecom customer service portals and mobile apps.
  • Post-market monitoring and incident reporting: Providers must implement post-deployment monitoring systems and report serious incidents or malfunctions to national authorities. For telecom infrastructure AI, this means establishing feedback loops between automated network management systems and compliance teams.
  • General-purpose AI model obligations: Large language models and other general-purpose AI systems that are integrated into telecom products or services carry additional requirements under the Act, including publication of training data summaries and compliance with EU copyright law.
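The four-tier risk pyramid described above can be sketched as a simple rule-based triage helper. The tier names come from the Act; the trigger keywords and the function itself are illustrative assumptions for a first-pass screening, not a substitute for a legal assessment:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # Annex III areas requiring conformity assessment
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # no specific obligations beyond general law

# Illustrative trigger phrases only -- real classification is a legal exercise.
PROHIBITED = {"social scoring"}
ANNEX_III = {"critical infrastructure management", "employment screening",
             "credit scoring", "biometric identification"}
TRANSPARENCY = {"chatbot", "ai-generated content"}

def classify(use_case: str) -> RiskTier:
    """First-pass triage of a use-case description into a risk tier."""
    u = use_case.lower()
    if any(p in u for p in PROHIBITED):
        return RiskTier.UNACCEPTABLE
    if any(a in u for a in ANNEX_III):
        return RiskTier.HIGH
    if any(t in u for t in TRANSPARENCY):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A triage function like this is useful for flagging systems that need deeper review; every HIGH or UNACCEPTABLE hit should then go to legal counsel.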

Implementation Steps for IT & Telecommunications Companies

  1. Conduct a full AI inventory audit: Begin by mapping every AI system currently in use or under development across the organization. This includes not only externally facing tools such as customer service bots and recommendation engines, but also internal systems used in HR, network operations, fraud detection, billing, and predictive analytics. Document the purpose, data inputs, decision outputs, and current oversight mechanisms for each system.
  2. Apply the risk classification framework: For each identified AI system, determine its risk category under the Act. Use the Act's Annex III list of high-risk application areas as a reference point. Network management AI that can affect access to critical digital infrastructure warrants serious scrutiny, as do any AI systems involved in employee monitoring, customer credit scoring, or real-time biometric identification.
  3. Prioritize compliance for high-risk systems: Focus resources first on systems that fall into the high-risk category. This means commissioning conformity assessments, preparing technical documentation packages, establishing human oversight protocols, and validating training data quality. Engage legal counsel with AI regulatory expertise and coordinate with the technical teams who built and maintain the systems.
  4. Update contracts and supplier agreements: If your organization deploys AI systems built by third-party vendors — as most telecom companies do — review all supplier contracts to ensure the vendor has accepted appropriate responsibilities under the Act. Providers placing AI on the EU market bear the primary compliance burden, but deployers retain obligations as well. Include specific AI Act compliance clauses in all new technology procurement agreements.
  5. Implement transparency disclosures for customer-facing AI: Update terms of service, interface design, and customer communication flows to include clear, user-facing disclosures wherever AI systems are operating. For chatbots and AI-assisted support tools, add explicit disclosure language at the start of each interaction that complies with the Act's transparency requirements.
  6. Establish an internal AI governance function: Assign ownership of AI Act compliance to a dedicated role or cross-functional team. This should include representatives from legal, IT, data science, and risk management. Develop an internal AI governance policy that defines how new AI systems are evaluated, approved, monitored, and retired in line with the Act's requirements.
  7. Train technical and operational teams: Ensure that software engineers, data scientists, network engineers, and product managers understand the Act's requirements as they apply to their daily work. Compliance failures often originate in technical decisions made without awareness of regulatory constraints. Regular training and updated development guidelines are essential.
  8. Register high-risk systems and prepare for authority oversight: Complete registration of high-risk AI systems in the EU AI database before the applicable compliance deadlines. Prepare documentation packages that can be submitted to national market surveillance authorities on request, and establish internal protocols for handling regulatory inquiries and audits.
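The inventory from step 1 can be captured as one structured record per system. The fields below mirror the audit attributes named above (purpose, data inputs, decision outputs, oversight) plus the classification and registration status from steps 2 and 8; this is an assumed schema for illustration, not an official template:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str                       # e.g. "fraud detection on call records"
    data_inputs: list[str]             # data sources feeding the system
    decision_outputs: str              # what the system decides or recommends
    oversight: str                     # current human oversight mechanism
    risk_tier: str = "unclassified"    # filled in during step 2
    registered_in_eu_db: bool = False  # step 8, high-risk systems only

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="fraud-engine",
        purpose="fraud detection on call patterns and billing data",
        data_inputs=["call detail records", "billing history"],
        decision_outputs="flags accounts for manual fraud review",
        oversight="analyst reviews every flagged account",
    ),
]

def pending_classification(systems: list[AISystemRecord]) -> list[str]:
    """Names of systems still awaiting a risk-tier decision (step 2)."""
    return [s.name for s in systems if s.risk_tier == "unclassified"]
```

Keeping the inventory machine-readable makes it straightforward to report which systems still lack a classification, an oversight mechanism, or an EU database registration.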

Frequently Asked Questions

Does the EU AI Act apply to telecom companies based outside the European Union?

Yes. The Act applies to any provider that places an AI system on the EU market or puts one into service within the EU, regardless of where the provider is established. A US-based cloud provider or a software company headquartered in Asia that sells AI-powered network management tools to European telecom operators must comply with the Act's requirements for those systems. This extraterritorial reach mirrors the approach taken by the GDPR and is central to the Act's design.

What are the penalties for non-compliance with the EU AI Act?

The Act establishes a tiered penalty structure. For violations involving prohibited AI practices — such as deploying social scoring systems — fines can reach 35 million euros or seven percent of global annual turnover, whichever is higher. For violations of other obligations applicable to high-risk AI systems, the maximum fine is 15 million euros or three percent of global turnover. For providing incorrect or misleading information to authorities, the ceiling is 7.5 million euros or one percent of turnover. For large enterprises operating in telecommunications, these figures represent material financial exposure.
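"Whichever is higher" means the applicable ceiling is the maximum of the fixed amount and the turnover percentage. A quick sketch of the three tiers quoted above, using an assumed 2 billion EUR global turnover for illustration:

```python
def fine_ceiling_eur(fixed_cap_eur: float, turnover_pct: float,
                     global_turnover_eur: float) -> float:
    """Penalty ceiling: the higher of the fixed cap and the given
    percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# The three tiers in the Act, applied to a telecom with 2 bn EUR turnover:
turnover = 2_000_000_000
prohibited = fine_ceiling_eur(35_000_000, 0.07, turnover)  # 140 m EUR
high_risk  = fine_ceiling_eur(15_000_000, 0.03, turnover)  # 60 m EUR
misleading = fine_ceiling_eur(7_500_000, 0.01, turnover)   # 20 m EUR
```

For any large operator, the percentage branch dominates; the fixed caps bind only for companies with relatively modest turnover.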

How does the EU AI Act interact with the GDPR for telecom companies that process personal data through AI?

The EU AI Act and the GDPR operate in parallel and must both be complied with simultaneously. A telecom company training a fraud detection model on customer call records must satisfy both the AI Act's data governance requirements for training datasets and the GDPR's lawful basis, data minimization, and purpose limitation obligations. In many cases, a Data Protection Impact Assessment required under the GDPR will overlap with the conformity assessment process under the AI Act, and companies should coordinate these exercises to avoid duplication and ensure consistency.

Are AI systems used purely for internal network operations considered high-risk under the Act?

Not automatically. The risk classification depends on the specific function and potential impact of the system. AI used for routine internal network performance optimization in ways that do not directly affect individuals' access to services or critical infrastructure is likely to fall under a lower risk category. However, AI systems that manage access to essential digital services, make decisions that affect a large number of users simultaneously, or operate as part of critical infrastructure may attract a higher risk classification. Each system must be assessed individually based on its design and deployment context.

Summary

The EU AI Act represents a fundamental shift in how artificial intelligence must be governed within the European Union, and for the IT and Telecommunications industry — which both builds and depends on AI at every layer of its operations — the compliance challenge is substantial but manageable with early, structured action. Companies that begin their AI inventory, risk classification, and governance framework development now will be better positioned than those who wait for enforcement to arrive. Taking proactive steps today not only ensures regulatory compliance but also builds the kind of documented, auditable AI infrastructure that customers, regulators, and investors will increasingly demand.
