Introduction
The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689) was adopted in June 2024 and entered into force on 1 August 2024, with most of its provisions scheduled for phased application over the following months and years. As the world’s first comprehensive horizontal regulation on artificial intelligence, the AI Act is not confined to actors located within the Union. It is explicitly designed to capture third-country providers and users whose AI systems are placed on, or whose outputs are used in, the EU market.
From Türkiye’s perspective, this means that the AI Act functions less as a distant piece of foreign legislation and more as a de facto market standard for anyone wishing to do business with the EU in AI-enabled products and services.
This article first outlines the basic architecture of the EU AI Act and then examines, in the specific context of Türkiye, its practical reach, compliance implications, associated risks and emerging opportunities. The analysis draws on EU sources as well as recent Turkish policy documents and doctrinal discussions relating to AI governance.
I. The Core Architecture of the EU AI Act: A Risk-Based Model
Unlike purely sector-based regulations, the AI Act is built around a risk-based approach. AI systems are classified into four main categories: unacceptable-risk, high-risk, limited-risk (transparency obligations) and minimal-risk systems.
Unacceptable-risk systems – such as certain manipulative systems that substantially distort a person’s behaviour, or social scoring systems by public authorities – are prohibited outright. High-risk systems include, inter alia, AI used in healthcare, education, employment, credit scoring, critical infrastructure, law enforcement and justice, as well as certain AI components embedded in regulated products (e.g. medical devices, machinery, vehicles). For this category the Act lays down detailed obligations on risk management, data governance, technical documentation, logging, transparency and human oversight.
The Act also treats general-purpose AI models (GPAI) as a distinct object of regulation. Providers of such models are subject to transparency and documentation obligations and, for models posing “systemic risk”, to additional technical and governance requirements. Large language models and similar “foundation models” fall under this heading when offered in or towards the EU market.
II. Scope and Extraterritorial Reach: What It Means for Türkiye
One of the most consequential elements of the AI Act for Türkiye is its territorial scope. The Regulation applies not only to entities established in the EU but also to:
- Providers placing AI systems or general-purpose AI models on the EU market, regardless of their place of establishment, and
- Deployers and users whose AI systems are used within the EU, or whose AI-generated outputs are used in the Union in a way that affects individuals or legal entities there.
In practice, this means that a company established in Türkiye but offering its AI systems via the cloud to EU-based clients – or integrating AI into products exported into the EU – may fall within the scope of the AI Act even if all development work takes place on Turkish soil.
Turkish law firms and consultancy practices rightly underline this “targeting” or market-destination logic, emphasising that Turkish actors serving EU customers cannot simply ignore the EU regime on the ground that they are located in a third country. For many such actors the AI Act effectively becomes a mandatory external benchmark for market access.
III. Türkiye’s Existing Legal Landscape and Its Interaction with the AI Act
At present, Türkiye does not have a single, overarching “AI Framework Act” comparable in scope to the EU AI Act. Legal control over AI-related activities is largely exercised through general legislation and sector-specific rules, in particular:
- Personal data protection law,
- Consumer protection law,
- Competition and financial regulation,
- Sectoral regimes in banking, insurance and healthcare,
- Product safety and liability rules.
In parallel, Türkiye’s National Artificial Intelligence Strategy (2021–2025) and related policy documents explicitly stress the objective of developing a risk-based, fundamental-rights-oriented AI governance model that is broadly aligned with EU standards. This renders the AI Act not only an external constraint but also a normative template for future Turkish legislation.
Recent Turkish academic and policy work on AI governance has begun to analyse the AI Act in detail, particularly its risk classification, high-risk regime and corporate governance implications. There is a growing expectation that any forthcoming Turkish AI legislation will approximate, to a significant degree, the EU’s approach, both for reasons of value alignment and to preserve economic integration with the Union.
Accordingly, the AI Act exerts influence over Türkiye on two levels:
(i) a direct level, through its impact on private actors engaging with the EU market, and
(ii) an indirect level, through its role as a model for domestic legislative and policy choices in AI governance.
IV. Compliance Implications: Concrete Consequences for Turkish Companies
1. Determining the Relevant Role and Risk Category
For Turkish companies, the first step is to determine what role they play under the AI Act: provider, deployer (user), importer/distributor or manufacturer of products incorporating AI components. Each role attracts a distinct set of obligations.
A Turkish SaaS or API provider offering AI-based services to EU clients will typically qualify as a provider; a company supplying AI-enabled driver assistance systems to automotive manufacturers exporting to the EU may be both a provider and a manufacturer under product safety legislation.
The second step is to classify each AI system by risk level. Autonomous functionality in medical devices, credit scoring engines, candidate screening and ranking tools for employment and education, and many public-sector applications are strong candidates for designation as high-risk systems. By contrast, chatbots used solely for marketing copy or basic customer support are more likely to fall within the limited- or minimal-risk categories, though context can shift this assessment (e.g. if a chatbot provides quasi-professional legal or medical advice).
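The two-step analysis described above can be captured, for internal triage purposes, in a short sketch. The use-case labels and category names below are illustrative assumptions for a first-pass screening tool, not terms defined in the Act; an actual classification requires legal analysis of the Regulation's scope provisions and annexes.

```python
from dataclasses import dataclass

# Illustrative use-case labels; the real high-risk list is set out in the
# Act's annexes and must be assessed case by case.
HIGH_RISK_USE_CASES = {
    "medical_device_function",
    "credit_scoring",
    "candidate_screening",
    "education_assessment",
    "critical_infrastructure",
    "law_enforcement",
}
TRANSPARENCY_USE_CASES = {"chatbot", "content_generation"}

@dataclass
class AISystem:
    name: str
    use_case: str
    serves_eu_market: bool  # placed on the EU market, or outputs used in the EU

def indicative_risk_category(system: AISystem) -> str:
    """Return a non-binding, first-pass risk label for internal triage."""
    if not system.serves_eu_market:
        return "out_of_scope_for_ai_act"  # other Turkish/sectoral regimes still apply
    if system.use_case in HIGH_RISK_USE_CASES:
        return "high_risk_candidate"
    if system.use_case in TRANSPARENCY_USE_CASES:
        return "limited_risk_candidate"
    return "minimal_risk_candidate"

# A Turkish HR-tech tool screening candidates for EU clients:
print(indicative_risk_category(
    AISystem("cv-ranker", "candidate_screening", serves_eu_market=True)
))  # high_risk_candidate
```

The value of such a sketch lies less in the code than in forcing an inventory: every AI system in the portfolio gets an explicit role, use case and market-destination flag before any legal assessment begins.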
2. Organisational and Technical Duties for High-Risk Systems
For systems falling into the high-risk category, EU guidance highlights the importance of a life-cycle risk management system, robust data governance, detailed technical documentation, continuous logging and traceability, appropriate transparency measures and effective human oversight.
For Turkish companies, this is not merely a matter of “ticking boxes” for EU regulators. It amounts to the adoption of a new internal standard of “trustworthy AI by design”. In practical terms, it often requires:
- Establishing an AI governance function (committee or at least a designated officer),
- Introducing an internal approval process for AI projects, including risk assessment and classification,
- Documenting data sets, model design choices, performance tests and bias assessments,
- Implementing explainability and human-in-the-loop mechanisms for critical decisions,
- Updating contracts with EU clients and distributors to reflect AI Act obligations, audit rights and allocation of responsibilities.
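An internal approval process of the kind listed above can be given a concrete, auditable form. The following sketch of a per-project approval record is purely illustrative; the field names paraphrase the governance steps listed here and are not defined terms from the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class AIProjectApprovalRecord:
    """Hypothetical internal record tracking AI Act governance steps per project."""
    project: str
    risk_category: str                     # outcome of the risk classification step
    datasets_documented: bool = False      # data governance evidence on file
    bias_assessment_done: bool = False     # performance tests and bias assessments
    human_oversight_defined: bool = False  # human-in-the-loop for critical decisions
    eu_contracts_updated: bool = False     # audit rights, allocation of responsibilities
    open_items: list[str] = field(default_factory=list)

    def ready_for_release(self) -> bool:
        """All governance checks complete and no open items remaining."""
        checks = (
            self.datasets_documented,
            self.bias_assessment_done,
            self.human_oversight_defined,
            self.eu_contracts_updated,
        )
        return all(checks) and not self.open_items
```

A record like this makes "trustworthy AI by design" operational: release is blocked until each documented step is signed off, producing exactly the audit trail that EU clients and conformity assessments will ask for.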
In other words, compliance with the AI Act pushes Turkish companies towards a more mature and formalised AI governance culture.
V. Risks: Market Access, Sanctions and Regulatory Burden
The AI Act creates at least three major categories of risk for Turkish actors: market risk, legal/financial risk and regulatory burden.
From a market perspective, EU distributors and corporate clients are increasingly inclined – as already seen in the GDPR context – to favour “AI Act-compliant” products and suppliers. AI solutions that cannot demonstrate conformity with the relevant provisions, or whose providers are unable to furnish adequate technical documentation, risk being excluded from key segments of the EU market.
As to legal and financial risk, the AI Act provides for very significant administrative fines, in serious cases up to a substantial percentage of the provider’s global annual turnover. For larger companies this creates a strong deterrent, particularly when combined with the potential for product withdrawals, contractual claims for damages and reputational harm.
On the regulatory-burden side, there is ongoing debate within the EU itself on the scope and timing of implementation, especially for high-risk systems and general-purpose AI models. Some stakeholders have advocated for simplification or phased application in order to avoid unintended restraints on innovation. However, these discussions are about calibration, not about abandoning the regime. For Turkish actors, “waiting to see if it softens” may therefore entail considerable uncertainty and the strategic risk of falling behind early movers.
VI. Opportunities: Turning Compliance into Competitive Advantage
It would be misleading to view the AI Act solely through the lens of constraints and costs. For Türkiye, the Regulation also opens up significant opportunities for actors able to move early and intelligently.
First, Turkish companies that build AI Act-compliant products and processes can position themselves as trustworthy and regulation-ready partners in the EU market. In heavily regulated sectors such as finance, healthcare, automotive, HR tech and public services, compliance may become as important a differentiator as raw performance or price.
Second, the compliance wave itself generates a new professional market in Türkiye: AI law, AI governance, AI audit and RegTech. Law firms, audit companies and AI-focused tech outfits can offer services ranging from AI Act compliance projects and model audits to bias testing and hybrid technical-legal reporting, not only for Turkish clients but also for regional and international partners.
Third, the organisational practices promoted by the AI Act – risk management, data governance, explainability and human oversight – are not just external obligations. Properly implemented, they can raise a company’s internal quality, robustness and brand value in all markets, including those not yet subject to comparable regulation.
In short, for Turkish stakeholders, the AI Act can be viewed either as an external regulatory burden or as a lever for upgrading the domestic AI ecosystem and securing a stronger position in global value chains.
Conclusion
For Türkiye, the EU Artificial Intelligence Act matters on two interconnected levels. At the direct level, Turkish companies that develop or deploy AI systems for the EU market become subject, in practice, to the Act’s requirements; non-compliance can translate into market exclusion, financial penalties and reputational damage. At the indirect level, the AI Act is likely to shape the design of Türkiye’s own future AI legislation, encouraging a risk-based, fundamental-rights-oriented model that preserves compatibility with EU standards.
The central challenge for Turkish public authorities and private actors is therefore not merely to “withstand external regulatory pressure”, but to treat the AI Act as a reference standard for building a safer, more transparent and more competitive AI ecosystem at home. Those who approach compliance as a passive cost item will struggle. Those who treat early and well-designed compliance as a strategic investment are far better placed to emerge as winners in the next phase of AI-driven economic integration between Türkiye and the European Union.