EU AI Act: Implementation Guide for Turkish Companies
The EU's new Artificial Intelligence Regulation, with its extraterritorial application rule, may also cover technology companies in Turkey. In this article, we examine who falls within its scope, what obligations arise, and concrete preparatory steps for Turkish companies.
Umut Zorer, Attorney at Law
Founding Attorney
Introduction
The EU Artificial Intelligence Regulation (AI Act, Regulation (EU) 2024/1689), which entered into force in August 2024, is recognized as the world's most comprehensive legal framework for artificial intelligence. The Regulation covers every actor developing, placing on the market, or using an AI system within the European Union, and, as the main subject of this article, it also reaches providers and deployers established outside the EU whenever the output of their systems is used within the EU.
A significant portion of companies in Turkey's technology ecosystem fall within this scope, knowingly or unknowingly: SaaS products sold to EU customers, machine learning models feeding output to a production line in the EU, a chatbot service responding in Turkish and German to users in Germany. The Regulation can create liability for all of these structures, and at the same time it opens the door to one of the EU's most severe sanctions: revenue-based administrative fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
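For a sense of scale, the fine ceiling for prohibited-practice violations under Article 99(3) is the higher of EUR 35 million or 7% of total worldwide annual turnover. A minimal sketch of that arithmetic (the function name is our own, for illustration only):

```python
def max_fine_prohibited_practice(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited-practice violations under
    Article 99(3) AI Act: the higher of EUR 35 million or 7% of total
    worldwide annual turnover for the preceding financial year."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion in global turnover:
# 7% of 2,000,000,000 = 140,000,000 > 35,000,000
print(max_fine_prohibited_practice(2_000_000_000))  # 140000000.0

# A smaller company: the EUR 35 million floor applies instead.
print(max_fine_prohibited_practice(100_000_000))  # 35000000.0
```

Note that lower tiers apply to other infringements; this sketch covers only the top tier for Article 5 violations.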
In this article, we address the architecture of the AI Act, the boundaries of its effect on companies in Turkey, the implementation timeline, and concrete steps that must be taken now.
Architecture of the AI Act: Risk-Based Approach
The AI Act classifies artificial intelligence systems into four risk categories, each carrying a distinctly different package of obligations.
Prohibited applications (Article 5)
These may in no way be placed on the market or used in the EU:
- Systems that engage in cognitive behavioral manipulation and weaken a person's free will,
- Systems that exploit sensitive characteristics such as age, disability, or socio-economic status,
- Social scoring systems (whether operated by public or private actors),
- Real-time remote biometric identification in public spaces for law enforcement purposes (with limited exceptions),
- Emotion recognition systems in workplaces and educational institutions (except on medical or safety grounds),
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
These prohibitions became applicable as of February 2, 2025.
High-risk systems (Article 6, Annex III)
This is the category carrying the most stringent obligations under the Regulation. Areas classified as high-risk include critical infrastructure management, scoring and access decisions in education, recruitment and employee assessment, credit evaluation, decisions on access to essential public services, law enforcement applications, border management, and use in democratic processes.
Obligations provided for systems deemed high-risk:
- Establishing and maintaining a risk management system,
- Data governance framework (quality, representativeness, and bias analysis of training/validation/test data),
- Maintaining technical documentation,
- Automatic recording of usage logs,
- Transparency and explanation to the user,
- Design of human oversight,
- Documentation of accuracy, robustness, and cybersecurity levels,
- Conformity assessment and CE marking (where applicable),
- Registration in the EU database.
The vast majority of these obligations become applicable as of August 2, 2026.
Limited risk (Article 50)
In this category, the system's existence must be clearly disclosed to the user. Examples include chatbots that interact directly with humans, emotion recognition or biometric categorization systems, and systems that generate deepfake content. The obligations are lighter than for high-risk systems but must not be overlooked.
Minimal risk
A broad category to which the Regulation imposes no specific obligations. Recommendation engines, spam filters, and assistance tools fall into this category.
General-purpose AI models (GPAI)
The AI Act subjects general-purpose (foundation) models, such as GPT, Claude, Gemini, and Mistral, to a separate regime. Transparency, copyright-compliance, and technical-documentation obligations have applied since August 2, 2025. Models carrying systemic risk (presumed where cumulative training compute exceeds 10^25 FLOPs) are subject to additional obligations.
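The systemic-risk presumption in Article 51 turns on a single compute threshold and can be expressed as a one-line check. A sketch (the function name is ours; the FLOP figure is the Regulation's presumption threshold, and the Commission may also designate models as systemic-risk on other grounds):

```python
# Article 51(2): a GPAI model trained with cumulative compute greater
# than 10^25 floating-point operations is presumed to have high-impact
# capabilities, triggering the systemic-risk obligations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """True if the model falls under the Article 51(2) presumption."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(1e24))  # False
```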
Why and How Does the Regulation Affect Turkish Companies?
Article 2 defines the Regulation's extraterritorial scope. A company established in Turkey falls within the scope of the AI Act in the following situations:
- As a system provider: If it develops or supplies an AI system placed on the market in the EU (for example, a Turkish SaaS firm providing a high-risk AI module to its users in the EU),
- As a system user (deployer): If the output of the AI system is used within the EU (for example, if an AI-based recruitment system operated by a Turkish company ranks candidates at an EU subsidiary),
- As an importer or distributor: If it brings the AI system to the EU market.
Furthermore, when a Turkish company embeds an AI component into a physical product exported to the EU, for example by adding an AI layer to a product covered by the EU harmonisation legislation listed in Annex I (medical devices, automotive electronics, security equipment), AI Act obligations arise alongside the product safety framework.
A critical point in practice: the AI Act is not a regime that can be avoided by virtue of establishment in Turkey. The Regulation also requires that providers established outside the EU appoint an authorized representative in the EU. If you supply a high-risk system, you must have an authorized representative in the EU.
Implementation Timeline
The AI Act foresees a phased implementation timeline; this timeline forms the backbone of preparatory planning for companies in Turkey.
- August 1, 2024 — Entry into force.
- February 2, 2025 — Application commencement for prohibited applications and AI literacy obligations.
- August 2, 2025 — Application commencement for GPAI, governance, notification to authorities, and enforcement provisions.
- August 2, 2026 — Application commencement for high-risk systems (Annex III) and other general provisions.
- August 2, 2027 — Application commencement for high-risk obligations in certain Annex I product categories.
Relationship with Turkish Legislation
As of the date of publication of this article, Turkey lacks comprehensive AI-specific legislation. Nevertheless, the existing legal framework already applies in practice to a substantial portion of AI systems:
- Turkish Data Protection Law (KVKK, Act No. 6698) — automated decision-making, profiling, and data-processing principles (Articles 4, 5, 11). It provides the primary legal basis especially for AI-based recruitment, credit scoring, and targeted advertising systems.
- Consumer Protection Law (Act No. 6502) and Advertising Board practice — misleading or unfair AI-based marketing practices.
- Medical device legislation, Banking and Capital Markets legislation — sector-specific rules applicable to AI systems.
- Turkish Penal Code (Act No. 5237) — cyber crimes regime (criminal dimension of AI misuse).
In the coming period, Turkey is likely to enact its own regulations converging with the EU AI Act; structuring for AI Act compliance today therefore also lays a strong foundation for prospective Turkish legislation.
Preparatory Steps for Turkish Companies
Preparation for AI Act compliance is not a one-time project; it is an ongoing governance practice that follows the lifecycle of AI systems. Recommended initial steps:
1. Conduct an AI inventory. Document all AI systems used and supplied by the company, their location, purpose, the market/process in which their output is used, and core model dependencies.
2. Determine risk categorization. Assess each system under Annex III and Article 5. Prioritize systems that fall or may fall into the high-risk category.
3. Map EU connections. On what basis does the system, as provider or user, relate to the EU? Does appointment of an authorized representative become necessary?
4. Establish a training data inventory. Where was each dataset sourced, what is the legal basis, does it contain personal data, how has it been evaluated in copyright terms?
5. Scrutinize the supply chain. Request transparency documentation from GPAI providers (OpenAI, Anthropic, Google, Mistral, Meta, etc.) under Article 53; review the risk-allocation provisions in their terms of use.
6. Create a skeleton of the high-risk technical file. Rather than producing multiple documents simultaneously in August 2026, establish the skeleton now and plan to complete it over the coming months: risk management plan, data governance document, human oversight model, usage log architecture.
7. Establish governance lines. AI policy, red-line list for prohibited applications, internal approval mechanism, incident response procedure.
8. Update contractual framework. Add appropriate liability allocation provisions to customer contracts, AI Act compliance commitments to supplier contracts, separate warranty clauses for AI components.
9. Achieve dual compliance with KVKK. Conduct DPIA for AI systems; ensure that automated decision-making processes are reflected in disclosure texts within the framework of Article 11.
10. Establish a monitoring rhythm. The AI Act will continue to evolve through Commission guidance, decisions of the AI Board (a body analogous to the EDPB), and technical standards (CEN/CENELEC, ISO/IEC 42001). A regular update discipline is essential.
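Steps 1 and 2 above lend themselves to a structured record per system. A minimal illustrative sketch (all class, field, and example names are our own suggestions for an internal inventory, not terms defined by the Regulation):

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

class RiskCategory(Enum):
    PROHIBITED = "prohibited"   # Article 5
    HIGH = "high"               # Article 6 / Annex III
    LIMITED = "limited"         # Article 50 transparency duties
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row of the company's AI inventory (step 1)."""
    name: str
    purpose: str
    role: Role
    eu_connection: str                  # e.g. "output used by EU subsidiary"
    risk_category: RiskCategory
    gpai_dependencies: list = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="CV ranking module",
        purpose="Ranks job applicants for recruitment",
        role=Role.PROVIDER,
        eu_connection="SaaS customers in Germany",
        risk_category=RiskCategory.HIGH,  # Annex III: employment
        gpai_dependencies=["third-party foundation model"],
    ),
]

# Step 2: prioritize everything in the high-risk category.
high_risk = [r.name for r in inventory if r.risk_category is RiskCategory.HIGH]
print(high_risk)  # ['CV ranking module']
```

In practice such an inventory would live in a spreadsheet or GRC tool rather than code; the point is that each system needs the same minimum set of attributes before risk categorization and EU-connection mapping can begin.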
Conclusion
The AI Act is both a compliance obligation and an opportunity for Turkish companies. Every company wishing to enter the EU market must comply; for companies that already have EU customers, roughly 18 months remain until the August 2026 deadline. Early structuring both reduces technical debt and builds resilience against the burden that prospective Turkish legislation will bring.
Artificial intelligence law is not a final destination; it is a rapidly developing framework. The correct steps taken today will determine the company's position within the global governance architecture that will take shape in the coming years.