Introduction
Artificial intelligence is rapidly transforming business, public administration, healthcare, finance, e-commerce, employment, education, advertising, legal services, cybersecurity, customer relations, and digital platforms in Turkey. AI systems can analyze large datasets, identify patterns, generate content, make predictions, personalize services, detect fraud, support recruitment, assess credit risk, recommend products, monitor employee performance, and automate decision-making processes. These opportunities are commercially powerful, but they also create serious privacy and personal data protection risks.
In Turkey, the main legal framework governing the processing of personal data is Law No. 6698 on the Protection of Personal Data, commonly known as KVKK. The purpose of KVKK is to protect fundamental rights and freedoms, particularly the right to privacy, in relation to the processing of personal data, and to regulate the obligations of natural and legal persons who process such data.
Although Turkey does not yet have a fully enacted, comprehensive AI-specific statute, AI-related personal data issues are already regulated through KVKK, secondary legislation, Personal Data Protection Board decisions, sectoral rules, consumer protection rules, employment law, cybersecurity obligations, intellectual property law, and general liability principles. Current legal commentary notes that no AI-specific legislation has yet been enacted in Turkey, while an AI legislative proposal submitted in June 2024 remains under parliamentary commission review.
Therefore, companies using AI in Turkey cannot wait for a future AI law before taking compliance measures. If an AI system collects, uses, analyzes, stores, transfers, generates, or infers personal data, KVKK applies. This article explains how Turkish personal data protection law applies to artificial intelligence, generative AI, automated decision-making, profiling, sensitive data, cross-border transfers, security, and compliance programs.
Why AI Creates Personal Data Protection Risks
AI systems usually depend on data. Machine learning models, generative AI tools, recommendation engines, predictive analytics systems, fraud detection tools, biometric recognition systems, HR screening platforms, and customer segmentation technologies may require large datasets for training, testing, validation, deployment, monitoring, and improvement.
These datasets may contain names, contact details, ID numbers, transaction histories, location data, IP addresses, device identifiers, behavioral data, health data, biometric data, employment records, financial data, customer complaints, call recordings, social media content, photographs, voice data, and other information relating to identifiable individuals.
Even where AI developers claim that data is “technical” or “anonymous,” KVKK analysis is still required. Data that appears non-identifying may become personal data when combined with other identifiers. For example, device IDs, user IDs, cookie IDs, voice patterns, facial features, browsing behavior, location histories, or unique transaction patterns may relate to an identifiable natural person.
AI also creates risks because it may infer new information about individuals. A system may predict health status, financial risk, political inclination, emotional state, job performance, consumer preferences, fraud risk, or creditworthiness from existing data. These inferences may affect individuals even if they were not directly provided by them.
Core KVKK Principles for AI Systems
AI projects must comply with the general principles under KVKK Article 4. Personal data must be processed lawfully and fairly, be accurate and kept up to date where necessary, be processed for specified, explicit, and legitimate purposes, be relevant, limited, and proportionate to those purposes, and be stored only for the period required by law or by the purpose of processing.
These principles are particularly important for AI because AI development often encourages broad data collection. A company may want to collect as much data as possible “for model improvement” or “future analytics.” However, KVKK requires purpose limitation and data minimization. A business cannot lawfully collect unlimited personal data merely because it may be useful for future AI training.
For example, an e-commerce company using AI-based recommendation tools should not process unrelated customer data beyond what is necessary for recommendation purposes. An employer using AI for performance analysis should not collect excessive behavioral, location, keystroke, or communication data without a strong legal basis. A health-tech company using AI for diagnosis support should ensure that health data is processed only for legitimate medical purposes and with strict safeguards.
AI systems should therefore be designed according to privacy-by-design and privacy-by-default principles. Personal data should be minimized, anonymized where possible, pseudonymized where appropriate, access-controlled, retained only as necessary, and used only for defined purposes.
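As a minimal sketch of what privacy-by-design can mean at the data layer (the field names, the keyed-hash scheme, and the secret handling are illustrative assumptions, not a technique prescribed by KVKK), a pre-processing step might drop fields that are unnecessary for the stated purpose and replace direct identifiers with pseudonyms before data reaches an AI pipeline:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would live in a secrets manager
# and be access-controlled, since pseudonymized data remains personal data
# under KVKK if re-identification is possible.
PSEUDONYM_KEY = b"replace-with-managed-secret"

# Only the fields genuinely needed for the stated purpose are kept
# (data minimization); direct identifiers are replaced with pseudonyms.
ALLOWED_FIELDS = {"purchase_category", "purchase_amount", "city"}

def pseudonymize(value: str) -> str:
    """Keyed hash: stable across records for joins, not reversible without the key."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def minimize_record(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    out["customer_ref"] = pseudonymize(record["customer_id"])
    return out

raw = {
    "customer_id": "TR-000123",
    "full_name": "Ayse Yilmaz",          # dropped: not needed for recommendations
    "email": "ayse@example.com",         # dropped
    "purchase_category": "electronics",
    "purchase_amount": 1299.90,
    "city": "Istanbul",
}
print(minimize_record(raw))
```

Note that the pseudonymized output remains personal data under KVKK for anyone who holds the key or can otherwise re-identify the individual; the step reduces risk but does not, by itself, anonymize.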
Legal Basis for Processing Personal Data in AI
A common mistake is assuming that AI-related personal data processing always requires explicit consent. Under KVKK, explicit consent is only one legal basis. Personal data may also be processed without explicit consent where one of the legal grounds in Article 5 applies, such as processing expressly provided by law, necessity for contract performance, legal obligation, establishment or protection of a right, or legitimate interests of the controller provided that fundamental rights and freedoms are not harmed.
The correct legal basis depends on the AI use case. A bank using AI to detect fraud may rely on legal obligations, legitimate interest, or protection of rights depending on the facts. An e-commerce company using AI to recommend products may rely on contract performance or legitimate interest for limited personalization, but behavioral advertising or extensive profiling may require explicit consent. An employer using AI to evaluate employees must assess employment law, proportionality, transparency, and employee privacy risks carefully.
Consent should not be chosen by default. If consent is obtained where processing actually rests on another statutory ground, a later withdrawal of that consent creates confusion about whether processing may continue. Conversely, if AI processing cannot be justified under any other statutory ground, explicit consent is the only remaining basis. Consent must be specific, informed, and freely given.
Transparency and the Obligation to Inform
Transparency is one of the most difficult issues in AI compliance. AI systems may be technically complex, and users may not understand how their data is used. However, KVKK does not allow hidden or vague processing.
Under Article 10 of KVKK, data controllers must inform data subjects about the identity of the controller, processing purposes, recipients and transfer purposes, method and legal basis of collection, and data subject rights. The Communiqué on the Obligation to Inform states that the notice must use clear, plain, and intelligible language; it also provides that informing and obtaining explicit consent must be carried out separately where processing is based on consent.
For AI systems, privacy notices should explain the use of AI in a practical and understandable way. The notice does not need to disclose trade secrets, source code, or every technical detail of the model. However, it should explain what categories of personal data are processed, why AI is used, whether profiling or automated analysis is involved, whether data is transferred to third parties or abroad, how long the data is retained, and how data subjects may exercise their rights.
For example, an AI-powered recruitment platform should tell candidates whether their CVs, tests, video interviews, behavioral data, or assessment scores are analyzed by automated tools. A fintech platform should explain whether AI is used for fraud detection, risk scoring, identity verification, or credit assessment. A healthcare AI tool should explain whether patient data is used for diagnosis support, triage, treatment recommendation, or research.
Automated Decision-Making and the Right to Object
KVKK gives data subjects the right to object to a result to their detriment that arises from the analysis of their personal data exclusively through automated systems. This right is especially relevant to AI, profiling, predictive analytics, automated scoring, algorithmic evaluation, and machine learning-based decision-making.
This may apply where an AI system produces a negative outcome for an individual, such as rejection of a credit application, denial of a service, automated fraud flagging, ranking a candidate as unsuitable, reducing access to platform features, or assigning a risk category that affects the person.
Current Turkish law does not contain a detailed AI-specific automated decision-making framework comparable to some foreign regimes. Legal commentary notes that the scope of this KVKK right has not yet been fully clarified by the Turkish authority and that general KVKK rules remain applicable to automated decision-making and profiling.
In practice, companies should not rely solely on black-box AI decisions where individuals may be seriously affected. They should ensure human review, explainability, appeal mechanisms, accuracy checks, bias testing, and documentation. The more significant the impact on the individual, the stronger the governance measures should be.
AI Profiling and Personalization
AI-based profiling is common in marketing, finance, insurance, employment, e-commerce, online platforms, and security systems. Profiling may involve analyzing a person’s behavior, preferences, location, transaction history, browsing activity, purchasing patterns, performance data, or social interactions to predict interests, risks, needs, or future behavior.
Profiling is not automatically unlawful, but it must comply with KVKK. The controller must identify a legal basis, inform the data subject, limit processing to legitimate purposes, avoid excessive data collection, prevent discriminatory outcomes, and respect data subject rights.
Marketing profiling requires particular caution. AI-based customer segmentation, dynamic pricing, behavioral advertising, microtargeting, retargeting, and product recommendations may involve personal data processing. If profiling is intrusive or used for advertising beyond the user’s reasonable expectations, explicit consent may be required.
Profiling children, employees, patients, debtors, or vulnerable individuals creates higher legal risk. For example, using AI to profile employees’ productivity through continuous monitoring may violate privacy and proportionality principles. Using AI to profile patients for commercial marketing on the basis of health data is highly risky and, depending on how it is structured, may require explicit consent or be unlawful altogether.
Generative AI and Personal Data Protection
Generative AI systems can create text, images, audio, video, software code, summaries, translations, legal drafts, marketing content, and synthetic outputs. These systems may process personal data in several ways: during model training, fine-tuning, prompt input, output generation, user monitoring, feedback collection, and service improvement.
The Turkish Personal Data Protection Authority published a Generative Artificial Intelligence and Personal Data Protection Guide on 24 November 2025. The Authority explains that generative AI creates new opportunities but also raises ethical, legal, and social risks; it emphasizes that such systems should be developed and used in a transparent, auditable, human-centered way respectful of human rights and fundamental freedoms.
Generative AI creates practical compliance risks. Employees may enter customer data, legal files, health records, employee data, trade secrets, or confidential correspondence into public AI tools. A company may use personal data to fine-tune an AI model without informing individuals. A chatbot may generate inaccurate personal information about a person. A model may reproduce personal data from training data. A company may fail to provide deletion or correction mechanisms for data used in training.
For this reason, organizations should adopt internal generative AI policies. Employees should be instructed not to upload personal data, confidential client information, special category data, trade secrets, or sensitive business records into public AI tools unless approved and legally assessed. Corporate AI tools should be governed by access controls, logging, vendor agreements, data retention rules, and cross-border transfer assessments.
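One practical control that such a policy can mandate is screening prompts before they leave the corporate boundary. The sketch below is an assumption-laden illustration, not a complete solution: the regex patterns only catch obviously formatted identifiers (an 11-digit Turkish ID number, an e-mail address, a Turkish mobile number in national format), and a real deployment would rely on a proper data loss prevention service:

```python
import re

# Hypothetical patterns for obvious identifiers; a real deployment would use
# a dedicated DLP/classification service, since regexes miss names, free text,
# and context-dependent identifiers.
PATTERNS = {
    "turkish_id": re.compile(r"\b[1-9]\d{10}\b"),     # 11-digit T.C. kimlik no
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b0?5\d{9}\b"),             # Turkish mobile, national format
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely identifiers with placeholders; report what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

text = "Customer 12345678901 (ayse@example.com) complained about order 42."
clean, found = redact(text)
print(clean)   # identifiers replaced with placeholders
print(found)
```

A company might block, warn, or log based on the findings list; the placeholders also make it visible in audit records that redaction occurred.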
Workplace Use of Generative AI
Workplace use of generative AI has become a major compliance issue. Employees may use public AI tools to draft emails, summarize documents, translate files, analyze data, generate code, prepare reports, or respond to customers. While this may improve efficiency, it can also lead to uncontrolled disclosure of personal data and confidential information.
The Turkish Personal Data Protection Authority published a document on workplace use of generative AI tools on 5 March 2026. The Authority noted that generative AI tools are increasingly used in workplaces for content generation and support activities, but their use may not always occur within a clearly defined corporate strategy, policy, or guidance framework, making institutional monitoring and management difficult.
Companies should therefore create AI usage policies. These policies should define which AI tools may be used, what data may not be entered, whether personal data may be processed, who approves AI tools, how outputs should be verified, how confidentiality is protected, and whether employees must disclose AI use in certain workflows.
For law firms, healthcare institutions, HR departments, financial institutions, and companies processing special category data, the risk is even higher. Uploading client files, patient records, candidate CVs, salary data, internal investigations, or litigation documents into uncontrolled AI tools may create serious KVKK violations.
Special Categories of Personal Data and AI
Special categories of personal data receive stronger protection under KVKK Article 6. These include data relating to race, ethnic origin, political opinion, philosophical belief, religion, sect or other beliefs, appearance and dress, membership of associations, foundations, or trade unions, health, sexual life, criminal convictions and security measures, biometric data, and genetic data.
AI systems frequently process or infer sensitive data. Healthcare AI tools may process health and genetic data. Facial recognition systems process biometric data. HR AI tools may process disability, health, union, or criminal record information. Recommendation systems may infer religion, political opinion, sexual orientation, or health status from behavior, even if those data were not directly provided.
Processing special category data requires a valid Article 6 condition and adequate safeguards. AI developers and deployers should avoid using special category data unless strictly necessary. If such data is required, they should implement strict access controls, encryption, data minimization, pseudonymization, audit logs, limited retention, and enhanced transparency.
Biometric AI systems require particular caution. Facial recognition, voice recognition, emotion analysis, fingerprint recognition, gait analysis, and identity verification tools may involve biometric data. If the same purpose can be achieved through less intrusive methods, biometric processing may be disproportionate.
AI Training Data, Web Scraping, and Data Minimization
AI models may be trained on data collected from websites, social media, public databases, customer systems, internal documents, or third-party datasets. However, the fact that data is publicly accessible does not automatically mean it can be freely used for AI training.
Under KVKK, personal data made public by the data subject may be processed only within the limits of the purpose for which it was made public. If a person posts information online for professional networking, it does not necessarily mean the data can be scraped and used to train an unrelated commercial AI model.
Training data should be assessed for source, legality, purpose, proportionality, accuracy, and retention. Companies should document where the data came from, whether personal data is included, whether special categories are included, whether consent or another legal basis applies, and whether anonymization or synthetic data can be used instead.
Legal commentary on Turkish AI and data protection highlights that web-scraped training datasets may contain significant personal data, creating issues for data minimization, purpose limitation, transparency, correction, and erasure rights.
Data Subject Rights in AI Systems
Data subjects have several rights under KVKK Article 11, including the right to learn whether their personal data is processed, request information, learn the processing purpose, know third parties to whom data is transferred domestically or abroad, request correction, request erasure or destruction under legal conditions, object to adverse results from automated analysis, and claim compensation for unlawful processing.
AI systems must be designed so these rights can be exercised effectively. This may be difficult where personal data is embedded in training datasets, vector databases, embeddings, model weights, logs, prompts, outputs, or feedback systems. Nevertheless, controllers should create processes to locate, correct, delete, or restrict personal data where legally required.
For generative AI chatbots, companies should keep prompt and output logs only where necessary and for defined periods. If users enter personal data, the company must decide whether logs are stored, whether they are used for model improvement, whether they are transferred abroad, and how deletion requests will be handled.
Cross-Border Transfers in AI Systems
AI systems often involve cross-border data transfers. Foreign cloud infrastructure, model providers, API services, SaaS tools, annotation platforms, data labeling vendors, customer support systems, analytics tools, and global group companies may all receive or access personal data from Turkey.
KVKK Article 9 was amended in 2024. Under the amended rule, personal data may be transferred abroad if one of the processing conditions under Articles 5 or 6 is met and there is an adequacy decision for the relevant country, sector, or international organization. In the absence of an adequacy decision, transfers may be possible through appropriate safeguards such as standard contracts, binding corporate rules, or written commitments approved by the Board.
Following the amendment of Article 9 by Law No. 7499, the Turkish Authority announced English translations of the By-Law on cross-border transfers and of the standard contract texts.
For AI projects, this means that using foreign AI APIs or cloud-based AI tools may trigger Article 9 analysis. A Turkish company should not send customer data, employee data, patient data, biometric data, or confidential personal records to a foreign AI provider without assessing the transfer mechanism.
Data Security and AI Governance
KVKK Article 12 requires data controllers to take all necessary technical and organizational measures to ensure an appropriate level of security, prevent unlawful processing, prevent unlawful access, and protect personal data.
AI governance should include both legal and technical controls. Technical measures may include encryption, access control, role-based permissions, secure APIs, model access restrictions, logging, monitoring, pseudonymization, data masking, secure development practices, vulnerability testing, adversarial testing, prompt injection controls, and output monitoring.
Organizational measures may include AI policies, employee training, vendor due diligence, confidentiality undertakings, data processing agreements, human review procedures, model documentation, audit trails, incident response plans, privacy impact assessments, and approval workflows for new AI use cases.
The Turkish Authority has also published AI recommendations providing personal data protection guidance, under Law No. 6698, for developers, manufacturers, service providers, and decision-makers operating in the AI field.
Agentic AI and Emerging Risks
Agentic AI systems are designed to pursue goals with greater autonomy and to interact with their environment. The Turkish Authority’s 2026 publication on agentic AI describes AI systems as technologies used for prediction, pattern analysis, recommendation, and decision support in sectors such as health, education, transportation, finance, and public services.
Agentic AI increases privacy risk because it may take multi-step actions, access systems, retrieve data, send communications, trigger workflows, and make recommendations with limited human intervention. If such systems access personal data, the controller must define clear boundaries: which data may be accessed, which actions may be taken, whether human approval is required, how errors are detected, and how unauthorized processing is prevented.
For example, an agentic AI customer service system should not access unrelated customer records. An HR AI agent should not make final dismissal or promotion recommendations without human review. A medical AI agent should not autonomously disclose health data to third parties.
Retention, Deletion, and Anonymization in AI
AI systems create complex retention issues. Personal data may exist in raw datasets, cleaned datasets, training sets, validation sets, logs, prompts, outputs, embeddings, fine-tuning data, audit records, and backups. KVKK requires personal data to be erased, destroyed, or anonymized when processing conditions no longer exist.
The By-Law on Erasure, Destruction or Anonymization of Personal Data requires disposal when processing conditions under Articles 5 and 6 no longer exist and defines anonymization as rendering personal data impossible to link with an identified or identifiable person, even by matching with other data. It also requires disposal operations to be recorded and stored for at least three years, excluding other legal obligations.
AI companies should create retention schedules for training data, prompts, logs, model improvement data, user feedback, and generated outputs. They should also distinguish anonymization from pseudonymization. Pseudonymized data may still be personal data if re-identification is possible.
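The distinction can be illustrated with a deliberately simplified sketch. The table, the age band, and the aggregation choice below are assumptions; whether aggregated output is genuinely anonymous under the By-Law still depends on group sizes and on what other data exists:

```python
from collections import Counter

records = [
    {"customer_ref": "a1f3", "age": 34, "city": "Istanbul", "spend": 120},
    {"customer_ref": "b2e4", "age": 36, "city": "Istanbul", "spend": 300},
    {"customer_ref": "c3d5", "age": 35, "city": "Ankara",   "spend": 90},
]

# Pseudonymized: each row still relates to one individual via customer_ref,
# so it remains personal data if the reference table or key exists anywhere.
pseudonymized = records

# Aggregated toward anonymization: individual rows are collapsed into
# group-level statistics with generalized attributes (age -> band).
def aggregate(rows):
    counts = Counter()
    totals = Counter()
    for r in rows:
        key = (r["city"], "30-39")          # age generalized to a band
        counts[key] += 1
        totals[key] += r["spend"]
    return {k: {"count": counts[k], "avg_spend": totals[k] / counts[k]}
            for k in counts}

print(aggregate(records))
```

Note that the Ankara group contains a single person, so even the aggregate row could point back to an individual; this is exactly the kind of re-identification check that anonymization under the By-Law requires.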
Practical Compliance Checklist for AI Projects in Turkey
A company using AI in Turkey should follow a structured compliance program:
- Map all AI systems and use cases.
- Identify whether personal data is processed.
- Identify data categories, including special categories.
- Determine whether the company is a data controller, processor, or both.
- Define the legal basis for each AI processing activity.
- Prepare clear privacy notices explaining AI-related processing.
- Separate explicit consent from privacy notices where consent is required.
- Minimize personal data used for AI training and deployment.
- Use anonymized, synthetic, or pseudonymized data where possible.
- Assess automated decision-making risks.
- Provide human review for high-impact decisions.
- Test AI systems for bias, accuracy, and discriminatory outcomes.
- Implement strong technical and organizational security measures.
- Review third-party AI vendors and cloud providers.
- Map cross-border transfers and implement Article 9 safeguards.
- Create retention and deletion rules for AI datasets and logs.
- Establish data subject request procedures.
- Prepare internal generative AI workplace policies.
- Conduct privacy impact assessments for high-risk AI projects.
- Keep documentation proving compliance.
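The mapping steps at the top of this checklist can be started with nothing more elaborate than a structured record per AI use case. The field names and values below are illustrative assumptions, not a format prescribed by KVKK or the Authority:

```python
# Illustrative inventory record for one AI use case.
ai_use_case = {
    "name": "product_recommendations",
    "system": "third-party recommendation API",
    "role": "data controller",                 # controller / processor / both
    "personal_data": ["purchase history", "browsing behavior"],
    "special_categories": [],                  # must stay empty unless an Art. 6 condition applies
    "legal_basis": "legitimate interest",
    "automated_decision": False,
    "cross_border_transfer": {"destination": "EU cloud region",
                              "mechanism": "standard contract"},
    "retention": "24 months after last activity",
    "dpia_completed": True,
}

def missing_fields(record: dict) -> list[str]:
    """Flag records that lack the minimum documentation before approval."""
    required = ["legal_basis", "retention", "role", "personal_data"]
    return [f for f in required if not record.get(f)]

print(missing_fields(ai_use_case))  # prints: []
```

A simple completeness check like missing_fields can gate new AI use cases in an internal approval workflow before they go live.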
Common Legal Mistakes in AI and KVKK Compliance
One common mistake is using AI tools before mapping data flows. Another is uploading customer, employee, patient, or client data into public generative AI tools without legal assessment.
A third mistake is assuming that publicly available data can be freely scraped and used for AI training. A fourth mistake is failing to inform users that AI-based profiling or automated analysis is being used. A fifth mistake is relying on black-box automated decisions without human review.
Another major mistake is transferring personal data to foreign AI providers without Article 9 compliance. Companies also frequently fail to define retention periods for prompts, logs, outputs, training data, and embeddings.
Finally, many businesses treat AI compliance as a technical issue only. In reality, AI compliance in Turkey requires coordination between legal, IT, cybersecurity, HR, marketing, procurement, product, and management teams.
Conclusion
Artificial intelligence and personal data protection law in Turkey are now deeply connected. Even though Turkey does not yet have a fully enacted AI-specific statute, AI systems that process personal data are already subject to KVKK. Companies using AI must comply with core principles such as lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, data security, and respect for data subject rights.
AI creates special challenges because it may process large datasets, infer sensitive information, generate new personal data, support automated decisions, profile individuals, and transfer data through foreign cloud systems or AI providers. Generative AI and agentic AI add further risks because employees and systems may input, retrieve, summarize, or act upon personal data in ways that are difficult to monitor.
For companies operating in Turkey, the safest approach is to build AI governance into the entire lifecycle of AI projects: design, procurement, data collection, training, testing, deployment, monitoring, auditing, and retirement. A legally sound AI program should include data mapping, legal basis analysis, privacy notices, consent management, automated decision review, special category data safeguards, cross-border transfer controls, security measures, retention policies, data subject rights procedures, and internal workplace AI policies.
AI compliance is not only a regulatory issue. It is also a trust issue. Businesses that use AI responsibly, transparently, and lawfully will be better positioned to protect users, reduce liability, avoid regulatory scrutiny, and build sustainable digital services in Turkey.