Draft Guidance from the Israeli Privacy Protection Authority on the Application of the Privacy Protection Law to Artificial Intelligence Systems

May 2025

On April 30, 2025, the Israeli Privacy Protection Authority (PPA) published draft guidance addressing the applicability of Israel’s Privacy Protection Law to artificial intelligence (AI) systems. The guidance outlines the PPA’s interpretation of the law as it applies to the processing of personal data by AI technologies, and is intended to serve as the basis for the Authority’s future enforcement activities under Amendment 13, which will come into force in August 2025.

This is the first official guidance from an Israeli regulator on how AI systems must be developed and deployed in compliance with local privacy law. Until now, Israeli authorities, including the Ministry of Innovation, the State Comptroller, and the financial regulators, had issued only policy papers and position statements. The draft is open for public comment until June 5, 2025.

Key Highlights of the Draft Guidance

  1. Scope and Enforcement Powers

The PPA emphasizes that this guidance will be enforced under its expanded administrative and criminal enforcement powers granted by Amendment 13, including the authority to impose significant administrative fines and initiate criminal proceedings.

  2. Legal Basis Required Throughout the AI Lifecycle

The guidance clarifies that a legal basis is required for the processing of personal data at all stages of an AI system’s lifecycle, including both model training and operational use. For example, organizations must obtain valid informed consent from data subjects or rely on statutory authorization, both when training machine learning models and when using AI outputs in practice.

Public-sector entities cannot rely solely on data subject consent and must demonstrate explicit statutory authorization for such processing. Furthermore, in labor law contexts or where public bodies use AI systems, the processing must comply with the proportionality test, balancing privacy risks against expected benefits.

  3. Informed Consent and Transparency Requirements

The PPA reiterates the importance of complying with the informed consent and notice requirements under Section 11 of the Privacy Protection Law prior to any collection of personal data for use in AI systems. Specifically, data subjects must be provided with:

  • A clear description of how the AI system functions, sufficient to support informed consent, subject to technical feasibility;
  • Explicit disclosure when interacting with an automated system or a “chatbot”;
  • A detailed explanation of the types and sources of data collected, intended purposes (including training), data recipients and their purposes, and any legal obligations to disclose the data.

This signals the PPA’s broadening interpretation of both consent and transparency obligations beyond what is explicitly required under the statute. This is also consistent with the authority’s approach reflected in its draft guidance regarding consent and disclosure.

Organizations are therefore encouraged to review their data subject consent and disclosure notices, in the context of AI model training. It is further recommended to examine dataset acquisition agreements used in collaborative projects. In this regard, it is further advisable to consider data minimization and conduct de-identification or anonymization measures prior to using personal data for model training or system improvement.

  4. When Opt-In Consent is Required

In situations involving complex uses of personal data, or where the purpose or nature of the processing does not meet data subjects’ reasonable expectations, the PPA expects clearer indications of consent, namely opt-in mechanisms.

This aligns with the PPA’s prior draft guidance on informed consent, reinforcing a narrow contextual application of consent and a demand for high granularity in disclosing AI system functionality.

  5. Data Accuracy and Data Subject Rights

The PPA emphasizes the obligation to uphold data subjects’ rights to access, correction, and erasure—even in the context of AI processing. This includes the obligation to ensure accuracy of personal data sourced from the European Economic Area (EEA), in line with Sections 13–14 of the Israeli Privacy Law and Regulation 5 of the privacy regulations governing data transfers from the European Economic Area to Israel (the “EEA Regulations”).

Significantly, pursuant to the draft guidance, the PPA proposes that data subject requests to correct inaccurate data may also extend to correcting faulty algorithmic outputs, a position that expands the scope of existing rights and poses technical feasibility challenges, particularly for “black box” AI systems.

The PPA encourages developers to implement mechanisms ensuring data accuracy and minimizing the risk of erroneous outputs.

In light of this broad interpretation, we strongly recommend maintaining robust documentation recording the datasets used to train AI models, including the types of personal data involved, how it was sourced, and the legal basis supporting its use.

  6. Good Faith Defense under Section 18

The good faith defense available under Section 18 of the Israeli privacy law typically serves to protect processing based on legitimate interests, which are currently not recognized as a lawful basis for processing under Israeli privacy law.

Unlike previous guidance, this draft omits language limiting the good faith defense to cases in which consent could not reasonably be obtained. This may reflect the PPA’s growing acceptance of the defense’s applicability even in AI-related contexts.

However, the PPA presents a stricter application of the good faith defense in this context, placing a heavier burden on organizations seeking to rely on it when processing data via AI systems.

Factors to be considered include organizational characteristics, data categories, data volume, and the scope of user access. Notably, the Authority recommends conducting privacy impact assessments (PIAs) as a best-practice risk-mitigation tool.

  7. Generative AI Use Policies

For the first time, the PPA recommends that organizations adopt internal policies for the use of generative AI systems, including third-party providers (e.g., ChatGPT, Gemini, and the like). These policies should address the following:

  • Risks arising from AI usage and mitigation measures;
  • Role-based access control and authorization management;
  • Permitted purposes and categories of data input;
  • Security risk assessments in accordance with Regulation 15 of the Data Security Regulations;
  • Data retention rules and restrictions on reusing inputs for model training;
  • Technical and organizational measures to reduce security and privacy risks; and
  • Employee training on responsible AI usage.

  8. Accountability and Duty of Care

The draft guidance emphasizes the need to implement organizational accountability practices and to observe officers’ fiduciary duties when developing and deploying AI systems.

For entities subject to PPA Guidance 1/2024, regarding the duties of the board of directors in complying with the Data Security Regulations, this includes board-level oversight of privacy risks, risk governance, and implementation of privacy-by-design measures.

Organizations that rely heavily on AI in their operations are advised to appoint a Data Protection Officer (DPO) to oversee governance in privacy and security—even if not legally mandated to do so. In addition, the PPA recommends (though does not require) conducting PIAs, particularly for public bodies or where proportionality obligations apply.

  9. Prohibition on Web Scraping for AI Training

The PPA expressly prohibits the unauthorized scraping of personal data from the internet for AI training purposes. Even if a person has made their information public online, this does not imply informed consent unless the website’s terms of use explicitly allow it and the individual has not restricted data visibility—e.g., through privacy settings on social media.

The guidance further notes that violations may constitute a criminal offense under Amendment 13, which introduces penalties for unauthorized data processing. Website operators are also expected to implement technical safeguards against scraping, an activity the PPA now considers a reportable data breach.

Our view is that this narrow interpretation may limit developers’ ability to use publicly available data and could undermine the balance between privacy protection and innovation, particularly with respect to Israel’s competitiveness in the global AI landscape. The PPA is urged to clarify when explicit or separate consent will be required.

  10. Security Obligations for AI-Based Systems

The PPA may impose high-level security requirements for certain AI-based information systems, depending on the nature and volume of data processed. This would subject database owners to the technical and organizational security obligations under the Data Security Regulations.

  11. Database Registration Requirements

The PPA confirms that it will use its discretion to deny database registration where unlawful processing poses a clear risk to public safety or welfare.

Our Privacy, Cyber & IT Practice is available to advise on the implications of the draft guidance for your organization, including conducting privacy impact assessments, preparing for the Authority’s enhanced enforcement regime, reviewing consent and notice mechanisms, and drafting internal AI use policies.

To access the full draft guidance in Hebrew, click here.

*The above is for informational purposes only and does not constitute legal advice.
