HB-00057

AN ACT INSTITUTING THE NATIONAL ARTIFICIAL INTELLIGENCE CODE OF THE PHILIPPINES, CREATING THE BUREAU OF ARTIFICIAL INTELLIGENCE SYSTEMS UNDER THE DEPARTMENT OF INFORMATION AND COMMUNICATIONS TECHNOLOGY, AND ESTABLISHING A COMPREHENSIVE FRAMEWORK FOR THE SAFE, ETHICAL, AND INCLUSIVE GOVERNANCE OF ARTIFICIAL INTELLIGENCE SYSTEMS IN THE COUNTRY, AND FOR OTHER PURPOSES

Proposed 2025-06-30 | Pending since 2025-07-29
Summary

This bill establishes the National Artificial Intelligence Code of the Philippines and creates a Bureau of Artificial Intelligence Systems to govern the ethical development and use of AI.

EXPLANATORY NOTE

Artificial Intelligence (AI) is projected to contribute USD 15.7 trillion to the global economy by 2030. This estimate comes from a 2017 study by PricewaterhouseCoopers (PwC) titled Sizing the Prize: What's the Real Value of AI for Your Business and How Can You Capitalize? Of this total, USD 6.6 trillion is expected from productivity gains and USD 9.1 trillion from increased consumer demand.

According to the World Economic Forum's Future of Jobs Report 2020, AI and automation may displace 85 million jobs globally by 2025. However, the same report projects the creation of 97 million new jobs, mostly in data, sustainability, and digital services.

For the Philippines, AI could add up to USD 92 billion to the national economy by 2030. This projection is based on a 2021 joint study by Kearney and EDBI (The Transformative Power of AI in Southeast Asia). Sectors such as manufacturing, retail, finance, and healthcare stand to benefit the most.

Despite this, the Philippines ranks 61st out of 193 countries in the Government AI Readiness Index 2022, published by Oxford Insights. In Southeast Asia, the country lags behind Vietnam (55th), Indonesia (57th), and Thailand (43rd). This indicates a gap between potential and institutional preparedness.

At present, there is no comprehensive law in the Philippines governing the ethical deployment, public accountability, and sectoral integration of AI. Existing statutes, such as the Data Privacy Act of 2012 and the Cybercrime Prevention Act of 2012, are not designed to address autonomous decision-making, algorithmic opacity, or synthetic content generation.

This bill proposes the National Artificial Intelligence Code of the Philippines. It is a unified and enforceable legal framework for AI governance.

The bill creates the Bureau of Artificial Intelligence Systems (BAIS) under the Department of Information and Communications Technology (DICT). The Bureau will serve as the central regulatory, investigatory, and technical authority over AI systems across public and private sectors.

It affirms an Artificial Intelligence Bill of Rights. This guarantees every Filipino the right to transparency, redress, data sovereignty, and protection from harmful AI systems. These rights are matched with enforceable mechanisms such as registration, audits, risk classification, and documentation.

The bill prohibits harmful AI practices. These include social scoring, real-time mass surveillance without judicial oversight, impersonation through synthetic media, and opaque algorithmic manipulation in politics or finance.

To ensure public trust, the bill creates the National Panel of Experts and Resource Persons on AI. This is a multisectoral body tasked with issuing ethical opinions, reviewing regulatory proposals, and convening citizen consultations.

The bill mandates the publication of a Code of Artificial Intelligence Regulations (CAIR). This will serve as the official repository of technical rules and compliance standards.

The measure also creates a regulatory sandbox system for experimental AI deployments. It requires risk mitigation plans, time-limited exemptions, and public disclosure. It assigns sector-specific compliance obligations for AI in healthcare, education, finance, law enforcement, and elections.

Finally, the bill establishes an AI Safety Research Fund, to be administered by the BAIS and funded annually through the General Appropriations Act.

The proposed Code is designed for long-term use. It includes provisions for foresight, risk classification, moratoriums on emerging threats, and national infrastructure development. It is intended to be amended as technology evolves, but it sets a clear baseline. The Philippines needs a single law, coherent and constitutional, that defines what AI can and cannot do.

In view of the foregoing, the immediate approval of this measure is earnestly sought.

Republic of the Philippines
HOUSE OF REPRESENTATIVES
Quezon City

Twentieth Congress
First Regular Session

HOUSE BILL NO. 57

Introduced by Representative RAYMOND ADRIAN SALCEDA

AN ACT

INSTITUTING THE NATIONAL ARTIFICIAL INTELLIGENCE CODE OF THE PHILIPPINES, CREATING THE BUREAU OF ARTIFICIAL INTELLIGENCE SYSTEMS UNDER THE DEPARTMENT OF INFORMATION AND COMMUNICATIONS TECHNOLOGY, AND ESTABLISHING A COMPREHENSIVE FRAMEWORK FOR THE SAFE, ETHICAL, AND INCLUSIVE GOVERNANCE OF ARTIFICIAL INTELLIGENCE SYSTEMS IN THE COUNTRY, AND FOR OTHER PURPOSES

Be it enacted by the Senate and House of Representatives of the Philippines in Congress assembled:

CHAPTER I General Provisions

SECTION 1. Short Title. - This Act shall be known as the "National Artificial Intelligence Code of the Philippines."

SEC. 2. Declaration of Policy. - It is the policy of the State to promote the ethical, secure, and inclusive development and deployment of Artificial Intelligence (AI) in a manner that strengthens national innovation, protects fundamental rights, preserves human dignity, and advances the public good. The State shall ensure that AI technologies deployed in the Philippines are human-centric, trustworthy, accountable, and aligned with constitutional and democratic principles.

SEC. 3. Scope and Coverage. - This Code shall apply to the research, development, deployment, use, importation, sale, and regulation of Artificial Intelligence systems across all sectors within the jurisdiction of the Republic of the Philippines. It shall govern both public and private entities engaged in AI-related activities and all AI systems used in Philippine territory, including those affecting Philippine citizens through cross-border digital platforms.

SEC. 4. Definition of Terms. - For the purpose of this Act, the following terms are defined:

(a) Artificial Intelligence (AI) refers to any system or software, whether physical or virtual, that performs tasks which typically require human intelligence, such as reasoning, perception, learning, decision-making, planning, or autonomous operation, using computational models, statistical methods, or other algorithmic techniques.

(b) AI System means a machine-based system that, for explicit or implicit objectives, processes data to generate outputs - such as predictions, classifications, recommendations, or decisions - that influence environments or individuals.

(c) High-risk AI System refers to an AI system that, by virtue of its function, deployment context, or scope, poses a significant risk to the rights, safety, security, or welfare of individuals or the public, as classified under Section 15 of this Act.

(d) Autonomous AI System means an AI system that, once activated, can operate independently of real-time human control or continuous supervision, and can alter its behavior within a bounded scope.

(e) Training Data refers to any data set used to train or fine-tune an AI model, whether labeled or unlabeled, structured or unstructured, synthetic or real.

(f) Model means a mathematical or computational representation, trained on data, that enables an AI system to generate outputs based on input patterns.

(g) Foundation Model refers to a large-scale AI model trained on broad data sets and designed to be adaptable for multiple downstream tasks, including through fine-tuning or prompting.

(h) Generative AI refers to any AI system that can autonomously produce content - such as text, images, video, audio, code, or synthetic data - intended to resemble or simulate human-generated outputs.

(i) Algorithmic transparency refers to the degree to which the logic, structure, assumptions, data inputs, and decision pathways of an AI system can be meaningfully explained to regulators, affected persons, or external auditors.

(j) Human oversight means the presence of operational procedures and technical mechanisms allowing humans to understand, intervene in, suspend, or override an AI system's behavior where necessary for safety, legality, or accountability.

(k) Deployment means the act of making an AI system operational in any setting where its outputs may affect real-world outcomes, users, or decisions, whether directly or through integration into products, services, or government functions.

(l) BAIS refers to the Bureau of Artificial Intelligence Systems created under Section 6 of this Act, which shall serve as the central regulatory, administrative, and enforcement body for AI governance in the Philippines.

(m) IACU refers to an Internal AI Compliance Unit established within a public or regulated private institution pursuant to Section 12 of this Act, tasked with ensuring internal adherence to this Code and its implementing rules.

(n) Code of AI Regulations (CAIR) refers to the official compendium of rules, procedures, templates, technical standards, and interpretive guidance issued by the BAIS pursuant to this Code.

(o) Regulatory Sandbox refers to a supervised and time-bound environment authorized by the BAIS in which an AI system may be deployed with conditional exemptions for the purpose of testing, research, or controlled innovation.

(p) Explainability refers to the property of an AI system by which a human can understand and articulate the basis of a particular output or behavior of the system, including how input data contributed to the result.

(q) Bias audit means a systematic evaluation of an AI system for disparate impacts or discriminatory patterns across demographic or protected groups, conducted using statistical and contextual tools.

(r) Red teaming means the process of adversarial testing of an AI system to uncover vulnerabilities, unintended behaviors, or ethical risks, often through stress scenarios or challenge simulations.

(s) Social scoring means the practice of evaluating or ranking individuals based on aggregated personal data, behaviors, or reputational metrics, typically used to assign access to services or benefits.

(t) Synthetic content refers to media or text generated or altered by AI that simulates the appearance, voice, writing style, likeness, or other attributes of a real or fictional person, object, or source.

(u) Incident report means a formal submission required under this Code describing an adverse event, failure, or harmful impact involving an AI system, including technical logs and mitigation measures taken.

(v) Profiling means the automated processing of personal data to evaluate, analyze, or predict aspects of an individual's behavior, preferences, or characteristics.

(w) Downstream deployer refers to any entity that uses, integrates, or embeds an AI model developed by another party into its own products, systems, or services.

(x) Public-sector AI refers to any AI system developed, procured, operated, or used by government agencies, including for administrative, operational, policy, or enforcement purposes.

(y) Adverse algorithmic outcome refers to any materially harmful result or decision generated by an AI system that affects an individual's rights, access to services, employment, education, liberty, or safety.

(z) Digital dignity means the principle that individuals interacting with AI systems shall retain their sense of self-worth, agency, and human status, and shall not be treated solely as data points or optimization targets.

CHAPTER II Artificial Intelligence Rights and Governance

SEC. 5. Bill of Rights in Artificial Intelligence. - In recognition of the transformative impact of Artificial Intelligence on human life, work, and society, and in order to preserve human dignity, democratic accountability, and the public good, the State affirms the following rights of all persons within the jurisdiction of the Republic of the Philippines in relation to the design, deployment, and use of Artificial Intelligence systems:

(a) The Right to Human Agency. - Every individual shall have the right to meaningful human oversight in decisions that affect their rights, freedoms, opportunities, or safety. No AI system shall have final authority over such decisions without a mechanism for human review, correction, or override.

(b) The Right to Algorithmic Transparency. - Individuals shall have the right to know when they are subject to decisions, services, or interactions driven by AI. Such systems must be explainable in clear, accessible terms, including their purpose, logic, and impact.

(c) The Right to Protection from Unjust Harm. - All persons shall be protected from algorithmic discrimination, profiling, coercion, or surveillance that infringes upon their dignity, privacy, or equal treatment under the law.

(d) The Right to Data Sovereignty. - Individuals shall retain meaningful control over their personal data used in AI systems, including the right to access, correct, delete, or object to its use, consistent with law. AI systems must be designed to minimize data collection and preserve contextual integrity.

(e) The Right to Safe and Accountable Systems. - All persons shall be protected from unsafe or malfunctioning AI systems. High-risk AI systems shall be subject to pre-deployment review, documentation, and ongoing accountability by a clearly identified human or institutional actor.

(f) The Right to Contestability and Redress. - Individuals adversely affected by an AI system shall have the right to seek redress, including timely human consideration, correction of harmful outcomes, and access to legal remedies under this Code and other applicable laws.

(g) The Right to Non-Obsolescence. - The dignity of labor and human capability shall be preserved. Workers displaced or transformed by AI shall be entitled to just transition mechanisms, including retraining, reskilling, and employment protections, in accordance with national development goals.

(h) The Right to Participate in Governance. - Citizens shall have the right to be consulted on AI policies that affect public life, and to access information, institutions, and processes that govern the design and deployment of AI technologies.

The rights enumerated in this Chapter shall be interpreted consistently with the operative provisions of this Code, including but not limited to risk classification, registration, algorithmic audits, consent requirements, civil remedies, and administrative penalties. Nothing in this Chapter shall be construed to limit or diminish any other right or protection provided by law.

SEC. 6. Creation of the Bureau of Artificial Intelligence Systems (BAIS). - There is hereby created a Bureau of Artificial Intelligence Systems (BAIS) under the Department of Information and Communications Technology (DICT). The BAIS shall serve as the national regulatory, technical, investigatory, and administrative body on matters relating to the governance of Artificial Intelligence systems in the Philippines.

The BAIS shall operate under the direct supervision and control of the Secretary of Information and Communications Technology, who shall exercise overarching authority over its strategic direction, regulatory priorities, and administrative oversight. All decisions, issuances, and enforcement actions of the BAIS shall be subject to the policy guidance and institutional coordination of the DICT.

The DICT shall ensure that the BAIS is fully resourced, staffed, and integrated into the Department's digital governance infrastructure. It shall exercise final authority in resolving jurisdictional questions involving the BAIS and other DICT bureaus, and may issue supplemental orders to ensure the coherence of national digital policy.

SEC. 7. Mandate and Powers of the Bureau. - The BAIS shall have the following powers and functions:

(a) Formulate and enforce rules, standards, protocols, and technical requirements for the registration, use, and development of AI systems;

(b) Classify AI systems according to levels of risk and sensitivity, and determine which systems shall be deemed high-risk for the purposes of this Code;

(c) Maintain the National Registry of Artificial Intelligence Systems and require all deployers of AI systems to comply with registration requirements;

(d) Conduct pre-deployment reviews, risk assessments, and post-deployment audits of high-risk AI systems;

(e) Issue orders to suspend, prohibit, or modify the deployment of AI systems that pose a significant threat to safety, rights, or national security;

(f) Exercise visitorial powers and conduct compliance inspections of public and private entities subject to this Code;

(g) Refer violations of this Code to the Department of Justice and the appropriate law enforcement or regulatory agencies for criminal or civil prosecution;

(h) Interpret this Code and its implementing regulations, and issue binding advisory opinions;

(i) Establish and maintain the Code of Artificial Intelligence Regulations (CAIR), which shall compile all technical rules, standards, and administrative issuances under this Act;

(j) Promote the development of domestic AI expertise and systems in accordance with ethical and human-centered standards;

(k) Perform all other functions necessary to implement the provisions of this Code.

SEC. 8. Organizational Structure of the Bureau. - The BAIS shall be headed by a Bureau Director, who shall be appointed by the Secretary of the DICT. The Bureau shall be composed of at least the following divisions:

(a) The Division on Algorithmic Standards and Safety, responsible for risk classification, audits, and certification;

(b) The Division on Ethical Risk and Human Oversight, responsible for enforcing ethical safeguards and human fallback mechanisms;

(c) The Division on Sectoral Integration and Compliance, responsible for coordinating with public agencies and regulated sectors;

(d) The Division on Public AI Infrastructure and Research, responsible for national talent development, open-source resources, and sandbox environments;

(e) The AI Investigations and Enforcement Division, with full administrative authority to investigate violations, issue subpoenas, and recommend sanctions.

SEC. 9. Interpretive Authority. - The BAIS shall have exclusive authority to interpret this Code and the Code of Artificial Intelligence Regulations. Its interpretations shall be binding upon all government offices and private entities, unless reversed by the Supreme Court. In cases of ambiguity, the interpretation that advances human dignity, algorithmic transparency, and the protection of constitutional rights shall prevail.

Interpretations made in the exercise of quasi-legislative or quasi-judicial functions by the Bureau shall be presumed valid unless clearly contrary to law.

SEC. 10. Rule-Making and the Code of AI Regulations (CAIR). - The BAIS shall promulgate rules, regulations, and technical standards to implement this Code. These shall be compiled into a unified compendium known as the Code of Artificial Intelligence Regulations (CAIR), which shall be made publicly available and updated quarterly. The CAIR shall serve as the exclusive repository of all enforceable regulatory issuances relating to AI.

SEC. 11. Consultation and Publication. - The BAIS shall ensure that all proposed regulations undergo public notice and consultation for not less than fifteen (15) calendar days. All final rules shall be published in the Official Gazette or a newspaper of general circulation and posted online through the DICT portal.

SEC. 12. Internal AI Compliance Units (IACUs). - All public agencies, government-owned or -controlled corporations, and regulated private entities that develop or deploy AI systems shall establish Internal AI Compliance Units (IACUs). These units shall be responsible for internal monitoring, documentation, and coordination with the BAIS. The Bureau may deputize such units to conduct self-audits, submit technical reports, or enforce immediate suspension of unsafe AI use.

SEC. 13. Annual Report to Congress. - The BAIS shall submit an annual report to Congress, detailing:

(a) The status of AI deployment in the Philippines;

(b) The number and nature of high-risk AI systems registered;

(c) Actions taken in enforcement and compliance; and

(d) Emerging trends and recommendations for policy or legislative action.

CHAPTER III Ethical and Legal Standards for Artificial Intelligence Systems

SEC. 14. Principles of Ethical AI. - All Artificial Intelligence systems developed, deployed, imported, or used in the Philippines shall adhere to the following core principles:

(a) Human Dignity - AI systems shall be designed to respect the inherent dignity, autonomy, and rights of all persons.

(b) Fairness and Non-Discrimination - AI systems shall not result in unjust or disproportionate discrimination based on race, gender, class, ethnicity, religion, political beliefs, or other protected characteristics.

(c) Transparency and Explainability - Individuals shall have the right to a meaningful explanation of how AI systems that affect them function, including their logic, significance, and foreseeable consequences.

(d) Privacy and Data Protection - AI systems shall comply with the Data Privacy Act of 2012 and other applicable laws on personal data, and shall minimize unnecessary data collection.

(e) Accountability - There shall always be a clearly identifiable human or institutional actor legally and operationally responsible for the outcomes of an AI system.

(f) Safety and Resilience - AI systems shall be developed and tested to ensure their robustness against errors, attacks, or unintended consequences.

(g) Proportionality - The level of scrutiny, human control, and regulatory compliance required of an AI system shall be commensurate to its level of risk and potential harm.

SEC. 15. High-Risk AI Systems. - The BAIS shall classify certain AI systems as high-risk based on the following factors:

(a) Potential to affect fundamental rights or liberties;

(b) Use in critical sectors such as health, finance, education, security, law enforcement, or government decision-making;

(c) Degree of autonomy and lack of meaningful human oversight;

(d) Use for profiling, biometric identification, surveillance, or predictive policing;

(e) Scale and scope of deployment affecting large populations or essential services.

High-risk AI systems shall be subject to pre-deployment review, mandatory registration, risk mitigation requirements, and continuous audit under this Code.

SEC. 16. Right to Human Review. - Any person affected by a decision, recommendation, classification, or denial made primarily by an AI system shall have the right to request human intervention, review, or override of that decision by a competent and authorized officer.

SEC. 17. Disclosure Obligations. - Entities deploying AI systems shall disclose, in a clear and timely manner, the following:

(a) That an AI system is being used;

(b) The general purpose or function of the AI system;

(c) Whether automated decisions are involved; and

(d) Channels for appeal, redress, or correction.

SEC. 18. Consent and Data Ethics. - No AI system shall be trained, tested, or deployed using personal data obtained without informed consent or lawful basis. Data used in AI systems shall be relevant, accurate, and, where possible, anonymized or de-identified. Individuals shall retain rights over their personal data in accordance with applicable law.

SEC. 19. Algorithmic Transparency Requirements. - High-risk AI systems shall be accompanied by the following documentation, subject to BAIS audit:

(a) A description of the model architecture and training dataset;

(b) Evidence of bias testing and mitigation measures;

(c) Explanation of decision-making pathways or output generation;

(d) Records of post-deployment performance monitoring.

SEC. 20. Prohibited Practices. - The following uses of AI systems are strictly prohibited:

(a) Development or deployment of AI systems for generalized social scoring of individuals or populations by the State or private entities;

(b) Use of AI to intentionally deceive, impersonate, or manipulate individuals without clear disclosure, especially in political or financial contexts;

(c) Deployment of autonomous AI systems in contexts where there is no effective mechanism for human intervention, such as law enforcement use of lethal force;

(d) Use of AI to exploit vulnerable populations, including children, persons with disabilities, and the elderly, through manipulative digital behavior or targeting.

CHAPTER IV Sector-Specific Rules and Obligations

SEC. 21. AI in Government Services. - Government agencies deploying AI systems for administrative, service delivery, policy implementation, or public interface shall:

(a) Register such systems with the BAIS prior to deployment;

(b) Ensure that AI-generated outputs remain subject to human validation and legal accountability;

(c) Publish a clear statement of the AI system's function, limitations, and appeal mechanisms on official platforms; and

(d) Maintain audit logs and records of automated decisions.

The BAIS may exempt internal-use AI tools or sandboxed systems from full compliance, provided risks are minimal and documented.

SEC. 22. AI in Education. - AI systems used in educational institutions shall:

(a) Reinforce, not replace, the role of human educators;

(b) Avoid profiling or ranking of students beyond legitimate pedagogical objectives;

(c) Provide accessible explanations of AI-driven outputs to students and parents; and

(d) Be disclosed in all relevant school policies and platforms.

The implementation of this section shall be undertaken by the BAIS in consultation with the Department of Education and the Commission on Higher Education. For experimental AI uses in education, the BAIS may authorize sandbox deployment subject to review.

SEC. 23. AI in Healthcare and Medical Decision-Making. - No AI system shall be deployed for diagnostic, triage, treatment, or prioritization without:

(a) Review and certification by the Department of Health, in coordination with the BAIS;

(b) Continuous human oversight by a licensed health professional; and

(c) Full disclosure to patients about the system's role and limitations.

The BAIS may authorize provisional use of emerging systems under a regulatory sandbox framework for clinical testing, with appropriate safeguards and reporting protocols.

SEC. 24. AI in Finance, Insurance, and Credit Risk Modeling. - Financial institutions using AI for pricing, risk modeling, credit scoring, or policy decisions shall:

(a) Disclose the use of AI in such decisions to affected clients;

(b) Conduct and retain bias audits for BAIS review;

(c) Maintain a process for human appeal or override of AI-based denials; and

(d) Submit high-risk models for registration and classification.

The BAIS may waive specific reporting requirements for low-volume or low-impact AI applications, provided that the institution maintains internal compliance documentation.

SEC. 25. AI in Employment and Human Resources. - Employers using AI in recruitment, evaluation, promotion, or termination must:

(a) Inform applicants or employees about the use of such systems;

(b) Conduct fairness assessments and mitigate bias;

(c) Allow for human intervention or appeal in employment-related AI decisions; and

(d) Retain relevant AI documentation for inspection.

Microenterprises and SMEs may be subject to simplified compliance standards, to be developed by the BAIS in coordination with the Department of Labor and Employment.

SEC. 26. AI in Law Enforcement, Security, and Surveillance. - AI systems used for profiling, surveillance, forensic analysis, or predictive policing shall be considered high-risk and may only be deployed under the following conditions:

(a) Written authorization from the Secretary of the Interior and Local Government and review by the BAIS;

(b) Prohibition of indiscriminate, real-time mass surveillance of public spaces, unless specifically allowed by law and subject to judicial oversight;

(c) Real-time human control over system outputs; and

(d) Independent post-deployment evaluation.

The BAIS may allow pilot testing of public safety AI systems in a controlled, time-limited regulatory sandbox with municipal consent and clear public notice.

SEC. 27. AI in Critical Infrastructure and Public Utilities. - Operators of critical infrastructure - such as power grids, water services, telecommunications, or mass transport - shall:

(a) Report any AI system affecting operational safety or continuity to the BAIS;

(b) Conduct simulation-based stress tests under BAIS supervision;

(c) Maintain human override capabilities at all decision layers; and

(d) Submit AI incident reports following significant service disruptions.

The BAIS, in consultation with relevant regulators, may establish differentiated compliance levels based on the function and criticality of the infrastructure.

SEC. 28. AI in Media, Entertainment, and Content Generation. - Entities using AI to generate, curate, or manipulate content for public dissemination shall:

(a) Clearly disclose AI-generated content through visible notices;

(b) Label all political, journalistic, or commercial material that involves synthetic or altered output;

(c) Prohibit impersonation of public figures or real individuals without their prior consent; and

(d) Implement internal controls to prevent the spread of AI-generated disinformation.

The BAIS may provide model disclosure templates and standards, and shall allow sandbox testing for AI-driven creative tools that are not deployed at public scale.

CHAPTER V Offenses, Penalties, and Remedies

SEC. 29. General Liability Framework. - Any person, entity, or institution that violates the provisions of this Code, whether by act or omission, shall be liable under civil, administrative, or criminal law, depending on the nature and gravity of the violation.

In determining penalties, the following shall be taken into account:

(a) The level of autonomy and risk involved in the AI system;

(b) Whether the violation was willful, reckless, negligent, or in good faith;

(c) The extent of harm or rights infringement caused;

(d) The presence or absence of prior compliance efforts; and

(e) Whether corrective action was taken promptly upon notice.

SEC. 30. Unauthorized Deployment of High-Risk AI. - The deployment of any high-risk AI system without prior registration, pre-deployment review, or mandatory documentation shall be subject to:

(a) An administrative fine not exceeding Five Million Pesos (₱5,000,000) per violation;

(b) Immediate suspension or prohibition of system use by order of the BAIS; and

(c) Mandatory corrective filing within thirty (30) days.

The BAIS may allow first-time violators to remedy noncompliance within a grace period, provided no substantial harm has occurred.

SEC. 31. Failure to Disclose AI Use. - Failure to disclose the use of AI in contexts where this Code or the Code of AI Regulations requires notification shall be subject to:

(a) An administrative fine of up to Two Million Pesos (₱2,000,000);

(b) Mandatory issuance of disclosure or public notice within fifteen (15) days; and

(c) Public posting of an errata statement or audit result, where applicable.

SEC. 32. Deployment of Prohibited AI Practices. - The development or deployment of AI systems for the following prohibited purposes shall constitute a criminal offense:

(a) Generalized social scoring of individuals or groups;

(b) Unauthorized biometric surveillance or profiling of the general public;

(c) Deepfake impersonation of public officials or real individuals without consent; and

(d) Use of AI for deceptive political or financial manipulation.

Violators shall be punished by imprisonment of not less than six (6) months nor more than six (6) years, or a fine of not more than Ten Million Pesos (₱10,000,000), or both, without prejudice to prosecution under other applicable laws.

SEC. 33. Corporate and Institutional Liability. - If a violation is committed by a juridical person, such as a corporation, partnership, association, university, or public agency, the following shall be jointly and severally liable:

(a) The entity itself;

(b) The officer or employee who authorized, directed, or allowed the violation; and

(c) The compliance officer or IACU head who failed to report or prevent the violation, if gross negligence is established.

SEC. 34. Civil Remedies. - Any person whose rights were infringed upon as a result of unlawful AI deployment or use may file an action for:

(a) Injunctive relief;

(b) Actual, moral, or exemplary damages;

(c) Correction or deletion of personal data; and

(d) Public retraction, if warranted.

SEC. 35. Administrative Sanctions. - In addition to fines, the BAIS may impose any of the following:

(a) Blacklisting from government procurement or partnership;

(b) Public disclosure of non-compliant entities;

(c) Revocation of deployment licenses or certifications; and

(d) Mandatory third-party audits for a defined compliance period.

SEC. 36. Whistleblower and Safe Harbor Protections. - Individuals who report violations of this Code in good faith shall be protected from retaliation. Likewise, entities that self-report noncompliance and act to promptly correct the same may, at the discretion of the BAIS, be granted mitigation of penalties or exemption from prosecution.

CHAPTER VI
Innovation, Public Infrastructure, and National Capacity

SEC. 37. National AI Infrastructure. - The Department of Information and Communications Technology, through the Bureau of Artificial Intelligence Systems, shall establish and maintain shared national infrastructure to support the ethical and inclusive development of AI technologies. This shall include:

(a) Public compute facilities accessible to accredited academic and research institutions;

(b) National datasets and data commons with appropriate anonymization protocols;

(c) Version-controlled libraries of open-source AI models and codebases; and

(d) Secure environments for privacy-preserving data sharing and experimentation.

SEC. 38. Regulatory Sandbox Programs. - The BAIS shall implement regulatory sandbox programs that allow for time-limited, supervised deployment of AI systems for research, innovation, or emerging applications. The sandbox shall:

(a) Permit conditional exemptions from specific compliance requirements;

(b) Be subject to application, review, and publication of key terms;

(c) Require risk mitigation plans and user transparency; and

(d) Be time-bound, with mandatory reporting of results and learnings.

SEC. 39. Support for Domestic AI Model Development. - The State shall promote the development of locally built AI systems and language models, particularly those tailored to Philippine languages, cultural contexts, and institutional needs. This shall include:

(a) Grants and challenge programs to stimulate ethical innovation;

(b) Preferential research support for projects aligned with public interest goals; and

(c) Partnerships with academic, private, and international institutions, subject to transparency standards.

SEC. 40. National AI Research and Testing Facilities. - The DICT and the Department of Science and Technology shall coordinate the establishment of National AI Research and Testing Centers to serve as neutral platforms for:

(a) Model validation and bias testing;

(b) Benchmarking performance standards;

(c) Human-in-the-loop design evaluations; and

(d) Training of public officials and regulatory staff on emerging technologies.

SEC. 41. AI Education and Talent Development. - The Commission on Higher Education, in consultation with the BAIS, shall issue curriculum standards and faculty development programs to promote AI literacy, ethics, and technical skill-building. These shall include:

(a) Undergraduate and graduate degree tracks in AI and data science;

(b) Ethical AI modules across technical and non-technical disciplines;

(c) Scholarships and research fellowships for Filipino AI talent; and

(d) Training-of-trainers programs to broaden faculty capacity.

SEC. 42. Technology Transfer and Knowledge-Sharing. - The DICT shall issue policies to encourage technology transfer from foreign entities operating AI systems in the Philippines, including:

(a) Disclosure of model characteristics and documentation;

(b) Publication of localized performance evaluations; and

(c) Sharing of best practices, toolkits, and deployment safeguards.

Foreign AI providers that do not meet minimum transparency thresholds may be barred from certain public sector contracts or sandbox participation.

SEC. 43. AI Safety Research Fund. - There is hereby created an AI Safety Research Fund, to be administered by the BAIS and funded through annual General Appropriations. The fund shall support:

(a) Independent safety and bias evaluations;

(b) Grants for red-teaming and adversarial testing of AI systems;

(c) Academic and policy research on long-term risks and safeguards; and

(d) Development of AI safety benchmarks for national adoption.

CHAPTER VII
Roles of National Government Agencies

SEC. 44. Role of the Department of Information and Communications Technology. - The Department of Information and Communications Technology (DICT) shall serve as the principal agency responsible for the implementation of this Act. Through the Bureau of Artificial Intelligence Systems (BAIS), the DICT shall exercise regulatory, administrative, investigatory, and enforcement functions over Artificial Intelligence systems in the Philippines as provided in Title I of this Code.

SEC. 45. Role of the Department of Science and Technology. - The Department of Science and Technology (DOST) shall support the implementation of this Act by working with the DICT and the BAIS in establishing National AI Research and Testing Centers. It shall provide funding, technical support, and laboratory infrastructure for AI-related scientific research and model development, especially in areas such as health, agriculture, climate, and disaster risk reduction. It shall facilitate technology transfer and help build local innovation ecosystems for AI applications. It shall also assist in creating science-based metrics for evaluating AI performance, detecting bias, and ensuring safety.

SEC. 46. Role of the Commission on Higher Education. - The Commission on Higher Education (CHED) shall ensure that Artificial Intelligence is integrated into relevant higher education curricula. It shall coordinate with the BAIS in setting academic standards for AI-related programs, recognizing faculty and research centers of excellence, and supporting postgraduate research aligned with the goals of this Act.

SEC. 47. Role of the Department of Trade and Industry. - The Department of Trade and Industry (DTI) shall promote responsible adoption of Artificial Intelligence in the private sector. It shall assist micro, small, and medium enterprises in adopting AI technologies and shall promote the development of trustworthy and export-ready AI products. It shall also coordinate with the BAIS in setting sector-specific standards and in supporting companies with compliance and technical guidance.

SEC. 48. Role of the Department of Health. - The Department of Health (DOH) shall ensure that Artificial Intelligence systems used in healthcare are safe, ethical, and well-regulated. It shall approve systems intended for diagnostic, triage, and treatment purposes in coordination with the BAIS and shall monitor post-deployment effects on public health. It shall also take part in any pilot or sandbox programs involving AI in clinical settings.

SEC. 49. Role of the Department of Education. - The Department of Education (DepEd) shall work with the BAIS and CHED to support ethical and effective integration of Artificial Intelligence in basic education. It shall ensure that AI tools used for instruction, grading, or administration do not violate students' rights or compromise pedagogical integrity. It shall also establish standards that promote transparency, safety, and educational value in school-based AI applications.

SEC. 50. Role of the Department of Labor and Employment. - The Department of Labor and Employment (DOLE) shall monitor the impact of Artificial Intelligence on employment patterns and workplace fairness. It shall work with the BAIS to design compliance standards for AI use in recruitment, evaluation, and termination. It shall also lead programs that equip workers with skills needed to remain competitive in a labor market affected by automation.

SEC. 51. Role of the National Privacy Commission. - The National Privacy Commission (NPC) shall coordinate with the BAIS to ensure that Artificial Intelligence systems comply with the Data Privacy Act of 2012. It shall advise on consent