MEXICO

AI at Work: The Employee Without a Contract

Written by Xavier Careaga and Diego Leal
Galicia


AI systems are no longer futuristic novelties; they're deeply embedded in the daily operations of organizations. From drafting documents and summarizing emails to screening resumes and assisting customer support, AI performs core business functions with minimal human supervision. These systems operate continuously, without contracts or defined accountability, acting as a new class of corporate actor: ever-present, tireless, efficient, and largely internally unregulated.

Unlike human employees, AI systems aren't subject to labour laws, ethical codes, or confidentiality obligations. They operate on probabilistic logic rather than human judgment, often trained on data with unclear origins or legal status. As AI influences strategy and decision-making, organizations face legal and reputational risks by delegating sensitive functions to unaccountable, unaudited, or ungovernable systems. Few companies have formal policies to govern, limit, or even document AI use.


AI systems are not legal persons, yet they function within companies, producing work and influencing decisions without formal recognition or accountability. Their presence requires internal rules and legal awareness.


This article explores how the European Union and the United States are setting benchmarks for AI oversight, examines Mexico’s legal responses, and offers practical guidance for companies designing internal policies that keep AI use under human and legal control.


The European Union’s AI Act: A Risk-Based Blueprint

The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive framework aimed at ensuring that AI systems are safe, ethical, and trustworthy. It takes a tiered, risk-based approach: minimal-risk systems remain largely unregulated, while high-risk applications (e.g., in employment, education, or critical infrastructure) must comply with strict obligations such as impact assessments, human oversight, transparency measures, and data governance standards. Systems deemed unacceptable, such as emotion recognition in workplaces or real-time biometric identification in public spaces, are prohibited.


The AI Act also addresses general-purpose AI models, imposing transparency obligations and safeguards against systemic risks. While its scope is geographically limited, its effects are global, much as with the GDPR. For Mexican organizations, it offers a structured model for preemptive compliance and internal governance.


The U.S. Approach: Deregulation and State-Level Fragmentation

The U.S. lacks a comprehensive federal AI law. While previous administrations introduced ethical oversight tools, such as the AI Bill of Rights and executive orders on trustworthy AI, current policy prioritizes innovation and minimal interference, with the stated aim of securing U.S. leadership in AI. In the absence of federal rules, states like Colorado and Illinois have enacted risk-based or sector-specific AI frameworks.


For Mexican companies working with U.S. partners, clients, or investors, particularly in regulated sectors such as legal services, finance, or health, this fragmented landscape creates both challenges and opportunities. Expectations around transparency, data protection, and accountability are growing, even where the rules are voluntary or state-based. Understanding the U.S. landscape can help Mexican organizations build trust, align with international expectations, and avoid friction in service relationships.


Mexico’s Blind Spots: Legal Uncertainty and the Case for Internal Governance

Although AI regulation is considered a priority, Mexico lacks a specific AI law. Still, the privacy and copyright laws, though not drafted with AI in mind, touch on AI-related issues, especially data protection, authorship, and confidentiality.


A 2023 case rejected copyright registration for AI-generated artwork on the ground that the law requires human authorship. Another case pending before the Supreme Court argues that AI should have legal personality and standing. This highlights the uncertainty: companies using generative AI may not own certain outputs, exposing them to disputes over authorship, enforceability, and commercial use. Ownership of generative-AI outputs has yet to be clarified.


Civil law builds liability on authorship: whoever authors an unlawful act is held liable. AI creates a dilemma. If an AI system operates without human oversight, effectively without an author, and causes damage, who is liable?


Confidentiality further complicates this. Licensed professionals, such as lawyers, doctors, and accountants, are bound to protect client information, yet few organizations have policies on using AI tools with sensitive or privileged content. Consider attorneys using AI to draft, summarize, or translate a contract containing confidential information, doctors using it to generate patient letters or explain diagnoses, or finance professionals using it to prepare due diligence reports or evaluate investment portfolios.


Moreover, these tools often run on external servers whose providers cannot always guarantee how information is handled, raising genuine confidentiality concerns about data leakage, unauthorized storage, or future model training on confidential prompts.


This creates a double corporate vulnerability: legal uncertainty surrounding ownership, liability, and confidentiality, and the operational risks of relying on tools that remain outside formal controls. The absence of law does not mean absence of risk; it means the burden of control shifts to the company itself.


As the use of AI grows, the Mexican Congress is considering AI-related legislative proposals addressing issues ranging from algorithmic discrimination to AI-generated impersonation, extortion, and deepfakes. While these proposals are still under discussion, they signal that a more structured regulatory framework is on the horizon.


Until then, internal governance is critical. Proactively adopting a tailored AI use policy can transform AI from a vulnerability into a managed, trusted asset.


From Risk to Policy: A Practical Framework for Internal AI Governance

One overlooked vulnerability is “shadow AI” — tools adopted independently by employees without formal approval or oversight. From a marketing manager using ChatGPT to draft client content to an analyst querying sensitive data through a public API, such uses can unintentionally expose confidential information, violate licensing terms, or rely on flawed or biased outputs. Well-intentioned experimentation can quickly escalate into legal, reputational, and security risks.


To mitigate this, companies must build an enforceable governance structure that addresses shadow AI directly. This includes: i) a registry of approved tools, ii) a vetting process for new technologies, iii) protocols for reporting unauthorized use, and iv) sanctions for noncompliance. A minimal sketch of what such a registry might look like appears below.
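Purely as an illustration, and assuming a Python-based internal compliance stack, the registry and vetting check could be as simple as the following sketch. The field names, risk tiers, and approval logic are hypothetical assumptions, not requirements drawn from any statute or vendor product.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely inspired by the EU AI Act's risk-based approach."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIToolRecord:
    """One entry in a company's registry of approved AI tools (hypothetical schema)."""
    name: str
    vendor: str
    business_function: str                  # e.g., "contract drafting", "resume screening"
    risk_tier: RiskTier
    data_categories: list[str] = field(default_factory=list)   # e.g., ["personal data"]
    approved_by: str = ""                   # governance role that vetted the tool
    approval_date: date | None = None

    @property
    def is_approved(self) -> bool:
        return bool(self.approved_by) and self.risk_tier is not RiskTier.PROHIBITED


def check_tool(registry: dict[str, AIToolRecord], tool_name: str) -> AIToolRecord:
    """Refuse tools that are absent from the registry or not yet vetted --
    the 'shadow AI' scenario the policy is meant to surface."""
    record = registry.get(tool_name)
    if record is None or not record.is_approved:
        raise PermissionError(f"{tool_name} is not an approved AI tool; report it for vetting.")
    return record
```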


Only by formalizing these rules can scattered, invisible risks be transformed into a controlled, auditable capability that aligns with business goals and legal duties.


In the absence of binding regulation, the safest path forward is not waiting; it is leading. Companies that proactively define and implement internal AI governance will not only reduce exposure to data breaches and regulatory scrutiny but also position themselves as reliable, resilient actors in an increasingly complex digital ecosystem.


Inspired by the principles underlying current EU regulation, internal AI policies should not be static documents drafted by IT departments in isolation. They should function as multidisciplinary governance instruments, reflecting the organization’s operational context, legal obligations, and ethical standards.


A robust internal policy should include the following pillars:


1.   AI Use Mapping and Risk Identification

The first step is visibility. The policy must require organizations to identify and register all AI systems in use, whether developed in-house or contracted through external vendors.


Each tool must be mapped to a business function and evaluated for the risks it poses, feeding into the classification described in the next pillar.


2.   Risk Classification and Use Restrictions

Since not all AI systems pose the same level of risk, not all should be governed the same. Internal corporate policies should adopt a tiered, proportionate approach, classifying AI use based on the potential impact on individuals, clients, operations, and legal exposure.


This approach ensures that innovation is not stifled while critical safeguards remain in place, and it allocates oversight and compliance resources efficiently, prioritizing the highest risks.


An EU-inspired internal classification system might weigh factors such as a tool’s potential impact on individuals, the sensitivity of the data it processes, its influence on client-facing decisions, and the legal exposure it creates. A hypothetical way of encoding such a classification is sketched below.
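The attributes and thresholds in this sketch are assumptions modeled on the tiered logic described above, not provisions of the EU AI Act or of Mexican law.

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    """Hypothetical attributes a compliance team might record for each AI use case."""
    affects_individual_rights: bool          # e.g., hiring, credit, medical triage
    processes_personal_data: bool
    client_facing: bool
    uses_prohibited_technique: bool = False  # e.g., workplace emotion recognition


def classify(use_case: UseCase) -> str:
    """Map a use case to an internal tier, mirroring an EU-style risk pyramid."""
    if use_case.uses_prohibited_technique:
        return "prohibited"   # banned outright by internal policy
    if use_case.affects_individual_rights:
        return "high"         # impact assessment and human oversight required
    if use_case.processes_personal_data or use_case.client_facing:
        return "limited"      # transparency and confidentiality controls apply
    return "minimal"          # internal productivity use, basic rules only


# Example: a resume-screening tool touches individual rights, so it lands in "high".
print(classify(UseCase(affects_individual_rights=True,
                       processes_personal_data=True,
                       client_facing=False)))
```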


High-risk systems must be subject to safeguards prior to implementation, such as documented impact assessments, human oversight, transparency measures, and data governance controls.


This allows companies to use AI responsibly, protects client rights, aligns with international regulations, preserves reputation, and eases adaptation to future Mexican laws.


3.   Confidentiality, Data Minimization, and Access Control

The Mexican laws mentioned above demand the protection of sensitive information. AI tools that consume, process, retain, and transform vast amounts of data pose unique challenges to these obligations, particularly when this happens beyond the user’s control or understanding. A single prompt in a public generative-AI platform can result in the inadvertent transfer of personal data to a third party, a breach of attorney-client privilege or professional secrecy, or the incorporation of sensitive content into the training dataset of a foreign system.
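One concrete safeguard, sketched below purely for illustration, is pre-submission redaction: masking obvious personal data before a prompt ever leaves the organization. The patterns are assumptions and would need to be adapted to the data types the organization actually handles.

```python
import re

# Illustrative patterns only; a real deployment would cover the identifiers the
# organization actually handles (names, CURP/RFC numbers, account numbers, etc.).
REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[\s-]?)?(?:\d[\s-]?){10}\b"),
}


def redact(prompt: str) -> str:
    """Mask likely personal data before a prompt is sent to an external AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt


print(redact("Draft a demand letter for the client reachable at juan.perez@example.com "
             "or +52 55 1234 5678."))
```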


Thus, internal policies should include safeguards such as data minimization rules, access controls for sensitive content, and clear restrictions on submitting confidential or privileged information to external AI platforms.


These safeguards should be embedded in the company’s broader compliance and cybersecurity frameworks. They also serve a deeper strategic purpose: upholding client trust, professional reputation, and service integrity in sectors where discretion and accuracy are non-negotiable.


4.   Human Oversight and Decision Accountability

AI tools can assist, but they cannot assume legal, ethical, or contractual responsibility. In any process where rights, obligations, or business outcomes are affected, the final judgment must remain human, whether the output is a legal opinion, a hiring decision, a patient communication, or an investment recommendation.


To uphold professional standards and avoid liability, organizations should require documented human review and sign-off before AI-assisted outputs are relied upon or delivered to clients.


Oversight is not about mistrusting the tool; it is about preserving accountability, explainability, and trust. Delegating outputs is acceptable; delegating responsibility is not.
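As a hypothetical illustration of how that sign-off requirement could be made operational, the record below ties every AI-assisted output to a named human reviewer before it can be used; the structure and names are assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class AIOutput:
    """An AI-assisted work product awaiting human review (hypothetical record)."""
    content: str
    produced_by_tool: str
    reviewed_by: str | None = None
    reviewed_at: datetime | None = None


def sign_off(output: AIOutput, reviewer: str) -> AIOutput:
    """Record the human review that must precede any reliance on the output."""
    output.reviewed_by = reviewer
    output.reviewed_at = datetime.now()
    return output


def rely_on(output: AIOutput) -> str:
    """Refuse to use an output for which no human has taken responsibility."""
    if output.reviewed_by is None:
        raise RuntimeError("No human reviewer of record; this AI output cannot be used.")
    return output.content
```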


5.   Governance Roles and Incident Management

Managing AI use in a company requires clear roles and processes. The policy should name someone responsible, like a Chief AI Officer, or form a small team to oversee how AI is used. Each department that uses high-risk AI tools should also have a person in charge of making sure the rules are followed. All approved tools and changes should be tracked in one place.


The policy should also include simple steps for reporting problems, such as incorrect results, data leaks, or unauthorized use. There should be a plan to fix issues quickly, notify the people affected if needed, and review what went wrong. Regular checks on how AI tools are being used, and whether they are working as expected, help prevent future problems. Finally, all tools should be tested and reviewed before being used in real tasks, and that process should be clearly recorded.
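Again by way of illustration only, a single incident log of the kind described above might be structured as follows; the categories mirror the problems the policy asks employees to report, and everything else is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class IncidentType(Enum):
    """Illustrative categories matching the problems employees are asked to report."""
    INCORRECT_OUTPUT = "incorrect output"
    DATA_LEAK = "data leak"
    UNAUTHORIZED_USE = "unauthorized use"


@dataclass
class AIIncident:
    tool: str
    incident_type: IncidentType
    description: str
    reported_by: str
    reported_at: datetime = field(default_factory=datetime.now)
    affected_parties_notified: bool = False
    resolved: bool = False


class IncidentLog:
    """One place to track incidents so recurring problems surface in periodic reviews."""

    def __init__(self) -> None:
        self._incidents: list[AIIncident] = []

    def report(self, incident: AIIncident) -> None:
        self._incidents.append(incident)

    def open_incidents(self) -> list[AIIncident]:
        return [i for i in self._incidents if not i.resolved]
```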


6.   Integration of Global Standards and Local Principles

Even without a national AI law, companies can develop strong internal policies by using global frameworks as guidance, such as those mentioned above, the NIST AI Risk Management Framework for building and improving internal practices, and the OECD AI Principles for ensuring fairness, accountability, and transparency.


A well-rounded policy should also reflect fundamental rights such as privacy, non-discrimination, access to information, and responsible use of data—principles that are widely recognized in democratic legal systems and expected by clients, regulators, and the public alike.


7.   Continuous Training and Cultural Integration

An AI policy is not just a document; it is a culture. For it to be effective, training must be continuous, and updates on regulatory developments should be shared internally to create awareness of the evolving landscape.


An effective policy must be reviewed and updated regularly, ensuring it evolves with technology, law, and organizational needs.


Biographies

Xavier Careaga is a seasoned technology and Internet lawyer who has dedicated his entire professional career to technology law and to helping technology companies create, develop, and implement their products and features within safe margins of the law. He has designed extensive multi-jurisdictional, multidisciplinary strategic defenses to protect companies’ business models, reinforcing the social benefits of the technologies involved while mitigating unintended negative externalities.


Diego Leal is an associate in the Technology, Media, and Telecommunications practice, focusing on financial law with extensive experience in the Fintech sector. He is recognized for his work in structuring and negotiating corporate and financial transactions, including advising on the authorization process for financial technology institutions before the National Banking and Securities Commission, as well as their operational and regulatory compliance as financial entities.


