Intellectual Property, Trade Secrets, and Artificial Intelligence: A Forensic Perspective
By: Luisa Fda. Barragán Ávila

Intellectual property has historically been one of the fundamental pillars in protecting human creativity, granting exclusive rights over inventions, literary and artistic works, industrial designs, and distinctive signs used in commerce. However, in recent years, the emergence of disruptive technologies—particularly those based on artificial intelligence (AI)—has created new tensions around authorship, originality, and legal protection of products generated by autonomous or semi-autonomous algorithms. This context presents significant challenges for both the law and digital forensics, which must evolve to effectively respond to violations, disputes, and litigation arising from these emerging realities.

From a regulatory standpoint, Colombia has a general legal framework that addresses intellectual property primarily through Law 23 of 1982 and its subsequent updates, most notably Law 1915 of 2018, which modernized copyright rules for the digital environment. Nevertheless, these instruments have not been developed far enough to provide clear answers in complex scenarios where AI systems actively participate in creating content or producing predictive models that may incorporate elements protected by copyright, patents, or registered trademarks.

At the international level, the TRIPS Agreement (Agreement on Trade-Related Aspects of Intellectual Property Rights), under the auspices of the World Trade Organization (WTO), establishes a minimum protection framework requiring member states to adopt national measures ensuring effective enforcement of intellectual property rights. Alongside this agreement, other treaties such as the Berne Convention for the Protection of Literary and Artistic Works and the WIPO Copyright Treaty (WCT) have provided a basis for regulating the protection of works generated through electronic or automated means. However, none of these treaties explicitly addresses situations where the creators are autonomous technological systems, creating interpretative gaps that must be resolved through judicial or legislative action.

One of the most relevant global debates centers on the authorship of works generated by artificial intelligence. Some jurisdictions have flirted with recognizing AI systems in patent proceedings: South Africa granted a patent naming the DABUS system as inventor, and an Australian first-instance ruling to the same effect was later overturned on appeal. Others, including the United States and the European Union, have clearly stated that copyright and patent ownership necessarily require a human author or inventor. This divergence not only complicates cross-border protection of intangible assets but also affects the ability of digital forensic laboratories to conduct comparative analyses between software versions, trained models, and data repositories, especially when identifying potential plagiarism or unauthorized derivations.

An emblematic example is the Waymo v. Uber case in the United States, in which a formal complaint alleged the theft of trade secrets related to autonomous vehicle technology. Anthony Levandowski, an engineer previously employed at Waymo (a subsidiary of Alphabet), was accused of downloading thousands of sensitive files before founding Otto, a company later acquired by Uber. The forensic investigation reconstructed access patterns, internal transfers of information, and suspicious activity on personal devices, and proved crucial in demonstrating deliberate misappropriation of intangible assets. The case highlights how digital forensics can play a decisive role in protecting trade secrets, particularly in technology-intensive environments.
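To make the kind of reconstruction described above concrete, the sketch below flags bulk-download behavior in a file-access log. It is illustrative only, not the methodology used in the litigation: the CSV layout, column names, and threshold are assumptions.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical access log with columns: timestamp (ISO 8601), user, file_path, action.
# The threshold is illustrative, not drawn from any real case.
LOG_PATH = "file_access_log.csv"
BULK_THRESHOLD = 500  # downloads per user per day that warrant manual review

daily_downloads = defaultdict(list)

with open(LOG_PATH, newline="") as fh:
    for row in csv.DictReader(fh):
        if row["action"] != "download":
            continue
        day = datetime.fromisoformat(row["timestamp"]).date()
        daily_downloads[(row["user"], day)].append(row["file_path"])

# Flag user-days whose download volume exceeds the review threshold.
for (user, day), files in sorted(daily_downloads.items()):
    if len(files) >= BULK_THRESHOLD:
        print(f"{day} {user}: {len(files)} downloads, review for possible exfiltration")
```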

Another significant case involves GitHub Copilot, a tool developed by GitHub (a Microsoft subsidiary) and OpenAI to assist programmers by suggesting code through an AI-based language model. After its 2021 preview release, the tool drew litigation, including a 2022 class action, from independent developers and open-source communities who argued that it generated code fragments protected by specific licenses without proper attribution or compliance with their conditions. Forensic laboratories were involved in analyzing the origin of the code blocks used to train the model, tracing possible violations of GPL, MIT, and Apache licenses. This type of analysis involves advanced data mining, digital traceability, and metadata verification, underscoring the importance of standardized methodologies for validating evidence in judicial or administrative contexts.
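The core of such license-tracing work is code similarity detection. A minimal sketch of one common building block, k-gram fingerprinting, appears below; the file names, window size, and normalization are illustrative assumptions, not GitHub's or any laboratory's actual pipeline.

```python
import hashlib
import re
from pathlib import Path

def fingerprints(source: str, k: int = 8) -> set:
    """Hash overlapping k-token windows of crudely normalized code."""
    source = re.sub(r"#.*", "", source)  # drop line comments (Python-style)
    tokens = source.split()              # collapse all whitespace
    windows = (" ".join(tokens[i:i + k]) for i in range(max(len(tokens) - k + 1, 1)))
    return {hashlib.sha256(w.encode()).hexdigest()[:16] for w in windows}

def jaccard(a: set, b: set) -> float:
    """Overlap between fingerprint sets: 0.0 = disjoint, 1.0 = identical."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical comparison: an AI-generated snippet vs. a GPL-licensed reference file.
generated = Path("generated_snippet.py").read_text()
reference = Path("gpl_reference.py").read_text()
print(f"fingerprint overlap: {jaccard(fingerprints(generated), fingerprints(reference)):.2%}")
```

High overlap does not by itself prove infringement, but it tells examiners where to look; production tools typically add winnowing so only a stable subset of fingerprints needs to be stored and compared.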

In the UK, another relevant case involved DeepMind (also part of Alphabet) and the National Health Service (NHS). DeepMind signed an agreement with the Royal Free London NHS Foundation Trust giving it access to the records of roughly 1.6 million patients to develop predictive tools aimed at improving clinical care. In 2017, the Information Commissioner's Office found that patients had not been adequately informed and had not given explicit consent, constituting a legal as well as an ethical violation. Digital forensics played a key role in reviewing how the data was processed and stored, helping establish a framework for corporate accountability. The case illustrates how the protection of sensitive data goes beyond mere regulatory compliance and enters the realm of technical auditing and expert evaluation.
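That technical-auditing dimension can start with something as simple as verifying that every processed record carries a documented consent attribute. A minimal sketch, assuming a hypothetical JSON export whose record_id and consent_obtained fields are placeholders:

```python
import json

# Hypothetical export of processing records; field names are assumptions.
with open("processing_export.json") as fh:
    records = json.load(fh)

missing = [r["record_id"] for r in records if not r.get("consent_obtained")]
print(f"{len(missing)} of {len(records)} records lack documented consent")
```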

The use of artificial intelligence in artistic content creation has also generated controversy. Generative platforms such as Stable Diffusion and Midjourney have been sued by artists and photographers, and similar objections have been raised against DALL·E, on the grounds that these tools reproduce styles and compositions without crediting the original creators. In many of these disputes, forensic laboratories are developing methods to identify semantic similarities and visualize derivation patterns, in order to determine whether a given output constitutes plagiarism or legitimate use under doctrines such as "fair use." These investigations draw on computer vision techniques, analysis of model weights and embeddings, and comparison of generative outputs, representing a significant advance in machine-learning forensics.
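One building block such methods could use is perceptual hashing, which reduces an image to a compact signature that survives resizing and minor edits. The sketch below (file names are placeholders) computes a simple average hash; a real investigation would pair it with embedding-based semantic comparison, since average hashes catch near-duplicates rather than style imitation.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """64-bit perceptual hash: grayscale, downscale, threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Differing bits between two hashes; 0 means visually identical at this resolution."""
    return bin(a ^ b).count("1")

# Hypothetical comparison between a generated image and a claimed original work.
d = hamming(average_hash("generated_output.png"), average_hash("original_work.png"))
print(f"Hamming distance: {d} of 64 bits (small values suggest derivation worth review)")
```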

From a technical perspective, digital forensic laboratories face multiple challenges in analyzing intangible assets related to artificial intelligence. Among them are the following (a minimal sketch of the first task appears after the list):

  • Analyzing datasets used to train AI models
  • Conducting semantic comparisons between models trained on similar datasets
  • Detecting traces of sensitive or confidential data within neural networks
  • Validating licensing compliance of code used in development frameworks
  • Identifying security gaps in systems storing trade secrets
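
As a hedged illustration of the first task in this list, comparing training datasets for undisclosed overlap, normalized record hashing gives a fast first pass before any deeper semantic analysis. The file names and the normalization rule below are assumptions.

```python
import hashlib

def record_hashes(path: str) -> set:
    """Hash each normalized line of a text dataset for cheap overlap checks."""
    hashes = set()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            normalized = " ".join(line.lower().split())  # case/whitespace-insensitive
            if normalized:
                hashes.add(hashlib.sha256(normalized.encode()).hexdigest())
    return hashes

# Hypothetical datasets at issue in a dispute.
original = record_hashes("claimed_original_dataset.txt")
suspect = record_hashes("suspect_training_dataset.txt")
overlap = len(original & suspect) / len(original) if original else 0.0
print(f"{overlap:.1%} of the original records reappear verbatim in the suspect dataset")
```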

These activities must follow standardized protocols, such as those established by ENFSI (the European Network of Forensic Science Institutes), the National Institute of Standards and Technology (NIST), and other international bodies, to ensure the legal validity of findings.

In Colombia, although no high-profile cases similar to those mentioned above have yet occurred, there have been instances in which tech companies have reported leaks of sensitive information to competitors, often involving employees as extraction vectors. In these cases, digital forensic labs have analyzed browsing histories, emails, remote connections, and changes in local repositories to reconstruct digital events and determine the possible existence of illegal conduct.
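Reconstructing changes in local repositories often begins with the repository's own history. A minimal sketch, assuming a local Git clone (the path and threshold are placeholders), lists commits that touch an unusually large number of files, a common red flag in exfiltration timelines:

```python
import subprocess

REPO = "/path/to/suspect_repo"   # hypothetical clone under examination
THRESHOLD = 100                  # files touched per commit that warrant manual review

# One header per commit ("<40-char hash>|<author>|<date>"), then the paths it touched.
log = subprocess.run(
    ["git", "-C", REPO, "log", "--name-only", "--pretty=format:%H|%an|%ad"],
    capture_output=True, text=True, check=True,
).stdout

header, files = None, []
for line in log.splitlines():
    if "|" in line and len(line.split("|", 1)[0]) == 40:  # heuristic: commit header
        if header and len(files) >= THRESHOLD:
            print(f"{header}: {len(files)} files touched, review")
        header, files = line, []
    elif line.strip():
        files.append(line.strip())
if header and len(files) >= THRESHOLD:  # flush the final commit
    print(f"{header}: {len(files)} files touched, review")
```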

The evolution of artificial intelligence and its integration into production processes, product design, and content generation continues to transform traditional notions of creation and authorship. This shift necessitates a rethinking of conventional protection and oversight mechanisms. Therefore, it is essential to promote international cooperation, establish common forensic analysis standards, and invest in multidisciplinary training that integrates law, technology, and data science.

Only through an integrated, holistic approach will it be possible to safeguard innovation rights and ensure fair competition in an increasingly automated economy, guaranteeing that digital forensics remains a crucial ally in protecting future intangible assets.
