ARTICLE
21 July 2025

Legal Implications Of General AI Agents: The Case Of Manus

Llinks Law Offices

Contributor

Llinks Law Offices is at the vanguard of PRC law, with a dynamic presence that spans both national and international territories. With a robust network of offices in Shanghai, Beijing, Shenzhen, Hong Kong, and London, we’re committed to propelling our clients’ business ambitions and delivering top-shelf professional services. We strike a balance between technical precision and business acumen, approaching legal challenges with pragmatism and a constructive spirit.


In a groundbreaking achievement, China has launched the first fully autonomous AI agent, Manus. This milestone has quickly attracted global attention and ignited discussions on the legal implications of such advancements.

As Manus takes center stage, it's crucial to understand the unique characteristics of this General AI agent and address the potential legal challenges it poses, including privacy concerns, intellectual property rights, potential misuse, and liability and accountability.

1. How Is Manus Different from Prior AI?

Manus is considered the first fully autonomous AI agent. Unlike its predecessors, Manus possesses the ability to perform any intellectual task that a human can. It exhibits a level of understanding, learning, and problem-solving that extends beyond specific tasks or applications. As a General AI agent, Manus can adapt to new situations, reason abstractly, and autonomously improve its performance without human intervention.

In comparison, prior AI systems, known as narrow or weak AI, are designed to perform specific tasks, such as image recognition, natural language processing, or playing chess. These systems operate within predefined boundaries and lack the flexibility to generalize their capabilities across different domains. Manus, on the other hand, transcends these limitations, making it a versatile and adaptive entity capable of tackling a wide range of tasks.

In short, Manus, as a General AI agent, acts more like a human being. It may be limited by peripherals—specifically, the tools it has access to for exporting its outputs—but its thoughts are boundless.

Thus, Manus is more than merely a tool for human beings, and users may have only weak or limited control over its behavior. As a result, it raises extensive legal issues that need to be considered.

2. Privacy Concerns

The deployment of Manus, a fully autonomous AI agent, brings several significant privacy concerns that need careful examination. Let's delve into the three major issues—transparency, security, and prevention of misuse:

Transparency Issue

Transparency is a fundamental requirement in the data law regime, particularly regarding the handling of personal data. In many jurisdictions, the law mandates that individuals are clearly informed about how their data is collected, used, and processed. These regulations are designed to protect user privacy by ensuring that organizations provide explicit notifications about their data practices—empowering individuals with the knowledge necessary to grant or withhold consent. The expectations set by these transparency requirements help create an environment of accountability and trust, where users are aware of the data lifecycle and can seek redress when there is a breach of established protocols.

A fully autonomous AI agent like Manus, however, challenges these transparency norms through its autonomous operations. Its ability to independently gather and process data from various sources means that it functions without direct human oversight, making it exceedingly difficult to ascertain how and why certain data is being handled. This lack of transparency in its decision-making process, hidden within complex algorithms and evolving datasets, results in an unpredictable methodology for processing personal information. As the precise scope, methods, and purposes remain obscured, compliance with legal transparency obligations becomes increasingly difficult, undermining the foundational principles of user awareness and accountability.

Security Issue

Manus processes vast amounts of sensitive personal data autonomously, which inherently raises significant concerns about data security. The possession of sensitive data, without direct human supervision, magnifies the risk of unauthorized access, breaches, or data leakage. When the system operates independently, any vulnerability in its data handling protocols could potentially expose large volumes of personal data, heightening the stakes of privacy violations and prompting serious concerns about the overall security of such pervasive technology.

Given these amplified risks, deploying robust security measures becomes essential, yet presents a formidable challenge. Effective safeguards—such as advanced encryption, strict access controls, and comprehensive cybersecurity protocols—are crucial for protecting sensitive data. However, implementing these protective measures on an autonomous system like Manus is complex; it necessitates continuous monitoring, regular updates, and adaptive strategies to counter ever-evolving threats. 

Prevention of Misuse

Manus's advanced reasoning capabilities and its ability to collect diverse publicly available information pose significant misuse risks. If exploited, Manus could infer sensitive details such as individuals' preferences, family structures, and personal habits. This kind of misuse could lead to privacy violations or even facilitate criminal activities. Preventing misuse requires stringent controls, ethical guidelines, and oversight mechanisms to ensure Manus operates within legal and ethical boundaries.

Addressing these concerns involves striking a balance between harnessing Manus's potential and safeguarding individuals' privacy. Adopting transparent practices, ensuring robust security protocols, and establishing accountability measures are essential steps to mitigate these challenges effectively.

3. Intellectual Property Rights

The rise of general AI agents such as Manus challenges longstanding assumptions in intellectual property (IP) law. At the center of the debate is a twofold question: (1) whether works created autonomously by an AI like Manus should be eligible for IP protection; and (2) if they are protected, who should hold the rights.

IP Rights in Works Created by AI

Traditional IP law is built on the premise that creativity arises from human ingenuity. Under U.S. law, for example, judicial and administrative interpretations have consistently held that copyrights and patents protect only works or inventions that have a human origin. Similarly, Chinese intellectual property laws largely attribute authorship and ownership to human creators. These positions imply that when a work is generated solely by an autonomous agent with no human creative input, the output may not be eligible for protection. In effect, if Manus's self-driven creations do not meet the threshold of human authorship, they could fall into the public domain, allowing anyone to use them freely.

This situation raises a critical question: with AI-generated outputs increasingly influential in areas such as drug discovery, programming, and industrial designs (domains that traditionally relied on human expertise), is it fair or desirable for these achievements to remain unprotected? Without adequate safeguards in the legal framework, the significant investments made into AI and the subsequent benefits derived from its innovations might be undermined by an inability to secure exclusive rights.

Ownership of IP Rights

If we consider scenarios where AI-generated works are granted protection, the next challenge is determining the rightful owner. Two primary positions have emerged:

One is the tool model. When artificial intelligence is viewed simply as an advanced tool, the natural person or entity that controls or operates Manus is considered the owner of the resultant work. This perspective hinges on the idea that although Manus may generate the content, the creative input or decision-making, however minimal, ultimately comes from the human operator. Many legal systems would support this view based on the traditional requirement of human authorship. In such cases, even though Manus autonomously produces content, the person who directs its tasks or sets its parameters would be seen as the "author" and thus the IP owner.

The other is the developer model. This view posits that the individuals or organizations that developed Manus should claim ownership of its creations. After all, they provided the foundational framework, both the algorithms and the technological infrastructure, that enables Manus to operate autonomously. Under this model, the developer's intellectual input, investment, and control over the core functionality could justify vesting all generated outputs with rights, even if the developer played no direct role in each creative act. However, this approach raises significant questions: Is it fair for a developer to claim ownership of outputs that are generated long after the technology has been deployed? And might this concentration of rights stifle creativity by disconnecting the user's input from the reward of ownership?

There is also a third, more radical option: treating works produced by a fully autonomous general AI as public domain from the outset. This approach would acknowledge the absence of genuine human authorship by allowing no one—from users to developers—to secure exclusive rights. While this may promote broader access and dissemination of knowledge, it could also diminish the incentives for further investment in AI innovation.

4. Liability and Accountability

Determining liability and accountability for the actions of a General AI agent like Manus is a complex legal challenge. In the era of conventional artificial intelligence, which was viewed as a tool, those who used the AI were liable for the acts it performed. With the introduction of fully autonomous AI agents like Manus, however, this negligence-based liability system faces challenges.

Unpredictable Behaviors

Fully autonomous AI systems such as Manus are designed to learn and adapt independently of constant human oversight. In contrast to traditional AI tools, which operate in a predictable, controllable manner, Manus develops decision-making processes that hinge on complex algorithms and vast, evolving datasets. This capacity for unsupervised learning introduces a significant degree of unpredictability. The decisions it generates remain inscrutable, as the underlying factors and weightings are obscured within the intricate layers of its "black box" architecture. Such opacity, inherent to the technology itself, means that even those closely involved in crafting the system may struggle to pinpoint errors when unexpected behavior arises.

Taking the concept of auto-decision-making as a specific example, Manus can autonomously determine courses of action without direct human input. Although it may offer a simplified explanation of its decision process, what remains hidden is the nuanced interplay of variables that ultimately drives its conclusions. This auto-decision process can lead to outcomes that are not only biased or erroneous but may also result in harmful consequences. Consequently, when an unforeseen decision is made, affected parties are left without a clear path to determine the precise origin of the error, thereby complicating efforts to seek redress or assign accountability.

Shared Responsibilities

In the era when AI systems were viewed primarily as tools, liability was typically straightforward—the person or entity using the tool was held accountable for its actions. However, Manus dispels that simplicity by operating with a high degree of autonomy. Its entire lifecycle now involves a network of stakeholders: the developers, the manufacturers, and the users. Each of these parties plays an integral role in the creation, operation, and evolution of the AI, reflecting a shift from a solitary user conception to a collective responsibility approach.

When Manus produces an unforeseen result, no single party can be said to have deviated from proper care or industry standards. This diffusion of responsibility means that even when harm occurs, none of the stakeholders can be deemed negligent under current legal standards. The autonomous nature of Manus creates a situation where no one truly controls its decisions or can be held liable for every unpredictable outcome, presenting a complex legal puzzle for accountability that challenges traditional negligence-based regimes.

The Need to Reshape the Tort Legal Regime

Traditional tort law relies on notions of clear negligence and foreseeability—principles that crumble when applied to the enigmatic decision-making process of a fully autonomous system like Manus. In cases where unpredictable outcomes lead to damage or injury, establishing causation becomes a formidable challenge. None of the stakeholders may have acted negligently, as each has fulfilled their role per the established norms and regulations. 

This reality calls for a fundamental rethinking of the tort regime. New legal frameworks might incorporate no-fault compensation schemes or strict liability models tailored to address the intrinsic risks of autonomous AI. Additionally, establishing comprehensive regulatory standards that specifically account for the “black box” nature of these systems could provide greater clarity and fairness for all parties involved.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
