There is no omnibus law governing AI in Singapore; instead, a combination of the following applies:
- general guidelines that set out recommendations for all uses of AI and generative AI (collectively, the “Model AI Governance Frameworks”):
  - the Model AI Governance Framework; and
  - the Model AI Governance Framework for Generative AI;
- regulations that govern the use of AI in certain industries (see question 3); and
- sector-specific guidelines that provide recommendations on the use of both AI and generative AI in different industries (see question 3).
The Model AI Governance Frameworks, promulgated by the Infocomm Media Development Authority (IMDA), provide companies that are keen to develop and/or adopt AI with a practical framework for understanding how best to implement compliance measures, by translating ethical principles into implementable, actionable recommendations. The sector-specific guides take guidance from the Model AI Governance Frameworks to contextualise the requirements for specific industries. Neither the Model AI Governance Frameworks nor the sector-specific guides have the force of law or are directly applicable as legal instruments. They are persuasive, however, and organisations should consider them when implementing their own AI initiatives.
This is in contrast to sector-specific regulations (“Sector Regulations”), which may have direct or indirect legal effect. Regulations have an indirect effect where they are promulgated or issued by sector regulators in the form of instruments (eg, circulars or advisory guidelines) that are intended to apply to licensees under existing sectoral licensing regimes. Sector Regulations have the force of law, and organisations in the relevant industries must take measures to ensure compliance with them, under threat of penal sanctions.
Although Singapore does not have an omnibus AI law, there are Sector Regulations issued under established statutory laws and licensing frameworks for the relevant industries. These apply to entities that are licensed by regulators or that carry out licensed activities.
There have also been extensions / changes to existing laws that do not relate specifically to an industry or sector but were brought about in view of the evolving AI landscape – for example:
- key amendments to the copyright regime in the form of Sections 243 and 244 of the Copyright Act 2021, which introduce a potential defence to copyright infringement for machine learning; and
- the introduction by the Personal Data Protection Commission of the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems as a supplement to the Personal Data Protection Act 2012.
Further, as Singapore is a common law jurisdiction, the common law – including equitable principles and tortious principles – applies in addition to the Sector Regulations. Reported cases on these principles are rare, with at least one notable exception: a Court of Appeal decision on automated contracts and the law of mistake that has particular relevance to agentic AI.
With respect to automated contracts, the Electronic Transactions Act 2010 provides that:
A contract formed by the interaction of an automated message system and a natural person, or by the interaction of automated message systems, is not to be denied validity or enforceability solely on the ground that no natural person reviewed or intervened in each of the individual actions carried out by the automated message systems or the resulting contract.
In the case of the law of mistake, the Singapore Court of Appeal had the opportunity to consider the applicability of the law of unilateral mistake in Quoine Pte Ltd v B2C2 Ltd [2020] 2 SLR 20, where parties to a transaction had entrusted their dealings to AI. The court took the view that, notwithstanding that the parties had entrusted computers and AI to carry out their trades, this did not preclude the applicability of the law of unilateral mistake, albeit that the law must be adapted to the “new world of algorithmic programmes and artificial intelligence, in a way which gives rise to the results that reason and justice would lead one to expect”. This is a critical development in the age of agentic AI, as organisations increasingly entrust functional duties and tasks to ever more sophisticated AI agents.
As mentioned in question 1.2, the general expectation is that tortious principles should continue to apply, although there have been no reported cases in Singapore where the courts have had to consider the precise applicability of such principles. The precise standards of care and the content and scope of the duty of care remain to be settled by case law; but a potentially useful reference is the benchmarks and guidance under the Model AI Governance Frameworks, which address best practices in the development and deployment of AI at scale.
As mentioned in questions 1.2 and 1.3, the general expectation is that the common law – including equitable principles and tortious principles – should continue to apply, albeit with certain adaptations. To date, there have been no cases in Singapore where the courts have been required to consider the precise applicability of such principles in the context of robots or mobile AI. However, given that Quoine Pte Ltd v B2C2 Ltd [2020] 2 SLR 20 was decided by the Court of Appeal (Singapore’s apex court), the position advocated in that case is expected to be a key point of reference in future AI-related lawsuits should they come before the courts.
Yes, sector-specific guidelines do apply in the absence of broad cross-sector legislation. Please see question 3, where we look at the different laws governing specific industries and use cases arising from those industries.
Singapore has concluded a number of agreements with the major powers in relation to the regulation of AI, including:
- a memorandum of cooperation on Collaboration on the Safety of Artificial Intelligence signed with the secretary of state for science, innovation and technology of the United Kingdom on 6 November 2024 in relation to collaboration in the areas of:
  - AI safety research;
  - information sharing; and
  - collaboration on international AI safety standards and protocols;
- an administrative agreement on collaboration on AI safety that was entered into on 20 November 2024 with the executive vice president of the European Commission for “A Europe Fit for the Digital Age” in relation to cooperation in the areas of:
  - information exchange;
  - joint testing;
  - evaluation of general purpose AI models;
  - development of tools and benchmarks;
  - AI safety research; and
  - the sharing of insights on emerging trends;
- a memorandum of understanding signed with the Australian Ministry for Industry and Science on 16 December 2024 that seeks to establish a framework to improve collaboration on civil applications of AI technologies between Australia and Singapore;
- a memorandum of understanding with the United Arab Emirates on collaboration in AI; and
- a joint mapping exercise between the Infocomm Media Development Authority’s AI Verify and the US National Institute of Standards and Technology’s AI Risk Management Framework, for the purpose of working towards harmonisation of international AI governance frameworks to reduce the industry cost of meeting multiple requirements.
Singapore, as a member of the Association of South East Asian Nations (ASEAN), has also worked with ASEAN to release the ASEAN Guide to AI Governance and Ethics, which:
- aims to put forward a set of guidelines for governments and businesses to follow while they develop and adopt AI by establishing common principles for trustworthy AI; and
- suggests best practices for how to implement trustworthy AI in ASEAN.
At this time, there is no stated policy to issue omnibus AI legislation and hence there is no cross-sector enforcement agency. By contrast, Sector Regulations are enforced by specific sector regulators.
The IMDA, which is part of the Ministry of Digital Development and Information (MDDI), is responsible for developing the Model AI Governance Framework and the Model AI Governance Framework for Generative AI. However, as mentioned in question 1.1, neither framework has the force of law; both are at most persuasive. Accordingly, the IMDA does not play the part of an ‘enforcement’ regulator, and there are no current plans to introduce such a regulator or office.
In respect of the Sector Regulations, these are enforced by the respective regulators of the specific industries – for example:
- the Personal Data Protection Commission regulates the use of personal data in areas relating to AI model training;
- the Monetary Authority of Singapore regulates the use of robo-advisers; and
- the Ministry of Health regulates the use of AI in the development of AI healthcare platforms.
Such sector-specific regulators apply various legislative instruments, so it is essential to seek tailored legal advice to ensure that appropriate instruments are identified.
The general regulatory approach has been to develop practical and implementable frameworks through multi-stakeholder co-development, in lieu of cross-sector omnibus regulation of AI, and to lift standards in an open and collaborative manner. An example of this stance can be seen in the executive summary of the IMDA’s Model AI Governance Framework for Generative AI, which states the intent to take a “systematic and balanced approach to address generative AI concerns while continuing to facilitate innovation in order to foster a trusted AI ecosystem”.
At the individual organisation level, the overall emphasis is for organisations that adopt, use and implement AI to be accountable for their use. At the societal level, the emphasis is twofold:
- the prevention of harm from the use of AI; and
- the promotion of public good in the use of AI.
Sector Regulations allow for more tailored and focused regulatory regimes that address specific risks and issues particular to a sector. Where a sector is regulated, guides or guidelines (or other instruments) issued by a regulator may appear to be non-legally binding; but a breach of the standards they set may be taken into account when reviewing the status of a licensee’s licensed activities.
Different industries will reflect different use cases, and comprehensive cross-sector and cross-tier industry data is not available. Anecdotally – and to the extent that the scope of national grants is an indicator of nationally prevalent use cases for AI adoption – there is a focus on productivity solutions, although the most embedded applications will vary according to industry sector and segment. For instance, in the legal industry, there is an increasing push for the use of AI applications in:
- document management systems;
- practice management systems;
- matter management; and
- collaboration platforms and risk assessment solutions for conducting know-your-customer/anti-money laundering assessments.
These are likely to be more widely adopted given that grants and financial assistance are provided to law firms to adopt pre-approved IT solutions in each of the abovementioned categories. This funding is provided under the productivity solutions grant administered by Enterprise Singapore.
Please see question 2.1.
Singapore’s AI ecosystem is rich and complex, with AI development being pursued by organisations of varying sizes, maturity, funding and strategies. The structures of these companies are not particularly different from the general corporate holding structures seen in standard business operations. Although no official or published statistics are readily available in this regard, most organisations are for-profit corporations (private limited companies), with other vehicles (eg, corporations limited by guarantee, societies) mainly pursuing community-building or standard setting/advocacy purposes.
There are no consolidated, official or published statistics in this regard, but Singapore’s AI developer and user ecosystem is varied, with:
- in-house development being financed from (or developing as an outgrowth of) internal innovation or operations; and
- outsourced innovation being arranged through:
  - private equity;
  - startup funding (including seed investments by funds and private investors); or
  - grants (with government initiatives such as financing from the National Research Foundation).
Another method of financing is through:
- agencies such as the Agency for Science, Technology and Research, Singapore’s leading public sector R&D agency; or
- government agencies such as the Monetary Authority of Singapore.
Please see question 12.1.
(a) Healthcare
AI is generally regulated in two respects in the healthcare space – that is, in relation to:
- its general use and adoption; and
- the product lifecycle of medical devices.
In October 2021, the Ministry of Health of Singapore (MOH), the Health Sciences Authority (HSA) and Integrated Health Information Systems – the healthtech agency under the MOH – published guidelines on the use of AI in healthcare. The guidelines constitute “a set of recommendations to encourage the safe development and implementation of primarily AI-Medical Devices [AI-MDs], and secondarily any other AI implemented in healthcare settings”. The guidelines are intended to complement the existing guidance on AI-MDs issued by the HSA. They advocate general principles of fairness, responsibility, transparency, explainability and patient-centricity for the regulation of AI, which are largely similar to those set out in the Model AI Governance Frameworks.
On 31 July 2023, the MOH further issued MOH Circular 51/2023, which informs public healthcare institutions “of the policy positions, standards and/or guidelines on the use of Generative Artificial Intelligence (AI) technologies, as well as considerations for the public healthcare sector to enable the responsible and safe use of such technologies and information”. This circular was issued after the release of ChatGPT in late 2022, given the increasing rate of adoption and use of this application, including in the healthcare sector.
The HSA also issued guidelines on certain considerations specific to medical devices, which include within their scope:
- AI devices or systems with a medical purpose that are incorporated into hardware medical devices; and
- AI applications that are intended to be used for medical purposes, such as the investigation, detection, diagnosis, monitoring, treatment or management of any medical condition, disease, anatomy or physiological process.
(b) Security and defence
Publicly available documentation and statements on Singapore’s defence or military AI strategy are limited, as much of this information is classified. However, the Cyber Security Agency of Singapore has published guidance on cybersecurity risks in AI adoption, in the form of its Guidelines and Companion Guide on Securing AI Systems. These guidelines address security risks in the civil sector.
(c) Autonomous vehicles
The Land Transport Authority of Singapore regulates the trial use of autonomous vehicles (AVs) and has issued legislative provisions that address this (the Road Traffic (Autonomous Motor Vehicles) Rules 2017). It also operates assessment tracks for both deployable and developmental AV solutions. This is typical of Singapore’s sector-specific approach, with gradual and incremental regulation.
(d) Manufacturing
The Singapore government has taken steps to promote the adoption and use of AI in the manufacturing space through the establishment of the Sectoral AI Centre of Excellence for Manufacturing, together with programmes such as AI apprenticeship and training programmes.
There are no Sector Regulations applicable to AI use and adoption in the manufacturing space, and the primary reference point here would be the Model AI Governance Frameworks.
(e) Agriculture
Singapore’s agricultural sector is limited, given the lack of non-urbanised arable land.
(f) Professional services
Generally, as at the date of writing, professional services firms and their regulators are in the process of developing AI standards, including announced guidelines for legal professional services. Key issues for the AI adoption curve in professional services include:
- security issues (ie, establishing a framework for determining the secure use of AI solutions); and
- the availability of cost-effective solutions.
Existing guidelines concerning the protection of client confidentiality in regulated professions highlight confidentiality and security as vital issues to be addressed and contextualised.
(g) Public sector
There is no legislation or hard law regulating the use of AI in the public sector. While there are certain guidelines and/or circulars for specific public sectors that regulate the use and adoption of AI (eg, see question 3.1(a)), there are none at a generic level. Resources include a Responsible AI Playbook, targeted at application developers in government, which applies to Whole-of-Government projects involving AI system integration and is intended to ensure compliance with responsible AI principles.
Additionally, certain positions have been advocated through guidelines based on key risks in relation to the use and adoption of generative AI tools. In particular, the Ministry of Communications and Information (MCI) (now the Ministry of Digital Development and Information), the Smart Nation and Digital Government Group (SNDGG) and the Government Technology Agency (GovTech) have stated (either jointly or singly) their positions on the use of certain AI technologies.
For example, the MCI issued guidelines on AI usage in the civil service to all civil servants in early May 2023. This was followed by a joint statement issued by the MCI and the SNDGG on 23 May 2023 stating that the guidelines issued to all civil servants include guidance “on the use of tools powered by large language models like ChatGPT and Microsoft Bing”, which is “aimed at general users of these apps and those developing apps for the Government”. The guidelines go on to provide that: “Officers should also vet all AI-generated work to ensure the work they submit is accurate and in line with copyright laws.”
To further assist the public sector with AI adoption, the SNDGG and GovTech have issued a Public Sector AI Playbook that “provides public officers, especially non-technical officers, [with] a guide on how AI can be adopted in their areas of work and shares a range of AI projects implemented throughout the public service”.
Singapore’s data protection regime is set out in the Personal Data Protection Act 2012 (PDPA). The PDPA is a baseline cross-sector law which sets out standards for organisations that collect, use, disclose and process personal data. Sector-specific laws may impose further standards that will prevail over the PDPA, but the PDPA is the primary starting point for data protection/data privacy compliance. The PDPA is supplemented by advisory guidelines and guides issued by the national regulator, the Personal Data Protection Commission (PDPC), which augment and provide further practical context and guidance on the PDPA and its application in different contexts.
To date, the PDPC has not issued a generally applicable guide on AI; but it has issued an advisory guideline on the use of personal data in AI recommendation and decision systems. The guideline addresses:
- the application of the PDPA to AI systems to provide organisations with certainty on when they can use personal data to develop and deploy systems that embed machine learning models; and
- the circumstances in which certain exceptions to the consent obligation may apply to such usage (the PDPA uses a consent-centric framework to address the basis for processing).
While the guideline is advisory and does not have the force of law, it serves as a key reference point when interpreting the application of the PDPA to AI systems, as it indicates how the PDPC is likely to interpret and enforce the PDPA in this context.
The impact of the PDPA on AI companies and applications is that it imposes regulatory requirements on the training, development and deployment of AI systems. As the PDPA requires a regulatory and compliance framework to be in place, organisations that are looking to develop, deploy or use AI systems must extend their PDPA compliance framework to any AI project or system.
If an organisation wishes to minimise the impact of the PDPA, it must adopt sound and secure practices in anonymising any personal data in all aspects of the development, deployment and usage of the AI system.
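By way of illustration only – and as one possible practice rather than a statement of what the PDPA requires – the following Python sketch shows a simple salted-hash pseudonymisation step applied to direct identifiers before records are used for AI model training. The record fields and salt handling are hypothetical assumptions, and true anonymisation may call for stronger techniques (eg, generalisation or k-anonymity) together with a re-identification risk assessment.

```python
# Illustrative sketch only: salted-hash pseudonymisation of direct identifiers
# before records are used for AI model training. Field names are hypothetical;
# anonymisation under the PDPA may require stronger techniques and a
# re-identification risk assessment.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # keep secret and stored separately from the data set

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()

records = [
    {"name": "Tan Ah Kow", "nric": "S1234567D", "purchase_total": 120.50},
    {"name": "Lim Bee Leng", "nric": "S7654321A", "purchase_total": 88.00},
]

training_rows = [
    {
        "customer_id": pseudonymise(r["nric"]),  # stable join key, not reversible without the salt
        "purchase_total": r["purchase_total"],   # non-identifying attribute retained for training
    }
    for r in records
]

print(training_rows)
```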
Singapore has two legislative mainstays regarding cybersecurity:
- the Computer Misuse Act 1993 (CMA), which governs cybercrime; and
- the Cybersecurity Act 2018, which establishes a national framework for regulating certain designated systems and companies that have an impact on critical information infrastructure (and related cybersecurity threats, incidents and governance), as well as cybersecurity service providers.
The CMA primarily criminalises what is traditionally known as ‘hacking’ (including acts done to support and enable such activities). The primary authority that acts under the CMA is the Attorney General’s Chambers (AGC). The Cybersecurity Act is administered and enforced by the Cyber Security Agency of Singapore (CSA).
To date, the AGC has addressed cybercrime and the rise of deepfakes, fraud and other activities involving the use of AI through advisories and public education initiatives, among other things. The CSA has issued the Guidelines and Companion Guide on Securing AI Systems, which are intended to help system owners to secure AI throughout its lifecycle and focus on both:
- classic cybersecurity risks, such as supply chain attacks; and
- novel risks, such as:
  - adversarial machine learning; and
  - data poisoning (one basic mitigating control is illustrated in the sketch below).
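The CSA guidelines do not prescribe code; purely as an illustration of one basic control relevant to data poisoning, the following Python sketch builds and then verifies a hash manifest over training data files, so that tampering between data collection and model training can be detected. The file paths, manifest format and response on failure are hypothetical assumptions, not drawn from the guidelines.

```python
# Illustrative sketch only: a hash manifest over training data files, one basic
# control against data poisoning (detecting tampering between data approval
# and model training). Paths and manifest format are hypothetical examples.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a digest for every file in the training data set."""
    manifest = {str(p): sha256_of(p) for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the files whose contents no longer match the recorded digests."""
    manifest = json.loads(manifest_path.read_text())
    return [
        name for name, digest in manifest.items()
        if not Path(name).is_file() or sha256_of(Path(name)) != digest
    ]

# Hypothetical usage: build the manifest when the data set is approved,
# then verify it immediately before each training run.
# build_manifest(Path("training_data"), Path("manifest.json"))
# tampered = verify_manifest(Path("manifest.json"))
# if tampered:
#     raise RuntimeError(f"Possible data poisoning; do not train: {tampered}")
```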
While the instruments issued by both the AGC and the CSA do not necessarily impose compliance obligations in all cases, the standards being promulgated and recommended should be considered in the event of a cybercrime/cybersecurity incident when:
- navigating the legal liabilities of perpetrators of cybercrime using AI under the CMA; and
- determining whether the owners of designated systems and certain companies that have an impact on critical information infrastructure have met their obligations under the Cybersecurity Act.
The Competition and Consumer Commission of Singapore (CCCS) shared a note at the 143rd meeting of the Organisation for Economic Co-operation and Development’s Competition Committee on 12–14 June 2024, highlighting the following specific challenges:
- potential increased risks of price-fixing by industry players, as competitors can leverage price-monitoring AI tools to collude on price and monitor deviations from collusive agreements;
- potential risks of a small group of firms monopolising key aspects of AI development, such as in areas relating to hardware, data and technical expertise. For example, there is a potential risk of companies in the cloud computing industry abusing their dominance to lock out competitors, given that AI is also highly reliant on cloud computing; and
- lack of access to the training and/or input data of AI models, which can impede the enforcement and investigation powers and capacity of the CCCS.
In order to address these risks, the CCCS has started to leverage some of the existing AI compliance toolkits developed by the Infocomm Media Development Authority, such as AI Verify. The aim is to extend the capability of the AI Verify tool to allow companies to self-assess their AI systems before or after deployment to determine whether these systems could potentially raise competition concerns. The CCCS has also established its Data and Digital (D2) Division, which “designs the CCCS’ internal technology infrastructure and systems, performs data analytics to generate market intelligence, and even conducts digital market investigations and studies”. The D2 Division is also looking at developing machine learning tools that will enhance the investigation and enforcement capabilities of the CCCS. Finally, the CCCS has begun engaging public agencies in other industries to develop and implement a ‘whole-of-government’ digital strategy.
The Ministry of Manpower (MOM) had to respond to the risks of AI in the employment space as early as 2018. The specific challenges relating to AI are twofold:
- the potential displacement of jobs by AI and the role that the MOM is playing to support this shift; and
- potential discriminatory or unfair hiring or promotion practices that result from the use of AI in facilitating and/or making such decisions.
In relation to the first risk, the Singapore government has rolled out many different initiatives, including:
- the Adapt and Grow initiative;
- Workforce Singapore;
- the National Trades Union Congress’s (NTUC) Employment and Employability Institute, which offers employment facilitation services such as:
  - career coaching;
  - employability workshops;
  - job fairs; and
  - job matching;
- the Professional Conversion Programme and Place-and-Train Programme, which provide wage and training support for employers to retrain workers to enter new occupations or sectors; and
- the Career Support Programme, which encourages employers to afford opportunities to mature, retrenched professionals, managers, executives and technicians by providing wage support.
In relation to the second risk, the minister for manpower specifically clarified, in response to a parliamentary question raised on 11 November 2024, that the Tripartite Guidelines on Fair Employment Practices – which promote fair and merit-based employment practices, and which will eventually be replaced by the Workplace Fairness Act 2025 – will continue to apply to employers notwithstanding the use of AI in assisting employers in making decisions on hiring and promotion. As such, employers remain accountable for compliance with these guidelines and, in future, with the Workplace Fairness Act 2025. The minister added that the government will continue to closely monitor trends in AI adoption and work with the tripartite partners in Singapore (ie, the MOM, the Singapore National Employers Federation (which represents the interests of employers) and the NTUC (which represents the interests of employees)), the Institute for Human Resource Professionals and the broader HR community to regularly assess whether existing guidelines and regulations are adequate.
There is no broadly applicable cross-sector law on AI in Singapore. Instead, provisions on data manipulation and integrity are set out in:
- sector-specific laws and regulations;
- the Computer Misuse Act and the Cybersecurity Act;
- common law (where harm arises from the failure to meet standards of care); and
- the Personal Data Protection Act, in relation to data breaches involving personal data.
Thus, different laws may apply to AI companies and solutions, depending on the use cases and the type of data involved.
This in itself can pose a challenge: it is necessary to map legal risk to each use case and AI deployment in order to test the viability and legal exposure of AI solutions (and of the organisations that develop, deploy and use them), keeping the full lifecycle of the AI system in mind.
The Cyber Security Agency of Singapore’s Guidelines and Companion Guide on Securing AI Systems further:
- address specific recommended standards; and
- highlight issues concerning data manipulation and integrity.
At this point, the guidelines are advisory in nature, but they may be relevant when considering whether the owners of designated systems and certain companies that have an impact on critical information infrastructure have met their obligations under the Cybersecurity Act.
In terms of industry practice, some companies involved in training and developing AI models have invested in the quality of their data pipelines and data-sharing platforms, addressing legal risks through measures such as privacy-enhancing technologies (eg, federated learning) and other solutions. The landscape is evolving rapidly.
The Singapore government is mindful of the need for balance in the regulation of AI and this has been at the forefront of decisions on the approach to regulating AI. As such, one best practice that we are observing in relation to AI regulation is to ensure that there is no single broad-brush approach towards AI regulation; instead, the specific ministries that regulate various industries have been left to address the regulation of AI in their respective sectors. This also means that at the central level, only key principles relating to AI regulation are being promulgated, which the ministries must bear in mind when seeking to regulate AI within their purview.
Accordingly, a best practice approach is to formulate compliance policies based on principles that are adopted at the corporate level when companies start their own AI compliance and governance frameworks. Caution should be exercised when deciding on the compliance steps required in each department and/or function, which should be calibrated based on the use cases that are most relevant to those departments and/or functions.
A good AI compliance and governance framework should address the following issues (an illustrative sketch of such a checklist follows this list):
- Understand the maturity level of the organisation as a first step towards building an AI compliance and governance framework. This will involve conducting a maturity assessment to understand the level of knowledge, capability and ease of adoption of AI in your organisation. As AI can be a rather complex field – even more so when translated into compliance frameworks – organisations should be slow to insist on a single policy to regulate AI throughout the organisation.
- Establish an AI governance lead and committee to help with AI oversight and set the general direction of AI compliance within the organisation.
- Identify the company’s AI principles when regulating AI, based on existing principles that the company has already established. This will help to contextualise the various AI compliance requirements in a relatable manner for employees.
- Undertake a compliance review to assess risks from the use case instead of solely the AI technology. A unique aspect of AI is the possibility for multifaceted use; thus, any attempt to regulate the tool itself without understanding the use case will lead to either under or overregulation.
- Establish proper contractual clauses or even an AI contract playbook which incorporates the company’s policy requirements as well as AI principles.
- Develop your company’s AI policies and guidelines in line with the level of maturity assessed and, with further iterations of these policies and guidelines, progressively raise standards and complexity as the maturity level advances.
- Adopt a circular and agile approach when it comes to AI compliance and governance. The ever-evolving AI risks and legislative framework make it difficult to have a framework that is cast in stone and does not allow for change.
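None of the frameworks discussed above mandates any particular tooling; purely as an illustrative sketch, the checklist above can be encoded as a simple machine-readable structure that an AI governance committee might use to track outstanding items against the organisation’s assessed maturity level. All item names, owners and maturity levels below are hypothetical.

```python
# Illustrative sketch only: the governance checklist above encoded as a simple
# machine-readable structure for tracking. Item names, owners and maturity
# levels are hypothetical and not drawn from any framework or regulation.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    name: str
    owner: str           # eg, the AI governance lead or committee
    min_maturity: int    # lowest organisational maturity level at which this applies
    done: bool = False

@dataclass
class GovernanceFramework:
    org_maturity: int    # result of the initial maturity assessment (step 1)
    items: list[ChecklistItem] = field(default_factory=list)

    def outstanding(self) -> list[ChecklistItem]:
        """Items applicable at the current maturity level but not yet completed."""
        return [i for i in self.items if i.min_maturity <= self.org_maturity and not i.done]

framework = GovernanceFramework(
    org_maturity=1,
    items=[
        ChecklistItem("Appoint AI governance lead and committee", "Board", 1),
        ChecklistItem("Adopt corporate AI principles", "AI committee", 1),
        ChecklistItem("Use case risk review process", "Compliance", 1),
        ChecklistItem("AI contract playbook", "Legal", 2),  # deferred until maturity rises
    ],
)

for item in framework.outstanding():
    print(f"TODO ({item.owner}): {item.name}")
```

Encoding the checklist in this way supports the ‘circular and agile’ approach described above, as items can be added or re-scoped as the organisation’s maturity level advances.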
Please see question 8.2. Most importantly, engage the right partners with experience of AI compliance and governance to assist you on your AI journey.
It is challenging to delve into each and every risk that the use of AI may present from a contractual perspective, but these risks can be considered at a high level. They can largely be categorised as follows:
- the changing nature of how contracts are executed and concluded;
- emerging issues and facts that may not have been litigated before the courts and for which there are no case precedents; and
- the fast-evolving technology and use cases of AI, which make it difficult to conduct analysis based on previous court decisions.
These risks can be mitigated where contracts for the procurement, use and adoption of AI, and contracts created using AI, are regulated partly by judge-made law and partly by statute. Statutes will be most applicable in regulating:
- tools and technologies that may be relied upon to conclude and create contracts; and
- specific contractual principles, such as the formation of contracts.
In the absence of statutes and regulations that help in deciding the allocation of liabilities among the different players in the development, distribution, use and adoption of AI, the allocation of liabilities becomes strictly a private affair to be managed by the parties to an arrangement or contract. This can lead to:
- the potentially unfair allocation of risks and liabilities where there is an apparent discrepancy in the bargaining powers of the parties to the engagement; or
- uncertainty for the parties, as they will not know for certain the extent of their liabilities, given that they are not even sure:
  - whether they are liable; and
  - if so, which parts of the transactions they are liable for.
By contrast, the implementation of omnibus AI legislation can inadvertently lead to ‘over-regulation’ or ‘ineffective’ regulation in the form of:
- ill-fitting/inflexible regulatory regimes that are not tailored to specific use cases or industry risks (in this regard, Sector Regulations are preferable, since sector regulators are typically more familiar with the issues facing their stakeholders and constituents); and
- a ‘tick-the-box’ culture of notional compliance, where templated assessments, processes and forms drive behaviour aimed at completing ‘compliance’ as a mere task, as opposed to thoughtful and effective evaluation of the risks and harms to be addressed.
In addition, before the introduction (if ever) of any omnibus legislation, it is important to raise the maturity level of stakeholders (businesses, employees, users, excluded stakeholders with no decision-making ability or influence, and the general public). This will entail continued and increased work and resourcing to educate stakeholders across all AI supply chains and use case scenarios – up to and including the beneficiaries of any outputs generated by the use of AI tools – so that sufficient information is disclosed to each stakeholder to enable appropriate decisions on such usage. Singapore is tackling this by investing in various training initiatives at educational institutions (including institutes of higher learning), in worker reskilling and upgrading programmes, in training initiatives by various ministries and through collaborations with professional associations such as the Singapore Computer Society and AI Singapore.
It is not practical to list all risks with regard to potential bias and discrimination in the use of AI, but these can largely be summarised into two broad categories:
- bias and discrimination inherent in the data sets used to train and input into the AI model; and
- bias and discrimination inherent in the AI model itself by virtue of its ability to hallucinate.
The first risk can be mitigated if:
- companies adopt an AI governance framework; and
- prior to training any AI model or using any data with AI, a data governance review process is undertaken, to ensure that the data to be used with the AI model has at least been reviewed.
In relation to the second risk, which may be inherent in the model, human intervention and review will be required with respect to the outputs generated by the AI. However, given that even humans may have certain unconscious biases, it may be challenging even for humans to pick up on such biased and discriminatory outputs. Unfortunately, it may not be possible to completely eradicate such human-related risks; accordingly, greater caution should be exercised prior to considering using AI tools in areas where the risks of bias and discrimination may be more pronounced.
‘Protection’ in the sense of an exemption or clearance from regulatory liability is not specifically afforded for AI but can exist through regulatory sandboxes, the scope and accessibility of which will vary according to the sector-specific regulatory framework. Additionally, regulators have established sandboxes – such as the Infocomm Media Development Authority’s GenAI Sandbox, where entry provides access to resources rather than regulatory privilege or exemption – which do not offer ‘protection’ from liability but instead aim to encourage adoption.
Protection is also available in the sense of establishing or managing proprietary rights over the commercialisation of AI solutions through IP rights.
The Singapore government has sought to position itself as being pro-AI in respect of the development of AI solutions. For example, key amendments to the copyright regime were introduced in the form of the computational data analysis (CDA) provisions in Sections 243 and 244 of the Copyright Act 2021. The CDA provisions were intended to provide a legal exception/legal defence to copyright claims where copyrighted works must be copied or communicated in order to help develop and train AI models.
The CDA provisions strike a balance between the interests of content owners and those of AI developers, particularly in relation to what were traditionally known as ‘text and data mining’ operations. There are no reported cases as yet on their application, including on whether the provisions are suited to generative AI; but it is worth noting that the policy intent behind the CDA provisions was to encourage the responsible development of AI models.
On the question of other IP rights, including patents, Singapore’s IP laws still anchor the subsistence of such rights to human authors, designers and inventors; thus, it is not yet possible to recognise copyright or grant a patent or registered design where the author, designer or inventor is an AI model.
Incentivisation can take the form of:
- access to financing; and
- gradated sector-specific regulatory guidance (ie, targeted guidelines and other documents) to provide greater certainty.
However, in the main, the key strategic approach has been to incentivise development through a pragmatic, ‘light-touch’, multi-stakeholder approach, as reflected in the Model AI Governance Frameworks. In the area of AI model development, Singapore is one of the few jurisdictions that provides explicit (though structured and not unqualified) exemptions from copyright infringement for the copying of works for the purposes of training AI models, through a set of provisions known as the ‘computational data analysis’ provisions of the Copyright Act.
If a company wishes to hire foreign talent to work in Singapore, it will need to consider applying for work passes. There are different types of work passes available in Singapore, which vary depending on:
- the intended salary for the role;
- the nature of the role; and
- to a certain extent, the needs of Singapore for foreign talent in that specific role.
AI companies that wish to engage foreign talent in the AI space should consider the following passes:
- the Tech.Pass;
- the EntrePass; and
- a five-year Employment Pass.
The Tech.Pass is a visa that allows established global tech talent – including entrepreneurs, leaders, investors and technical experts – to undertake multiple economic activities and plug into Singapore’s vibrant tech ecosystem. It is issued by the Singapore Economic Development Board rather than the Ministry of Manpower (MOM). To qualify for a Tech.Pass, the individual must have:
- a last drawn fixed monthly salary (in the past year) of at least S$22,500; and
- at least five cumulative years of experience in a leading role in:
  - a tech company with a valuation/market capitalisation of at least US$500 million or with at least US$30 million in funding raised; or
  - a tech venture capital firm with at least US$500 million in assets under management.
The EntrePass, which is administered by the MOM, is intended for serial entrepreneurs, high-calibre innovators and experienced investors who wish to operate a business in Singapore that is venture-backed or owns innovative technologies. As such, for companies that are looking to hire, the EntrePass may not be suitable.
Finally:
- individuals who meet certain requirements to fulfil one of the roles on the shortage occupation list in the infocomm technology sector can qualify for a five-year Employment Pass; and
- individuals who are employed as AI scientists or engineers have a potentially greater chance of securing an Employment Pass to work in Singapore.
The different avenues through which AI companies can bring foreign talent into Singapore should facilitate the expansion of the talent pool in Singapore for AI-related work.
Generally, across different industries, it may be a challenge to attract academics and practitioners who specialise in AI to come to Singapore, given that the uptake of AI in Singapore is still low. More extensive and widespread adoption will thus be required before Singapore generates use cases of sufficient quantity and quality to attract such talent. There will be a need to work closely with the Singapore government and the relevant ministries to intensify local AI development activities, while also working to scale up the AI knowledge and capabilities of the existing talent pool, so that overseas specialist talent can engage with local AI experts.
Another step that would enhance talent in the AI space would be to encourage the small and medium-sized enterprises that comprise a large portion of the enterprise ecosystem in Singapore to increase their use and adoption of technologies that operate on AI models and platforms. This in turn would incentivise their existing workforce to upgrade their current skills to include the use of AI in their daily operational jobs while retaining their industrial knowledge.
The Smart Nation Initiative, led by the Ministry of Digital Development and Information (MDDI), presented details of the Singapore National AI Strategy 2.0 in 2023. In this strategy document, the MDDI set out 15 actions that Singapore will undertake in order to support its ambitions over the next three to five years. These actions will touch on areas relating to:
- promoting research and development in AI;
- developing and attracting more talent to the AI space in Singapore; and
- establishing the necessary infrastructure to provide a trusted environment for AI innovation.
In the same strategy document, the MDDI outlined measures that Singapore will adopt to enhance trust in the use and adoption of AI. These measures seek to:
- “establish a trusted environment for AI, where people can have the confidence that their interests are protected when interacting with AI”; and
- “take a pragmatic approach – supporting experimentation and innovation, while still ensuring that AI is developed and used responsibly, in line with the rule of law and the safeguards we have put in place”.
The strategy paper further highlights the need for a combination of regulatory updates and voluntary guidelines, while also stressing the need for design interventions that:
- “are risk-based, tiered, and adapted for specific vertical sectors and horizontal applications”; and
- recognise “that every use case carries a different set of considerations and risks, and would therefore require different risk thresholds and context-specific risk management approaches”.
However, at the same time, the MDDI did not discount the possibility of updating broader standards and laws.
Taking a holistic review of the propositions advocated by the MDDI, we believe it is likely that – at least in the coming 12 months – an industry and use case-specific approach will be adopted by the MDDI in regulating AI, rather than a broad-brush general law on AI equivalent to the EU AI Act. However, as the MDDI has also acknowledged, this remains an evolving space and there is no discounting the possibility of a different approach being adopted at some point in the medium or long term.
Prior to entering Singapore and establishing a commercial model in Singapore, companies should engage the right experts to assist with validating (if the company already has one) or establishing an AI compliance and governance framework which both:
- looks at managing risks internally; and
- considers how to support the adoption of AI by potential customers and clients during their AI adoption journey.
If companies are entering Singapore for the first time and are unsure how their AI tool may be used in different industries, a good starting point is to avoid more heavily regulated industries such as healthcare and automotive. Alternatively, if the intent is to work in such industries, consider reaching out to the relevant authorities and/or ministries (eg, the Ministry of Health and the Ministry of Transport) to explore participation in regulatory sandboxes for the implementation of AI tools in these regulated industries. The biggest risk is to rush into the jurisdiction without an appreciation of its needs and with misplaced expectations, in the belief that AI should be used for the sake of AI even if there is no real demand for it.