
On 25 June 2020, the International Organisation of Securities Commissions (“IOSCO”) published a consultation report recommending that market intermediaries and asset managers using artificial intelligence (“AI”) and machine learning (“ML”) abide by its proposed guidance and measures[1]. The proposed guidance reflects an expectation of high standards of conduct across market intermediaries and asset managers.

This article aims to provide a brief overview of the six measures proposed by IOSCO in its guidance.

Who is IOSCO?

IOSCO is the leading international policy forum for securities regulators and is recognised as the global standard for securities regulation. The organisation’s membership regulates more than 95% of the world’s securities markets in over 115 jurisdictions[2].

IOSCO Mandate

The use of AI and ML may benefit market intermediaries and asset managers, such as by increasing efficiency and reducing costs. However, it may also create or amplify risks, potentially undermining financial market efficiency and causing harm to investors and other market participants. Consequently, regulators are focusing increasingly on the use and control of AI and ML in financial markets.

The IOSCO Board identified AI and ML as an important priority in 2019, with a view to guarding against potential risks and protecting investors from possible harm caused by such practices[3]. In this respect, the IOSCO Board asked its Committee on Regulation of Market Intermediaries and its Committee on Investment Management to examine best practices arising from the supervision of AI and ML, and also to propose guidance that member jurisdictions may consider adopting to address the conduct risks associated with the development, testing and deployment of AI and ML.

The deadline for comments on the consultation report was 26 October 2020.

Defining the Terms AI and ML

AI can be understood as the combination of mass data, sufficient computing resources and ML. AI systems may accomplish simple, repetitive tasks or, in more sophisticated forms, may to some degree self-learn and perform autonomously, based on a system that mimics human cognitive skills or capabilities.

ML is a subset and application of AI, which focuses on the development of computer programs that analyse and look for patterns in large quantities of data, with the aim of building knowledge to make better future decisions. A successful ML algorithm will learn and evolve over time and will possibly make recommendations that were not explicitly envisaged when it was created.
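To make this concept concrete for readers less familiar with the technology, the sketch below illustrates, under purely illustrative assumptions (synthetic data, the open-source scikit-learn library and a simple logistic regression model), how an ML program is trained on historical data and then applied to data it has not previously seen. It is not drawn from the IOSCO report and is not specific to financial markets.

```python
# Purely illustrative sketch (not part of the IOSCO guidance): a minimal
# supervised ML workflow in which a model learns patterns from historical
# data and then makes predictions on data it has not seen before.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a large quantity of historical observations.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out part of the data so that the learned patterns can be validated
# on observations the model was never trained on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# "Learning": the algorithm infers patterns from the training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# "Better future decisions": the fitted model is applied to new data.
predictions = model.predict(X_test)
print(f"Out-of-sample accuracy: {accuracy_score(y_test, predictions):.2f}")
```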

How firms are using AI and ML techniques

IOSCO’s engagement with market intermediaries and asset managers revealed that the use of AI and ML allows firms to reallocate resources towards the more cognitive aspects of their business, such as strategy, portfolio selection and generating investment ideas.

Market intermediaries are deploying these technologies in:

  • advisory and support services;
  • risk management;
  • client identification and monitoring;
  • selection of trading algorithms; and
  • asset management / portfolio management.

The use of AI and ML by asset managers appears to be at a nascent stage, with these technologies mainly being used to support human decision-making. Asset managers are deploying these technologies in order to:

  • optimise portfolio management;
  • suggest investment recommendations; and
  • improve internal research capabilities, as well as for back office functions.

Identified potential risks and harms

IOSCO’s engagement with market intermediaries and asset managers identified seven key areas of potential risks and harms in relation to the use of AI and ML: (i) governance and oversight; (ii) algorithm development; (iii) testing and ongoing monitoring; (iv) data quality and bias; (v) transparency and explainability; (vi) outsourcing; and (vii) ethics.

Proposed Guidance

The proposed guidance is mainly based on good practices currently being carried out by some companies or expected by several regulators across the globe.

The proposed guidance, which consists of six measures, contains the following:

Measure 1

Designating senior management responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML. This includes implementing a documented internal governance framework with clear lines of accountability. Senior management should designate an appropriately senior individual, with the relevant skill set and knowledge, to sign off on the initial deployment and substantial updates of the technology.

Measure 2

Adequately testing and monitoring the algorithms to validate the results of an AI and ML technique on a continuous basis. The testing should be conducted in an environment that is segregated from the live environment prior to deployment to ensure that AI and ML:

  • behave as expected in stressed and unstressed market conditions; and
  • operate in a way that complies with regulatory obligations.

Measure 3

Allocating adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the company utilises. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including on their level of knowledge, expertise and experience.

Measure 4

The market intermediary or asset manager should understand its reliance upon, and manage its relationship with, third-party providers, including monitoring their performance and conducting oversight. To ensure adequate accountability, the market intermediary or asset manager should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. This agreement should contain clear performance indicators and should clearly determine sanctions for poor performance.

Measure 5

Regulators should consider what level of disclosure of the use of AI and ML is required from companies, including the following:

  • regulators should consider requiring the market intermediary or asset manager to disclose meaningful information to clients around its use of AI and ML that impacts client outcomes; and
  • regulators should consider what type of information they may require from market intermediaries or asset managers using AI and ML, to ensure they can have appropriate oversight of those companies.

Measure 6

Having appropriate controls in place to ensure that the data on which the performance of the AI and ML depends is of sufficient quality to prevent biases, and is sufficiently broad for a well-founded application of AI and ML.

Conclusion

Several world’s leading financial and capital market participants, such as the Global Financial Markets Association (GFMA) and BlackRock welcomed the opportunity to input into IOSCO’s work on AI and ML and provided high-level responses to the IOSCO consultation report within the deadline of 26 October 2020. IOSCO is currently processing the various contributions received.

Although the proposed guidance is mainly based on good practices and is non-binding, IOSCO members might consider adopting the proposed measures in the context of their legal and regulatory frameworks, irrespective of the outcome of the consultation, and in turn expect these high standards of conduct across their market intermediaries and asset managers.
