REGULATION OF HEALTH-RELATED ARTIFICIAL INTELLIGENCE IN MEDICAL DEVICES: THE CANADIAN STORY

Author: Da Silva, Michael

INTRODUCTION

New technologies offer the hope that Artificial Intelligence (AI), which we define below, will positively transform Canada's healthcare system and help address the system's many access, quality, and safety problems. For example, avoidable errors on the part of healthcare professionals are a leading cause of injury even in advanced healthcare systems. (1) Studies suggest that AI tools could dramatically improve healthcare quality by using big data to improve surgical safety. (2) For instance, the OR Black Box is designed to identify surgical distractions and errors using data capture and AI analysis; it should eventually provide real-time recommendations to surgeons that will minimize the chances of preventable errors in surgery. (3) It is also hoped that efficiency gains from AI may free healthcare providers from mundane, routine tasks and provide them with the "gift of time" to work with complex patients and provide compassionate care. (4)

Despite its transformative potential, we must also acknowledge and address the many challenges posed by health-related AI. For example, potentially unsafe AI could be widely deployed through the healthcare system and, if left unchecked, could greatly harm patients. Other challenges include concerns about AI violating privacy rights or the possibility that AI may entrench or create new unfounded biases such that historically marginalized groups continue to receive less or inappropriate care. (5) Whether health-related AI delivers on its potential will partly be a function of whether it is possible to design and implement functional governance and regulatory frameworks. Any governance scheme must seek to realize the benefits of health-related AI whilst minimizing any harms. (6) As Kate Crawford suggests, maintaining this balance calls for "[m]uch stronger regulatory regimes and greater rigour and responsibility around how training datasets are constructed... [and for] different voices in these debates--including people who are seeing and living with the downsides of [AI-enabled] systems." (7)

In terms of the existing governance of health-related AI, Canadian federalism results in a complex and yet incomplete web of regulations that may apply. (8) Federal and provincial governments each have regulatory powers over aspects of healthcare. (9) Primary jurisdiction for healthcare professionals has been interpreted to rest with the provinces and, in turn, each province delegates some powers to sub-provincial bodies (e.g., regulatory colleges who establish and enforce health professional responsibilities). Private law may also incentivize different behaviours: for instance, tort liability could incentivize innovators to take care in their design of health-related AI or the risk of liability could chill the uptake of health-related AI by healthcare professionals. (10) Our analysis here focuses on one part of this regulatory ecosystem: the role of Canada's medical devices regulator, Health Canada, which determines whether devices meet safety standards and can accordingly be manufactured, imported, and sold in Canada. We thus focus primarily on the safety and efficacy issues Health Canada is charged with addressing in the medical devices context. As we will discuss below, Health Canada's mandate may need to be broadened to address the relevant issues more effectively. For example, Health Canada can currently only regulate devices "sold" in Canada (with a caveat for some research studies) and this commercial understanding of Health Canada's mandate leaves devices implemented for in-house use by a hospital for non-research purposes outside of Health Canada's regulatory scrutiny. (11) In other instances, a shift in how the regulator interprets its mandate--and the criteria it applies in regulating products--appears necessary in view of the issues AI presents. Most notably, "safety" needs to be understood more broadly to address issues surrounding an AI product's potential biases. (12) Yet our work builds on an understanding that Health Canada is--at its core--mandated to ensure product safety and efficacy and therefore that new issues presented by AI, such as the risk of bias, can and should be subsumed into how the regulator reviews AI technologies in the service of its consumer protection mandate. (13)

In Part I, we highlight the need to regulate the sale of medical devices with AI to address the possibility of error in AI analyses. This is where we argue for an expansive interpretation of "safety" to include not only errors in programming or other malfunctioning of AI but also risks of algorithmic bias and related issues of privacy and data governance. In Part II, we explain how Health Canada currently regulates medical devices, including recent changes that potentially provide for an increased role for post-market surveillance. In Part III, we discuss in more detail how the existing regulatory scheme applies to health-related AI devices. In Part IV, we discuss Health Canada's new initiative to provide bespoke licensing pathways for novel technologies, including "adaptive" machine learning AI. Such pathways are now permitted under recent amendments to the Food and Drugs Act and Health Canada is presently developing details on how they will be operationalized. (14) Finally, in Part V, we assess whether recent changes are sufficient to meet the concerns we identified and whether the pathways in Part IV could address any gaps.

By providing the first detailed description of Canadian regulations of medical devices and AI in healthcare, we hope to provide a baseline from which we and others can better consider reform options. (15) We conclude by arguing that Health Canada should strongly regulate health-related AI both in the pre-market and post-market phases and not see any attention to the latter as condoning a "lighter" approach to the former. Both are critical. We further argue that in pre-market assessment, Health Canada should explicitly address bias and privacy issues within its remit on safety and that it should work towards transparent evidentiary standards for "safe" AI. In the course of these analyses, we highlight the need for federal investment in Health Canada's regulatory efforts and the development of representative datasets on which AI can be trained.

  1. THE NEED FOR REGULATION

    1. THE PHENOMENON

      Let us start with definitions.

      Artificial Intelligence (AI) is the use of computer systems to perform tasks traditionally requiring human cognition without direct human intervention after initial programming. (16) Humans provide inputs into a computer system that permits a computer or computer-controlled "artificial" entity to perform the tasks without further human aid.

      Machine learning AI (ML) is AI that collects new inputs through its own operation and adapts to produce new, hopefully improved, outcomes (predictions, analyses, etc.). (17)

      Neural networks connect many simple processing nodes into algorithmic processes that can be trained to respond to inputs in a manner aiming to mimic human cognition. Neural networks are a form of ML. (18)

      Deep learning is a high-powered species of neural network with numerous "layers" of nodes, many of which may be hidden (viz., are not, strictly speaking, input or output layers). (19)
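      To make these layered definitions more concrete, the following sketch shows a toy neural network with two hidden layers of nodes computing a single output from a handful of numeric inputs. It is written in Python using only the numpy library; the feature names, weights, and "risk score" are our own illustrative inventions and do not correspond to any real or approved clinical model.

```python
# Illustrative only: a toy "deep" network with two hidden layers of nodes.
# All numbers and feature names are hypothetical; no clinical model is implied.
import numpy as np

rng = np.random.default_rng(seed=0)

# A hypothetical patient record reduced to three numeric inputs
# (e.g., age, blood pressure, a lab value), scaled to comparable ranges.
patient_inputs = np.array([0.62, 0.48, 0.91])

# Weights connecting the input layer to two hidden layers and then to a
# single output node. In practice these are learned from training data
# rather than drawn at random as they are here.
w_hidden1 = rng.normal(size=(3, 4))   # inputs -> first hidden layer (4 nodes)
w_hidden2 = rng.normal(size=(4, 4))   # first hidden layer -> second hidden layer
w_output = rng.normal(size=(4, 1))    # second hidden layer -> output node

def relu(x):
    """A simple activation function applied at each node."""
    return np.maximum(0.0, x)

# Forward pass: each layer transforms the previous layer's outputs.
layer1 = relu(patient_inputs @ w_hidden1)
layer2 = relu(layer1 @ w_hidden2)
risk_score = 1 / (1 + np.exp(-(layer2 @ w_output)))  # squashed to a 0-1 value

print(f"Illustrative risk score: {risk_score.item():.2f}")
```

      The intermediate layers are "hidden" in the sense noted above: only the inputs and the final score are visible to a user, which is part of why such systems can be difficult to interrogate.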

      Health-related AI is heterogeneous and ranges from computers that read and interpret medical scans, smartwatch-based heart monitors, and algorithms that make staffing recommendations to algorithms that make initial triage assessments about whether a patient needs to see a doctor and those that provide real-time recommendations on how to improve surgery. (20) Health-related AI may be used under the supervision of humans or autonomously, such that, e.g., AI "robots" could eventually even perform surgery without human oversight. (21) The enormous heterogeneity of AI reflects the heterogeneity of healthcare needs and responses and makes the design of appropriate governance more complex, as discussed further below.

    2. THE ISSUES

      The use of AI in healthcare raises safety concerns. One issue is how to ensure that only safe AI tools are on the market--whether adopted into public "Medicare" or available for sale and use in the private sector. Some features of AI make it especially difficult to predict risks or identify adverse events. For instance, "adaptive" ML changes over time, by definition, as it learns. Consequently, not all risks stemming from ML use are identifiable before ML is adopted into healthcare; an ex ante licence demonstrating ML safety at the time an application is made to the regulator may then be of limited value. (22) Health Canada has not yet approved a medical device with adaptive ML for the Canadian market, but now plans to permit adaptive ML products through a new framework detailed below. (23) The regulatory challenge is significant. The nature of some AI "decision makers", like deep learning neural networks, means that neither regulators nor healthcare professionals can understand how a decision is reached. (24) Such AI "reasoning" can even remain opaque to the AI's developers, rendering the AI a "black box". (25) Opacity may make it hard to identify problems (adverse events, bias, etc.) in advance or in practice. (26) Requiring "explainable" AI may not resolve all these issues: "explanations" often provide a proxy for how AI works, not direct understanding, and some AI tools could be safe and effective without being explainable. (27)
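      What "adaptive" means in practice can be seen in a minimal sketch. The example below, written in Python and assuming scikit-learn's SGDClassifier as a stand-in for an adaptive model, keeps updating its learned parameters as new records arrive after deployment; the data are randomly generated placeholders and the scenario is hypothetical rather than a description of any actual device.

```python
# Illustrative only: an "adaptive" model that keeps learning after deployment.
# The data are randomly generated stand-ins for de-identified patient records.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=1)

def synthetic_batch(n):
    """Generate a hypothetical batch of patient features and outcomes."""
    X = rng.normal(size=(n, 5))                    # five numeric features per patient
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy outcome rule
    return X, y

model = SGDClassifier(random_state=0)

# "Pre-market" phase: the version of the model a regulator could evaluate.
X_initial, y_initial = synthetic_batch(500)
model.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))
weights_at_licensing = model.coef_.copy()

# "Post-market" phase: the model updates itself as new data arrive in use.
for _ in range(10):
    X_new, y_new = synthetic_batch(50)
    model.partial_fit(X_new, y_new)

drift = np.abs(model.coef_ - weights_at_licensing).max()
print(f"Largest change in a learned weight since licensing: {drift:.3f}")
```

      The specific algorithm is beside the point; what matters is that the parameters in clinical use are no longer the parameters that were reviewed ex ante, which is precisely the feature that strains a one-time licensing model.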

      Another safety-related concern is that of algorithmic bias. AI is only as good as the data fed into it ("rubbish in, rubbish out"). (28) If training datasets used by AI programmers systematically under-represent groups (e.g., women, Indigenous Peoples, Black Peoples, and Peoples of Colour, or other groups that have been marginalized), bias can result. (29) Bias is linked closely to safety concerns, for bias can undermine the accuracy of diagnosis or treatment recommendations, as, e.g...
