2022 Review

Regulation starts to catch up with AI in pharma

As AI takes over drug discovery and clinical research, governments prioritise the regulation of AI in pharma, writes Akosua Mireku.

Artificial intelligence (AI) continued to make headlines with several high-profile deals last year, as the pharmaceutical industry readily adopted AI models to improve drug discovery. But as the field grows in leaps and bounds, many authorities have prioritised the release of new guidelines, frameworks, and regulations to keep pace with these advances.

AI applications such as ChatGPT, an OpenAI chatbot that understands natural-language prompts and produces in-depth writing, have taken the world by storm and expanded the possibilities of using AI. Across all sectors, AI applications are being used to increase efficiency and reduce costs. This is true not only for the general public using applications to create images or text, but also for pharma companies seeking to improve drug discovery, clinical trial recruitment, and the identification of new biomarkers.

According to a cross-sector survey by the consulting firm McKinsey & Company, 52% of respondents reported investing in AI this year, compared to 40% in 2018. Despite this increasing adoption, companies that use AI in at least one function have reportedly made no substantial increase in AI-related risk mitigation since 2019. This could spell high-stakes consequences for consumers, whose privacy and safety could be at risk if AI models are not regulated. Moreover, concerns about the fairness, bias, and accountability of AI systems have increasingly come to the forefront of the industry.

Major AI deals in 2022

As of June 2021, companies such as Bristol Myers Squibb, Bayer and Pfizer led the AI deals space in the pharmaceutical sector. In 2022, however, other major players also stepped up.

Sanofi stood out with a spree of partnerships, including a $100 million collaboration with Exscientia. At the recent Financial Times Global Pharma and Biotech 2022 Summit in London, Frank Nestle, Sanofi’s global head of research and chief scientific officer, explained the company’s interest in AI as one where “[AI] meets HI (human intelligence) and gives us predictive models. These predictive models give us novel hypotheses, which will hopefully speed up drug discovery, make it more affordable and give us more, better quality medicines”.

In August, Sanofi signed a $20 million strategic multi-target research collaboration with Atomwise for AI-based drug discovery. Under this collaboration, Sanofi is leveraging Atomwise’s AtomNet platform for the discovery and research of up to five drug targets. In a similar deal in November, Sanofi agreed to pay $21.5 million upfront to Insilico Medicine to leverage its AI platform Pharma.AI for drug discovery.

In an email, Alex Zhavoronkov, the founder and CEO of Insilico Medicine, told Pharmaceutical Technology, “Data privacy and protection are critical to our business, and any businesses utilizing AI, as is compliance with all international laws and regulations... I expect that these measures will become more stringent as the technology continues to evolve”.  

Other companies have chosen to expand upon previous partnerships with AI firms. Last month, Evotec announced a €15 million ($15.8 million) investment in Exscientia to support the companies’ continued partnership. Drug candidates identified through the Evotec-Exscientia and Sumitomo Dainippon-Exscientia collaborations entered human clinical trials in 2021.

In January, AstraZeneca announced the expansion of its partnership with BenevolentAI for another three years for drug discovery in systemic lupus erythematosus and heart failure. The partnership has allowed the British pharma company to discover two additional novel AI-generated targets for chronic kidney disease and idiopathic pulmonary fibrosis. At the same FT Summit, Werngard Czechtizky, head of medicinal chemistry, respiratory and immunology at AstraZeneca, spoke of how the company is using AI: “The machine learning path allows us to have really good targets… [and] we are hoping we can improve the speed and quality of what we’re doing”.

According to GlobalData, global AI revenues in the pharmaceutical, medical, and healthcare sectors are expected to reach almost $21 billion by 2025. GlobalData is the parent company of Pharmaceutical Technology.

Alongside major AI-related deals in the pharmaceutical industry, regulators this year attempted to put safeguards in place to prevent possible negative effects of the technology. Here are some of the initiatives that made news in 2022.

UK sets out to shape global AI standards

The year began with the UK making major moves to shape privacy standards for AI research. In January, the Alan Turing Institute, the British Standards Institution (BSI), and the National Physical Laboratory (NPL) formed a partnership to shape global technical standards for AI. The move was part of the UK’s National AI Strategy, whose third pillar is “Governing AI effectively”. The strategy, and the new standards arising from it, aim to improve privacy protections for any data used by AI technology and to reduce biases that may arise from that data. In pharmaceutical research, this may help secure the privacy of patient data used by AI systems and decrease bias due to ethnicity, sex and other factors.

FDA and EU action

Last month, the FDA’s Office of Digital Transformation released its Cybersecurity Modernization Action Plan, which aims to “strengthen the FDA’s ability to protect sensitive information, modernize cybersecurity capabilities, and improve situational awareness to decrease overall security risks to the Agency”. Among other things, the plan aims to improve the cybersecurity of AI systems.

In the European Union, the proposed AI Act, touted as the first law specifically pertaining to AI from a major regulator, assigns AI applications to three categories depending on the level of risk they pose. In September, the European Commission adopted proposals to update liability rules for manufacturers, including those in pharma, and to harmonise these rules across member states so that they complement the AI Act.

Canada makes moves for privacy

In November, Canada’s Digital Charter Implementation Act (DCIA), one portion of which deals with AI, was tabled in the House of Commons. The stated purpose of the Artificial Intelligence and Data Act (AIDA), included in the DCIA, is to “regulate international and interprovincial trade and commerce in AI systems” and to set out rules for the design, development and use of those systems, among other things.

China takes aim at recommendation algorithms

In March, the Chinese government announced a mandatory registration system under which AI algorithms deemed to have “public opinion characteristics” and “social mobilization capabilities” must be filed with the Internet Information Service Algorithm Filing System. China’s algorithm regulation mostly focuses on the role recommendation algorithms play in disseminating information, ensuring that providers do not “endanger national security or the social public interest” and that they “give an explanation” when they harm the legitimate interests of users.

In the pharma context, China’s legislation may require companies to file relevant algorithms, though most observers expect the rules to be aimed broadly at preventing the ethical risks and biases of certain AI algorithms.

Given these trends and the growing role of AI in pharma, companies may increasingly need to consider new legal parameters when conducting research with AI technology.

Main image credit: Getty Images/ SDI Productions