Bringing principles of ethics to AI and drug design

Researchers believe that artificial intelligence has the potential to usher in an era of faster, cheaper and more fruitful drug discovery and development.

Over the years, researchers have used AI to analyze troves of biological data, scouring for differences between diseased and healthy cells and using the information to identify potential treatments. More recently, AI has helped predict which chemical compounds are most likely to effectively target SARS-CoV-2.

But with AI’s potential in drug development comes a slew of ethical pitfalls — including biases in computer algorithms and the philosophical question of using AI without human mediation.

This is where the field of biomedical ethics — a branch of ethics focused on the philosophical, social and legal issues in the context of medicine and life sciences — comes in.

In mid-March, adjunct Stanford University lecturer Jack Fuchs, PhD, moderated a discussion about the need for clearly articulated principles when guiding the direction of technological advancements, especially AI-enabled drug discovery.

Russ Altman, MD, PhD, a Stanford Medicine professor of bioengineering, genetics, medicine, biomedical data science and computer science, and Kim Branson, PhD, global head of AI and machine learning at the pharmaceutical company GlaxoSmithKline, joined Fuchs in the discussion.

Branson said that, when thinking about AI and drug development, “You suddenly realize that you need an ethical framework.”

“These aren’t abstract things or gray goo scenarios or what-ifs,” he said. “These are real things that are happening now that we actually have to make decisions about.”

The future of AI and drug development

There is no question that AI has been a tremendous boon to drug development, said Altman. For example, when combing through large genomic databases, AI is basically mandatory for finding the genetic variants correlated with diseases of interest, he said. Those genetic variants can turn out to be effective drug targets. AI is also good at detecting patterns, which can be useful in searching electronic medical records for groups of patients with similar characteristics, Altman noted.

AI can also help scientists visualize the three-dimensional molecular structure of proteins, which is critical for developing drugs that target those molecules. “That whole three-dimensional structure and molecular understanding of drug action is about to be revolutionized,” said Altman.

But ethical questions remain: Big genomic databases, for example, tend to include information mostly from people of European ancestry, which can be problematic when translating findings from the data to the entire population. Using AI to scan electronic medical records also carries the risk of breaching patient privacy.

Ethical science is better science

In medicine, ethical questions can arise in a variety of settings, said Altman. They can appear when a health care provider must make a decision regarding a patient, or in clinical trials. For example, if there is already a treatment available for a disease, you can’t have a placebo group in your study that is not receiving any treatment, Altman explained. “That would be unethical for half of your patients to not even receive the standard of care.”

And that applies to AI.

Considering the ethics of AI projects can require more time and money, “but we have to make an attempt,” said Branson. “We need to make reasonable attempts to address all of the ethical issues, and that’s before you even write a single line of code.”

Then, when the AI model is being built, you need to think about both the intended and unintended uses of the technology, said Branson. “If someone else had access to this, how else could they use this in different settings?” he asked. In other words, could someone use this product in an unethical way?

Another common misstep: waiting until the last stage of a research project to factor in ethics. But Altman and others hope that efforts such as the new Ethics Fellowship, designed to boost the prevalence of ethics-minded AI scientists, can address that problem.

“You don’t just sprinkle ethics on top of a project,” said Altman. “The project has to start with a scientific question and the ethical framework of that question.”

This article is based on a podcast originally shared by the Stanford School of Engineering.

