Medicine by Algorithm? Will Computers Replace Doctors?
Late last year, the FDA released a draft guidance document that will loosen the reins on some types of medical software. This development is part of a larger trend in which artificial intelligence (AI) and other technologies take on a more central role in the doctor/patient relationship.

The FDA's guidance deals with "clinical decision support" (CDS) software: programs that "analyze data within electronic health records to provide prompts and reminders to assist health care providers in implementing evidence-based clinical guidelines at the point of care" (for a simplified illustration of how such a rule-based prompt might work, see the sketch at the end of this article). In other words, "evidence-based" medicine, as determined by government regulators. Make no mistake: CDS is the future of medicine. The AI health market is projected to hit $6.6 billion by 2021, up from $600 million in 2014.

AI supported by machine learning algorithms is already being integrated into the practice of oncology. The most common application of this technology is the recognition of potentially cancerous lesions in radiology images. But we are rapidly moving beyond image analysis to a point where AI is used to make clinical decisions. When a mistake is made, who is accountable: the algorithm, or the doctor? How do we hold an algorithm accountable?

There are other problems with using AI and machine learning in medicine. IBM developed Watson for Oncology, a program that uses patient data and national treatment guidelines to guide cancer management; media reports in 2018, based on internal IBM documents, found that the system had at times recommended unsafe and incorrect treatments. And as we've seen, Google and other tech giants are greedy for our health data so they can develop more of these tools.

The ethical questions are obvious. Technology should absolutely be harnessed to improve medicine and clinical outcomes, but AI cannot replace the doctor/patient relationship.
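To make the CDS concept concrete, here is a minimal, hypothetical sketch of the kind of rule-based logic the FDA's definition describes: software that checks a patient's electronic health record against a guideline rule and emits a prompt at the point of care. It is written in Python purely for illustration; the rule, the threshold, and all field names are invented, not drawn from any actual guideline or product.

```python
# A deliberately simplified, hypothetical illustration of rule-based clinical
# decision support (CDS): check a patient's electronic health record against
# a guideline rule and emit a point-of-care reminder.
# All names, thresholds, and rules are invented for illustration; they are
# not drawn from any real guideline or product.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    age: int
    conditions: set[str] = field(default_factory=set)
    last_hba1c: float | None = None  # most recent HbA1c lab value, if any

def diabetes_screening_prompts(record: PatientRecord) -> list[str]:
    """Return point-of-care reminders for one (made-up) guideline rule."""
    prompts = []
    if "type_2_diabetes" in record.conditions:
        if record.last_hba1c is None:
            prompts.append("Reminder: no HbA1c on file; consider ordering one.")
        elif record.last_hba1c >= 9.0:  # illustrative threshold only
            prompts.append("Prompt: HbA1c above target; review management plan.")
    return prompts

if __name__ == "__main__":
    patient = PatientRecord(age=58, conditions={"type_2_diabetes"}, last_hba1c=9.4)
    for message in diabetes_screening_prompts(patient):
        print(message)
```

Note that even a "simple" prompt like this encodes a clinical judgment (someone chose that threshold), which is exactly why the accountability questions above matter.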