Artificial intelligence has already revolutionized many parts of medicine, and it will continue to do so as tools like machine learning in healthcare become more refined and widely adopted. However, the human aspect of healthcare will always remain, and there are ethical lines to walk when deciding how and why artificial intelligence is used in the field.
Health education, medicine, surgery, and research and development have all been positively impacted by artificial intelligence, but each brings with it ethical decisions that must be made.
Current trends in health education are moving away from a focus on knowledge recall and toward hands-on training, and artificial intelligence can play a big part in the latter. Aspiring physicians, for instance, can now train on robotic "patients" whose artificial intelligence is programmed to "tell" student doctors what is wrong, and what (if anything) has been fixed or further damaged after a given procedure. The ethical question that arises in situations like this: although these novice doctors do gain "experience," is it fair to claim it if they have never worked on an actual person?
Most of the ethical discussions revolving around AI in health education come back to the question of what qualifies as real-world experience, and whether it is fair to tell patients that a doctor is experienced if he or she has only ever worked on AI dummies.
On the other side of the procedural coin are the doctors being replaced by machines. AI-assisted surgeries have gradually become more popular, and a report from the Harvard Business Review determined that almost 90% of surgeries performed in 2020 could have been robot-assisted. Yet trust in such procedures remains very low, regardless of statistics showing their success compared with human-only procedures. In the same study, a poll of 700 individuals found that only 26% would be willing to undergo a robot-assisted surgery when a doctor's procedure was also available.
When it comes to the ethics of these procedures, most questions center on malpractice and responsibility when something does go wrong. This lack of clear accountability is one of the major reasons trust in AI surgeries remains low, even though statistics show them outperforming purely human procedures.
The data industry is expected to be worth more than $275 billion by the end of next year, and sharing electronic health data can drive exponential improvements in care. AI and machine learning use this health data to identify trends across demographics, ultimately allowing physicians to deliver better diagnoses and better treatment plans for a given ailment. However, data breaches are still very common, and health data is extremely personal.
The ethical issue here again involves a line: the line between advancing the science of medicine with AI and risking personal information being stolen on a grand scale.
Similar issues exist with advancements in medication. The only way to know whether a medicine is working is to collect information from the patients who use it, but if those patients don't want that information used beyond their own care, medical teams must decide whether it is more ethical to respect those wishes (and it is currently illegal to do otherwise) or to use the information for the greater good.
Ultimately, AI will drive evolutions in healthcare that are currently beyond the scope of imagination, but as is the case with any new technology, public acceptance will lag behind the evolutions themselves. Education and transparency surrounding these technologies and their success rates are as important to their implementation as creating the new procedures, medicines, and other innovations themselves, and no matter the success rate, there will always be patients who prefer the human aspect of healthcare. Whether or not they will always get that wish is an ethical conversation to be had when that time comes.