Ethical Issues and Concerns of AI in Medical Imaging
Technology & Innovation | March 25, 2026 | Brandon Hughes
This is an AHRA Quick Credit article. The corresponding post-test in the AHRA Online Institute is coming soon.


With the continual rise of AI in society, it is fitting to examine the role AI has played in medical imaging. AI describes machines that imitate cognitive functions naturally applied by humans. AI has been used in medical imaging for decades, and its benefits include image interpretation, image reconstruction, reduced radiation dose, and improved image quality.

As with any technology, AI in medical imaging is accompanied by its own unique considerations, including ethical concerns. Ethical issues with AI in medical imaging include explainability, transparency, accuracy, privacy, generalizability, trust, accountability, oversight, patient and societal well-being, fear of radiologist replacement, responsibility, and legality.

The reliability of AI in image diagnosis, lack of transparency, the black box phenomenon, and patient confidentiality concerns are of particular importance for medical imaging. Given these concerns, a regulatory framework is necessary to ensure AI is used in an ethical manner.  


For many years, AI has been a part of the societal consciousness, often feared for an inevitable takeover of the world. More recently, it has become a buzzword charged with excitement at its potential to make the world better.

Although fear remains for many, AI has been assisting in pathological diagnosis and treatment in medical imaging and radiation therapy for many years.1 The technology is a dynamic field of study, and all facets must be considered not only to understand the advancement but also to determine whether its implementation will be beneficial.

This article will describe AI contextually within the medical imaging sciences with considered focus on the perceived benefits, ethical issues, and continued concerns.

What Is Artificial Intelligence and Where Did it Come From?

Something artificial is made by humans to resemble a natural occurrence, and intelligence is the ability to learn and manage situations based on objective criteria.2-3 One might reduce the term to a synonym like "fake thinking." However, AI goes beyond this banal estimation to describe machines that imitate cognitive functions naturally applied by humans in learning and problem solving based on comprehension of known data.4-6

Instead of human brain cognition and problem solving based on learning and the perceived environment, computers with enormous computational power use developed algorithms for decision making.4-7 Through giant data sets these algorithms learn to act autonomously.4-7 Just as we use knowledge gained through the accumulation of information and experience, AI uses data sets labeled with interpretations to learn.4-7

Although the idea of a thinking machine has been around for millennia, it wasn't until the advent of electronic computers that it became a reality.4,8 In 1956, a collection of scientists from engineering, psychology, mathematics, and other fields convened for a workshop on the idea of an artificial brain capable of intelligent computation.4,8-9 It was then that computer scientist John McCarthy coined the term artificial intelligence.4,8-9 Thereafter, artificial neural networks (ANNs) using mathematical algorithms to imitate the human cognitive neural network were developed.4 In 1967, the Mark I Perceptron became the first computer to function while learning through trial and error.4

As computational power and technology expanded, so did AI.4,8,10 A boon for medical imaging was that AI could take input data, like images, and derive meaning. In 2015 a convolutional neural network was developed with the ability to identify and categorize images more accurately than an average human.4 The age of AI had perhaps arrived.

Subsets of AI: Machine and Deep Learning

Machine learning is the subfield of consequence when it comes to medical imaging, as it uses model-based algorithms (mapping inputs to outputs) to learn and make predictions based on presented data, such as patient images.4-5,11 Artificial neural networks, perhaps the most widespread algorithms in machine learning, use a structure of interconnected points (nodes modeled on the brain's neurons) to accurately evaluate complex data toward a predictive output.4 Labeled data sets, wherein users select image features to define and classify image data, train the algorithms to predict outcomes for new data.4,8 The computer then learns which inputs go with which outputs, enabling it to interpret new input data, such as medical images.4 Just as we learn that a hot stove means burning and pain, AI learns that "x" is associated with "y."
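The input-to-output learning described above can be sketched in a few lines. This is a deliberately minimal illustration, not a clinical algorithm: the feature names, values, and labels are hypothetical, and a single nearest-neighbor lookup stands in for the far richer models used in practice.

```python
# Minimal sketch of supervised learning from labeled data: training pairs
# hypothetical image features (lesion density, diameter) with diagnostic
# labels, and a new case takes the label of its closest training example.

def nearest_label(training, new_case):
    """Return the label of the training example whose features are closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(training, key=lambda ex: dist(ex[0], new_case))
    return label

# Hypothetical labeled data set: (density, diameter_mm) -> label
training = [
    ((0.2, 4.0), "benign"),
    ((0.3, 6.0), "benign"),
    ((0.8, 22.0), "malignant"),
    ((0.9, 18.0), "malignant"),
]

print(nearest_label(training, (0.85, 20.0)))  # -> malignant
```

The "learning" here is nothing more than remembering which inputs went with which outputs, which is the intuition the paragraph above describes; real systems fit a parameterized model instead of memorizing examples.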

Machine learning can quantify characteristics of medical images such as the density and shape of lesions.6 This is achieved through three different types of “layers” that build the desired output.5

Two of the layers are self-explanatory: the input layer receives the data, and the output layer presents the results.5 It is the hidden layer(s) of processing, where data patterns are extracted to produce the output, that are difficult to understand and raise ethical concerns, which will be discussed later.5
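The three layer types above can be made concrete with a toy forward pass. The weights below are hand-picked illustrative values, not a trained model; the point is only to show data flowing from an input layer, through a hidden layer that transforms it, to an output.

```python
import math

# Toy forward pass through the three layer types described above:
# input layer receives data, hidden layer extracts a transformed
# representation, output layer presents a single result.

def forward(inputs, w_hidden, w_output):
    # Hidden layer: weighted sums of the inputs, squashed by a sigmoid.
    hidden = [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
              for row in w_hidden]
    # Output layer: weighted sum of the hidden activations.
    return sum(w * h for w, h in zip(w_output, hidden))

w_hidden = [[0.5, -0.2], [0.1, 0.8]]  # two hidden nodes, two inputs each
w_output = [1.0, -1.0]

score = forward([0.6, 0.9], w_hidden, w_output)
print(round(score, 3))  # -> -0.156
```

In a real network these weights are learned from labeled data, and the hidden activations are exactly the intermediate values that are hard to interpret, which is the "black box" concern discussed later in the article.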

The trained algorithm can classify the image based on these recognized features, some of which may be invisible to humans.8 The usefulness of traditional machine learning in medical imaging has been evident since the 1980s, though the need for input features and extensive labeled data can be burdensome.5-6,8 Deep learning is a subset of machine learning that alleviates some of this necessary front-end work.8

Deep learning can perform without human intervention and user input.4,8 The deep ANN used in deep learning differs from that of traditional machine learning: it has many hidden layers extracting patterns within the data.5,12 Although deep learning requires large amounts of training data and computing power, these ANNs can detect and extract complex relationships and patterns, mimicking the human brain cortex.5,12-13

More complex convolutional neural networks (CNNs) in deep learning are particularly advanced in visual applications.5 Deep learning can glean higher-level features from raw data, such as recognizing shapes from image intensities, without the detection of specific image features, and can make predictions without supervised, labeled data.4-5,8 As a result, deep learning has shown significant performance improvements over traditional machine learning, since explicit programming from input to output is unnecessary once the computer has learned from previous data.4,8
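The core operation that lets a CNN recognize shapes from image intensities is convolution. The sketch below hand-writes one filter to show the mechanism; in a trained CNN such filters are learned from data, stacked across many layers, rather than specified by a person.

```python
# Sketch of the convolution at the heart of a CNN: a small filter slides
# over pixel intensities and responds strongly where its pattern appears.
# The vertical-edge filter here is hand-written for illustration only.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Toy "image": dark left half, bright right half (an edge down the middle).
image = [[0, 0, 1, 1]] * 4
vertical_edge = [[-1, 1], [-1, 1]]  # responds where intensity jumps left-to-right

print(convolve2d(image, vertical_edge))  # -> [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The filter's output is zero over flat regions and large at the intensity jump, which is how early CNN layers turn raw intensities into shape information without anyone labeling "this is an edge."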

With deep learning, AI can now act independently of data input and human intervention, while understanding human language, identifying objects, responding to observed data, and making recommendations, just as humans do.4 Traditional commands are unnecessary. Advancements in graphic processing units and growth in the quantity of medical images have enabled the development and use of deep learning AI with the ultimate goal of improved healthcare.8

AI in Medical Imaging

It is recognized that AI in healthcare can potentially lead to improved patient experience, improved healthcare professional well-being, reduced costs, and, most importantly, improvement in the health of individuals and society.14

The technology continues to make significant improvements, with deep learning algorithms showing effective detection of pathological lesions and classification of images based on the absence or presence of disease.15

Clinically, AI contributes to medical imaging in almost every aspect of the field. It is involved in the entire medical imaging profession, from the reduction of radiation dose and exam time to image data acquisition, processing and reconstruction, automatic evaluation of image quality, and much more.5-6,9,11,15,17

Patients and physicians alike are served through AI’s ability to triage patients by identifying exams that are more complex, severe, and likely critical.7,17-19 Radiologist workflow is improved through exam triage, quick identification of negative exams, biopsy planning, guidance of real time ablation, and the ability of AI to learn radiologists' personal preferences.5,14,17-19 AI is ubiquitous in medical imaging.  

How Data Powers AI and Why it Matters to Imaging

Just as radiologists are increasingly challenged by swelling volumes of image data, training AI requires increasingly immense amounts of examination data.5,8 Drawing on the billions of diagnostic images produced each year, precision data on tissue characteristics may yield insights into disease evolution and treatment response.9,20

When image data are fed into AI algorithms, diagnostic characteristics (i.e., lesion and tissue features) are extracted from the input through a process known as radiomics.5,8,20 Through radiomics, the expectation is that quantifiable, reproducible image data beyond human recognition will be extracted and used for image interpretation.20
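To make "quantifiable, reproducible image data" concrete, the sketch below computes a few toy descriptors from pixel intensities inside a lesion mask. The image, mask, and feature names are illustrative only; clinical radiomics pipelines extract hundreds of far more sophisticated shape, intensity, and texture features.

```python
# Hedged sketch of radiomics-style feature extraction: given a small image
# of pixel intensities and a binary lesion mask, compute simple, reproducible
# quantitative descriptors of the lesion.

def lesion_features(image, mask):
    pixels = [image[i][j]
              for i in range(len(image))
              for j in range(len(image[0]))
              if mask[i][j]]
    area = len(pixels)
    return {"area_px": area,
            "mean_intensity": round(sum(pixels) / area, 2),
            "max_intensity": max(pixels)}

image = [[10, 12, 11],
         [13, 90, 85],
         [11, 88, 14]]
mask = [[0, 0, 0],
        [0, 1, 1],
        [0, 1, 0]]

print(lesion_features(image, mask))  # area 3, mean 87.67, max 90
```

Because the same inputs always produce the same numbers, features like these are reproducible in a way that a reader's verbal impression of a lesion is not, which is the appeal the paragraph above describes.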

AI’s differentiation of anatomy, artifacts, and noise means it is less distracted by normal variants.9 Less subject to human error such as fatigue and inattention, the technology instills a distinct objectivity that has shown excellent clinical performance in pattern recognition and image interpretation toward diagnosis.9,12,14 AI as a diagnostic tool with anomaly detection and image interpretation should be viewed as a benefit, perhaps as a second reader to radiologists, with the ultimate goal of patient health at the forefront.7,15

Ethical Issues and Concerns

While the benefits of AI in medical imaging are evident, there are ethical concerns with its use, centered on explainability, transparency, accuracy, cybersecurity, privacy, apathy, generalizability, trust, reliability, accountability, lack of oversight, patient and societal well-being, fear of radiologist replacement, responsibility, safety, and legality.1,11,13,16-17 This article will focus on accountability, trust concerns, transparency, explainability, the Black Box effect, and privacy considerations in relation to the use of AI.

In light of these issues, there is a need for regulatory measures to ensure non-maleficence is at the center of AI in medical imaging.11,16

Much of the ethical concern with AI in medical imaging centers on pathological diagnosis. Even so, new technologies have traditionally benefited the role of radiologists, and one of the first uses of AI in healthcare was interpreting medical imaging studies.1,5 Computer-aided diagnostic tools have been assisting radiologists in image interpretation for over 50 years, and AI image feature recognition is now becoming commonplace.1,5

The stresses and reliance on radiologists are evident as they must be able to navigate over 50,000 causal relationships and roughly 20,000 diseases while interpreting potentially hundreds of medical images.17 To assist, AI has the ability to reduce radiologist workload through automatic delineation between normal and diseased tissue.

With AI’s continuous improvement in recognizing patterns within disease features, medical imaging moves away from subjective interpretation toward a more objective science.8,14,19 The human eye can only differentiate 60 to 80 shades of gray, whereas AI can perceive features not detectable by people and assess complex characteristics found in medical images toward a more accurate diagnosis.6,8,21

As the technology evolves, we may witness a paradigm shift wherein AI algorithms potentially become more accurate than radiologists in image interpretation, causing fear of replacement to be a key barrier in the adoption of AI in image diagnosis.1,9,17

There is much concern amongst radiologists that they may be responsible for a misdiagnosis or misinterpretation that was purely from AI.5,11,17 Though AI’s diagnostic benefits are apparent, the legal question of responsibility is underscored by the lack of definitive laws on AI autonomy.5,11,17

Accountability is a necessity, not only from a medico-legal standpoint, but also in establishing trust.1,11 Patients are more accepting of the use of AI when final decisions are made by humans, pointing to the necessity of radiologists shaping the diagnostic quality of AI through training and development of AI applications.1,5

If patients lose trust in healthcare, the results will be disastrous. Healthcare should be patient-driven with service paramount. Part of that service is instilling trust that medical professionals have patient health as priority.

Considering Patient Involvement With AI in Imaging

Indicating the specifics of why and how AI will be used in imaging examinations and interpretation is crucial in building patient trust.1,11,22 Additionally, patient trust in AI can be dependent upon an informed patient regarding its use, particularly when radiologists and physicians disagree with AI’s image interpretation.1

Patients should have the opportunity to give informed consent with sufficient exam information to make an educated decision about participation, as is common practice.1,11 Patients need to know they are getting quality imaging, but also that accountability is present for potential problems.11 Continual testing and monitoring are needed to ensure AI algorithms achieve the required accuracy, specificity, and reliability in interpretation and diagnosis.13 Currently, diagnostic AI lacks proficiency due to insufficient amounts of high-quality image data, pointing to the need for continual radiologist peer review of AI outputs, just as peer review is common practice throughout healthcare.8,13

Furthermore, consideration that AI has no understanding of the patient’s uniqueness creates an ethical issue concerning patient trust.1 All patients have their own personal medical history, status, and symptoms.1,13 The potential for bias in AI algorithms stems from input data being restricted to specific demographics and community commonality.13,16

The Role of Diversity in Data

Of specific concern to some are the physiological and anatomical differences between adult and pediatric patients.13 If AI does not account for differing patient demographics and characteristics, it may misjudge potential pathological indications based on image characteristics.1,13

These differences can lead to mistakes in image interpretation and misdiagnosis, with severe consequences for the patient.13 Diversity in data input will serve to minimize the statistical discrimination that may cause significant harm to individuals with demographic characteristics other than those used to build the algorithms.4 This points to the fact that the practice of medicine is as much an art as it is a science, with technology and human sensitivity each playing a role.8

The accuracy and reliability of AI in medical imaging depend upon the validity of AI algorithms.8,13 Though this points to a definitive need for radiologists to read alongside AI, a lack of trust in its productivity and safety creates problems.8,13

The Black Box Effect

Of additional ethical concern is that no one is fully in control of AI’s outputs at all times.11 There is an overall lack of understanding of how AI reaches an output decision.11,13 This is known as the “black box” effect, described by Laudicella as “a system, process, or algorithm that receives an input and produces an output, in a way not known or easily accessible.”13,16-18

Perhaps much of the need for regulatory guardrails stems from the black box nature of AI algorithms. In fact, the FDA places responsibility on AI developers to explain purpose, uses, inputs, and rationales for AI software.10,17 Even still, we do not fully understand what is taking place in the hidden layers of AI algorithms, particularly the hundreds of layers between the input and output layers of deep learning AI algorithms.4,15,17-18

Transparency vs. Explainability, and Where Trust Factors In

Companies using AI systems must be considerate of their individual needs concerning AI transparency.22 For much AI use outside of healthcare, interpretability and transparency are less pressing, but the potentially deadly consequences of medical imaging misinterpretation create a new avenue of ethical worry.16,22

Although the terms are not synonymous when it comes to AI systems, some use transparency and explainability interchangeably.22 Explainability is how understandable the cognitive processes of AI are, while transparency considers how much of the inner workings of AI are revealed.22 AI may come to a reasonable conclusion, but without an understanding of how that conclusion was reached, there can be mistrust.12-13

Lack of transparency in the system leads to mistrust, and a lack of ethical and legal oversight leads to a lack of generalizability, fairness, and benefit.11-13,23 People tend to distrust what they feel is evasive, but a full in-depth description of the inner workings of the hidden layers of AI algorithms is not necessary to build trust.15,23 Just as imaging professionals may explain how the MR system achieves its images without delving into quantum physics, a high-level explanation of how AI will be used in image production and interpretation will help to instill trust in the healthcare professionals and system on which the patient relies.

Viewing AI as an Assistant: One Path to Building Trust

If AI is to be seen as a tool to assist imaging professionals in diagnosis and image production, perhaps viewing AI as part of the team is a helpful way to avoid mistrust.24

Knowing that AI can be trusted to effectively and correctly produce and interpret medical images is imperative. These are literally life and death issues. The ability to predict the behavior of AI in these tasks is affected by the capacity to understand how it reaches conclusions.24

One significant circumstance is the inability to determine what went wrong in AI’s layers when there is a problem with image production or interpretation.24 Although the inner workings of AI are elusive because of its inherent black box nature, a cursory understanding, and trust that AI will achieve its outputs correctly, will instill the trust needed between medical imaging professionals and AI.24 If AI outputs cannot be understood and accepted, they are useless, potentially harming patients through delayed diagnosis and treatment.24

AI ethics literature points to the need for transparency, wherein AI outputs need explainability, discoverability, interpretability, and traceability.11 Arrieta23 describes transparency as the opposite of “black-box-ness” in AI. However, the black box nature of AI algorithms is too complex for us to comprehend, limiting the ability to comprehensively disclose the functionality of AI systems.11 The harmful effects of AI’s black box nature in medical imaging can be life-changing, and this author believes it to be one of the most urgent ethical issues concerning AI in image interpretation.11-12

Because medical imaging is so crucial to the health of individuals, transparency and explainability can go a long way to instill trust from patients and healthcare professionals alike.23

How can AI algorithms be fully trusted? This brings confidentiality concerns to the forefront as another trust-based ethical issue of AI in medical imaging.

AI and Patient Confidentiality

Patients enter personal, even somewhat intimate, relationships with the medical professionals caring for them.25 Obtaining sensitive, personal data is a significant part of that relationship, and such data must be kept confidential.

At the same time, very large amounts of accurately labeled medical image data are essential to the training and validation of algorithms, and thus to the success and benefit of AI.15,17 For this to occur, patient image data must leave the healthcare facility and be shared with AI companies.11,17

Governance of this data must fully comply with the federal Health Insurance Portability and Accountability Act (HIPAA) to protect patient confidentiality.12,17 Although the healthcare facility owns patient data, transparency requires that patients be informed about the use of their personal data, further instilling trust in the system.1

A breach of patient confidentiality can have detrimental effects and cause unintentional harm.11 Furthermore, if there is inaccurate data labeling, either unintentional or purposeful, there can be significant physical harm to patients.8,11

AI needs built-in privacy protections to minimize unintentional harm from the collection of personal data.11 Though personal identification data are removed, the transfer of patient image data for training AI algorithms prompts new confidentiality considerations.17 Care must be taken to prevent any such breach.
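The removal of personal identification data mentioned above can be sketched as a simple field-stripping step. The field names below are illustrative, not a real DICOM tag list; production de-identification follows formal HIPAA/DICOM confidentiality profiles and handles far more than a fixed set of keys.

```python
# Hedged sketch of a de-identification step before exam metadata and images
# leave the facility: drop direct identifiers, keep clinically useful fields.
# Field names are hypothetical; real pipelines follow HIPAA/DICOM profiles.

IDENTIFIERS = {"patient_name", "patient_id", "birth_date", "address"}

def deidentify(record):
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

exam = {"patient_name": "Jane Doe", "patient_id": "MRN-0042",
        "birth_date": "1980-01-01", "modality": "CT",
        "body_part": "CHEST", "pixel_spacing_mm": 0.7}

print(deidentify(exam))
```

Even with identifiers stripped, re-identification risk from the image data itself remains a concern, which is why the article argues the transfer of imaging data for training still demands new confidentiality safeguards.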

Early Guidance From Regulatory Bodies

The U.S. Food and Drug Administration (FDA), which has authorized over 1,000 AI-enabled devices, recognizes the potential of AI to transform healthcare while remaining cognizant of the ethical issues involved.26-28

Considering this, the FDA, along with Health Canada and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA), released a list of guiding principles for AI-enabled medical devices, such as medical imaging machines using AI software.27-28 These guiding principles were designed to promote effective, high-quality, safe devices for use in healthcare.27-28

Among other guiding principles, those laid out specifically address the need for “robust security practices” and a demographically representative data population to promote generalizability.27-28 Furthermore, essential information must be clearly available to healthcare professionals and patients regarding the intended use and limitations for informed decision making.27-28 These principles give guidance toward the use and development of medical devices and address some of the ethical concerns noted.27-28

The FDA has since crafted further guidance for manufacturers developing new AI-enabled medical devices.28 Although it had not been finalized at the time of this writing, the non-binding draft guidance gives recommendations to manufacturers seeking approval for medical devices that use AI software.28

The document indicates the necessity for manufacturers to describe the use of AI in the performance of the device and how it will be monitored.28 This will assist patients in making informed decisions about medical imaging examinations and instill trust in medical imaging professionals. The document also directs steps toward the correct labeling of data, considering demographic characteristics to ensure generalizability and avoid bias.28

Regarding the black box effect, the FDA draft describes the necessity of explaining the AI model and validating its performance in predictability and reliability.28 Further guidance describes the need for manufacturers to indicate how they will execute risk management regarding AI in the device.28 The handling of examination data, particularly the need to ensure the security of patient information, is also specified.28 Although these documents are non-binding, perhaps they could lead to a regulatory framework for AI in medical imaging.

Looking Ahead: Monitoring the Benefits vs. Risks

Growing use of AI in medical imaging signifies that healthcare agencies and medical imaging professionals see the technology’s benefits outweighing the potential associated ethical risks. Will radiologists or AI ultimately be responsible for image interpretation and diagnosis? How do AI algorithms reach conclusions? How do we safeguard the vast amounts of patient data being used to train AI? All these questions will need to be continually addressed as medical imaging moves farther into this technological territory and AI becomes a significant part of medical imaging.

Regardless of the advancements occurring throughout the history of medical imaging, emotional intelligence has always been at the center of patient care. Navigating the ever-expanding use of AI will be a task for medical imaging professionals for years to come. One can only hope the considerable benefits of AI will continue to outweigh the significant associated risks.

References

1. Kitts AB. Patient perspectives on artificial intelligence in radiology. Journal of the American College of Radiology, September 2023;20(9):863–867. https://doi.org/10.1016/j.jacr.2023.05.017.

2. Merriam-Webster, Inc. Artificial. Accessed September 12, 2025. https://www.merriam-webster.com/dictionary/artificial.

3. Merriam-Webster, Inc. Intelligence. Accessed September 12, 2025. https://www.merriam-webster.com/dictionary/intelligence.

4. Stryker C, Kavlakoglu E. What is AI? IBM.com. Published August 9, 2024. Accessed March 7, 2025. https://www.ibm.com/think/topics/artificial-intelligence.

5. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. European radiology experimental. 2018;2(35):1-10. https://doi.org/10.1186/s41747-018-0061-6.

6. Zhou LQ, Wang JY, Yu SY, et al. Artificial intelligence in medical imaging of the liver. World journal of gastroenterology. 2019;25(6): 672-682. https://doi.org/10.3748/wjg.v25.i6.672.

7. Alexander A, Jiang A, Ferreira C, Zurkiya D. An intelligent future for medical imaging: a market outlook on artificial intelligence for medical imaging. Journal of the American College of Radiology. 2020;17(1):165-170. https://doi.org/10.1016/j.jacr.2019.07.019.

8. Tang X. The role of artificial intelligence in medical imaging research. BJR. 2020;2. https://doi.org/10.1259/bjro.20190031.

9. Gore JC. Artificial intelligence in medical imaging. Magnetic Resonance Imaging. 2020;68:A1-A4. https://doi.org/10.1016/j.mri.2019.12.006.

10. Tilkin M, Carey CK. The Role of Organized Radiology in Advancing Imaging Artificial Intelligence. Journal of the American College of Radiology. 2023;9:886–888. https://doi.org/10.1016/j.jacr.2023.06.020.

11. Sihlahla I, Donnelly D-L, Townsend B, Thaldar D. Legal and ethical principles governing the use of artificial intelligence in radiology services in South Africa. Developing World Bioethics. 2023:1-11. https://doi.org/10.1111/dewb.12436.

12. Saw SN, Ng KH. Current challenges of implementing artificial intelligence in medical imaging. Physica Medica. June 14, 2022;100:12-17. https://doi.org/10.1016/j.ejmp.2022.06.003.

13. Ciet P, Eade C, Ho M-L, et al. The unintended consequences of artificial intelligence in paediatric radiology. Pediatric Radiology. September 4, 2023;54:585-593. https://doi.org/10.1007/s00247-023-05746-y.

14. Sim JZT, Bhanu Prakash KN, Huang WM, Tan CH. Harnessing artificial intelligence in radiology to augment population health. Frontiers in Medical Technology. November 8, 2023: https://doi.org/10.3389/fmedt.2023.1281500.

15. Hong G-S, Jang M, Kyung S, et al. Overcoming the challenges in the development and implementation of artificial intelligence in radiology: a comprehensive review of solutions beyond supervised learning. Korean Journal of Radiology. 2023;24(11):1061–1080. https://doi.org/10.3348/kjr.2023.0393.

16. Cui S, Traverso A, Niraula D, Zou J, et al. Interpretable artificial intelligence in radiology and radiation oncology. The British Journal of Radiology. July 23, 2023;96(1150). https://doi.org/10.1259/bjr.20230142.

17. Mello-Thoms C, Mello CAB. Clinical applications of artificial intelligence in radiology. The British Journal of Radiology. April 26, 2023;96(1150). https://doi.org/10.1259/bjr.20221031.

18. Laudicella R, Davidzon GA, Dimos N, Provenzano G, Iagaru A, Bisdas S. ChatGPT in nuclear medicine and radiology: lights and shadow in the AI bionetwork. Clinical and Translational Imaging. 2023;11: 407–411. https://doi.org/10.1007/s40336-023-00574-4.

19. Lewis SJ, Gandomkar Z, Brennan PC. Artificial intelligence in medical imaging practice: looking to the future. Journal of Medical Radiation Sciences. 2019;66(4):292-295. https://doi.org/10.1002/jmrs.369.

20. Mayerhoefer ME, Meterka A, Langs G, et al. Introduction to radiomics. Journal of Nuclear Medicine. April, 2020;61(4):488-495. https://doi.org/10.2967/jnumed.118.222893.

21. DeMaio D. Mosby’s exam review for computed tomography. (3rd Ed.). Elsevier, Inc. 2017.  

22. Wahlström M, Tammentie B, Salonen T-T, Karvonen A. AI and the transformation of industrial work: Hybrid intelligence vs double-black box effect. Applied Ergonomics. July, 2024;118. https://doi.org/10.1016/j.apergo.2024.104271.

23. Arrieta AB, Diaz-Rodriguez N, Del Ser J. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion. June 2020;58:82-115. https://doi.org/10.1016/j.inffus.2019.12.012.

24. Hauptman AI, Schelble, BG, Duan, W, Flathmann, C, McNeese, NJ. Understanding the influence of AI autonomy on AI explainability levels in human‑AI teams using a mixed methods approach. Cognition, Technology & Work. May 18, 2024;26:435–455. https://doi.org/10.1007/s10111-024-00765-7.

25. Zapusek, T. Artificial intelligence in medicine and confidentiality of data. Asia Pacific Journal of Health Law & Ethics. November, 2017;11(1):105-126. https://heinonline.org/HOL/Page?handle=hein.journals/aspjhle11&div=8&g_sent=1&casa_token=eX2DSm7P5RYAAAAA:BbBZCXSIZ23eh1j_p2qCBct8ZB33g57EwNM-ompa04TsnIfiAcG0qbN3pZjxOVHqO2TxCMriDw&collection=journals.

26. U.S. Food & Drug Administration. FDA news release: FDA issues comprehensive draft guidance for developers of artificial intelligence-enabled medical devices. Fda.gov. January 6, 2025. Accessed September 15, 2025. https://www.fda.gov/news-events/press-announcements/fda-issues-comprehensive-draft-guidance-developers-artificial-intelligence-enabled-medical-devices.

27. U.S. Food & Drug Administration. fda.gov. Good machine learning practice for medical device development: Guiding principles. October 2021. Accessed September 15, 2025.  https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles.

28. U.S. Food & Drug Administration. Guidance document: Artificial intelligence-enabled device software functions: Lifecycle management and marketing submission recommendations: Draft guidance for industry and food and drug administration staff. fda.gov. January 7, 2025. Accessed September 15, 2025. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing.

 

Brandon Hughes

Brandon Hughes serves as an assistant professor in imaging sciences at Morehead State University. He is the clinical coordinator and faculty for the Computed Tomography/Magnetic Resonance program and acting admissions coordinator and faculty for the Leadership in Medical Imaging program. He has years of experience working in the field and is a member of state and national professional organizations.

Hughes holds a PhD in Leadership Studies from Johnson University, a Master of Science in Radiologic Sciences degree from Midwestern State University, and Associate of Applied Science and Bachelor of Science in Imaging Sciences degrees from Morehead State University. He lives in Kentucky with his wife and son.

