However engaging and attractive machine learning and artificial intelligence may be, we know they also have limitations, especially in medicine, and specifically in reconstructing accurate medical images such as MRI scans.

Many well-intentioned efforts aim to apply AI to medical imaging and even to revolutionize modern medicine. However, Dr. Anders Hansen of the University of Cambridge, together with Dr. Ben Adcock of Simon Fraser University, led research exposing some of the pitfalls AI has in this field.

The effects appear across many different kinds of artificial neural networks, so there is no quick remedy. The researchers presented their results in the Proceedings of the National Academy of Sciences, with a caution that we can harm patients if we depend on AI-based image reconstruction techniques to form diagnoses and suggest treatments.

The team used AI and deep learning to construct a sequence of tests for medical image reconstruction algorithms. They found that these systems introduce unwanted alterations into the data that are not present with imaging techniques not based on artificial intelligence.


Limitations for Applying AI in Medical Image Reconstruction

  • Poor Image Quality

Hansen and his colleagues found that even a tiny movement can cause:

  • deteriorated image reconstruction quality after repeated subsampling
  • blurry or completely removed image details
  • countless artifacts in the final images


Although it seems that AI techniques could improve the quality of medical images such as MRI scans, the algorithms must first be trained on previous data so they can learn to reconstruct images and optimize the quality of the reconstruction themselves.

  • Inaccurate Results

The study found that AI and machine learning methods are unstable when reconstructing medical images, which eventually leads to false positives and false negatives. Dr. Hansen noted that these small alterations in the input grow into significant changes in the output.
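The amplification Hansen describes, where tiny input changes produce large output changes, can be shown in a minimal sketch. This is not the authors' experiment; it uses an ill-conditioned linear "reconstruction" operator as a stand-in for an unstable neural network:

```python
import numpy as np

# Illustrative sketch only, not the PNAS experiment: inverting an
# ill-conditioned measurement operator plays the role of an unstable
# reconstruction method.
rng = np.random.default_rng(0)

# Forward measurement operator A with one very small singular value,
# so the reconstruction (inverting A) is nearly unstable.
U, _ = np.linalg.qr(rng.standard_normal((2, 2)))
V, _ = np.linalg.qr(rng.standard_normal((2, 2)))
A = U @ np.diag([1.0, 1e-4]) @ V.T

x_true = np.array([1.0, 1.0])   # "ground truth" image
y = A @ x_true                  # clean measurements

# Worst-case perturbation: a tiny nudge along A's weakest direction.
y_bad = y + 1e-5 * U[:, 1]

x_rec = np.linalg.solve(A, y)      # reconstruction from clean data
x_bad = np.linalg.solve(A, y_bad)  # reconstruction from perturbed data

input_change = np.linalg.norm(y_bad - y)       # 1e-5
output_change = np.linalg.norm(x_bad - x_rec)  # about 0.1: 10,000x larger
print(f"input change:  {input_change:.1e}")
print(f"output change: {output_change:.1e}")
```

The perturbation here is chosen adversarially, which mirrors the spirit of the researchers' tests: the question is not whether typical inputs behave well, but whether some tiny, plausible change can derail the reconstruction.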

  • Wrong Interpretations

There may be instances when radiologists interpret artifacts in reconstructed images as genuine medical issues rather than dismissing them as technical errors, and this causes a problem.

When critical decisions about a patient's health are involved, Hansen stresses that algorithms must not make mistakes. The researchers found that even the smallest manipulation, such as minimal patient movement, could change the outcome when medical images are reconstructed using deep learning and AI. These methods need more stability.

  • Lack of Stability

Apart from accuracy, stability is one of the qualities an AI algorithm needs in order to be reliable. However, several things can go wrong in medical imaging, and an image may be incorrectly classified as a result: reconstructions may conceal details or add unwanted artifacts.

Hansen and his colleagues also identified crucial stability issues, mainly instabilities with respect to:

  • tiny perturbations or movements
  • small structural changes
  • changes in the number of samples
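The sensitivity to the number of samples can be illustrated with a simple 1-D toy, not the paper's MRI experiments: a naive zero-filled Fourier reconstruction, a crude analogue of MRI k-space sampling, produces ghost artifacts as soon as the measurements are subsampled:

```python
import numpy as np

# Hedged 1-D sketch: reconstruct a signal from subsampled Fourier
# ("k-space") data by zero-filling the missing samples, and observe
# the aliasing ghosts that result.
n = 64
x = np.zeros(n)
x[20:30] = 1.0                       # simple "image": one bright block

k = np.fft.fft(x)                    # full Fourier measurements
mask = np.zeros(n, dtype=bool)
mask[::2] = True                     # keep every other sample (2x subsampling)
k_sub = np.where(mask, k, 0)         # zero-fill the discarded samples

x_rec = np.real(np.fft.ifft(k_sub))  # naive reconstruction

# Aliasing: the block reappears as a half-intensity ghost shifted by n/2,
# while the true block itself is dimmed to half intensity.
err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
print(f"relative error: {err:.2f}")
print(f"true block value:  {x_rec[25]:.2f}")   # 0.5 instead of 1.0
print(f"ghost block value: {x_rec[55]:.2f}")   # 0.5 instead of 0.0
```

Real reconstruction methods are far more sophisticated than zero-filling, but the toy shows why changing the number of samples is a stress test: structure in the image can both fade and reappear in the wrong place.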

Hansen concluded that deep learning techniques, as currently used, are universally unstable for reconstructing medical images. Given this, the researchers are now focused on characterizing the limits of AI, which he believes can only be expressed mathematically. It is by uncovering these limits that we could address these problems in medical imaging.
