Two AI Tools Create Safer MRI Scans for Cancer Patients
Two AI models generate realistic contrast-enhanced MRI images for nasopharyngeal carcinoma without gadolinium, reducing safety risks for patients.

Researchers at The Hong Kong Polytechnic University have developed artificial intelligence (AI) models that generate synthetic contrast-enhanced MRI images without the need for gadolinium-based contrast agents, offering a safer method for imaging nasopharyngeal carcinoma (NPC). The work builds on years of development and could improve diagnostic quality while reducing the risk of contrast-related side effects.
NPC is a challenging form of cancer that arises in the nasopharynx, a region at the base of the skull surrounded by critical structures. The cancer is particularly common in Southern China, where it is around 20 times more prevalent than in non-endemic regions. Because of the tumour’s location and its tendency to infiltrate surrounding tissue, high-quality imaging is crucial for treatment planning, especially for radiation therapy.
Risks of gadolinium contrast
Contrast-enhanced MRI is widely used to help clinicians distinguish tumour from healthy tissue. Gadolinium, a rare-earth metal, is commonly used in contrast agents because its paramagnetic properties shorten T1 relaxation times, brightening enhancing tissue on T1-weighted images. However, its use can pose health risks.
In rare cases, gadolinium exposure has been associated with nephrogenic systemic fibrosis, a serious condition that causes skin tightening, joint immobility and organ dysfunction. Additionally, evidence has emerged suggesting that gadolinium may accumulate in the brain, raising concerns about long-term safety, particularly in patients requiring repeated scans.
To address these concerns, the research team led by Jing Cai, head of the Department of Health Technology and Informatics, developed deep learning systems to simulate contrast-enhanced images using data from non-contrast MRI scans. Their approach is known as virtual contrast enhancement (VCE).
Two models developed for virtual contrast
The group initially designed the Multimodality-Guided Synergistic Neural Network (MMgSN-Net) to synthesise contrast-enhanced T1-weighted images from standard T1- and T2-weighted scans. The system includes multiple modules – multimodality learning, synergistic guidance, self-attention, multi-level processing and a discriminator – to extract tumour-specific features and generate realistic synthetic images.
The network uses T1- and T2-weighted images to produce synthetic contrast-enhanced scans with high structural similarity. The self-attention module helps preserve anatomical accuracy, while the synergistic module integrates complementary features from both input modalities. This multimodal strategy addresses the limitations of single-modality image synthesis.
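To make this architecture concrete, the sketch below shows a minimal two-branch generator with a self-attention block and a PatchGAN-style discriminator, in the spirit of MMgSN-Net. All layer sizes, module names and wiring are illustrative assumptions, not the authors’ published implementation.

```python
# Minimal PyTorch sketch of a two-branch multimodal generator with
# self-attention and a discriminator, in the spirit of MMgSN-Net.
# All names, layer sizes and wiring are illustrative assumptions.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over spatial positions."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (b, hw, c//8)
        k = self.key(x).flatten(2)                        # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)               # (b, hw, hw)
        v = self.value(x).flatten(2)                      # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TwoBranchGenerator(nn.Module):
    """Encodes T1 and T2 slices separately, fuses the complementary
    features, then decodes a synthetic contrast-enhanced T1 slice."""
    def __init__(self):
        super().__init__()
        self.t1_branch = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        self.t2_branch = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        self.fuse = conv_block(128, 64)   # synergistic fusion of both branches
        self.attn = SelfAttention2d(64)   # helps preserve anatomical structure
        self.decode = nn.Sequential(conv_block(64, 32),
                                    nn.Conv2d(32, 1, 1), nn.Tanh())

    def forward(self, t1, t2):
        fused = torch.cat([self.t1_branch(t1), self.t2_branch(t2)], dim=1)
        return self.decode(self.attn(self.fuse(fused)))

# A PatchGAN-style discriminator judges whether patches of a slice look
# like a real contrast-enhanced image.
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, padding=1),
)

t1, t2 = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
fake = TwoBranchGenerator()(t1, t2)
print(fake.shape, discriminator(fake).shape)
```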
Building on this work, the team later developed the Pixelwise Gradient Model with Generative Adversarial Network for Virtual Contrast Enhancement (PGMGVCE), which integrates deep learning and image texture modelling.
Texture quality and image realism
PGMGVCE uses a combination of pixelwise gradient loss and GAN loss to optimise both the shape and texture of synthetic images. Pixelwise gradients allow the system to preserve anatomical geometry, while the GAN improves realism by training the model to produce images that are indistinguishable from real contrast-enhanced scans.
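As a rough illustration of how these two terms combine, the following sketch implements a finite-difference pixelwise gradient loss alongside a non-saturating GAN loss. The weighting factor and exact formulation are assumptions for illustration, not the published loss.

```python
# Hedged sketch of a combined objective: a pixelwise gradient term that
# matches image gradients (geometry) plus an adversarial term (realism).
# The weighting and exact formulation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def image_gradients(img):
    """Finite-difference gradients along height and width."""
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    return dy, dx

def pixelwise_gradient_loss(fake, real):
    """Penalise mismatches between the gradient fields of the synthetic
    and real contrast-enhanced images, which constrains shape."""
    fy, fx = image_gradients(fake)
    ry, rx = image_gradients(real)
    return F.l1_loss(fy, ry) + F.l1_loss(fx, rx)

def generator_loss(disc, fake, real, lambda_grad=10.0):
    """GAN loss plus weighted gradient loss; lambda_grad is illustrative."""
    logits = disc(fake)
    # Non-saturating GAN loss: the generator wants D(fake) labelled real.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lambda_grad * pixelwise_gradient_loss(fake, real)

# Example with a trivial convolutional discriminator:
disc = nn.Conv2d(1, 1, 3, padding=1)
fake, real = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(generator_loss(disc, fake, real))
```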
Comparative testing of the two models showed similar performance on structural measures such as mean absolute error (MAE) and the structural similarity index (SSIM). However, PGMGVCE demonstrated superior texture and detail replication, closely matching the texture of real scans. Texture quality was assessed using total mean square variation per mean intensity and the Tenengrad function per mean intensity.
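The two texture metrics can be approximated as follows; the exact definitions used in the study may differ, but these follow the common forms of total variation and the Tenengrad sharpness measure, each normalised by mean intensity.

```python
# Plausible implementations of the two texture metrics; the exact
# definitions in the study may differ. Both follow common forms,
# normalised by the image's mean intensity.
import numpy as np
from scipy.ndimage import sobel

def mean_square_variation_per_mean_intensity(img):
    """Mean squared finite-difference variation / mean intensity."""
    dy = np.diff(img, axis=0)
    dx = np.diff(img, axis=1)
    return ((dy ** 2).mean() + (dx ** 2).mean()) / img.mean()

def tenengrad_per_mean_intensity(img):
    """Mean squared Sobel gradient magnitude / mean intensity."""
    gx = sobel(img, axis=0)
    gy = sobel(img, axis=1)
    return (gx ** 2 + gy ** 2).mean() / img.mean()

# Synthetic images whose scores sit close to those of real
# contrast-enhanced scans have more realistic texture.
img = np.random.rand(256, 256)
print(mean_square_variation_per_mean_intensity(img))
print(tenengrad_per_mean_intensity(img))
```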
Model performance was further optimised by adjusting hyperparameters and testing different normalisation techniques, with sigmoid normalisation proving most effective. In addition, training the model with both T1- and T2-weighted data yielded better results than using either modality alone, reinforcing the importance of multimodal imaging for VCE.
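Sigmoid normalisation can be sketched as below: intensities are standardised and then squashed smoothly into the (0, 1) range. The centring and scaling choices here are assumptions; the study may use different parameters.

```python
# Illustrative sigmoid intensity normalisation: standardise, then squash
# smoothly into (0, 1). Centring/scaling choices here are assumptions.
import numpy as np

def sigmoid_normalise(img):
    z = (img - img.mean()) / img.std()  # zero-mean, unit-variance intensities
    return 1.0 / (1.0 + np.exp(-z))     # map into the open interval (0, 1)

scan = np.random.rand(256, 256) * 4000.0  # raw MRI intensities (arbitrary)
print(sigmoid_normalise(scan).min(), sigmoid_normalise(scan).max())
```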
Safer diagnostics without contrast agents
By eliminating the need for gadolinium-based contrast, both MMgSN-Net and PGMGVCE could provide safer imaging alternatives for patients undergoing MRI for NPC diagnosis and monitoring. The improved texture synthesis seen with PGMGVCE may enhance clinicians’ ability to detect and delineate tumours in complex anatomical regions.
The authors note that future studies will need to expand training datasets and incorporate additional MRI modalities to improve model performance and ensure broader clinical applicability.
Reference: Li W, Xiao H, Li T, et al. Virtual contrast-enhanced magnetic resonance images synthesis for patients with nasopharyngeal carcinoma using multimodality-guided synergistic neural network. Int J Radiat Oncol Biol Phys. 2022;112(4):1033-1044. doi: 10.1016/j.ijrobp.2021.11.007