Wavelet-Based Adversarial Training: A Cybersecurity System Safeguarding Medical Digital Twins from Attacks

Janani R | April 16, 2025 | Technology

A digital twin is a precise virtual replica of a real-world system, created using real-time data. These models offer a platform to test, simulate, and optimize the performance of their physical counterparts. In healthcare, medical digital twins create virtual models of biological systems, enabling the prediction of diseases or testing of medical treatments. However, medical digital twins are vulnerable to adversarial attacks, where small, deliberate alterations to input data can deceive the system into making incorrect predictions, such as false cancer diagnoses, thereby posing serious risks to patient safety.

Figure 1. WBAD: Combining Wavelet Denoising and Adversarial Training for Enhanced Defense Against Adversarial Inputs

To address these threats, a research team led by Professor Insoo Sohn of Dongguk University, Republic of Korea, together with collaborators at Oregon State University, U.S., has introduced a groundbreaking defense algorithm: Wavelet-Based Adversarial Training (WBAD) [1]. This approach, designed to protect medical digital twins from cyberattacks, is detailed in the journal Information Fusion. Figure 1 illustrates how WBAD combines wavelet denoising with adversarial training to defend against adversarial inputs.

"We present the first study in Digital Twin Security to propose a secure medical digital twin system, featuring a novel two-stage defense mechanism against cyberattacks. This mechanism combines wavelet denoising with adversarial training," says Professor Sohn, the corresponding author of the study from Dongguk University.

The researchers tested their defense system on a digital twin designed to diagnose breast cancer using thermography images. Thermography detects temperature variations in the body, with tumors typically appearing as hotter areas due to increased blood flow and metabolic activity.

Their model processes these thermography images using the Discrete Wavelet Transform (DWT), which extracts key features to generate Initial Feature Point Images. These features are then input into a machine learning classifier, trained on a dataset of 1,837 breast images (both healthy and cancerous), to differentiate between normal and tumorous tissue.
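To give a feel for what the DWT step does, here is a minimal numpy sketch of a one-level 2-D Haar transform, the simplest wavelet. This is an illustrative toy, not the paper's actual feature extractor: the `haar_dwt2` function and the 4x4 example patch are assumptions for demonstration. The low-frequency (LL) subband keeps the coarse hot/cold layout of a thermogram, while the detail subbands capture fine variation.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar discrete wavelet transform.

    Returns the low-frequency approximation (LL) and the three
    high-frequency detail subbands (LH, HL, HH).
    """
    # Pairwise averages/differences along columns (row low/high-pass)...
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # ...then along rows, giving the four subbands.
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

# A toy 4x4 "thermogram" patch: a hot (bright) corner on a cold background.
patch = np.array([[1., 1., 0., 0.],
                  [1., 1., 0., 0.],
                  [0., 0., 0., 0.],
                  [0., 0., 0., 0.]])
ll, lh, hl, hh = haar_dwt2(patch)
print(ll)  # the 2x2 approximation preserves the coarse hot/cold layout
```

Each level halves the resolution, so the LL coefficients act as compact features for a downstream classifier.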

Initially, the model achieved 92% accuracy in predicting breast cancer. However, when subjected to three types of adversarial attacks—the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W) attacks—its accuracy plummeted to just 5%, revealing its vulnerability to adversarial manipulations.
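FGSM, the simplest of the three attacks, perturbs each input feature by a small amount in the sign of the loss gradient. The sketch below shows the idea against a toy logistic-regression scorer; the model weights and inputs are invented for illustration and have nothing to do with the study's classifier.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression scorer.

    Moves x by eps in the direction that increases the cross-entropy
    loss for the true label y.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class = 1)
    grad = (p - y) * w                       # d(cross-entropy)/dx
    return x + eps * np.sign(grad)

# Toy classifier that labels x as "tumor" (1) when w @ x + b is large.
w = np.array([1.0, -0.5]); b = 0.0
x = np.array([2.0, 0.5]); y = 1.0            # correctly classified input
x_adv = fgsm(x, w, b, y, eps=3.0)

p_clean = 1 / (1 + np.exp(-(w @ x + b)))
p_adv = 1 / (1 + np.exp(-(w @ x_adv + b)))
print(p_clean > 0.5, p_adv > 0.5)            # True False: the attack flips the prediction
```

PGD iterates this step under a norm constraint, and C&W solves an optimization for a minimal perturbation, which is why accuracy against C&W is typically the hardest to recover.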

To address these threats, the researchers implemented a two-layer defense mechanism. The first layer, wavelet denoising, is applied during the image preprocessing stage. Adversarial attacks often introduce high-frequency noise into the input data to deceive the model. Wavelet denoising uses soft thresholding to eliminate this noise while preserving the low-frequency features of the image.
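Soft thresholding is a one-line operation: wavelet detail coefficients with magnitude below a threshold (likely noise) are zeroed, and larger ones (likely genuine features) are shrunk by the threshold. A minimal numpy sketch, with an invented coefficient vector and threshold purely for illustration:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink wavelet detail coefficients toward zero by threshold t.

    Coefficients with |c| <= t are set to 0; larger ones keep their
    sign but lose t in magnitude.
    """
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Small coefficients behave like adversarial/high-frequency noise;
# large ones carry real image structure.
detail = np.array([0.05, -0.02, 0.9, -0.8, 0.01])
print(soft_threshold(detail, 0.1))  # small entries vanish, large ones shrink by 0.1
```

Applying this to the high-frequency subbands and then inverting the transform removes much of an attack's perturbation while leaving the low-frequency content of the image intact.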

To further enhance the model's resilience, the researchers incorporated an adversarial training step, which trains the machine learning model to recognize and counter adversarial inputs [2]. This two-step defense strategy proved highly effective, with the model achieving 98% accuracy against FGSM attacks, 93% against PGD attacks, and 90% against C&W attacks.
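Adversarial training augments each training batch with attacked copies of its own examples, so the model learns to classify both. The following numpy sketch applies the idea to a toy logistic-regression model on synthetic 2-D data; the data, learning rate, and attack strength are all assumptions for illustration, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic 2-D data: class 1 clusters near (+2, +2), class 0 near (-2, -2).
X = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
y = np.hstack([np.ones(50), np.zeros(50)])

w, b, eps, lr = np.zeros(2), 0.0, 0.5, 0.1
for _ in range(200):
    # Craft FGSM-style adversarial copies against the current model.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # Train on clean and adversarial examples together.
    Xt = np.vstack([X, X_adv]); yt = np.hstack([y, y])
    g = sigmoid(Xt @ w + b) - yt
    w -= lr * (Xt.T @ g) / len(yt)
    b -= lr * g.mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(acc)  # the robustly trained model still separates the clusters
```

Because the attacked copies are regenerated against the current weights at every step, the model's decision boundary is pushed away from regions an attacker can cheaply reach, which is the intuition behind the reported accuracy recovery.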

"Our results demonstrate a transformative approach to medical digital twin security, offering a comprehensive and effective defense against cyberattacks while improving system functionality and reliability," says Prof. Sohn.

References:

  1. https://techxplore.com/news/2025-04-wavelet-based-adversarial-cybersecurity-medical.html
  2. https://www.prnewswire.com/news-releases/dongguk-university-researchers-develop-wavelet-based-adversarial-training-a-breakthrough-defense-system-for-medical-digital-twins-302425701.html
Cite this article:

Janani R (2025), Wavelet-Based Adversarial Training: A Cybersecurity System Safeguarding Medical Digital Twins from Attacks, AnaTechMaz, p. 167
