BIOIMAGING 2024 Abstracts


Full Papers
Paper Nr: 57
Title:

Unsupervised Domain Adaptation for Medical Images with an Improved Combination of Losses

Authors:

Ravi K. Gupta, Shounak Das and Amit Sethi

Abstract: This paper presents a novel approach for unsupervised domain adaptation that is tested on H&E-stained histology and retinal fundus images. Existing adversarial domain adaptation methods may not effectively align the multimodal distributions associated with classification problems across different domains. To enhance domain alignment and reduce domain shift by leveraging the unique characteristics of these domains, we propose a tailored loss function that addresses the challenges specific to medical images. This loss combination not only makes the model accurate and robust but also faster in terms of training convergence. We specifically focus on leveraging texture-specific features, such as tissue structure and cell morphology, to enhance adaptation performance in the histology domain. The proposed method, Domain Adaptive Learning (DAL), was extensively evaluated for accuracy, robustness, and generalization. We conducted experiments on the FHIST dataset and a retina dataset; the results show that DAL significantly surpasses the ViT-based and CNN-based state-of-the-art methods by 1.41% and 6.56%, respectively, on the FHIST dataset, while also showing improved results on the retina dataset.
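The abstract does not specify the exact loss terms; as a purely illustrative sketch, a combined objective of this kind is often a weighted sum of a supervised classification loss on labelled source images and an adversarial domain-confusion loss (the function name, weighting scheme, and fixed weight `lambda_adv` below are all assumptions, not the authors' implementation):

```python
import math

def combined_da_loss(cls_probs, true_label, domain_prob, domain_label,
                     lambda_adv=0.5):
    """Weighted sum of a classification loss (labelled source images) and an
    adversarial domain-confusion loss. All names and the fixed weight
    lambda_adv are illustrative assumptions."""
    # Cross-entropy on the class prediction for the labelled source image.
    cls_loss = -math.log(max(cls_probs[true_label], 1e-12))
    # Binary cross-entropy on the domain discriminator's output.
    p = min(max(domain_prob, 1e-12), 1.0 - 1e-12)
    dom_loss = -(domain_label * math.log(p)
                 + (1 - domain_label) * math.log(1.0 - p))
    return cls_loss + lambda_adv * dom_loss
```

In practice the two terms pull against each other: the classifier sharpens class boundaries while the adversarial term pushes the feature extractor to make source and target features indistinguishable.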

Paper Nr: 89
Title:

Magnification Invariant Medical Image Analysis: A Comparison of Convolutional Networks, Vision Transformers, and Token Mixers

Authors:

Pranav Jeevan, Nikhil C. Kurian and Amit Sethi

Abstract: Convolutional neural networks (CNNs) are widely used in medical image analysis, but their performance degrades when the magnification of testing images differs from that of training images. The inability of CNNs to generalize across magnification scales can result in sub-optimal performance on external datasets. This study aims to evaluate the robustness of various deep learning architectures for breast cancer histopathological image classification when the magnification scales are varied between training and testing stages. We compare the performance of multiple deep learning architectures, including CNN-based ResNet and MobileNet, self-attention-based Vision Transformers and Swin Transformers, and token-mixing models, such as FNet, ConvMixer, MLP-Mixer, and WaveMix. The experiments are conducted using the BreakHis dataset, which contains breast cancer histopathological images at varying magnification levels. We show that the performance of WaveMix is invariant to the magnification of training and testing data and can provide stable and good classification accuracy. These evaluations are critical in identifying deep learning architectures that can robustly handle domain changes, such as magnification scale.

Paper Nr: 101
Title:

Bone-Aware Generative Adversarial Network with Supervised Attention Mechanism for MRI-Based Pseudo-CT Synthesis

Authors:

Gurbandurdy Dovletov, Utku Karadeniz, Stefan Lörcks, Josef Pauli, Marcel Gratz and Harald H. Quick

Abstract: Deep learning techniques offer the potential to learn the mapping function from the MRI to the CT domain, allowing the generation of synthetic CT images from MRI source data. However, these image-to-image translation methods often introduce unwanted artifacts and struggle to accurately reproduce bone structures due to the absence of bone-related information in the source data. This paper extends the recently introduced Attention U-Net with Extra Supervision (Att U-Net ES), which has shown promising improvements for the bone regions. Our proposed approach, a conditional Wasserstein GAN with an Attention U-Net as the generator, leverages the network’s self-attention property while simultaneously including domain-specific knowledge (or bone awareness) in its learning process. The adversarial learning aspect of the proposed approach ensures that the attention gates capture both the overall shape and the fine-grained details of bone structures. We evaluate the proposed approach using cranial MR and CT images from the publicly available RIRE dataset. Since the images are not aligned with each other, we also provide detailed information about the registration procedure. The obtained results are compared to those of Att U-Net ES, the baseline U-Net and Attention U-Net, and their GAN extensions.

Paper Nr: 110
Title:

Mutually Exclusive Multi-Modal Approach for Parkinson’s Disease Classification

Authors:

Arunava Chaudhuri, Abhishek S. Sambyal and Deepti R. Bathula

Abstract: Parkinson’s Disease (PD) is a progressive neurodegenerative disorder that affects the central nervous system and causes both motor and non-motor symptoms. While movement-related symptoms are the most noticeable early signs, others, like loss of smell, can occur quite early and are easy to miss. This suggests that multi-modal assessment has significant potential in the early diagnosis of PD. Multi-modal analysis allows for synergistic fusion of complementary information for improved prediction accuracy. However, acquiring all modalities for all subjects is not only expensive but also impractical in some cases. This work attempts to address the missing-modality problem where the data are mutually exclusive. Specifically, we propose to leverage two distinct and unpaired datasets to improve the classification accuracy of PD. We propose a two-stage strategy that combines individual modality classifiers to train a multi-modality classifier using a Siamese network with triplet loss. Furthermore, for test-time inference, we use a max-voting strategy applied to mix-and-match pairings of the unlabelled test sample of one modality with both positive and negative samples from the other modality. We conducted experiments using gait sensor data (PhysioNet) and clinical data (PPMI). Our experimental results demonstrate the efficacy of the proposed approach compared to state-of-the-art methods using single-modality analysis.
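As an illustration of the triplet-loss and max-voting ingredients mentioned above (a generic sketch, not the authors' implementation; the Euclidean distance and `margin` value are assumptions):

```python
from collections import Counter

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor towards the positive sample
    (same diagnosis, possibly the other modality) and push it away from
    the negative sample by at least `margin`."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

def max_vote(predictions):
    """Majority vote over the labels predicted for the mix-and-match
    pairings of a test sample with samples from the other modality."""
    return Counter(predictions).most_common(1)[0][0]
```

The loss is zero once the negative is farther from the anchor than the positive by the margin, so training focuses on triplets that still violate that ordering.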

Paper Nr: 247
Title:

Few-Shot Histopathology Image Classification: Evaluating State-of-the-Art Methods and Unveiling Performance Insights

Authors:

Ardhendu Sekhar, Ravi K. Gupta and Amit Sethi

Abstract: This paper presents a study on few-shot classification in the context of histopathology images. While few-shot learning has been studied for natural image classification, its application to histopathology is relatively unexplored. Given the scarcity of labeled data in medical imaging and the inherent challenges posed by diverse tissue types and data preparation techniques, this research evaluates the performance of state-of-the-art few-shot learning methods for various scenarios on histology data. We have considered four histopathology datasets for few-shot histopathology image classification and have evaluated 5-way 1-shot, 5-way 5-shot, and 5-way 10-shot scenarios with a set of state-of-the-art classification techniques. The best methods surpassed accuracies of 70%, 80%, and 85% in the 5-way 1-shot, 5-way 5-shot, and 5-way 10-shot settings, respectively. We found that, for histology images, popular meta-learning approaches are on par with standard fine-tuning and regularization methods. Our experiments highlight the challenges of working with images from different domains and underscore the significance of unbiased and focused evaluations in advancing computer vision techniques for specialized domains, such as histology images.
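The 5-way K-shot protocol evaluated above can be made concrete with a small episode sampler (a generic sketch; the dataset layout and sampling details here are assumptions, not the paper's code):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=5, rng=None):
    """Sample one few-shot episode: n_way classes, with k_shot support
    images and n_query query images per class. `dataset` maps a class
    name to its list of images; support and query never overlap."""
    rng = rng or random.Random(0)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = {}, {}
    for c in classes:
        items = rng.sample(dataset[c], k_shot + n_query)
        support[c], query[c] = items[:k_shot], items[k_shot:]
    return support, query
```

An evaluation run then averages query accuracy over many such episodes, so that the reported 5-way 1-shot number is not tied to one lucky draw of classes.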

Short Papers
Paper Nr: 30
Title:

3D Nuclei Segmentation by Combining GAN Based Image Synthesis and Existing 3D Manual Annotations

Authors:

Xareni Galindo, Thierno Barry, Pauline Guyot, Charlotte Rivière, Rémi Galland and Florian Levet

Abstract: Nuclei segmentation is an important task in cell biology analysis that requires accurate and reliable methods, especially within complex images with low signal-to-noise ratios and crowded cell populations. In this context, deep learning-based methods such as Stardist have emerged as the best-performing solutions for segmenting nuclei. Unfortunately, the performance of such methods relies on the availability of vast libraries of hand-annotated ground-truth datasets, which are especially tedious to create for 3D cell cultures in which nuclei tend to overlap. In this work, we present a workflow to segment nuclei in 3D under such conditions when no specific ground truth exists. It combines a robust 2D segmentation method, Stardist 2D, which has been trained on thousands of already available ground-truth datasets, with the generation of pairs of 3D masks and synthetic fluorescence volumes through a conditional GAN. This allows us to train a Stardist 3D model with 3D ground-truth masks and synthetic volumes that mimic our fluorescence ones. This strategy makes it possible to segment 3D data for which no ground truth is available, alleviating the need to perform manual annotations and improving on the results obtained by training Stardist with the original ground-truth data.

Paper Nr: 43
Title:

Performance Review of Retraining and Transfer Learning of DeLTA2 for Image Segmentation for Pseudomonas Fluorescens SBW25

Authors:

Beate Gericke, Finn Degner, Tom Hüttmann, Sören Werth and Carsten Fortmann-Grote

Abstract: High-throughput microscopy imaging yields vast amounts of image data, e.g. in microbiology, cell biology, and medical diagnostics, calling for automated analysis methods. Despite recent progress in applying deep neural networks to image segmentation in a supervised learning setting, these models often do not meet the performance requirements when used without model refinement, in particular when cells accumulate and overlap in the image plane. Here, we analyse segmentation performance gains obtained through retraining and through transfer learning, using a curated dataset of phase-contrast microscopy images of individual cells and cell accumulations of Pseudomonas fluorescens SBW25. Both methods yield significant improvement over the baseline model DeLTA2 (O’Connor et al., PLOS Comput. Biol. 18, e1009797 (2022)) in intersection-over-union and balanced-accuracy test metrics. We demonstrate that (computationally cheaper) transfer learning of only 25% of the neural network layers yields the same improvement over the baseline as a complete retraining run. Furthermore, we achieve the highest performance boosts when the training data contain only well-separated cells, even though the test split may contain cell accumulations. This opens up the possibility of a semi-automated segmentation workflow combining feature extraction techniques for ground-truth mask generation from low-complexity images with supervised learning for the more complex data.
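The 25%-of-layers transfer-learning setting can be sketched as freezing all but the last quarter of a model's layers (the layer granularity and rounding rule below are assumptions for illustration, not the authors' exact procedure):

```python
def freeze_for_transfer(layers, trainable_fraction=0.25):
    """Split an ordered list of layers into a frozen prefix and a
    trainable suffix covering roughly `trainable_fraction` of the model.
    Early layers keep their pretrained weights; only the last layers are
    updated during transfer learning."""
    n_trainable = max(1, round(len(layers) * trainable_fraction))
    frozen = layers[:len(layers) - n_trainable]
    trainable = layers[len(layers) - n_trainable:]
    return frozen, trainable
```

The design intuition matches the paper's finding: early layers learn generic image features that transfer across datasets, so updating only the late, task-specific layers can recover full-retraining performance at a fraction of the cost.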

Paper Nr: 115
Title:

Combining Datasets with Different Label Sets for Improved Nucleus Segmentation and Classification

Authors:

Amruta Parulekar, Utkarsh Kanwat, Ravi K. Gupta, Medha Chippa, Thomas Jacob, Tripti Bameta, Swapnil Rane and Amit Sethi

Abstract: Segmentation and classification of cell nuclei using deep neural networks (DNNs) can save pathologists’ time when diagnosing various diseases, including cancers. The accuracy of DNNs increases with the size of the annotated datasets available for training. The available public datasets with nuclear annotations and labels differ in their class label sets. We propose a method to train DNNs on multiple datasets whose sets of classes are related but not identical. Our method is designed to utilize class hierarchies, where the set of classes in a dataset can be at any level of the hierarchy. Our results demonstrate that segmentation and classification metrics for the class set used by the test split of a dataset can improve by pre-training on another dataset, even one with a different set of classes, because our method enables expansion of the training set. Furthermore, generalization to previously unseen datasets also improves when multiple other datasets with different sets of classes are combined for training. The improvement is both qualitative and quantitative. The proposed method can be adapted for various loss functions, DNN architectures, and application domains.
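One common way to train a single head across datasets whose labels sit at different levels of a class hierarchy is to marginalise fine-grained predictions up to each dataset's coarse labels before computing the loss. The sketch below illustrates that idea with a hypothetical two-branch hierarchy; the abstract does not specify the paper's exact mechanism, so the class names and loss form are assumptions:

```python
import math

# Hypothetical two-level class hierarchy: coarse label -> fine labels.
HIERARCHY = {
    "epithelial": ["benign_epithelial", "malignant_epithelial"],
    "stromal": ["stromal"],
}

def coarse_prob(fine_probs, coarse_label):
    """Marginalise fine-grained predicted probabilities up to the coarse
    label used by a given dataset, so one classification head can be
    trained on datasets labelled at different hierarchy levels."""
    return sum(fine_probs[c] for c in HIERARCHY[coarse_label])

def coarse_ce(fine_probs, coarse_label):
    """Cross-entropy against a coarse label via the marginalised mass."""
    return -math.log(max(coarse_prob(fine_probs, coarse_label), 1e-12))
```

A dataset labelled only "epithelial" then still provides a training signal to both fine-grained epithelial outputs, which is what allows the combined training set to grow.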

Paper Nr: 179
Title:

Open Platform for the De-identification of Burned-in Texts in Medical Images using Deep Learning

Authors:

Quentin Langlois, Nicolas Szelagowski, Jean Vanderdonckt and Sébastien Jodogne

Abstract: While the de-identification of DICOM tags is a standardized, well-established practice, the removal of protected health information that is burned into the pixels of medical images is a more complex challenge for which Deep Learning is especially well adapted. Unfortunately, there is currently a lack of accurate, effective, and freely available tools to this end. This motivates the release of a new benchmark dataset, together with free and open-source software that implements suitable Deep Learning algorithms, with the objective of improving patient confidentiality. The proposed methods consist in adapting regular scene-text detection models (SSD and TextBoxes) to the task of image de-identification. It is shown that the fine-tuning of such generic scene-text detection models on medical images significantly improves performance. The developed algorithms can be applied either from the command line or using a Web interface that is tightly integrated with a free and open-source PACS server.

Paper Nr: 198
Title:

Coronary Artery Stenosis Assessment in X-Ray Angiography Through Spatio-Temporal Attention for Non-Invasive FFR and iFR Estimation

Authors:

Raffaele Mineo, Federica Proietto Salanitri, Giovanni Bellitto, Ovidio De Filippo, Fabrizio D’Ascenzo, Simone Palazzo and Concetto Spampinato

Abstract: Determining the degree of stenosis in coronary arteries through X-ray angiography imaging is a multifaceted task, given the arteries’ appearance variability, the overlapping of vessels, and their small size. Traditional automated approaches utilize 2D deep models processing multiple angiography views as well as key frames. In this research, we propose a new deep learning model to non-invasively evaluate the fractional flow reserve (FFR) and instantaneous wave-free ratio (iFR) of moderate coronary stenoses from angiographic videos, in order to better analyze spatial and temporal correlations without manual preprocessing. Our strategy harnesses 3D Convolutional Neural Networks (CNNs) to learn local spatio-temporal features and integrates self-attention layers to understand broad correlations within the feature set. At training time, both FFR and iFR values are employed for supervision, with missing targets suitably handled through multi-branch outputs. The resulting model can be employed to predict the presence of a clinically significant coronary artery stenosis and to directly determine the FFR and iFR values. We also include an explainability strategy to show which parts of a video the model focuses on when assessing FFR and iFR values. Our proposed model demonstrates superior results compared to competitors on a dataset of 778 angiography exams from 389 patients. Importantly, our model does not require key frames, thus reducing the effort required from clinicians.
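The handling of missing FFR/iFR targets through multi-branch outputs can be illustrated with a masked regression loss that simply skips any branch lacking an invasive measurement for a given exam (a generic sketch, not the authors' exact loss):

```python
def masked_regression_loss(pred_ffr, pred_ifr, target_ffr=None, target_ifr=None):
    """Mean squared error over the FFR and iFR output branches, skipping
    any branch whose invasive target is missing for this exam. Returns
    0.0 when neither target is available (no gradient signal)."""
    loss, n = 0.0, 0
    for pred, target in ((pred_ffr, target_ffr), (pred_ifr, target_ifr)):
        if target is not None:
            loss += (pred - target) ** 2
            n += 1
    return loss / n if n else 0.0
```

Because the mask is applied per exam, every sample contributes whatever supervision it has, so exams with only one of the two invasive measurements are not discarded.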

Paper Nr: 225
Title:

Influence of Arterial Occlusion at Various Cuff Pressures on Systemic Circulation Measured by rPPG

Authors:

Leah De Vos, Gennadi Saiko, Denis Bragin and Alexandre Douplik

Abstract: Background: Arterial occlusion is a ubiquitous medical procedure, which is used in many clinical scenarios. However, there is no standard protocol for the selection of the applied pressure. As various pressures may trigger different physiological responses, it is important to understand these peculiarities. The aim of the current work is to investigate whether there is any difference in the systemic response to occlusion at various applied pressures. Methods: The hands of 10 healthy volunteers were occluded at the wrist by inflating a blood pressure cuff to 150 or 200 mmHg. Remote photoplethysmography (rPPG) measurements of the control and experimental hands were taken. To assess the systemic response, we analysed the behaviour of the low-frequency (LF, 0.1 Hz) AC components in the green and red channels during occlusion and reperfusion. Results: We did not find a statistically significant difference in the LF spectra between occlusions at 150 and 200 mmHg. Conclusions: We have performed an analysis of the low-frequency (0.1 Hz) components of remote photoplethysmography signals during arterial occlusion at 150 and 200 mmHg. Our preliminary results show that the systemic response is similar at both occlusion levels.
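The 0.1 Hz LF component analysed above can be extracted with a single-bin discrete Fourier transform of the rPPG trace (an illustrative sketch; the authors' exact spectral-analysis pipeline is not described in the abstract):

```python
import math

def lf_amplitude(signal, fs, f0=0.1):
    """Amplitude of the periodic component of `signal` at frequency f0
    (default 0.1 Hz), computed as a single-bin discrete Fourier
    transform. `fs` is the sampling rate in Hz; the window should cover
    an integer number of f0 periods for an unbiased estimate."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * f0 * k / fs) for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f0 * k / fs) for k, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n
```

Comparing this amplitude in the green and red channels, before and during occlusion, is one simple way to quantify the systemic 0.1 Hz (Mayer-wave-band) response.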

Paper Nr: 250
Title:

Automated Classification of Phonetic Segments in Child Speech Using Raw Ultrasound Imaging

Authors:

Saja Al Ani, Joanne Cleland and Ahmed Zoha

Abstract: Speech sound disorder (SSD) is defined as a persistent impairment in speech sound production leading to reduced speech intelligibility and hindered verbal communication. Early recognition and intervention for children with SSD and timely referral to speech and language therapists (SLTs) for treatment are crucial. Automated detection of speech impairment is regarded as an efficient method for examining and screening large populations. This study focuses on advancing the automatic diagnosis of SSD in early childhood by proposing a technical solution that integrates ultrasound tongue imaging (UTI) with deep-learning models. The introduced FusionNet model combines UTI data with extracted texture features to classify UTI. The overarching aim is to elevate the accuracy and efficiency of UTI analysis, particularly for classifying speech sounds associated with SSD. This study compared the FusionNet approach with standard deep-learning methodologies, highlighting the strong improvements achieved by the FusionNet model in UTI classification and the potential of multi-feature learning for improving UTI classification in speech therapy clinics.

Paper Nr: 22
Title:

Visualizing, Analyzing and Constructing L-System from Arborized 3D Model Using a Web Application

Authors:

Nick van Nielen, Fons Verbeek and Lu Cao

Abstract: In biology, arborized structures are widespread and are typically complex to visualize and analyze. In order to gain a profound understanding of the topology of an arborized 3D biological model, a higher-level abstraction is needed. We aim at abstracting arborized 3D biological models into L-systems, which provide a generalized grammar-based formalization for representing complex structures. The focus of this paper is to combine 3D visualization, analysis, and L-system abstraction into a single web application. We designed a front-end user interface and a back-end. In the front-end, we used A-Frame and defined algorithms to generate and visualize L-systems. In the back-end, we utilized the Vascular Modelling Toolkit’s (VMTK) centerline analysis methods to extract important features from the arborized 3D models, which can be applied to L-system generation. In addition, two 3D biological models, a lactiferous duct and an artery, are used as case studies to verify the functionality of the web application. In conclusion, our web application is able to visualize, analyse, and create L-system abstractions of arborized 3D models, which in turn provides workflow-improving benefits, easy accessibility, and extensibility.
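The core L-system operation, parallel rewriting of a string by production rules, can be sketched in a few lines (a generic illustration, not the application's actual generation algorithm):

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite `axiom` by applying the production `rules` to
    every symbol in parallel; symbols without a rule are copied as-is.
    This string growth is what lets a short grammar encode a branching
    structure."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

For example, Lindenmayer's classic algae grammar `A -> AB`, `B -> A` grows `A` into `AB`, then `ABA`, then `ABAAB`; branching variants add bracket symbols that a 3D renderer interprets as push/pop of the drawing state.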

Paper Nr: 137
Title:

Characterization and Quantification of Image Quality in CT Imaging Systems: A Phantom Study

Authors:

Maria E. Fantacci

Abstract: Computed Tomography (CT) is a widely used imaging technique in lung cancer screening programs. To address the problem of exposing potentially healthy patients to ionizing radiation, Iterative Reconstruction (IR) algorithms can be employed. Indeed, traditional Filtered Back Projection reconstruction does not deliver adequate image quality at reduced dose levels. IR, in contrast, tends to preserve diagnostic information and resolution while reducing noise and radiation dose. We characterized image quality for two CT scanners equipped with different iterative algorithms by using a quantitative metric, the detectability index. We compared the dependence of the image quality on the dose and the iterative level when human visual perception is or is not considered in the detectability index definition. We found that similar image quality can be obtained by using different scanners and different combinations of dose and iterative levels. This allows us to extrapolate the protocols corresponding to a lower dose while preserving the imaging properties as much as possible.
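The abstract does not define the detectability index; a commonly used non-prewhitening form (stated here as an assumption, with $\mathrm{TTF}$ the task transfer function, $\mathrm{NPS}$ the noise power spectrum, and $W$ the task function describing the object to detect) is:

```latex
d'^{\,2}_{\mathrm{NPW}} =
\frac{\left[\iint \lvert W(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,du\,dv\right]^{2}}
     {\iint \lvert W(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,du\,dv}
```

Variants that model human visual perception (e.g. NPWE) additionally weight the integrands by an eye-filter response, which corresponds to the "with/without human visual perception" comparison described above.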

Paper Nr: 249
Title:

Utilizing Radiomic Features for Automated MRI Keypoint Detection: Enhancing Graph Applications

Authors:

Sahar A. Nasser, Shashwat Pathak, Keshav Singhal, Mohit Meena, Nihar Gupte, Ananya Chinmaya, Prateek Garg and Amit Sethi

Abstract: Graph neural networks (GNNs) present a promising alternative to CNNs and transformers for certain image processing applications due to their parameter efficiency in modeling spatial relationships. Currently, an active area of research is converting image data into graph data as input for GNN-based models. A natural choice for graph vertices, for instance, is keypoints in images. SuperRetina is a promising semi-supervised technique for detecting keypoints in retinal images. However, it is limited by its dependency on a small initial set of ground-truth keypoints, which is progressively expanded to detect more keypoints. We encountered difficulties in detecting a consistent set of initial keypoints in brain images using traditional keypoint detection techniques, such as SIFT and LoFTR. Therefore, we propose a new approach for detecting the initial keypoints for SuperRetina that is based on radiomic features. We demonstrate the anatomical significance of the detected keypoints by showcasing their efficacy in improving image registration guided by these keypoints. We also employed these keypoints as ground truth for a modified keypoint detection method known as LK-SuperRetina, for improved image matching in terms of both the number of matches and their confidence scores.