Medicine is currently experiencing a paradigm shift from reliance on individual imaging modalities to a more integrative approach encompassing multiple imaging and non-imaging datasets for holistic information capture. BIDSLab's research projects outlined below aim to develop signal processing, image analysis, and deep learning techniques for biomedical datasets with an emphasis on multimodal information integration.
BIDSLab's core expertise combines image processing, graph signal processing, and deep learning to target a range of biomedical inverse problems including image reconstruction, deblurring, denoising, and brain network analysis, with clinical applications spanning neurology, oncology, pulmonology, and more.
The quantitative accuracy of positron emission tomography (PET) is degraded by partial volume effects caused by the limited spatial resolution of PET scanners. We have developed image deblurring and super-resolution methods for PET that exploit the higher resolution of anatomical magnetic resonance (MR) images. Methods developed by us include (i) spatially variant deconvolution with a joint entropy (JE) penalty function and (ii) convolutional neural networks, both supervised and self-supervised, with spatial and anatomical input channels. Our target application for super-resolution PET is Alzheimer's disease (AD), a debilitating neurodegenerative disorder that affects over 30 million people worldwide. The hallmark pathologies of AD are histology-confirmed amyloid-β (Aβ) plaques and tau neurofibrillary tangles, both of which appear many years before the onset of cognitive decline. Application of our methods to Flortaucipir PET images of tau in cognitively normal and impaired subjects has revealed that deblurring leads to a marked improvement in the correlation of PET measures with well-recognized clinical metrics of cognitive performance. Our results indicate that the applied correction substantially improves our ability to distinguish between different stages of AD progression based on amyloid network structure analysis.
Comparison of brain scans of a cognitively normal subject and an Alzheimer's disease patient. The subfigures show A. T1-weighted MPRAGE MRI images, B. Flortaucipir PET images of tau tangles reconstructed by the scanner, C. Flortaucipir PET images deblurred with a measured spatially varying point spread function, and D. Flortaucipir PET images deblurred with a measured spatially varying point spread function combined with an MRI-based joint entropy prior.
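To make the JE penalty concrete, the minimal sketch below (illustrative function name `joint_entropy`; a plain histogram estimate, not our full spatially variant deconvolution pipeline) computes the joint entropy of a PET/MR intensity pair. The penalty drives this quantity down, encouraging PET intensities that are predictable from the co-registered anatomy:

```python
import numpy as np

def joint_entropy(pet, mr, bins=32):
    """Estimate H(PET, MR) from the joint intensity histogram of a PET
    image and a co-registered MR image (discrete histogram estimate)."""
    hist, _, _ = np.histogram2d(pet.ravel(), mr.ravel(), bins=bins)
    p = hist / hist.sum()            # joint probability mass
    p = p[p > 0]                     # drop empty bins before taking the log
    return -np.sum(p * np.log(p))    # entropy in nats

# Toy check: a PET image that tracks the MR anatomy yields lower joint
# entropy than an unrelated image, which is what the JE penalty rewards
rng = np.random.default_rng(0)
mr = rng.normal(size=(64, 64))
pet_coupled = 2.0 * mr + 0.1 * rng.normal(size=(64, 64))
pet_unrelated = rng.normal(size=(64, 64))
```

In the deconvolution objective this term is weighted against the data fidelity term, so anatomy guides the deblurring without being copied into the PET estimate.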
Aβ and tau protein aggregates, which are hallmarks of AD, have characteristic spatial patterns with links to disease progression. Brain network connectivity studies are important for revealing the spatial associations in these patterns. It is now well known that neurological disorders like AD disrupt both the structural and functional connectivity of brain networks. At BIDSLab, we have developed techniques based on graph theory and graph signal processing to interpret and analyze brain connectivity. Building on the hypothesis that tau propagates along the structural network of the human brain, our recent work uses tau PET and diffusion tensor imaging (DTI) to model distinct propagative and generative components of tau spread. Our other contributions in this domain include classification of human subjects on the AD spectrum based on (i) DTI-based structural networks and (ii) PET-based cross-sectional functional networks. The novelty of the former was in the use of graph convolutional neural networks (a deep geometric learning approach). The novelty of the latter was in demonstrating that PET image deblurring using our JE-based method leads to increased discriminative power in the brain network domain.
Comparison of Aβ networks obtained from original (uncorrected) and deblurred (partial volume corrected) Florbetapir datasets from ADNI. The subfigures show A. Brain network and graph adjacency matrix obtained from original Florbetapir images, B. Brain network and graph adjacency matrix obtained from deblurred Florbetapir images, showing recovery of previously unseen inter-regional connections (edges) in the graph, C. Node degrees for four populations: normal controls (NC), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and Alzheimer's disease (AD). Deblurring of Florbetapir images led to a consistently decreasing trend in the observed node degree with disease progression.
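A minimal sketch of this kind of cross-sectional network analysis (illustrative function name `amyloid_network` and threshold value; the actual study design may differ) builds a graph whose nodes are atlas regions and whose edges connect regions with correlated uptake across subjects, then computes the node degrees compared across the NC/EMCI/LMCI/AD groups:

```python
import numpy as np

def amyloid_network(suvr, threshold=0.3):
    """Build a binary brain network from regional PET uptake.

    suvr : (n_subjects, n_regions) array of regional uptake values.
    Edges connect region pairs whose across-subject correlation exceeds
    the threshold; node degree counts the edges incident on each region.
    """
    corr = np.corrcoef(suvr, rowvar=False)      # (n_regions, n_regions)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                    # no self-loops
    degree = adj.sum(axis=0)
    return adj, degree

# Toy data: 40 subjects, 90 atlas regions of synthetic uptake
rng = np.random.default_rng(1)
suvr = rng.normal(1.2, 0.2, size=(40, 90))
adj, degree = amyloid_network(suvr)
```

In the figure above, deblurring recovers edges that partial volume blurring had washed out, which is why the corrected networks show a cleaner degree trend with disease stage.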
In the current information age, we are faced with a biomedical data deluge, which includes both medical imaging and non-imaging clinical datasets. From a signal processing perspective, efforts to tap into this vast and rapidly growing data resource have led to an escalating need for data analysis techniques that are scalable to large data volumes and dimensionalities. Starting with an overarching research theme of developing signal processing techniques with a focus on image quantitation and multimodality information integration, I have broadened my current research scope to encompass two new foci – (1) integration of imaging data with non-imaging clinical datasets, e.g. genomics, demographics, environmental information, blood work, spinal taps, etc., and (2) development of optimization algorithms that are scalable to large data volumes. My current research directions include using deep learning for predictive analytics for biomedical data. Datasets of particular interest to me include the Alzheimer's Disease Neuroimaging Initiative (ADNI), the COPDGene Study, and The Cancer Genome Atlas (TCGA).
[Left] Pipeline for input feature selection for the COPDGene cohort using Fisher scoring. [Right] Simplified visualization of a deep neural network for predicting COPD exacerbation frequency.
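The Fisher scoring step in the pipeline can be sketched as follows (a minimal illustration with an invented toy dataset; `fisher_scores` is a hypothetical name, and the real pipeline's feature set and thresholds differ). Each feature is scored by the scatter of its class means relative to its pooled within-class variance, and the top-scoring features are fed to the network:

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: between-class scatter of the class means
    divided by the pooled within-class variance. Higher = more discriminative."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Toy example: feature 0 separates the two groups, feature 1 is pure noise
rng = np.random.default_rng(2)
y = np.array([0] * 50 + [1] * 50)
X = rng.normal(size=(100, 2))
X[y == 1, 0] += 3.0                     # shift the class means on feature 0
scores = fisher_scores(X, y)
top_k = np.argsort(scores)[::-1][:1]    # keep the k highest-scoring features
```

Filter-style selection like this is attractive for high-dimensional clinical data because it is cheap, scales linearly in the number of features, and does not require training a model per candidate subset.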
Diagnostic and therapeutic tasks often rely on images with multiple “channels”. Examples include multimodal imaging, multi-time-point imaging (e.g. dynamic scans), and multi-parametric imaging (e.g. different MR pulse sequences). Quantitative interpretation and analysis of such multi-channel images creates a need for joint segmentation tools. We have developed a joint segmentation approach based on higher order singular value decomposition (HOSVD) of the composite multigraph Laplacian tensor derived from multichannel images. The method has been successfully applied to a wide range of clinical data types, including PET and MR images from sarcoma patients, dynamic PET images of hepatocellular carcinoma, and different types of scans from glioblastoma multiforme patients. More recently, a modified version of this method was used to perform respiratory-motion-based lung parcellation.
Schematic describing HOSVD-based multigraph cuts in four steps: 1. Computation of adjacency matrices from multi-channel images, 2. Computation of the multigraph Laplacian tensor, 3. HOSVD, and 4. k-means clustering.
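The four steps above can be sketched on a toy problem as follows. This is a simplified illustration under stated assumptions (Gaussian intensity affinities, flattened 1-D "images", and the pixel-mode HOSVD factor computed via SVD of the mode-1 unfolding); the function name `multigraph_cut` and all parameter choices are ours for the example, not the published implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def multigraph_cut(channels, n_clusters=2, sigma=0.5):
    """Sketch of HOSVD-based multigraph cuts for multichannel images.

    channels : list of flattened image channels, each of length n pixels.
    """
    n = channels[0].size
    laplacians = []
    for ch in channels:
        # Step 1: per-channel adjacency matrix (Gaussian intensity affinity)
        diff = ch[:, None] - ch[None, :]
        W = np.exp(-(diff ** 2) / (2 * sigma ** 2))
        # Step 2: normalized graph Laplacian for this channel
        d_inv = 1.0 / np.sqrt(W.sum(axis=1))
        laplacians.append(np.eye(n) - (d_inv[:, None] * W) * d_inv[None, :])
    T = np.stack(laplacians, axis=2)          # multigraph Laplacian tensor
    # Step 3: HOSVD -- the pixel-mode factor is the left singular basis of
    # the mode-1 unfolding; the trailing columns (smallest singular values)
    # play the role of the low-frequency Laplacian eigenvectors
    U, _, _ = np.linalg.svd(T.reshape(n, -1), full_matrices=False)
    embedding = U[:, -n_clusters:]
    # Step 4: k-means clustering of pixels in the joint spectral embedding
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(embedding)

# Two toy channels that agree on a two-region structure
ch1 = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
ch2 = np.array([0.1, 0.0, 0.05, 1.0, 0.9, 1.1])
labels = multigraph_cut([ch1, ch2])
```

The appeal of operating on the stacked tensor rather than on each channel separately is that the shared pixel-mode basis enforces a single segmentation consistent with all channels at once.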
Simultaneous PET/MRI combines the strengths of two complementary imaging modalities and is emerging as an increasingly potent tool for integrated imaging. While PET (positron emission tomography) reveals functional or physiological information, MRI (magnetic resonance imaging) provides structural or anatomical information, generally at higher resolution. In the context of lung imaging, where PET scans are severely compromised by respiratory motion, we have developed a maximum a posteriori estimation framework that incorporates deformation fields derived from simultaneously acquired MRI data. This technique enables the generation of PET images free of motion artifacts, which improves image quantitation, thereby facilitating lung cancer staging and treatment optimization.
Cover of the July 2015 issue of the journal Medical Physics featuring our work on motion-compensated image reconstruction using simultaneous PET/MRI.
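The core of motion-compensated reconstruction can be illustrated with a deliberately tiny 1-D sketch (the function name `mc_mlem`, the identity projector, and the circular-shift "motion" are all invented for the example, and the MAP prior is omitted for brevity): each respiratory gate's data is modeled as a projection of a warped reference image, and a single EM update accumulates back-projections warped back to the reference frame:

```python
import numpy as np

def mc_mlem(sinograms, A, warps, n_iter=50):
    """Toy 1-D motion-compensated ML-EM update (MAP prior omitted).

    sinograms : list of measured data vectors, one per respiratory gate.
    A         : system (projection) matrix.
    warps     : list of matrices deforming the reference image to each
                gate, e.g. derived from simultaneously acquired MRI.
    """
    x = np.ones(A.shape[1])                        # reference-frame image
    sens = sum(w.T @ A.T @ np.ones(A.shape[0]) for w in warps)
    for _ in range(n_iter):
        back = np.zeros_like(x)
        for y, w in zip(sinograms, warps):
            proj = A @ (w @ x)                     # forward-project warped image
            back += w.T @ (A.T @ (y / np.maximum(proj, 1e-12)))
        x = x / sens * back                        # multiplicative EM update
    return x

# Toy setup: 2 gates, "motion" = circular shift of an 8-pixel image
n = 8
truth = np.zeros(n); truth[2] = 10.0
warps = [np.eye(n), np.roll(np.eye(n), 1, axis=0)]
A = np.eye(n)                                      # trivial projector
sinograms = [A @ (w @ truth) for w in warps]
recon = mc_mlem(sinograms, A, warps)
```

Because every gate contributes counts to a single reference-frame estimate, no data is discarded, which is the source of the quantitation gain over gating alone.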
The high levels of statistical noise in PET images pose a challenge to accurate quantitation. This issue is particularly pronounced in the early time frames of dynamic PET images, which are usually kept short to capture rapid changes in tracer uptake patterns. We developed a non-local means (NLM) denoising filter for dynamic PET images that uses spatiotemporal patches for robust similarity computation. Realistic simulations based on a dynamic digital mouse phantom showed improved bias-variance performance characteristics relative to several well-known denoising approaches. Experiments in mice and humans showed clear improvements in contrast-to-noise ratio in Patlak parametric images. To further improve denoising performance along sharp edges, we used anatomical guidance to limit the spatial window for non-local similarity computation. The method was tested on the BrainWeb digital phantom and on human datasets; it demonstrated robustness particularly at high noise levels and led to recovery of sharp edges (e.g. tissue and organ boundaries).
Link: PLOS One 2013 Paper
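The spatiotemporal-patch idea can be sketched on a toy (time x space) array as follows. This is a minimal illustration, not the published filter: `st_nlm` and its parameters are ours for the example, the search window here is spatial only, and the anatomical guidance step is omitted. The key point is that patch similarity is computed over both time and space, so temporally consistent structure survives heavy smoothing:

```python
import numpy as np

def st_nlm(img, patch_t=3, patch_x=3, search=5, h=0.5):
    """Non-local means for a dynamic (time x space) image, with
    spatiotemporal patches used for the similarity computation."""
    T, X = img.shape
    pt, px = patch_t // 2, patch_x // 2
    pad = np.pad(img, ((pt, pt), (px, px)), mode="reflect")
    out = np.zeros_like(img)
    for t in range(T):
        for x in range(X):
            ref = pad[t:t + patch_t, x:x + patch_x]   # patch spans time too
            num = den = 0.0
            for xx in range(max(0, x - search), min(X, x + search + 1)):
                cand = pad[t:t + patch_t, xx:xx + patch_x]
                w = np.exp(-np.sum((ref - cand) ** 2) / (h ** 2 * ref.size))
                num += w * img[t, xx]
                den += w
            out[t, x] = num / den                     # weighted average
    return out

# Toy dynamic image: a spatial step edge, constant over 6 time frames
rng = np.random.default_rng(3)
clean = np.zeros((6, 40)); clean[:, 20:] = 1.0
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = st_nlm(noisy, h=0.6)
```

Patches that straddle the edge get near-zero weight, so the edge is preserved while flat regions are strongly averaged, which mirrors the bias-variance behavior described above.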
A transverse slice from a dynamic Flortaucipir scan of a human subject with mild cognitive impairment. The columns show the segmented MRI, the noisy image, the NLM-denoised image, and the ANLM-denoised image, respectively. The rows correspond to three time points (2.6 min, 19 min, and 37.5 min) reflecting the evolution of activity over time. The original image frames in the top and middle rows are noisier than that in the bottom row.