Research

Medicine is currently experiencing a paradigm shift from the reliance on individual imaging modalities to a more integrative approach encompassing multiple imaging and non-imaging datasets for holistic information capture. BIDSLab's research projects outlined below aim at developing signal processing, image analysis, and deep learning techniques for biomedical datasets with an emphasis on multimodal information integration.

Summary

BIDSLab's core expertise combines image processing, graph signal processing, and deep learning to target a range of biomedical inverse problems, including image reconstruction, deblurring, denoising, and brain network analysis, with clinical applications spanning neurology, oncology, pulmonology, and more.


Image Deblurring and Super-Resolution Imaging

The quantitative accuracy of positron emission tomography (PET) is degraded by partial volume effects caused by the limited spatial resolution of PET scanners. We have developed an image deblurring technique for PET that exploits the higher resolution imaging capabilities of MRI by means of an information-theoretic anatomical prior. In parallel, we are using deep convolutional neural networks to generate super-resolution PET images with anatomical (MRI-based) guidance. Our target application for super-resolution PET is Alzheimer's disease (AD), a debilitating neurodegenerative disorder that affects over 30 million people worldwide. The hallmark pathologies of AD are histology-confirmed amyloid-β (Aβ) plaques and tau neurofibrillary tangles, both of which appear many years before the onset of cognitive decline. Applying our methods to Flortaucipir PET images of tau in cognitively normal and impaired subjects has revealed that deblurring leads to a marked improvement in the correlation of PET measures with well-recognized clinical metrics of cognitive performance. Our results also indicate that the applied correction substantially improves our ability to distinguish between different stages of AD progression based on amyloid network structure analysis.
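
The sketch below illustrates the anatomically guided super-resolution idea at a very high level, assuming a small two-channel convolutional network in PyTorch; the layer counts, tensor names, and residual formulation are illustrative choices and not our published architecture.

```python
# Minimal sketch of MRI-guided PET super-resolution with a small CNN (PyTorch).
# The architecture is an illustrative assumption: the key idea is simply that
# the upsampled PET and the co-registered MRI enter as two input channels.
import torch
import torch.nn as nn

class GuidedSRNet(nn.Module):
    def __init__(self, n_feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, n_feat, kernel_size=3, padding=1),   # PET + MRI channels
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feat, n_feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feat, 1, kernel_size=3, padding=1),   # residual PET detail
        )

    def forward(self, pet_lr_up, mri):
        x = torch.cat([pet_lr_up, mri], dim=1)
        # Predict a residual and add it to the interpolated PET input.
        return pet_lr_up + self.body(x)

# Toy usage on random tensors (batch, channel, height, width).
pet_lr_up = torch.rand(1, 1, 128, 128)   # PET slice upsampled to the MRI grid
mri = torch.rand(1, 1, 128, 128)         # co-registered T1-weighted MRI slice
model = GuidedSRNet()
loss = nn.functional.mse_loss(model(pet_lr_up, mri), torch.rand(1, 1, 128, 128))
loss.backward()
```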

Link: ISBI 2015 Paper

Super-Resolution PET

Comparison of brain scans of a cognitively normal subject and an Alzheimer's disease patient. The subfigures show A. T1-weighted MPRAGE MRI images, B. Flortaucipir PET images of tau tangles reconstructed by the scanner, C. Flortaucipir PET images deblurred with a measured spatially varying point spread function, and D. Flortaucipir PET images deblurred with a measured spatially varying point spread function together with an MRI-based joint entropy prior.


Brain Network Analysis

Aβ and tau protein aggregates, which are hallmarks of AD, have characteristic spatial patterns with links to disease progression. Brain network connectivity studies are important for revealing the spatial associations in these patterns, but partial volume effects in PET severely limit network accuracy. We used Florbetapir PET scans of Aβ from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database to investigate the effect of deblurring on Aβ networks. One of our key findings was a decreasing trend in node degree accompanying disease progression (from normal to AD) upon correction of the Florbetapir images with our deblurring technique. As part of our current research, we are developing and employing graph-based data mining techniques to determine the relationship between tau-based functional networks (derived from deblurred Flortaucipir images) and structural networks (obtained from diffusion tensor imaging).
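
As a simplified illustration of how these networks and their node degrees are computed, the sketch below correlates regional uptake values across subjects and thresholds the result; the SUVR values, ROI count, and threshold are hypothetical stand-ins for the actual ADNI processing pipeline.

```python
# Minimal sketch of building an amyloid network and computing node degrees.
# Regional SUVR values, the ROI count, and the correlation threshold are
# hypothetical; the real analysis uses deblurred Florbetapir images from ADNI.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_rois = 80, 68                 # e.g., one value per cortical ROI
suvr = rng.normal(1.2, 0.2, size=(n_subjects, n_rois))

# Inter-regional association: correlation of regional uptake across subjects.
corr = np.corrcoef(suvr, rowvar=False)      # (n_rois, n_rois)

# Binarize to an adjacency matrix at an (assumed) threshold.
threshold = 0.3
adjacency = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adjacency, 0)

# Node degree: number of edges incident on each ROI.
degree = adjacency.sum(axis=1)
print("mean node degree:", degree.mean())
```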

Link: Sci Rep 2017 Paper

Amyloid Networks in AD

Comparison of Aβ networks obtained from original (uncorrected) and deblurred (partial volume corrected) Florbetapir datasets from ADNI. The subfigures show A. Brain network and graph adjacency matrix obtained from original Florbetapir images, B. Brain network and graph adjacency matrix obtained from deblurred Florbetapir images showing recovery of previously unseen inter-regional connections (edges) in the graph, and C. Node degrees for four populations: normal controls (NC), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and Alzheimer's disease (AD). Deblurring of the Florbetapir images led to a consistently decreasing trend in the observed node degree with disease progression.


Predictive Analytics

In the current information age, we are faced with a biomedical data deluge that includes both medical imaging and non-imaging clinical datasets. From a signal processing perspective, efforts to tap into this vast and rapidly growing data resource have led to an escalating need for data analysis techniques that are scalable to large data volumes and dimensionalities. Starting from an overarching research theme of developing signal processing techniques with a focus on image quantitation and multimodality information integration, we have broadened our research scope to encompass two new foci: (1) integration of imaging data with non-imaging clinical datasets, e.g., genomics, demographics, environmental information, blood work, spinal taps, etc., and (2) development of optimization algorithms that are scalable to large data volumes. Our current research directions include using deep learning for predictive analytics on biomedical data. Datasets of particular interest include the Alzheimer's Disease Neuroimaging Initiative (ADNI), the COPDGene Study, and The Cancer Genome Atlas (TCGA).

Links: ICASSP 2016 Paper, JBHI 2017 Paper

COPD Exacerbation Prediction with Deep Learning

[Left] Pipeline for input feature selection for the COPDGene cohort using Fisher scoring. [Right] Simplified visualization of a deep neural network for predicting COPD exacerbation frequency.
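
A minimal sketch of the Fisher-scoring step in the feature-selection pipeline is given below; the feature matrix, class labels, and number of retained features are synthetic placeholders for the actual COPDGene variables and exacerbation-frequency categories.

```python
# Minimal sketch of Fisher-score feature ranking for a classification target
# (e.g., binned exacerbation frequency). The data are synthetic placeholders
# for the COPDGene clinical and imaging variables.
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: between-class scatter / within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    numerator = np.zeros(X.shape[1])
    denominator = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        numerator += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        denominator += len(Xc) * Xc.var(axis=0)
    return numerator / (denominator + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))               # subjects x candidate features
y = rng.integers(0, 3, size=500)              # hypothetical exacerbation classes
scores = fisher_scores(X, y)
top_features = np.argsort(scores)[::-1][:20]  # keep the 20 highest-scoring inputs
print(top_features)
```

The highest-scoring columns would then serve as the inputs to a neural network like the one sketched in the right panel.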


Image Segmentation

Diagnostic and therapeutic tasks often rely on images with multiple “channels”. Examples include multimodal, multi-time-point (e.g., dynamic scans), and multi-parametric (e.g., different MR pulse sequences) acquisitions. Quantitative interpretation and analysis of such multi-channel images creates a need for joint segmentation tools. We have developed a joint segmentation approach based on higher order singular value decomposition (HOSVD) of the composite multigraph Laplacian tensor derived from multi-channel images. The method has been successfully applied to a wide range of clinical data types, including PET and MR images from sarcoma patients, dynamic PET images of hepatocellular carcinoma, and different types of scans from glioblastoma multiforme patients.

Link: SNMMI 2017 Abstract

HOSVD-Based Multigraph Segmentation

Schematic describing HOSVD-based multigraph cuts in four steps: 1. Computation of adjacency matrices from multi-channel images, 2. Computation of the multigraph Laplacian tensor, 3. HOSVD, and 4. k-means clustering.
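
A minimal sketch of these four steps on a toy two-channel image is shown below; the Gaussian affinity scale, the number of retained HOSVD factors (and whether the leading or trailing ones are kept), and the cluster count are illustrative assumptions rather than the settings of the published method.

```python
# Minimal sketch of HOSVD-based multigraph segmentation on a toy two-channel image.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
channels = rng.random((2, 16, 16))            # stand-in for two image channels
n_pixels = 16 * 16

# Step 1: per-channel adjacency matrices from pixel intensities (Gaussian affinity).
laplacians = []
for img in channels:
    v = img.reshape(-1, 1)
    W = np.exp(-cdist(v, v, "sqeuclidean") / (2 * 0.1**2))
    np.fill_diagonal(W, 0)
    # Step 2: graph Laplacian per channel; stacked into a tensor below.
    D = np.diag(W.sum(axis=1))
    laplacians.append(D - W)
L_tensor = np.stack(laplacians, axis=-1)      # shape (n_pixels, n_pixels, n_channels)

# Step 3: HOSVD -- the SVD of the mode-1 unfolding gives a joint embedding.
mode1 = L_tensor.reshape(n_pixels, -1)        # unfold along the first mode
U, _, _ = np.linalg.svd(mode1, full_matrices=False)
# Trailing factors span the smooth (low-frequency) graph modes; the factor
# selection used in the published method may differ.
embedding = U[:, -4:]

# Step 4: k-means clustering of the embedding yields the joint segmentation.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
segmentation = labels.reshape(16, 16)
```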


Motion-Compensated Image Reconstruction

Simultaneous PET/MRI combines the strengths of two complementary imaging modalities and is emerging as an increasingly potent tool for integrated imaging. While PET (positron emission tomography) reveals only functional or physiological information, MRI (magnetic resonance imaging) provides structural or anatomical information, generally at higher resolution. In the context of lung imaging, where PET scans are severely compromised by respiratory motion, we have developed a maximum a posteriori estimation framework that incorporates deformation fields derived from simultaneously acquired MRI data. This technique enables the generation of PET images free of motion artifacts, which leads to improved image quantitation, thereby facilitating lung cancer staging and treatment optimization.
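
A heavily simplified sketch of motion-compensated reconstruction is given below, using plain ML-EM (without a prior), a Gaussian blur as a stand-in for the PET system model, and known integer shifts as a stand-in for the MRI-derived deformation fields; the actual method uses a full maximum a posteriori formulation.

```python
# Minimal sketch of motion-compensated ML-EM reconstruction across respiratory gates.
import numpy as np
from scipy.ndimage import gaussian_filter

def warp(x, s):          # warp the reference image into a gate; adjoint = inverse shift
    return np.roll(x, s, axis=(0, 1))

def project(x):          # toy self-adjoint stand-in for the PET system model
    return gaussian_filter(x, sigma=2.0)

rng = np.random.default_rng(0)
true_img = np.zeros((64, 64))
true_img[24:40, 24:40] = 1.0

shifts = [(0, 0), (3, 0), (6, 0)]             # per-gate motion (assumed known from MRI)

# Simulated gated data: warp, project, add Poisson noise.
data = [rng.poisson(project(warp(true_img, s)) * 50.0) / 50.0 for s in shifts]

x = np.ones_like(true_img)                    # ML-EM initialization
sens = sum(warp(project(np.ones_like(x)), (-s[0], -s[1])) for s in shifts)
for _ in range(30):
    backproj = np.zeros_like(x)
    for y, s in zip(data, shifts):
        ratio = y / (project(warp(x, s)) + 1e-9)           # measured / expected, per gate
        backproj += warp(project(ratio), (-s[0], -s[1]))   # adjoint: project, then un-warp
    x *= backproj / (sens + 1e-9)             # multiplicative EM update
```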

Link: Med Phys 2015 Paper

Motion Compensated Image Reconstruction

Cover of the July 2015 issue of the journal Medical Physics featuring our work on motion-compensated image reconstruction using simultaneous PET/MRI.


Image Denoising

The high levels of statistical noise in PET images pose a challenge to accurate quantitation. This issue is particularly pronounced in the early time frames of dynamic PET scans, which are kept short to capture rapid changes in tracer uptake patterns. We developed a non-local means (NLM) denoising filter for dynamic PET images that uses spatiotemporal patches for robust similarity computation. Realistic simulations of a dynamic digital mouse phantom showed improved bias-variance performance characteristics relative to several well-known denoising approaches. Experiments in mice and humans showed clear improvements in contrast-to-noise ratio in Patlak parametric images. To further improve denoising performance along sharp edges, we used anatomical guidance to limit the spatial window for non-local similarity computation. This anatomically guided variant was tested on the BrainWeb digital phantom and on human datasets, demonstrated robustness particularly at high noise levels, and recovered sharp edges (e.g., tissue and organ boundaries).
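
A minimal sketch of non-local means denoising with spatiotemporal patch similarity is given below; the patch and search-window sizes, the filtering parameter, and the random test volume are illustrative, and the anatomically guided (ANLM) variant would additionally restrict the search window using a segmented MRI.

```python
# Minimal sketch of non-local means denoising with spatiotemporal patches for a
# dynamic (2D + time) image. Parameters and the test data are illustrative.
import numpy as np

def nlm_spatiotemporal(frames, patch=1, search=3, h=0.1):
    """frames: array (T, H, W). Each patch spans (2*patch+1)^2 pixels x all frames."""
    T, H, W = frames.shape
    pad = patch + search
    padded = np.pad(frames, ((0, 0), (pad, pad), (pad, pad)), mode="reflect")
    out = np.zeros_like(frames)
    for i in range(H):
        for j in range(W):
            ic, jc = i + pad, j + pad
            ref = padded[:, ic - patch:ic + patch + 1, jc - patch:jc + patch + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ic + di, jc + dj
                    cand = padded[:, ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                    # Similarity computed over the full spatiotemporal patch.
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h**2))
                    values.append(padded[:, ni, nj])
            weights = np.array(weights)
            out[:, i, j] = (weights[:, None] * np.array(values)).sum(0) / weights.sum()
    return out

noisy = np.random.default_rng(0).random((4, 32, 32))   # toy dynamic series
denoised = nlm_spatiotemporal(noisy)
```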

Link: PLOS One 2013 Paper

Anatomically Guided Dynamic PET Image Denoising

A transverse slice from a dynamic Flortaucipir scan of a human subject with mild cognitive impairment. The columns show the segmented MRI, noisy, NLM-denoised, and ANLM-denoised images, respectively. The rows represent three time points (2.6 min, 19 min, and 37.5 min) reflecting the evolution of activity over time. The original frames in the top and middle rows (earlier time points) are noisier than the frame in the bottom row.