Browsing by Subject "Image Processing, Computer-Assisted"
Now showing 1 - 13 of 13
Item Automated Quantification of Image-Derived Phenotypes and Integration with the Electronic Health Record at Scale in an Academic Biobank (2020-05-01) MacLean, Matthew Timothy; Hill, Joseph A.; de Lemos, James; Rader, Daniel
BACKGROUND: Radiographic images obtained during clinical care can yield a tremendous number of quantitative traits that facilitate translational research. Both liver fat and abdominal adipose mass are examples of quantitative traits that are highly relevant to human health and disease and can be quantified from medical images such as CT scans. The Penn Medicine Biobank has generated genomic and biomarker data from >50,000 participants with consent to access EHR data. Among these patients, there are more than 160,000 CT scans, representing over 19,000 patients, from which these quantitative traits could be derived. However, manual review of imaging data is time-consuming, costly, and can produce variable results. OBJECTIVE: The goal of this research was to develop a fully automated pipeline to quantify hepatic fat and abdominal adipose mass from CT scans that could be run at scale on patients within the Penn Medicine Biobank. METHODS: We developed a fully automated image filtering and analysis pipeline to analyze CT imaging data. Deep learning networks were trained to identify the presence of IV contrast, delineate the borders of the liver and spleen, and detect visceral and subcutaneous fat. To identify CT studies, we queried our biobank of 52,441 patients for all non-contrast chest, abdomen, and abdomen/pelvis scans and identified 161,748 CT scans from 19,624 patients. All scans were processed in our deep-learning pipeline in less than 96 hours using parallel processing with cloud-computing resources. From the imaging data, we extracted 12 different image-derived phenotypes that were used in association studies with the electronic health record (N > 13,000).
We also performed genetic association studies (N > 5,000) on our CT-derived measure of liver fat (LF). Liver fat was defined as the difference in attenuation between the spleen and the liver. Receiver operating characteristic (ROC) analysis of the liver fat metric was conducted using 135 patients who had both a CT scan and a liver biopsy. Finally, we performed principal component analysis to explore the interrelatedness of the image-derived metrics in the context of the phenome. RESULTS: Each component of the algorithm was individually validated for accuracy. The first deep learning network (CNN1) identified IV contrast; on a testing set of 400 scans (half with IV contrast), CNN1 classified 399 correctly. CNN2 was tasked with identifying the superior and inferior borders of the abdominal cavity and, on a testing set of 100 scans, was on average within one slice of the superior (1.01±1.11) and inferior (0.70±0.64) borders. Performance of CNN3, which was tasked with labeling the liver, spleen, and visceral and subcutaneous fat, was assessed by computing region-of-interest area overlap ratios (Dice coefficients). Dice coefficients for a validation set of randomly selected CT scans showed high values for liver (0.95±0.02, n=20), spleen (0.92±0.07, n=20), abdominal compartment (0.98±0.01, n=10), subcutaneous fat (1.00±0.00, n=10), and visceral fat (0.99±0.01, n=10). After extracting the liver fat (LF) metric for all patients, LF had a mean of -6.4 ± 9.1 Hounsfield units. Association studies with billing codes in the electronic health record yielded many known associations for LF, including with chronic liver disease, diabetes mellitus, obesity, and hypertension. A genetic association study showed significant associations with variants located in PNPLA3 and TM6SF2. ROC analysis using pathology data yielded an AUC of 0.81 with a balanced cutoff value of -6 HU for LF.
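The two quantities reported above, the LF metric (mean spleen attenuation minus mean liver attenuation, in HU) and the Dice overlap used to validate the segmentations, can be sketched as follows. This is a minimal illustration, not the study's pipeline code; the array and mask names are assumptions.

```python
import numpy as np

def liver_fat_metric(hu_volume, liver_mask, spleen_mask):
    """Liver fat (LF): difference in mean attenuation between
    spleen and liver ROIs, in Hounsfield units."""
    return float(hu_volume[spleen_mask].mean() - hu_volume[liver_mask].mean())

def dice(a, b):
    """Region-of-interest overlap ratio (Dice coefficient)
    between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

Since fat lowers liver attenuation, higher (less negative) LF values correspond to fattier livers, which is consistent with screening patients against the reported -6 HU balanced cutoff.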
CONCLUSION: This paper presents a fully automated AI-based algorithm for the quantification of traits from medical imaging. This high-throughput algorithm was applied to over 160,000 scans to quantify 12 different traits, and the results were integrated with phenome, genome, and pathology data in the Penn Medicine Biobank. This process is scalable, fast, and fault-tolerant and can power translational studies when integrated with clinical and genetic data.
Item Comprehensive Analysis of Lung Cancer Prognostic Factors (2019-07-29) Wang, Shidan; Gerber, David E.; Xie, Yang; Xiao, Guanghua; Zhan, Xiaowei; Hoshida, Yujin
Lung cancer is the leading cause of death from cancer. It is remarkably heterogeneous in histopathological features and highly variable in prognosis. Analysis of prognostic factors is anticipated to guide clinicians in treatment selection, enhance patient care, and help in understanding the biological mechanisms of tumor progression. To extend current knowledge about lung cancer prognosis, this dissertation analyzed lung cancer prognostic factors at three levels. First, at the tumor level, deep-learning-aided pathology image analysis was used to extract tumor geometry and microenvironment features, upon which an image-based survival prediction model was built and independently validated for lung adenocarcinoma. Second, at the patient level, a nomogram was built with demographic and clinical variables for patients with small cell lung cancer. The nomogram was implemented online for public use. Third, at the population level, the effects of facility type and volume on survival outcome and surgery selection for early-stage non-small cell lung cancer were analyzed.
Item Computer Vision to Characterize Protein Interactions at the Cell Membrane (2018-12-26) Vega, Anthony Raphael; Yu, Hongtao; Jaqaman, Khuloud; Schmid, Sandra; Grishin, Nick V.
Protein interactions at the cell membrane provide critical insight into how cells respond to and interact with their environment.
Technological advances in light microscopy have allowed an unprecedented perspective into these interactions; however, manual analysis of the data has become increasingly insufficient to characterize interactions as these advances progress. Computer vision tools offer a powerful approach to automating analysis, overcoming the limitations of manual analysis to optimize the discovery of novel interactions and their underlying mechanisms. In this thesis, I develop novel computer vision tools to probe the intensity and mobility properties of proteins on the cell membrane, and demonstrate how these can be used to provide insight into membrane protein interactions and organization.
Item Development of Applications and Quantitative Frameworks for Multispectral Optoacoustic Tomography (2020-12-01) O'Kelly, Devin Sean; Danuser, Gaudenz; Bouchard, Richard R.; Lewis, Matthew Allen; Fiolka, Reto; Mason, Ralph P.
The tumor microenvironment is a highly complex system, with variations through space and time determined by the interplay of normal and cancerous cells, physiological phenomena, and treatments that can dramatically change the structural or biological dynamics underlying its emergent behaviors. Quantitatively and reliably imaging the microenvironment represents an opportunity to develop diagnostic and prognostic assessments of cancer patients, enabling a fuller understanding of the tumor's evolution and response to treatment. Multispectral optoacoustic tomography (MSOT), a novel imaging modality, has the potential to reveal the spatiotemporal dynamics of oxygenation at high resolution through the use of multiplexed laser light, and has shown promise in advancing both clinical and pre-clinical research. Nevertheless, current methods of analysis often fail to yield sensible data and are prone to artifacts and quantitative errors that preclude the effective use of this imaging method for diagnostic or prognostic imaging and add difficulties to downstream analyses.
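As a generic illustration of how multiplexed wavelengths reveal oxygenation, the sketch below performs linear spectral unmixing of two chromophores (oxy- and deoxyhemoglobin) from per-wavelength signals. The function name and the toy spectra are hypothetical, not the toolbox's actual API or calibrated values.

```python
import numpy as np

def unmix_so2(signals, eps_hbo2, eps_hb):
    """Linear spectral unmixing, the generic first step toward oxygen
    saturation (sO2) from multi-wavelength optoacoustic signals: solve
    signals ~= eps_hbo2 * C_HbO2 + eps_hb * C_Hb in a least-squares
    sense, then sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    E = np.column_stack([eps_hbo2, eps_hb])          # wavelengths x 2
    c, *_ = np.linalg.lstsq(E, np.asarray(signals, float), rcond=None)
    c_hbo2, c_hb = c
    return float(c_hbo2 / (c_hbo2 + c_hb))
```

Real MSOT unmixing must additionally account for wavelength-dependent light fluence in tissue, one of the quantitative-error sources the abstract alludes to.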
In this work, I developed a battery of methods and tools that bridge the gap between MSOT's theoretical capabilities and the practical realities of its usage. These include a transparent and open-source toolbox for image reconstruction and analysis, along with its deployment to a cloud-based workflow service managed by the University of Texas Southwestern Medical Center at Dallas' BioHPC; a simple and scalable method to address spectral aliasing and improve the time resolution and signal-to-noise ratio of dynamic MSOT data; a method to extract quantitative breathing parameters from tomographic imaging data; and a model scheme of the systemic physiology that determines the response to gas-breathing challenges. These developments have laid the groundwork for more rigorous investigations using MSOT for preclinical imaging research.
Item An Imaging Approach to Examine Telomere Dynamics and Regulation of Gene Expression with Aging (2020-08-01) Zhang, Ning; Xie, Yang; Danuser, Gaudenz; Shay, Jerry W.; Jaqaman, Khuloud; Siegwart, Daniel J.
Telomeres are repetitive non-coding nucleotide sequences (TTAGGG)n capping the ends of chromosomes. Improved methods to measure the shortest (not just average) telomere lengths (TLs) are needed. Progressive telomere shortening with increasing age has been associated with shifts in gene expression through models such as the telomere position effect (TPE), which posits reduced interference of the telomere with the transcriptional activity of increasingly distant genes. A modification of the TPE model, referred to as Telomere Position Effects over Long Distance (TPE-OLD), explains why some genes 1-10 Mb from a telomere are still affected by TPE while genes closer to the telomere are not. Demonstrating the regulatory role of telomere shortening on gene expression with accurate TL measurement will therefore improve our understanding of the 3D genomic DNA landscape, including telomeres.
In this doctoral dissertation, I developed user-friendly software for automatic electrophoresis gel quantification and contributed to developing the Telomere Shortest Length Assay (TeSLA), a technique that detects telomeres from all chromosome ends, from <1 kb to 18 kb, using small amounts of input DNA. Using cells whose TLs were better characterized by TeSLA, I developed an imaging approach to systematically examine the occurrence of TPE-OLD at the single-cell level. Compared to existing methods, the pipeline allows rapid analysis of hundreds to thousands of cells, which is necessary to establish TPE-OLD as an accepted mechanism of gene expression regulation. I examined two human genes for which TPE-OLD has been described before: ISG15 (Interferon-Stimulated Gene 15) and TERT (TElomerase Reverse Transcriptase). For both genes, I found less interaction with the telomere on the same chromosome in old cells compared to young cells. Experimentally elongated telomeres in old cells rescued the level of telomere interaction for both genes. However, the dependency of the interactions on the age progression from young to old cells varied. One model for the differences between ISG15 and TERT may relate to the markedly distinct interstitial telomeric sequence arrangement in the two genes. Overall, this work provides a strong rationale for the role of telomere shortening in the regulation of gene expression.
Item Improving Cone-Beam Computed Tomography Based Adaptive Radiation Therapy with Deep Learning (2023-05-01) Liang, Xiao; Nguyen, Dan; Jiang, Steve B.; Lin, Mu-Han; Wang, Jing; Lu, Weiguo
During radiation therapy, patient anatomical changes may compromise treatment quality if the treatment plan, prepared prior to therapy, remains unchanged throughout the course. Adaptive radiation therapy (ART) has been developed to address this issue by adapting the treatment plan based on up-to-date patient anatomy.
The widespread availability of cone-beam computed tomography (CBCT) and its capability for 3D imaging of patient anatomy have made CBCT-based ART an emerging and increasingly popular technology in the field of radiation oncology. However, numerous challenges persist in the current workflow. One challenge involves generating synthetic CT (sCT) images that retain CBCT anatomy while maintaining CT image quality. Clinically used sCT is typically obtained through deformable image registration (DIR) between the pre-planning CT (pCT) and the CBCT; however, this method often inadequately preserves CBCT anatomy due to DIR errors. Another challenge stems from truncation in CBCT images, a result of the size limitations of imaging panels, which leads to inaccuracies in CBCT-based dose calculations. Additionally, auto-segmentation of CBCT is impeded by low image quality and a lack of training labels. Although addressing this issue is difficult, enhancing auto-segmentation is essential, as manual segmentation is highly time-consuming. This dissertation aims to improve CBCT-based ART by leveraging deep learning (DL) technologies. First, an unsupervised DL model is proposed to directly convert CBCT images into sCT images with reduced artifacts and more accurate Hounsfield unit values, as found in CT scans. Second, the model's generalizability is investigated and discussed, along with potential solutions to the generalizability problem. Third, two DL models are designed to extract and combine information from pCT and CBCT in order to inpaint axial and longitudinal truncations in CBCT images. Through these three studies, the predicted truncation-free sCT images hold the potential to enhance ART workflows, enabling more accurate dose calculations compared to DIR-generated sCT. The dissertation then shifts its focus to CBCT auto-segmentation.
Initially, an unsupervised DL-based DIR model is proposed to predict the deformation vector field between pCT and CBCT, enabling pCT structure propagation as the segmentation on CBCT. To further improve segmentation accuracy, a DL-based direct segmentation model assisted by DIR is proposed, which outperforms state-of-the-art DIR-based segmentation results. The contributions of this thesis are expected to enhance the accuracy and efficiency of sCT generation and CBCT auto-segmentation in the CBCT-based ART workflow.
Item Motion Estimation and Motion-Compensated Reconstruction for Four-Dimensional Cone Beam Computed Tomography (4D-CBCT) (2020-05-01) Huang, Xiaokun; Jia, Xun; Wang, Jing; Sun, Xiankai; Jiang, Steve B.
The emergence of sophisticated radiation therapy techniques such as stereotactic body radiation therapy (SBRT), characterized by a high dose in each fraction and a small number of fractions, requires higher accuracy in tumor localization. For organs influenced by respiration, respiration-induced motion is the principal cause of tumor localization uncertainty, and four-dimensional (4D) cone beam computed tomography (CBCT) has been developed to locate the tumor in each respiratory phase and better estimate its possible range of motion during treatment. However, 4D-CBCT reconstructed by conventional methods on current commercial scanners is not optimal for tumor localization, owing to low image quality caused by the insufficient number of projections in each phase after the projections are binned by respiratory phase. The specific aims of this dissertation research are to: 1) improve the accuracy of the inter-phase motion model that feeds a motion-compensated reconstruction scheme, in order to improve 4D-CBCT image quality; and 2) utilize high-quality 4D-CBCT for motion evaluation and 4D dose accumulation for lung cancer patients receiving SBRT.
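The respiratory phase binning that dilutes each phase's projections can be sketched as follows. This is an illustration only: it assumes a known, constant breathing period, whereas real scanners bin against a measured respiratory signal.

```python
import numpy as np

def bin_projections_by_phase(proj_times, period, n_phases=10):
    """Assign each projection (by acquisition time, seconds) to one of
    n_phases respiratory phase bins, assuming a constant breathing
    period. Returns a list of projection-index arrays, one per bin."""
    proj_times = np.asarray(proj_times, dtype=float)
    phase = (proj_times % period) / period                 # phase in [0, 1)
    labels = np.minimum((phase * n_phases).astype(int), n_phases - 1)
    return [np.where(labels == k)[0] for k in range(n_phases)]
```

With a typical scan of a few hundred projections split across ten bins, each phase image is reconstructed from only a few dozen projections, which is the undersampling problem motivating the motion-compensated reconstruction.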
Motion-compensated reconstruction suppresses motion and improves image quality by deforming the images of the other phases to the reference phase using an inter-phase motion model, so that the reference-phase image can be reconstructed from projections of all phases. It is therefore essential to improve the accuracy of the inter-phase motion model. Two methods, one based on biomechanical modeling and one based on convolutional neural networks (CNNs), were applied to fine-tune the inner-lung motion model. Biomechanical modeling is a physics-driven method that introduces tissue elasticity properties to simulate the movement of the lung and solves for the deformation by finite-element analysis. It requires a boundary condition, namely the deformation vector fields (DVFs) estimated from a 2D-3D registration. In the CNN-based methods, boundary DVFs are likewise used as input to U-Net-based architectures to predict the inner-lung motion. Both approaches improve the accuracy of the DVFs and thereby the quality of the reconstructed 4D-CBCT images. After obtaining high-quality 4D-CBCT images, we created a tool that uses them to evaluate motion variation and to calculate the accumulated 4D dose, in order to monitor and evaluate the delivered dose for lung SBRT patients.
Item [News] (1972-03-31) Fenley, Bob; Weeks, John
Item Novel Detection Methods for Chemical Exchange and Application to Breast Cancer Imaging (2018-07-25) Zhang, Shu; Madhuranthakam, Ananth; Vinogradov, Elena; Lenkinski, Robert; Sherry, A. Dean; Pedrosa, Ivan
Chemical exchange saturation transfer (CEST) is a novel contrast mechanism based on chemical exchange processes between protons in water and in solutes. CEST can indirectly detect solutes at low concentrations that are not observable with conventional MRI, and can report quantitative environmental parameters such as pH. Many promising applications of CEST are therefore being explored.
The aim of my projects is to develop new CEST imaging techniques and apply them in human studies at 3 T. The first project is the development of a fast, quantitative imaging method based on the balanced steady-state free precession sequence as an alternative way to detect chemical exchange (bSSFPX). The feasibility of bSSFPX for chemical exchange detection was demonstrated both theoretically, through Bloch-McConnell equation simulations, and experimentally, through phantom studies. Analytical models for bSSFPX were developed for quantitative measurements of T1ρ and exchange rate. In a first in vivo experiment, bSSFPX was applied in the human brain to detect chemical exchange, possibly from fast-exchanging metabolites with resonance frequencies close to water, which would be challenging at 3 T for standard CEST imaging methods. As a new CEST data acquisition method, bSSFPX holds high promise for fast, quantitative, 3D CEST imaging. The second project is the development of a CEST-Dixon sequence for fat-free CEST imaging and its application to breast cancer imaging. The influence of non-exchanging fat on CEST imaging was studied by simulation, in phantoms, and in vivo at different fat fractions and echo times. The CEST-Dixon method was shown to robustly eliminate lipid contamination in breast CEST imaging. In the breast cancer study, higher CEST effects were observed in the more aggressive cancer group than in the less aggressive cancer, benign, and normal groups in all three frequency ranges explored (hydroxyl, amine, and amide), while no significant differences were observed among the less aggressive cancer, benign, and normal groups. In addition, a significant correlation between MTRasym and Ki-67 was observed in the cancer groups.
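The MTRasym metric reported above is conventionally defined as the asymmetry of the Z-spectrum about the water resonance: MTRasym(Δω) = S(−Δω) − S(+Δω) for a spectrum normalized to the unsaturated signal. A minimal sketch follows; the interpolation-based lookup is an implementation choice, not taken from the dissertation.

```python
import numpy as np

def mtr_asym(offsets_ppm, z_spectrum, delta_ppm):
    """MTR asymmetry at offset delta_ppm from a Z-spectrum sampled at
    offsets_ppm (ascending, relative to water at 0 ppm) and normalized
    to the unsaturated signal S0."""
    offsets_ppm = np.asarray(offsets_ppm, dtype=float)
    z = np.asarray(z_spectrum, dtype=float)
    s_neg = np.interp(-delta_ppm, offsets_ppm, z)   # reference side
    s_pos = np.interp(delta_ppm, offsets_ppm, z)    # CEST pool side
    return float(s_neg - s_pos)
```

A saturation dip that appears only on the positive-offset side (e.g. an amide pool near +3.5 ppm) survives the subtraction, while symmetric direct water saturation cancels.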
While the study is preliminary, the results indicate that the CEST-Dixon method may differentiate between more aggressive and less aggressive breast cancer.
Item An Optical Flow Based Methodology for Visualizing Dynamic Sucellular [sic] Organiztion [sic] Demonstrated Through Profilin and Rho GTPase Microdomains (December 2021) Jiang, Xuexia; Doubrovinski, Konstantin; Jaqaman, Khuloud; Danuser, Gaudenz; Rajaram, Satwik
Live cell imaging has enabled the collection of movies of subcellular protein dynamics at submicron resolution. Statistical time series analysis can greatly expand our understanding of subcellular interactions in minimally perturbed systems. This was previously achieved for the leading edge of migrating cells in select cases; importantly, no strategy existed to simultaneously analyze every subcellular location. Building on existing optical-flow-based non-linear image registration, we developed an approach to remap a migrating cell to a common cell footprint while preserving the characteristics of our signal of interest at the spatial granularity necessary for understanding micron-scale biological interactions. This tool enabled us to discover that Profilin fluctuations are organized in living cells, an organization found to depend on cell polarization and actin-binding capability. Expanding on this ability to query all subcellular locations, we developed a feature set and feature projection strategy to map molecular biosensor movies of Rho GTPase signaling into micron-scale regions of internally consistent signaling dynamics, or "microdomains".
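The remapping step can be illustrated with a minimal backward-warping sketch: given a per-pixel displacement field (as produced by optical flow), each output pixel samples the input image at its displaced location. Nearest-neighbor sampling is a deliberate simplification of the non-linear registration actually used.

```python
import numpy as np

def warp_by_flow(image, flow_y, flow_x):
    """Remap a 2D image by a per-pixel displacement field using
    nearest-neighbor backward sampling: output pixel (y, x) takes its
    value from image[y + flow_y, x + flow_x], clipped to bounds."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy + flow_y).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + flow_x).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

Applying such a warp frame by frame, with flow fields that map each frame onto a common cell footprint, yields time series at fixed "subcellular" coordinates that can then be analyzed statistically.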
Microdomains of GTPases match literature descriptions of signaling organization and, in an optogenetic study, were found to almost precisely match the perturbation footprint.
Item Robust Fat and Fluid Suppression in MR Imaging: Technical Developments and Advanced Clinical Applications (2018-07-25) Wang, Xinzeng; Vinogradov, Elena; Madhuranthakam, Ananth; Lenkinski, Robert; Pedrosa, Ivan; Lewis, Matthew Allen
Fat and fluid suppression methods are widely used in MR imaging to improve lesion conspicuity, reduce artifacts, and increase quantification accuracy. However, these methods often suffer from low signal-to-noise ratio (SNR), incomplete fat suppression, or long scan times in some challenging clinical applications, such as MR neurography, abdominal imaging, whole-body imaging, and diffusion-weighted imaging. The research in this thesis aims to improve and develop MR sequences and reconstruction methods for robust fat and fluid suppression in several advanced clinical applications. The first topic of this thesis focuses on improving fat suppression. A short-tau inversion recovery (STIR) sequence based on a frequency-offset-corrected inversion (FOCI) pulse was developed to improve fat suppression in brachial plexus imaging, where large B1 and B0 inhomogeneities are often encountered. However, like conventional STIR, it suffers from low SNR. A multi-echo Dixon-based variable-flip-angle TSE sequence was therefore implemented for robust fat suppression with improved SNR and blood suppression, increasing the visualization of the brachial plexus. The multi-echo Dixon method was later extended to a single-shot TSE (SShTSE) sequence to improve fat suppression in breath-hold abdominal imaging, where the commonly used fat suppression method (SPAIR) suffers from incomplete fat suppression due to large B0 inhomogeneities. The second topic is simultaneous fat and fluid suppression.
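The Dixon principle underlying these sequences can be sketched in its idealized two-point form: with in-phase (IP = W + F) and opposed-phase (OP = W − F) images, water and fat follow by sum and difference. This is a textbook simplification; the multi-echo implementations described here additionally estimate and correct the B0 field map.

```python
import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    """Idealized two-point Dixon separation: given in-phase
    (water + fat) and opposed-phase (water - fat) images, recover
    the water-only and fat-only images by sum and difference."""
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat
```

The water-only image is what provides "fat suppression" in Dixon-based sequences, without relying on spectrally selective pulses that fail under large B0 inhomogeneity.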
A dual-echo 3D TSE sequence combined with multi-echo Dixon was developed to generate simultaneously fat- and fluid-suppressed images of the cervical spine in a single acquisition. It can also simultaneously generate the standard T2-weighted image, the fluid-suppressed image, and a myelogram, significantly reducing the total scan time compared to current clinical protocols. A fast (7 min) whole-body MR imaging sequence was then developed for metastatic cancer detection by combining the simultaneous fat and fluid suppression method with the SShTSE acquisition. The images generated by the proposed sequence showed good lesion conspicuity without EPI-associated geometric distortions. Finally, the multi-echo Dixon method was implemented with a TSE-based diffusion-weighted imaging sequence to generate distortion-free diffusion images with improved fat suppression and lesion conspicuity in areas with large B0 inhomogeneities, such as the cervical spinal cord.
Item Structural Basis for Coordination in Dimeric Kinesin (2009-09-04) Metlagel, Zoltan; Kikkawa, Masahide
Kinesin-1 (conventional kinesin) is a protein motor that carries organelle and vesicle cargo along its microtubule track. The two catalytic heads of kinesin-1 are linked to function as a highly processive "molecular walker" that can take hundreds of steps before falling off the track. A key requirement for processivity is that the nucleotide cycles of the heads are coordinated to prevent simultaneous release of both heads from the track. The structural basis for this coordination has not yet been established. Here, we show the conformational changes involved in nucleotide-dependent switching of the kinesin core in the functional context of the microtubule. The observed conformational differences between two key nucleotide states comprise the structural groundwork for future studies of how the nucleotide cycles are coordinated between the heads.
Further, a software suite, Ruby-Helix, was developed to facilitate helical image analysis and to implement a new algorithm for the analysis of helical objects with a seam. Ruby-Helix incorporates several new techniques for conventional helical analysis and automates many of the repetitive steps involved, thereby greatly increasing the throughput of this method.
Item [UT News] (1986-04-28) Cason, Vicki