EBookClubs

Read Books & Download eBooks Full Online

Book Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-ring PET Insert System

Download or read book Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-ring PET Insert System written by Daniel Brian Keesing. This book was released in 2009 with total page 159 pages. Available in PDF, EPUB and Kindle. Book excerpt: X-ray computed tomography (CT) and positron emission tomography (PET) have become widely used imaging modalities for screening, diagnosis, and image-guided treatment planning. Along with the increased clinical use are increased demands for high image quality with reduced ionizing radiation dose to the patient. Despite their significantly high computational cost, statistical iterative reconstruction algorithms are known to reconstruct high-quality images from noisy tomographic datasets. The overall goal of this work is to design statistical reconstruction software for clinical x-ray CT scanners, and for a novel PET system that utilizes high-resolution detectors within the field of view of a whole-body PET scanner. The complex choices involved in the development and implementation of image reconstruction algorithms are fundamentally linked to the ways in which the data are acquired, and they require detailed knowledge of the various sources of signal degradation. Each of the imaging modalities investigated in this work presents its own set of challenges. However, by utilizing an underlying statistical model for the measured data, we are able to use a common framework for this class of tomographic problems. We first present the details of a new fully 3D regularized statistical reconstruction algorithm for multislice helical CT. To reduce the computation time, the algorithm was carefully parallelized by identifying and taking advantage of the specific symmetry found in helical CT. Some basic image quality measures were evaluated using measured phantom and clinical datasets, and they indicate that our algorithm achieves performance comparable or superior to the fast analytical methods considered in this work. Next, we present our fully 3D reconstruction efforts for a high-resolution half-ring PET insert. We found that this unusual geometry requires extensive redevelopment of existing reconstruction methods in PET. We redesigned the major components of the data modeling process and incorporated them into our reconstruction algorithms. The algorithms were tested using simulated Monte Carlo data and phantom data acquired by a PET insert prototype system. Overall, we have developed new, computationally efficient methods to perform fully 3D statistical reconstructions on clinically sized datasets.
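
For readers unfamiliar with the penalized (regularized) weighted least-squares formulation that statistical CT algorithms of this kind build on, the following minimal Python sketch shows one regularized gradient update. The system matrix A, weights w, difference penalty D, and strength beta are toy placeholders chosen for illustration; this is not the author's implementation.

    import numpy as np

    # Toy sizes: n image pixels, m measured rays (stand-ins for a real CT geometry).
    rng = np.random.default_rng(0)
    n, m = 64, 96
    A = rng.random((m, n)) / n                       # placeholder system matrix
    x_true = np.zeros(n)
    x_true[20:40] = 1.0                              # simple 1D "object"
    y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements
    w = np.ones(m)                                   # statistical weights (~ photon counts)

    # First-difference roughness penalty R(x) = 0.5 * ||D x||^2.
    D = (np.eye(n) - np.eye(n, k=1))[:-1]
    beta = 0.1                                       # regularization strength (illustrative)

    x = np.zeros(n)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 4.0 * beta)   # safe gradient step size
    for _ in range(500):
        grad = A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))
        x -= step * grad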

Book 3D Image Reconstruction for CT and PET

Download or read book 3D Image Reconstruction for CT and PET written by Daniele Panetta and published by CRC Press. This book was released on 2020-10-11 with total page 97 pages. Available in PDF, EPUB and Kindle. Book excerpt: This is a practical guide to tomographic image reconstruction from projection data, with a strong focus on Computed Tomography (CT) and Positron Emission Tomography (PET). Classic methods such as FBP, ART, SIRT, MLEM and OSEM are presented in modern and compact notation, with the main goal of guiding the reader from an understanding of the mathematical background, along a fast route, to real practice and computer implementation of the algorithms. Accompanied by example data sets, ready-to-run Python toolsets and scripts, and an overview of the latest research in the field, this guide will be invaluable for graduate students and early-career researchers and scientists in medical physics and biomedical engineering who are beginners in the field of image reconstruction.
  • A top-down guide from theory to practical implementation of PET and CT reconstruction methods, without sacrificing the rigor of the mathematical background
  • Accompanied by Python source code snippets, suggested exercises, and supplementary ready-to-run examples for readers to download from the CRC Press website
  • Ideal for those taking their first steps in the real practice of image reconstruction, using a modern scientific programming language and toolsets

Daniele Panetta is a researcher at the Institute of Clinical Physiology of the Italian National Research Council (CNR-IFC) in Pisa. He earned his MSc degree in Physics in 2004 and his specialisation diploma in Health Physics in 2008, both at the University of Pisa. From 2005 to 2007, he worked at the Department of Physics "E. Fermi" of the University of Pisa in the field of tomographic image reconstruction for small-animal micro-CT instrumentation. His current research at CNR-IFC aims at the identification of novel PET/CT imaging biomarkers for cardiovascular and metabolic diseases. In the field of micro-CT imaging, his interests cover applications of three-dimensional morphometry of biosamples and scaffolds for regenerative medicine. He acts as a reviewer for scientific journals in the field of Medical Imaging: Physics in Medicine and Biology, Medical Physics, Physica Medica, and others. Since 2012, he has been an adjunct professor of Medical Physics at the University of Pisa.

Niccolò Camarlinghi is a researcher at the University of Pisa. He obtained his MSc in Physics in 2007 and his PhD in Applied Physics in 2012. He has been working in the field of Medical Physics since 2008, and his main research fields are medical image analysis and image reconstruction. He is involved in the development of clinical and pre-clinical PET scanners and of hadron-therapy monitoring scanners. At the time of writing this book he was a lecturer at the University of Pisa, teaching life-sciences and medical physics laboratory courses. He regularly acts as a referee for the following journals: Medical Physics, Physics in Medicine and Biology, Transactions on Medical Imaging, Computers in Biology and Medicine, Physica Medica, EURASIP Journal on Image and Video Processing, Journal of Biomedical and Health Informatics.
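
To make the classic algorithms listed above concrete, here is a minimal MLEM iteration in Python (the language used by the book's toolsets). It assumes a generic nonnegative system matrix A and measured counts y; it is a sketch, not code taken from the book or its companion website.

    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Plain MLEM update: x <- x / (A^T 1) * A^T ( y / (A x) )."""
        x = np.ones(A.shape[1])                 # strictly positive starting image
        sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, eps)  # measured / estimated projections
            x *= (A.T @ ratio) / np.maximum(sens, eps)
        return x

    # OSEM applies the same update to ordered subsets of the rows of A and y,
    # cycling through the subsets within each iteration for faster convergence.

    # Toy usage with a random nonnegative system matrix and Poisson data.
    rng = np.random.default_rng(0)
    A = rng.random((200, 100))
    x_true = rng.random(100)
    y = rng.poisson(A @ x_true)
    x_hat = mlem(A, y)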

Book Statistical Modeling and Path-based Iterative Reconstruction for X-ray Computed Tomography

Download or read book Statistical Modeling and Path-based Iterative Reconstruction for X-ray Computed Tomography written by Meng Wu. This book was released in 2015. Available in PDF, EPUB and Kindle. Book excerpt: X-ray computed tomography (CT) and tomosynthesis systems have proven to be indispensable components in medical diagnosis and treatment. My research develops advanced image reconstruction and processing algorithms for CT and tomosynthesis systems. Streak artifacts caused by metal objects such as dental fillings, surgical instruments, and orthopedic hardware may obscure important diagnostic information in CT images. To improve the image quality, we proposed to complete the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. We developed two statistical image reconstruction methods, dual-energy penalized weighted least squares and polychromatic maximum likelihood, for combining kV and selective MV data. The Cramér-Rao lower bound for compound Poisson data was studied to revise the statistical model and minimize radiation dose. Numerical simulations and phantom studies have shown that the combined kV/MV imaging systems enable a better delineation of structures of interest in CT images for patients with metal objects. The x-ray tube of a CT system produces a wide x-ray spectrum. Polychromatic statistical CT reconstruction is desired for more accurate quantitative measurement of the chemical composition and density of the tissue. Polychromatic statistical reconstruction algorithms usually have very high computational demands due to complicated optimization frameworks and the large number of spectrum bins. We proposed a spectrum information compression method and a new optimization framework to significantly reduce the computational cost of reconstruction. The new algorithm applies to multi-material beam-hardening correction, adaptive exposure control, and spectral imaging. Model-based iterative reconstruction (MBIR) techniques have demonstrated many advantages in X-ray CT image reconstruction. The MBIR approach is often formulated as a convex optimization problem comprising a data-fitting function and a penalty function. The tuning parameter that regulates the strength of the penalty function is critical for achieving good reconstruction results but is difficult to choose. We have developed two path-seeking algorithms that are capable of generating a path of MBIR images with different strengths of the penalty function. The errors of the proposed path-seeking algorithms are reasonably small throughout the entire reconstruction path. With the efficient path-seeking algorithm, we suggested a path-based iterative reconstruction (PBIR) approach to obtain complete information from the scanned data and the reconstruction model. Additionally, we have developed a convolution-based blur-and-add model for digital tomosynthesis systems that can be used in efficient system analysis, task-dependent optimization, and filter design. We also proposed a computationally practical algorithm to simulate and subtract out-of-plane artifacts in tomosynthesis images using patient-specific prior CT volumes.
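
The path-of-solutions idea can be illustrated, in heavily simplified form, by warm-starting a penalized weighted least-squares solve across a sweep of penalty strengths. The quadratic penalty and plain gradient descent below are assumptions chosen for brevity; this is a generic sketch of a regularization path, not the dissertation's path-seeking algorithms.

    import numpy as np

    def pwls_solve(A, y, w, beta, x0, n_iter=200):
        """Gradient descent on 0.5*||Ax - y||_w^2 + 0.5*beta*||x||^2."""
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 * w.max() + beta)
        x = x0.copy()
        for _ in range(n_iter):
            x -= step * (A.T @ (w * (A @ x - y)) + beta * x)
        return x

    def reconstruction_path(A, y, w, betas):
        """Sweep the penalty strength, warm-starting each solve from the last."""
        x = np.zeros(A.shape[1])
        path = []
        for beta in sorted(betas, reverse=True):   # strong -> weak regularization
            x = pwls_solve(A, y, w, beta, x0=x)
            path.append((beta, x.copy()))
        return path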

Book Programs for Evaluation of 3D PET Reconstruction Algorithms

Download or read book Programs for Evaluation of 3D PET Reconstruction Algorithms. This book was released in 1994. Available in PDF, EPUB and Kindle. Book excerpt: Evaluation of a reconstruction algorithm should be done using a sample set that is large enough to provide a statistically significant result. One way to carry out such an evaluation is to use a set of computer-simulated phantoms that take parameter variability into account. This technical report describes in detail programs that generate a set of 3D phantoms and projection data, reconstruct images, evaluate them, and compare reconstruction methods. The main characteristics are:
  • Phantom and projection data generation: phantoms with many (69) ellipsoidal features ranging from small (4 mm) to large (40 mm); phantoms drawn as random samples from a statistically described ensemble of 3D images resembling those to which PET would be applied in a medical situation (features with random size, orientation, and activity); features placed inside spheres that provide background values for important clinical tasks such as detectability; hot, cold, and normal spot feature types; and emulation of a 3D PET scanner for projection data generation, with detector field-of-view (FOV) blurring and a realistic 3D PET noise model.
  • Reconstruction algorithms: the Algebraic Reconstruction Technique with blobs (ARTblob) or voxels (ARTvox) as basis functions, and EM-ML with blobs (EMblob) or voxels (EMvox).
  • Evaluation tasks: training figure of merit (FOM), structural accuracy, hot-spot detectability, and cold-spot detectability.
  • Statistical comparison using a paired t-test.
Justifications for the models used are described in the paper, and an application evaluating some reconstruction methods is reported.
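
As a rough illustration of the kind of tooling the report describes, the sketch below places random ellipsoidal features in a 3D volume and runs a paired t-test on per-phantom figures of merit from two hypothetical methods. All sizes, feature values, and FOM samples here are made up for illustration and do not reproduce the report's programs.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def add_ellipsoid(vol, center, radii, value):
        """Add one axis-aligned ellipsoidal feature to a 3D volume."""
        z, y, x = np.indices(vol.shape)
        d = (((x - center[0]) / radii[0]) ** 2 +
             ((y - center[1]) / radii[1]) ** 2 +
             ((z - center[2]) / radii[2]) ** 2)
        vol[d <= 1.0] += value
        return vol

    phantom = np.zeros((64, 64, 64))
    for _ in range(10):                          # a few random hot/cold/normal spots
        c = rng.integers(12, 52, size=3)
        r = rng.uniform(2, 10, size=3)           # "small" to "large" features
        phantom = add_ellipsoid(phantom, c, r, value=rng.choice([-0.5, 0.5, 1.0]))

    # Paired t-test on per-phantom figures of merit from two reconstruction methods
    # (placeholder FOM samples standing in for real evaluation output).
    fom_method_a = rng.normal(0.80, 0.05, size=30)
    fom_method_b = rng.normal(0.75, 0.05, size=30)
    t, p = stats.ttest_rel(fom_method_a, fom_method_b)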

Book Statistical Reconstruction Algorithms for Polyenergetic X-ray Computed Tomography

Download or read book Statistical Reconstruction Algorithms for Polyenergetic X-ray Computed Tomography written by Idris A. Elbakri. This book was released in 2003 with total page 358 pages. Available in PDF, EPUB and Kindle.

Book Convergent Algorithms for Statistical Image Reconstruction in Emission Tomography

Download or read book Convergent Algorithms for Statistical Image Reconstruction in Emission Tomography written by Sangtae Ahn. This book was released in 2004 with total page 378 pages. Available in PDF, EPUB and Kindle.

Book Residual Correction Algorithms for Statistical Image Reconstruction in Positron Emission Tomography

Download or read book Residual Correction Algorithms for Statistical Image Reconstruction in Positron Emission Tomography written by Lin Fu. This book was released in 2010. Available in PDF, EPUB and Kindle. Book excerpt: Positron emission tomography (PET) is a radionuclide imaging modality that plays important roles in visualizing, targeting, and quantifying functional processes in vivo. High-resolution and quantitative PET images are reconstructed by solving large-scale inverse problems with iterative methods that incorporate accurate physics and noise modeling of the imaging process. The computation demands of PET image reconstruction are rapidly increasing as higher-resolution detectors, larger imaging fields of view, and dynamic or adaptive data acquisition modes are adopted by modern PET scanners. This growth in computational demand is outpacing even Moore's law, which describes the exponential growth in the number of transistors placed on an integrated circuit. In this project a residual correction mechanism is introduced into PET image reconstruction to create computationally efficient yet accurate tomographic reconstruction algorithms. By using residual correction, reconstruction methods are able to adopt a simplified physical model for fast computation while retaining the accuracy of the final solution. Residual correction can accelerate existing image reconstruction packages, and it allows iterative reconstruction with more accurate physical models that would otherwise be impractical due to their high computation cost. Two illustrative applications of the residual correction approach are provided: image reconstruction with an object-dependent Monte Carlo-based physics model, and image reconstruction using an ultra-fast GPU-accelerated simplified geometric model.
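
One generic reading of residual correction, sketched below under the assumption of linear forward models, alternates cheap inner iterations with an outer data correction computed from a more accurate model. The operators A_fast and A_accurate are placeholders; this schematic is an interpretation for illustration, not the dissertation's algorithm.

    import numpy as np

    def residual_corrected_recon(A_fast, A_accurate, y, n_outer=5, n_inner=20):
        """Schematic residual-correction loop (linear, least-squares flavor).

        Inner iterations use only the cheap model A_fast; each outer pass
        replaces the right-hand side with data corrected by the mismatch of
        the accurate model, so a fixed point satisfies A_accurate @ x ~= y.
        """
        x = np.zeros(A_fast.shape[1])
        step = 1.0 / np.linalg.norm(A_fast, 2) ** 2
        for _ in range(n_outer):
            y_corr = y - (A_accurate @ x - A_fast @ x)   # residual correction
            for _ in range(n_inner):
                x -= step * (A_fast.T @ (A_fast @ x - y_corr))
        return x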

Book The Theory and Practice of 3D PET

Download or read book The Theory and Practice of 3D PET written by B. Bendriem. This book was released on 2014-01-15 with total page 188 pages. Available in PDF, EPUB and Kindle.

Book 3D Image Reconstruction for PET by Multi-slice Rebinning and Axial Filtering

Download or read book 3D Image Reconstruction for PET by Multi-slice Rebinning and Axial Filtering. This book was released in 1991 with total page 9 pages. Available in PDF, EPUB and Kindle. Book excerpt: Two different approaches are used at present to reconstruct images from 3D coincidence data in PET. We refer to these approaches as the single-slice rebinning approach and the fully-3D approach. The single-slice rebinning approach involves geometrical approximations, but it requires the least possible amount of computation. Fully-3D reconstruction algorithms, both iterative and non-iterative, do not make such approximations, but require much more computation. Multi-slice rebinning with axial filtering is a new approach that attempts to achieve the geometrical accuracy of the fully-3D approach with the simplicity and modest amount of computation of the single-slice rebinning approach. The first step (multi-slice rebinning) rebins coincidence lines into a stack of 2D sinograms, where multiple sinograms are incremented for each oblique coincidence line. This operation is followed by an axial filtering operation, either before or after slice-by-slice reconstruction, to reduce the blurring in the axial direction. Tests with simulated and experimental data indicate that the new method has better geometrical accuracy than single-slice rebinning, at the cost of only a modest increase in computation.
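
The two rebinning schemes can be sketched as follows for an idealized cylindrical geometry. The slice indexing, uniform sharing of each oblique event across spanned slices, and the array sizes are assumptions for illustration; the paper's actual rebinning weights and axial filter are not reproduced here.

    import numpy as np

    n_slices, n_angles, n_bins = 47, 96, 128       # placeholder sinogram dimensions
    ssrb = np.zeros((n_slices, n_angles, n_bins))  # single-slice rebinned stack
    msrb = np.zeros_like(ssrb)                     # multi-slice rebinned stack

    def rebin_event(z1, z2, angle, bin_, value=1.0):
        """z1, z2: axial slice indices of the two detectors of an oblique LOR."""
        # Single-slice rebinning: put the event in the slice at the axial midpoint.
        ssrb[int(round((z1 + z2) / 2)), angle, bin_] += value
        # Multi-slice rebinning: increment every slice crossed by the oblique line,
        # sharing the event uniformly (axial filtering later reduces the axial blur).
        lo, hi = min(z1, z2), max(z1, z2)
        spanned = np.arange(lo, hi + 1)
        msrb[spanned, angle, bin_] += value / len(spanned)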

Book NUFFT- and NUIFFT-Based Reconstruction Algorithms for Non-Cartesian Imaging on Small Animals

Download or read book NUFFT- and NUIFFT-Based Reconstruction Algorithms for Non-Cartesian Imaging on Small Animals written by Jiayu Song and published by ProQuest. This book was released in 2007 with total page 304 pages. Available in PDF, EPUB and Kindle. Book excerpt: The growing interest in small-animal models of human disease requires the development of imaging techniques dedicated to small animals. This dissertation is motivated by the need for efficient non-Cartesian image reconstruction methods that address the dual challenges of signal-to-noise ratio (SNR) and spatio-temporal resolution. Multi-dimensional nonuniform fast Fourier transform (NUFFT) and nonuniform inverse fast Fourier transform (NUIFFT) algorithms have been developed to efficiently evaluate the nonuniform DFT (NUDFT) and its inverse. Several reconstruction algorithms were proposed for different applications based on NUFFT and NUIFFT. NUFFT reconstruction was developed for 3D cardiac mouse imaging with a radial trajectory and 3D multi-spectral brain imaging with a rosette trajectory. An NUFFT-based iterative density compensation function (DCF) was formulated for the rosette trajectory, which has self-intersections. NUIFFT reconstruction was also developed and applied to radially acquired 2D cardiac mouse data. A sparseness prior was further integrated into this NUIFFT algorithm to tackle the low SNR and demanding temporal resolution requirements of dynamic contrast-enhanced (DCE) pulmonary perfusion imaging of the rat lung. The sparseness-prior iterative reconstruction method was later extended to reconstruct 3D cone-beam micro-CT data acquired with retrospective gating. In each of these applications, the proposed NUFFT- and NUIFFT-based reconstructions outperformed conventional reconstruction methods in image quality and/or reconstruction speed.
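
The operation that the NUFFT accelerates can be written directly, if slowly, as a density-compensated adjoint nonuniform DFT. The brute-force sketch below uses placeholder trajectory coordinates and weights and is only meant to make the operation concrete; it is not the dissertation's NUFFT implementation.

    import numpy as np

    def adjoint_nudft_2d(kx, ky, data, weights, nx, ny):
        """Brute-force density-compensated adjoint NUDFT (what the NUFFT approximates).

        kx, ky: non-Cartesian k-space sample locations in cycles/pixel;
        data: complex k-space samples; weights: density compensation (DCF) values.
        """
        x = np.arange(nx) - nx / 2
        y = np.arange(ny) - ny / 2
        xx, yy = np.meshgrid(x, y, indexing="ij")
        img = np.zeros((nx, ny), dtype=complex)
        for k in range(len(data)):
            img += weights[k] * data[k] * np.exp(2j * np.pi * (kx[k] * xx + ky[k] * yy))
        return img

A sparseness prior would then be incorporated by using this operator (or its fast NUFFT counterpart) inside an iterative, regularized objective rather than as a one-shot gridding reconstruction.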

Book Statistical Image Reconstruction for Quantitative Computed Tomography

Download or read book Statistical Image Reconstruction for Quantitative Computed Tomography written by Joshua D. Evans. This book was released in 2011. Available in PDF, EPUB and Kindle. Book excerpt: Statistical iterative reconstruction (SIR) algorithms for x-ray computed tomography (CT) have the potential to reconstruct images with less noise and systematic error than the conventional filtered backprojection (FBP) algorithm. More accurate reconstruction algorithms are important for reducing imaging dose and for a wide range of quantitative CT applications. The work presented herein investigates some potential advantages of one such statistically motivated algorithm called Alternating Minimization (AM). A simulation study is used to compare the tradeoff between noise and resolution in images reconstructed with the AM and FBP algorithms. The AM algorithm is employed with an edge-preserving penalty function, which is shown to result in images with contrast-dependent resolution. The AM algorithm always reconstructed images with less image noise than the FBP algorithm. Compared to previous studies in the literature, this is the first work to clearly illustrate that the reported noise advantage when using edge-preserving penalty functions can be highly dependent on the contrast of the object used for quantifying resolution. A polyenergetic version of the AM algorithm, which incorporates knowledge of the scanner's x-ray spectrum, is then commissioned from data acquired on a commercially available CT scanner. Homogeneous cylinders are used to assess the absolute accuracy of the polyenergetic AM algorithm and to compare systematic errors to conventional FBP reconstruction. Methods to estimate the x-ray spectrum, model the bowtie filter, and measure scattered radiation are outlined; together they support AM reconstruction to within 0.5% of the expected ground truth. The polyenergetic AM algorithm reconstructs the cylinders with less systematic error than FBP, in terms of better image uniformity and less object-size dependence. Finally, the accuracy of a post-processing dual-energy CT (pDECT) method to non-invasively measure a material's photon cross-section information is investigated. Data are acquired on a commercial scanner for materials of known composition. Since the pDECT method has been shown to be highly sensitive to reconstructed image errors, both FBP and polyenergetic AM reconstruction are employed. Linear attenuation coefficients are estimated with residual errors of around 1% for energies of 30 keV to 1 MeV, with errors rising to 3%-6% at lower energies down to 10 keV. In the ideal phantom geometry used here, the main advantage of AM reconstruction is less random cross-section uncertainty due to its improved noise performance.
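
The edge-preserving behavior described above is commonly obtained with a Huber-type penalty, quadratic for small neighbor differences and linear for large ones, which is one reason resolution becomes contrast dependent. The sketch below shows such a penalty and its gradient as a generic example with an arbitrary delta; it is not the AM algorithm or the penalty used in the dissertation.

    import numpy as np

    def huber_penalty(x, delta):
        """Edge-preserving Huber penalty on horizontal neighbor differences."""
        d = np.diff(x, axis=-1)
        quad = np.abs(d) <= delta
        val = np.where(quad, 0.5 * d ** 2, delta * (np.abs(d) - 0.5 * delta))
        return val.sum()

    def huber_grad(x, delta):
        """Gradient of the Huber penalty with respect to the image x."""
        d = np.diff(x, axis=-1)
        g = np.clip(d, -delta, delta)        # derivative of the Huber function
        grad = np.zeros_like(x)
        grad[..., :-1] -= g                  # each difference d_i = x_{i+1} - x_i
        grad[..., 1:] += g
        return grad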

Book Applications of Statistical Modeling in Iterative CT Image Reconstruction

Download or read book Applications of Statistical Modeling in Iterative CT Image Reconstruction written by David Simon Perlmutter. This book was released in 2015 with total page 43 pages. Available in PDF, EPUB and Kindle. Book excerpt: Traditionally, x-ray CT images are produced by an algorithm called filtered back projection, or FBP. FBP is an analytical solution to the idealized CT image reconstruction problem, the inverse problem of turning raw x-ray measurements into a full 3-dimensional (3D) image, and is derived assuming a continuous set of noiseless measurements. However, real CT data are noisy and biased, especially so if the scans are performed at low x-ray dose, and advanced statistical estimation techniques have been shown to produce higher-quality images than FBP. This work presents two applications of statistical modeling in CT image reconstruction. The first application discusses the statistics of CT data noise and compares the performance of several common models for estimation in a simplified 1D experiment. The second application concerns modeling temporal CT data, in which the measured data typically contain redundancies. It proposes an estimation method that exploits these redundancies to address two key challenges in CT image reconstruction: reducing noise and lowering computation time. We demonstrate this noise reduction analytically and through experimental simulations. In addition, a third study validates the statistical models used in this work by comparing them to measured data from a clinical CT scanner. Overall, these methods contribute to the methodology of statistical CT image reconstruction and help enable ultra-low-dose x-ray CT imaging.
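
The noise statistics discussed above can be explored with a small simulation: transmission counts drawn from a Poisson law, converted to post-log data, and compared against the usual inverse-count variance approximation. The incident flux and line integral below are assumed values used only to illustrate the standard model, not the thesis's experiments.

    import numpy as np

    rng = np.random.default_rng(0)
    I0 = 1e4                 # incident photons per detector element (assumed)
    line_integral = 2.0      # true attenuation line integral (assumed)

    # Transmission counts are approximately Poisson distributed.
    counts = rng.poisson(I0 * np.exp(-line_integral), size=100_000)
    counts = np.maximum(counts, 1)               # guard against log(0)

    post_log = -np.log(counts / I0)              # post-log data used by FBP / PWLS
    empirical_var = post_log.var()
    # Delta-method approximation: Var(-log y) ~= 1 / E[y] = exp(l) / I0,
    # which is why PWLS weights are usually taken proportional to the counts.
    approx_var = np.exp(line_integral) / I0

    print(f"empirical {empirical_var:.2e}  vs  approximation {approx_var:.2e}")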

Book Image Reconstruction Algorithms for Volume-imaging PET Scanners (microform)

Download or read book Image Reconstruction Algorithms for Volume-imaging PET Scanners (microform) written by Paul E. Kinahan and published by University Microfilms International (Ann Arbor, Mich.). This book was released in 1994 with total page 472 pages. Available in PDF, EPUB and Kindle.

Book Medical Image Reconstruction

Download or read book Medical Image Reconstruction written by Gengsheng Zeng and published by Springer Science & Business Media. This book was released on 2010-12-28 with total page 204 pages. Available in PDF, EPUB and Kindle. Book excerpt: "Medical Image Reconstruction: A Conceptual Tutorial" introduces the classical and modern image reconstruction technologies, such as two-dimensional (2D) parallel-beam and fan-beam imaging, three-dimensional (3D) parallel ray, parallel plane, and cone-beam imaging. This book presents both analytical and iterative methods of these technologies and their applications in X-ray CT (computed tomography), SPECT (single photon emission computed tomography), PET (positron emission tomography), and MRI (magnetic resonance imaging). Contemporary research results in exact region-of-interest (ROI) reconstruction with truncated projections, Katsevich's cone-beam filtered backprojection algorithm, and reconstruction with highly undersampled data with l0-minimization are also included. This book is written for engineers and researchers in the field of biomedical engineering specializing in medical imaging and image processing with image reconstruction. Gengsheng Lawrence Zeng is an expert in the development of medical image reconstruction algorithms and is a professor at the Department of Radiology, University of Utah, Salt Lake City, Utah, USA.
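
A compact way to try the 2D parallel-beam pipeline described here is with scikit-image's radon and iradon routines; this is a standard demonstration (assuming a recent scikit-image that accepts the filter_name argument), not code from the book.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.5)               # 2D test object
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

    sinogram = radon(image, theta=theta)                      # parallel-beam projection
    fbp = iradon(sinogram, theta=theta, filter_name="ramp")   # filtered backprojection

    print("RMS error:", np.sqrt(np.mean((fbp - image) ** 2)))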

Book Computed Tomography for Technologists  Exam Review

Download or read book Computed Tomography for Technologists Exam Review written by Lois Romans and published by Lippincott Williams & Wilkins. This book was released on 2018-07-23 with total page 446 pages. Available in PDF, EPUB and Kindle. Book excerpt: Publisher's Note: Products purchased from 3rd Party sellers are not guaranteed by the Publisher for quality, authenticity, or access to any online entitlements included with the product. Computed Tomography for Technologists: Exam Review, Second Edition, is intended to be used as a companion to Computed Tomography for Technologists: A Comprehensive Text, Second Edition, and as a review of computed tomography on its own. This is an excellent resource for students preparing to take the advanced level certification exam offered by The American Registry of Radiologic Technologists (ARRT).

Book Nuclear Oncology

    Book Details:
  • Author : H. William Strauss
  • Publisher : Springer Science & Business Media
  • Release : 2012-11-27
  • ISBN : 0387488944
  • Pages : 873 pages

Download or read book Nuclear Oncology written by H. William Strauss and published by Springer Science & Business Media. This book was released on 2012-11-27 with total page 873 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book provides the reader with a comprehensive understanding of both the basic principles and the clinical applications of nuclear oncology imaging techniques. The authors have assembled a distinguished group of leaders in the field who provide valuable insight on the subject. The book also includes major chapters on the cancer patient and the pathophysiology of abnormal tissue, the evaluation of co-existing disease, and the diagnosis and therapy of specific tumors using functional imaging studies. Each chapter is heavily illustrated to assist the reader in understanding the clinical role of nuclear oncology in cancer disease therapy and management.