Sunday, May 24, 2009

Definition of Tomography

Tomography: The process of generating a tomogram, a two-dimensional image of a slice or section through a three-dimensional object. Tomography achieves this by moving an x-ray source in one direction while the x-ray film moves in the opposite direction during the exposure, so that structures in the focal plane remain sharp while structures in other planes appear blurred. The tomogram is the picture; the tomograph is the apparatus; and tomography is the process.
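
To make the geometry concrete, the following minimal Python sketch (with arbitrary, illustrative distances) projects a point onto a film that moves in proportion to the source: a point in the chosen focal plane lands on the same spot of the film throughout the sweep, while an off-plane point smears.

# Minimal sketch of linear tomography geometry (illustrative numbers only).
# The source at height H moves one way; the film at height 0 moves the other
# way in proportion. A point at height z projects from source (xs, H) onto
# the film at x = xs + (xp - xs) * H / (H - z); only points at the chosen
# focal height keep a fixed position relative to the moving film.

H = 100.0         # source height above the film (arbitrary units)
z_focal = 20.0    # height of the focal plane to be kept sharp
ratio = z_focal / (H - z_focal)   # film speed relative to source speed

def film_coordinate(xp, z, xs):
    """Image position of a point (xp, z), measured in the moving film's frame."""
    x_proj = xs + (xp - xs) * H / (H - z)   # projection in lab coordinates
    film_shift = -ratio * xs                # film moves opposite to the source
    return x_proj - film_shift

for z in (20.0, 40.0):   # one point in the focal plane, one above it
    positions = [round(film_coordinate(10.0, z, xs), 2) for xs in (-30.0, 0.0, 30.0)]
    blur = max(positions) - min(positions)
    print(f"point at z = {z:4.1f}: image positions {positions}, blur {blur:.2f}")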

See also: Computed tomography (CT); Computed tomography colonography; Computerized axial tomography scan (CAT scan); Electron beam computerized tomography (EBCT); Positron emission tomography (PET scan).

What is Bioinformatics and Computational Biology?

Bioinformatics and computational biology involve the use of techniques including applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems usually on the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. 

Introduction

The terms bioinformatics and computational biology are often used interchangeably. However, bioinformatics more properly refers to the creation and advancement of algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data. Computational biology, on the other hand, refers to hypothesis-driven investigation of a specific biological problem using computers, carried out with experimental or simulated data, with the primary goal of discovery and the advancement of biological knowledge. Put more simply, bioinformatics is concerned with the information while computational biology is concerned with the hypotheses. A similar distinction is made by the National Institutes of Health in their working definitions of Bioinformatics and Computational Biology, where it is further emphasized that there is a tight coupling of developments and knowledge between the more hypothesis-driven research in computational biology and technique-driven research in bioinformatics. Bioinformatics is also often specified as an applied subfield of the more general discipline of biomedical informatics.
A common thread in projects in bioinformatics and computational biology is the use of mathematical tools to extract useful information from data produced by high-throughput biological techniques such as genome sequencing. A representative problem in bioinformatics is the assembly of high-quality genome sequences from fragmentary "shotgun" DNA sequencing. Other common problems include the study of gene regulation to perform expression profiling using data from microarrays or mass spectrometry.
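
As a toy illustration of the assembly idea (not of any production assembler), the following Python sketch greedily merges invented reads by their longest exact suffix-prefix overlap; real assemblers must additionally cope with sequencing errors, repeats, and enormous data volumes.

# Toy greedy assembler: repeatedly merge the pair of reads with the longest
# exact suffix/prefix overlap. Real assemblers handle errors, repeats, and
# far larger data; this only illustrates the basic idea.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that equals a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads, min_len=3):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)                     # (overlap length, i, j)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:                              # no overlaps left to merge
            break
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Invented fragments of the sequence "ATGCGTACGTTAGC"
fragments = ["ATGCGTACG", "GTACGTTAG", "CGTTAGC"]
print(greedy_assemble(fragments))   # expected: ['ATGCGTACGTTAGC']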

DNA sequencing

The term DNA sequencing encompasses biochemical methods for determining the order of the nucleotide bases, adenine, guanine, cytosine, and thymine, in a DNA oligonucleotide. The sequence of DNA constitutes the heritable genetic information in nuclei, plasmids, mitochondria, and chloroplasts that forms the basis for the developmental programs of all living organisms. Determining the DNA sequence is therefore useful in basic research studying fundamental biological processes, as well as in applied fields such as diagnostic or forensic research. The advent of DNA sequencing has significantly accelerated biological research and discovery. The rapid speed of sequencing attainable with modern DNA sequencing technology has been instrumental in the large-scale sequencing of the human genome, in the Human Genome Project. Related projects, often by scientific collaboration across continents, have generated the complete DNA sequences of many animal, plant, and microbial genomes.

Regulation of gene expression

Regulation of gene expression (or gene regulation) refers to the cellular control of the amount and timing of appearance of the functional product of a gene. Although a functional gene product may be an RNA or a protein, the majority of known mechanisms regulate the expression of protein-coding genes. Any step of the gene's expression may be modulated, from DNA-RNA transcription to the post-translational modification of a protein. Gene regulation gives the cell control over its structure and function, and is the basis for cellular differentiation, morphogenesis, and the versatility and adaptability of any organism.

Sequence analysis

Since the bacteriophage Φ-X174 was sequenced in 1977, the DNA sequences of hundreds of organisms have been decoded and stored in databases. The information is analyzed to determine genes that encode polypeptides, as well as regulatory sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Today, computer programs are used to search the genomes of thousands of organisms, containing billions of nucleotides. These programs compensate for mutations (exchanged, deleted, or inserted bases) in the DNA sequence, in order to identify sequences that are related, but not identical.
A variant of this sequence alignment is used in the sequencing process itself. The so-called shotgun sequencing technique (which was used, for example, by The Institute for Genomic Research to sequence the first bacterial genome, Haemophilus influenzae) does not give a sequential list of nucleotides, but instead the sequences of thousands of small DNA fragments (each about 600-800 nucleotides long). The ends of these fragments overlap and, when aligned correctly, make up the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. In the case of the Human Genome Project, it took several months of CPU time (on a circa-2000 vintage DEC Alpha computer) to assemble the fragments. Shotgun sequencing is the method of choice for virtually all genomes sequenced today, and genome assembly algorithms are a critical area of bioinformatics research.
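
The pairwise comparison described above can be made concrete with a minimal Needleman-Wunsch global alignment in Python; the scoring values are arbitrary, and real tools rely on far more sophisticated statistics and heuristics.

# Minimal Needleman-Wunsch global alignment with arbitrary scores.
MATCH, MISMATCH, GAP = 1, -1, -2

def global_align(a, b):
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * GAP
    for j in range(1, m + 1):
        score[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = MATCH if a[i - 1] == b[j - 1] else MISMATCH
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + GAP,
                              score[i][j - 1] + GAP)
    # trace back to recover one optimal alignment
    top, bottom, i, j = [], [], n, m
    while i > 0 or j > 0:
        sub = MATCH if i > 0 and j > 0 and a[i - 1] == b[j - 1] else MISMATCH
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            top.append(a[i - 1]); bottom.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + GAP:
            top.append(a[i - 1]); bottom.append('-'); i -= 1
        else:
            top.append('-'); bottom.append(b[j - 1]); j -= 1
    return ''.join(reversed(top)), ''.join(reversed(bottom)), score[n][m]

aligned_a, aligned_b, total = global_align("GATTACA", "GCATGCA")
print(aligned_a)
print(aligned_b)
print("score:", total)
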
Another aspect of bioinformatics in sequence analysis is the automatic search for genes and regulatory sequences within a genome. Not all of the nucleotides within a genome are genes. Within the genome of higher organisms, large parts of the DNA do not serve any obvious purpose. This so-called junk DNA may, however, contain unrecognized functional elements. Bioinformatics helps to bridge the gap between genome and proteome projects--for example, in the use of DNA sequences for protein identification.

Sequence profiling tool

A sequence profiling tool in bioinformatics is a type of software that presents information related to a genetic sequence, gene name, or keyword input. Such tools generally take a query such as a DNA, RNA, or protein sequence or keyword and search one or more databases for information related to that sequence. Summaries and aggregate results are provided in a standardized format describing the information that would otherwise have required visits to many smaller sites or direct literature searches to compile. Many sequence profiling tools are software portals or gateways that simplify the process of finding information about a query in the large and growing number of bioinformatics databases. Access to these tools is either web-based or through locally downloadable executables.
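
As a minimal sketch of the "gateway" idea, the following Python snippet queries one such public resource, NCBI's E-utilities, for a FASTA record; it assumes network access, and the accession shown is only an example. A full profiling tool would aggregate results like this from many databases and present them in a unified report.

# Sketch of a minimal "profiling" query against NCBI's public E-utilities API.
# Requires network access; the accession number is just an example.
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_fasta(accession, db="nucleotide"):
    """Fetch a sequence record in FASTA format from GenBank."""
    params = urllib.parse.urlencode({
        "db": db, "id": accession, "rettype": "fasta", "retmode": "text",
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as response:
        return response.read().decode()

print(fetch_fasta("NM_000518"))   # example GenBank accession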

Computational evolutionary biology

Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists in several key ways; it has enabled researchers to:
trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
more recently, compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, lateral gene transfer, and the prediction of factors important in bacterial speciation,
build complex computational models of populations to predict the outcome of the system over time (a minimal simulation sketch appears at the end of this section),
track and share information on an increasingly large number of species and organisms.
Future work endeavours to reconstruct the now more complex tree of life.
The area of research within computer science that uses genetic algorithms is sometimes confused with computational evolutionary biology, but the two areas are not necessarily related.
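
The simulation sketch referred to in the population-modelling item above is given here: a minimal Wright-Fisher model of genetic drift in Python, with arbitrary parameters and none of the selection, mutation, or migration terms a real study would add.

# Minimal Wright-Fisher simulation of genetic drift for one biallelic locus.
# Parameters are arbitrary; real population-genetic models add selection,
# mutation, migration, and population structure.
import random

def wright_fisher(pop_size=100, freq=0.5, generations=200, seed=1):
    random.seed(seed)
    trajectory = [freq]
    for _ in range(generations):
        # each of the pop_size gene copies in the next generation is drawn
        # independently according to the current allele frequency
        copies = sum(1 for _ in range(pop_size) if random.random() < freq)
        freq = copies / pop_size
        trajectory.append(freq)
        if freq in (0.0, 1.0):          # allele lost or fixed
            break
    return trajectory

traj = wright_fisher()
print(f"final frequency after {len(traj) - 1} generations: {traj[-1]:.2f}")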

Gene prediction

Gene finding typically refers to the area of computational biology that is concerned with algorithmically identifying stretches of sequence, usually genomic DNA, that are biologically functional. This especially includes protein-coding genes, but may also include other functional elements such as RNA genes and regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced.
In its earliest days, "gene finding" was based on painstaking experimentation on living cells and organisms. Statistical analysis of the rates of homologous recombination of several different genes could determine their order on a certain chromosome, and information from many such experiments could be combined to create a genetic map specifying the rough location of known genes relative to each other. Today, with comprehensive genome sequence and powerful computational resources at the disposal of the research community, gene finding has been redefined as a largely computational problem.
Determining that a sequence is functional should be distinguished from determining the function of the gene or its product. The latter still demands in vivo experimentation through gene knockout and other assays, although frontiers of bioinformatics research are making it increasingly possible to predict the function of a gene based on its sequence alone. 
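
At its simplest, purely sequence-based end, gene finding reduces to scanning for open reading frames; the following minimal Python sketch (on an invented sequence) illustrates that step only, whereas real gene finders combine probabilistic models, splice-site signals, homology, and expression evidence.

# Minimal open reading frame (ORF) scan on the forward strand.
START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=3):
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i+3] == START:
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j+3] in STOPS:
                        if (j - i) // 3 >= min_codons:
                            orfs.append((i, j + 3, seq[i:j+3]))
                        i = j          # resume scanning after this ORF
                        break
            i += 3
    return orfs

example = "CCATGGCTGAATAAGGATGTTTCCCTAG"   # invented sequence
for start, end, orf in find_orfs(example):
    print(start, end, orf)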

Measuring biodiversity

Biodiversity of an ecosystem might be defined as the total genomic complement of a particular environment, from all of the species present, whether it is a biofilm in an abandoned mine, a drop of sea water, a scoop of soil, or the entire biosphere of the planet Earth. Databases are used to collect the species names, descriptions, distributions, genetic information, status and size of populations, habitat needs, and how each organism interacts with other species. Specialized software programs are used to find, visualize, and analyze the information, and most importantly, to communicate it to other people. Computer simulations model such things as population dynamics, or calculate the cumulative genetic health of a breeding pool (in agriculture) or endangered population (in conservation). One exciting potential of this field is that entire DNA sequences, or genomes, of endangered species can be preserved, allowing the results of Nature's genetic experiment to be remembered in silico, and possibly reused in the future, even if that species is eventually lost.
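
One simple and widely used summary computed from such data is the Shannon diversity index; a minimal Python sketch with invented species counts:

# Shannon diversity index H' = -sum(p_i * ln p_i) over species proportions p_i.
# Species counts below are invented for illustration.
import math

def shannon_index(counts):
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

sample = {"species A": 40, "species B": 30, "species C": 20, "species D": 10}
print(f"Shannon index H' = {shannon_index(sample.values()):.3f}")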

Biodiversity

Biodiversity is the variation of life forms within a given ecosystem, biome or for the entire Earth. Biodiversity is often used as a measure of the health of biological systems.
Biodiversity found on Earth today consists of many millions of distinct biological species, the product of four billion years of evolution.

Biosphere

The biosphere is the part of the Earth, including air, land, surface rocks, and water, within which life occurs, and which biotic processes in turn alter or transform. From the broadest biophysiological point of view, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, hydrosphere, and atmosphere. This biosphere is postulated to have evolved, beginning through a process of biogenesis or biopoiesis, at least some 3.5 billion years ago.
Biomass accounts for about 3.7 kg carbon per square meter of the earth's surface averaged over land and sea, making a total of about 1900 gigatonnes of carbon. 
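
These two figures are mutually consistent, as a quick back-of-the-envelope check shows; the only added assumption is Earth's total surface area of roughly 510 million square kilometres.

# Quick consistency check of the biomass figures quoted above.
# The only added assumption is Earth's total surface area (~5.1e8 km^2).
earth_surface_m2 = 5.1e8 * 1e6        # km^2 -> m^2
carbon_per_m2_kg = 3.7                # kg of carbon per square metre (from text)

total_carbon_kg = carbon_per_m2_kg * earth_surface_m2
total_carbon_gt = total_carbon_kg / 1e12   # 1 gigatonne = 1e12 kg
print(f"total biospheric carbon ~ {total_carbon_gt:.0f} Gt")   # ~1900 Gt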

Serial Analysis of Gene Expression

Serial analysis of gene expression (SAGE) is a technique used by molecular biologists to produce a snapshot of the messenger RNA population in a sample of interest. The original technique was developed by Dr. Victor Velculescu at the Oncology Center of Johns Hopkins University and published in the journal Science in 1995. Several variants have been developed since, most notably a more robust version, LongSAGE (developed by Dr. Saurabh Saha and colleagues at Johns Hopkins University), RL-SAGE (developed by Dr. Malali Gowda and colleagues at The Ohio State University; Gowda et al. 2004), and the most recent, SuperSAGE (by Hideo Matsumura and colleagues), which enables very precise annotation of existing genes and discovery of new genes within genomes because of an increased tag length of 25-27 bp.
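
Computationally, SAGE reduces to tag extraction and counting. The following Python sketch illustrates the idea on invented sequences, using CATG (the recognition site of the NlaIII anchoring enzyme commonly used in SAGE); real pipelines go on to map each tag back to a transcript database.

# Toy SAGE-style tag counting: extract the short tag immediately downstream of
# each CATG anchoring-enzyme site and tally tag frequencies.
from collections import Counter

ANCHOR = "CATG"
TAG_LEN = 10      # roughly 10 bp for classic SAGE; LongSAGE and SuperSAGE use longer tags

def extract_tags(sequence, tag_len=TAG_LEN):
    tags = []
    pos = sequence.find(ANCHOR)
    while pos != -1:
        tag = sequence[pos + len(ANCHOR): pos + len(ANCHOR) + tag_len]
        if len(tag) == tag_len:
            tags.append(tag)
        pos = sequence.find(ANCHOR, pos + 1)
    return tags

reads = [
    "TTCATGGCTAACGTTAGCCG",     # invented fragments
    "AACATGGCTAACGTTAAGTT",
    "GGCATGTTTACGGATCAAAC",
]
counts = Counter(tag for read in reads for tag in extract_tags(read))
print(counts.most_common())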

DNA microarray


A DNA microarray (also commonly known as a gene or genome chip, DNA chip, or gene array) is a collection of microscopic DNA spots, commonly representing single genes, arrayed on a solid surface by covalent attachment to a chemical matrix. DNA arrays differ from other types of microarray only in that they either measure DNA or use DNA as part of their detection system. Qualitative or quantitative measurements with DNA microarrays utilize the selective nature of DNA-DNA or DNA-RNA hybridization under high-stringency conditions and fluorophore-based detection. DNA arrays are commonly used for expression profiling, i.e., monitoring expression levels of thousands of genes simultaneously, or for comparative genomic hybridization.
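
For two-colour expression profiling, the quantity typically reported per spot is the log2 ratio of the two fluorophore intensities; the following minimal Python sketch uses invented intensities and omits the background correction and normalization a real analysis requires.

# Per-spot log2 ratios for a two-colour expression microarray.
import math

spots = {                      # gene: (Cy5 test intensity, Cy3 reference intensity)
    "geneA": (5200.0, 1300.0),
    "geneB": (700.0, 1600.0),
    "geneC": (1500.0, 1450.0),
}

for gene, (test, reference) in spots.items():
    log_ratio = math.log2(test / reference)
    call = "up" if log_ratio > 1 else "down" if log_ratio < -1 else "unchanged"
    print(f"{gene}: log2 ratio = {log_ratio:+.2f} ({call})")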

Expressed sequence tag

An expressed sequence tag or EST is a short sub-sequence of a transcribed, spliced nucleotide sequence (either protein-coding or not). ESTs may be used to identify gene transcripts, and are instrumental in gene discovery and gene sequence determination. The identification of ESTs has proceeded rapidly, with approximately 43 million ESTs now available in public databases (e.g. GenBank 6/2007, all species).
An EST is produced by one-shot sequencing of a cloned mRNA (i.e. sequencing several hundred base pairs from an end of a cDNA clone taken from a cDNA library). The resulting sequence is a relatively low quality fragment whose length is limited by current technology to approximately 500 to 800 nucleotides. Because these clones consist of DNA that is complementary to mRNA, the ESTs represent portions of expressed genes. They may be present in the database as either cDNA/mRNA sequence or as the reverse complement of the mRNA, the template strand.
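
Converting between the mRNA-sense sequence and its reverse complement (the template strand mentioned above) is a routine operation; a minimal Python sketch with an invented sequence:

# Reverse complement of a DNA sequence (e.g. to flip an EST stored as the
# template strand back to mRNA sense). The example sequence is invented.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

est = "ATGGCCATTGTAATGGGCCGC"
print(reverse_complement(est))   # GCGGCCCATTACAATGGCCAT
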
ESTs can be mapped to specific chromosome locations using physical mapping techniques, such as radiation hybrid mapping or FISH. Alternatively, if the genome of the organism that originated the EST has been sequenced one can align the EST sequence to that genome.
The current understanding of the human set of genes (2006) includes the existence of thousands of genes based solely on EST evidence. In this respect, ESTs become a tool to refine the predicted transcripts for those genes, which leads to prediction of their protein products, and eventually of their function. Moreover, the situation in which those ESTs are obtained (tissue, organ, disease state - e.g. cancer) gives information on the conditions in which the corresponding gene is acting. ESTs contain enough information to permit the design of precise probes for DNA microarrays that then can be used to determine the gene expression.
Some authors use the term "EST" to describe genes for which little or no further information exists besides the tag.

Epithelium


In biology and medicine, epithelium is a tissue composed of layers of cells that line the cavities and surfaces of structures throughout the body. It is also the type of tissue of which many glands are formed. Epithelium lines both the outside (skin) and the inside cavities and lumen of bodies. The outermost layer of our skin is composed of dead stratified squamous, keratinized epithelial cells.
Mucous membranes, such as those lining the inside of the mouth, the oesophagus, and part of the rectum, are composed of nonkeratinized stratified squamous epithelium. Other body cavities that open to the outside are lined by simple squamous or columnar epithelial cells.
Other epithelial cells line the insides of the lungs, the gastrointestinal tract, the reproductive and urinary tracts, and make up the exocrine and endocrine glands. The outer surface of the cornea is covered with fast-growing, easily-regenerated epithelial cells.
Functions of epithelial cells include secretion, absorption, protection, transcellular transport, sensation detection, and selective permeability.
Endothelium (the inner lining of blood vessels, the heart, and lymphatic vessels) is a specialized form of epithelium. Another type, mesothelium, forms the walls of the pericardium, pleurae, and peritoneum.

256+ slice CT

At RSNA 2007, Philips announced a 256-slice scanner, while Toshiba announced a "dynamic volume" scanner based on 320 slices. The majority of published data regarding both the technical and clinical aspects of these systems has related to the prototype unit made by Toshiba Medical Systems. A recent three-month beta installation of a Toshiba system at Johns Hopkins tested the clinical capabilities of this technology. The technology currently remains in a development phase but has demonstrated the potential to significantly reduce radiation exposure by eliminating the requirement for a helical examination in both cardiac CT angiography and whole-brain perfusion studies for the evaluation of stroke.

Dual-source CT


Siemens introduced a CT model with two X-ray tubes and two arrays of 64-slice detectors at the 2005 Radiological Society of North America (RSNA) medical meeting. Dual sources increase the temporal resolution by reducing the rotation angle required to acquire a complete image, thus permitting cardiac studies without the use of heart-rate-lowering medication, as well as permitting imaging of the heart in systole. The use of two x-ray units also makes possible dual-energy imaging, which allows an estimate of the average atomic number in a voxel, as well as the total attenuation. This permits automatic differentiation of calcium (e.g. in bone, or diseased arteries) from iodine (in contrast medium) or titanium (in stents), which might otherwise be impossible to differentiate. It may also improve the characterization of tissues, allowing better tumor differentiation.
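
The temporal-resolution gain can be quantified roughly: a single-source scanner needs about half a rotation of data per image, while two sources mounted about 90 degrees apart need only about a quarter rotation. The rotation time used below is an assumed, typical value rather than a specification.

# Rough temporal-resolution comparison, single-source vs dual-source CT.
# Rotation time is an assumed, typical value, not a specification.
rotation_time_s = 0.33

single_source = rotation_time_s / 2     # ~half rotation of data per image
dual_source = rotation_time_s / 4       # two sources ~90 degrees apart: ~quarter rotation

print(f"single source : ~{single_source * 1000:.0f} ms per image")
print(f"dual source   : ~{dual_source * 1000:.0f} ms per image")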

Multislice CT

Multislice CT scanners are similar in concept to helical or spiral CT, but have more than one detector ring. Multislice scanning began in the mid-1990s with a two-ring solid-state model, the CT TWIN, designed and built by Elscint (Haifa) with a one-second rotation (1993); other manufacturers followed. Models with 4, 8, 16, 32, 40, and 64 detector rings, with increasing rotation speeds, were presented later. Current models (2007) achieve up to 3 rotations per second and isotropic resolution of 0.35 mm voxels, with a z-axis scan speed of up to 18 cm/s. This resolution exceeds that of high-resolution CT techniques with single-slice scanners, yet it is practical to scan adjacent, or overlapping, slices; however, image noise and radiation exposure significantly limit the use of such resolutions.
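
The quoted z-axis speed follows from simple geometry: detector coverage per rotation times rotations per second times helical pitch. The detector-row width and pitch below are assumed, representative values used only to show that the arithmetic is plausible.

# How a z-axis scan speed of ~18 cm/s can arise on a 64-slice scanner.
# Detector-row width and pitch are assumed, representative values.
n_rows = 64
row_width_mm = 0.625            # detector row width (assumed)
rotations_per_s = 3             # as quoted for current (2007) models
pitch = 1.5                     # table feed per rotation / detector coverage (assumed)

coverage_mm = n_rows * row_width_mm                  # 40 mm of anatomy per rotation
speed_mm_s = coverage_mm * rotations_per_s * pitch   # 180 mm/s
print(f"z-axis scan speed ~ {speed_mm_s / 10:.0f} cm/s")
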
The major benefit of multi-slice CT is the increased speed of volume coverage. This allows large volumes to be scanned at the optimal time following intravenous contrast administration; this has particularly benefitted CT angiography techniques - which rely heavily on precise timing to ensure good demonstration of arteries.
Increasing computer power permits greater postprocessing capability on workstations. Bone suppression, real-time volume rendering with natural visualization of internal organs and structures, and automated volume reconstruction have changed the way diagnosis is performed on CT studies; these models have become true volumetric scanners. The ability of multi-slice scanners to achieve isotropic resolution even on routine studies means that maximum image quality is not restricted to images in the axial plane, and studies can be freely viewed in any desired plane.

Helical cone beam computed tomography

Helical (or spiral) cone beam computed tomography is a type of three-dimensional computed tomography (CT) in which the source (usually of x-rays) describes a helical trajectory relative to the object while a two-dimensional array of detectors measures the transmitted radiation on part of a cone of rays emanating from the source. Willi Kalender, who is credited with the invention, prefers the term spiral scan CT, arguing that spiral is synonymous with helical: for example, as used in 'spiral staircase'.
In practical helical cone beam x-ray CT machines, the source and array of detectors are mounted on a rotating gantry while the patient is moved axially at a uniform rate. Earlier x-ray CT scanners imaged one slice at a time by rotating the source and a one-dimensional array of detectors while the patient remained static. The helical scan method reduces the x-ray dose to the patient required for a given resolution while scanning more quickly. This comes, however, at the cost of greater mathematical complexity in the reconstruction of the image from the measurements.
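
In the patient's frame of reference, the source path is simply a helix; the following minimal Python sketch parametrizes it with arbitrary values for the gantry radius and table feed.

# Parametric helical source trajectory in the patient's frame of reference.
# Gantry radius and table feed are arbitrary illustrative values.
import math

radius_mm = 600.0             # distance of source from rotation axis (arbitrary)
feed_per_rotation_mm = 40.0   # table advance per gantry rotation (arbitrary)

def source_position(angle_rad):
    """(x, y, z) of the x-ray source after rotating through angle_rad."""
    x = radius_mm * math.cos(angle_rad)
    y = radius_mm * math.sin(angle_rad)
    z = feed_per_rotation_mm * angle_rad / (2 * math.pi)
    return x, y, z

for turns in (0.0, 0.5, 1.0, 2.0):
    x, y, z = source_position(2 * math.pi * turns)
    print(f"{turns:3.1f} turns: x={x:7.1f} y={y:7.1f} z={z:6.1f} mm")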

Electron beam tomography

Electron beam tomography (EBT) is a specific form of computed axial tomography (CAT or CT) in which the X-ray tube is not mechanically spun in order to rotate the source of X-ray photons. This different design was explicitly developed to better image heart structures, which never stop moving and perform a complete cycle of movement with each heartbeat.
As in conventional CT technology, the X-ray source still rotates around the circle in space containing the object to be imaged tomographically, but the X-ray tube is much larger than the imaging circle, and the electron beam current within the vacuum tube is swept electronically along a circular (actually partial-circle) path and focused on a stationary tungsten anode target ring.

What is Radiography?


Radiography is the use of X-rays to view unseen or hard-to-image objects. The use of non-ionizing radiation (visible light and ultraviolet light) to view objects should be considered a normal "optical" method (e.g., light microscopy). The modification of an object through the use of ionizing radiation is not radiography; depending on the nature of the object and the intended outcome, it may be radiotherapy, food irradiation, or radiation processing.

Dynamic volume CT

During the Radiological Society of North America (RSNA) meeting in 2007, Toshiba Medical Systems introduced the world's first dynamic volume CT system, Aquilion ONE. This 320-slice CT scanner, with its 16 cm of anatomical coverage, can scan entire organs such as the heart and brain in a single rotation, thereby also enabling dynamic processes such as blood flow and function to be observed.
Whereas patients exhibiting symptoms of a heart attack or stroke have until now normally had to undergo a variety of time-consuming examinations before a precise diagnosis could be made, dynamic volume CT can reduce this to a single examination lasting a matter of minutes. Functional imaging can thus be performed rapidly, with the lowest possible radiation and contrast dose combined with very high precision.
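
The 16 cm coverage figure follows directly from the detector geometry, assuming the commonly cited 0.5 mm detector-row width:

# Anatomical coverage of a 320-row scanner, assuming 0.5 mm detector rows.
n_rows = 320
row_width_mm = 0.5          # assumed detector-row width
print(f"coverage per rotation: {n_rows * row_width_mm / 10:.0f} cm")   # 16 cm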

What is Tomosynthesis?

Digital tomosynthesis combines digital image capture and processing with simple tube/detector motion as used in conventional radiographic tomography - although there are some similarities to CT, it is a separate technique. In CT, the source/detector makes a complete 360 degree rotation about the subject obtaining a complete set of data from which images may be reconstructed. In digital tomosynthesis, only a small rotation angle (e.g. 40 degrees) with a small number of discrete exposures (e.g. 10) are used. This incomplete set of data can be digitally processed to yield images similar to conventional tomography with a limited depth of field. However, because the image processing is digital, a series of slices at different depths and with different thicknesses can be reconstructed from the same acquisition, saving both time and radiation exposure.
Because the data acquired is incomplete, tomosynthesis is unable to offer the extremely narrow slice widths that CT offers. However, higher resolution detectors can be used, allowing very-high in-plane resolution, even if the Z-axis resolution is poor. The primary interest in tomosynthesis is in breast imaging, as an extension to mammography, where it may offer better detection rates, with little extra increase in radiation exposure.
Reconstruction algorithms for tomosynthesis are significantly different from those of conventional CT, because the conventional filtered back projection algorithm requires a complete set of data. Iterative algorithms based upon expectation maximization are most commonly used, but are extremely computationally intensive. Some manufacturers have produced practical systems using commercial GPUs to perform the reconstruction.
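
To show the shape of such an iterative algorithm, the following Python/NumPy sketch runs a maximum-likelihood expectation-maximization (MLEM) loop on a tiny invented system matrix; production tomosynthesis reconstruction uses geometry-derived projectors on vastly larger problems, often on GPUs.

# Minimal MLEM (maximum-likelihood expectation maximization) iteration on a
# tiny invented system: A maps a 3-pixel "object" to 4 measured projections.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],      # invented projection (system) matrix
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])  # invented "true" object
y = A @ x_true                      # noiseless measurements

x = np.ones(3)                      # flat initial estimate
sensitivity = A.sum(axis=0)         # A^T applied to a vector of ones
for _ in range(200):
    forward = A @ x                 # current estimate's projections
    ratio = y / np.maximum(forward, 1e-12)
    x = x * (A.T @ ratio) / sensitivity   # multiplicative MLEM update

print("reconstructed:", np.round(x, 3))   # should approach [2, 1, 3]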

Molecular modelling

Molecular modelling is a collective term that refers to theoretical methods and computational techniques to model or mimic the behaviour of molecules. The techniques are used in the fields of computational chemistry, computational biology and materials science for studying molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling techniques is the atomistic level description of the molecular systems; the lowest level of information is individual atoms (or a small group of atoms). This is in contrast to quantum chemistry (also known as electronic structure calculations) where electrons are considered explicitly. The benefit of molecular modelling is that it reduces the complexity of the system, allowing many more particles (atoms) to be considered during simulations. 
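
At the atomistic level described above, interactions between non-bonded atoms are typically encoded in simple empirical potentials; the following Python sketch evaluates the classic Lennard-Jones pair potential with generic, illustrative parameters rather than any particular force field.

# Lennard-Jones pair potential, the classic non-bonded term in molecular
# mechanics force fields: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
# Parameters below are generic illustrative values, not a real force field.

def lennard_jones(r, epsilon=1.0, sigma=3.4):
    """Pair interaction energy at separation r (same length units as sigma)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

for r in (3.0, 3.4, 3.8, 4.5, 6.0):
    print(f"r = {r:3.1f}  V(r) = {lennard_jones(r):+.3f}")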

What is Systems biology?

Systems biology is a relatively new field of biological study that focuses on the systematic study of complex interactions in biological systems, thus using a new perspective (integration instead of reduction) to study them. Particularly from the year 2000 onwards, the term has been used widely in the biosciences, and in a variety of contexts. Because the scientific method has been applied primarily in a reductionist way, one of the goals of systems biology is to discover new emergent properties that may arise from the systemic view used by this discipline, in order to better understand the entirety of processes that happen in a biological system.

Design and advantage of EBT

The principal advantage of EBT scanners, and the reason for their invention, is that the X-ray source is swept electronically rather than mechanically, and can thus be swept with far greater speed than in conventional CT machines based on mechanically spun X-ray tubes.
The major medical application for which this design was invented in the 1980s is imaging the human heart. The heart never stops moving, and some important structures, such as arteries, move several times their diameter during each heartbeat. Rapid imaging is thus important to prevent blurring of moving structures during the scan. The most advanced current commercial designs can perform image sweeps in as little as 0.025 seconds. By comparison, the fastest mechanically swept X-ray tube designs require about 0.33 seconds to perform an image sweep. For reference, current coronary artery angiography imaging is usually performed at 30 frames/second, or 0.033 seconds/frame; EBT is far closer to this than mechanically swept CT machines.
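
Expressed in frames per second, the figures quoted above compare as follows (a short Python sketch using only the numbers from the text):

# Comparing the sweep times quoted above in frames per second.
ebt_sweep_s = 0.025          # fastest EBT image sweep (from text)
mechanical_sweep_s = 0.33    # fastest mechanically swept tube (from text)
angiography_frame_s = 0.033  # conventional coronary angiography, 30 frames/s

for name, t in [("EBT", ebt_sweep_s),
                ("mechanical CT", mechanical_sweep_s),
                ("coronary angiography", angiography_frame_s)]:
    print(f"{name:22s}: {1.0 / t:5.1f} frames/s")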