Monday, January 31, 2011

MaxQuant comes with a brand-new search engine


J Proteome Res. 2011 Jan 21. [Epub ahead of print]

Andromeda - a peptide search engine integrated into the MaxQuant environment.

Abstract

A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target-decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform, and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra, Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify co-fragmented peptides, significantly improving the total number of identified peptides.
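The probabilistic score at the heart of such engines is a binomial model of chance fragment matches. As a rough illustration only, here is a minimal sketch of that idea in Python; the function name and the fixed peak depth are my own simplifications, not the published Andromeda implementation (which additionally optimizes over the number of retained peaks per 100 Th window):

from math import comb, log10

def binomial_score(n_theoretical, k_matched, q_peaks_per_100):
    """Score = -10*log10 P(X >= k), with X ~ Binomial(n, p) and p = q/100."""
    p = q_peaks_per_100 / 100.0
    p_tail = sum(comb(n_theoretical, k) * p**k * (1 - p)**(n_theoretical - k)
                 for k in range(k_matched, n_theoretical + 1))
    return -10.0 * log10(max(p_tail, 1e-300))  # guard against numerical underflow

# Example: 7 of 20 theoretical fragments matched while keeping the 4 most
# intense peaks per 100 Th window.
print(round(binomial_score(20, 7, 4), 1))

The higher the score, the less likely the observed number of fragment matches arose by chance.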

Saturday, January 29, 2011

University Uses Thermo Fisher Scientific ICP-MS for Reliable and Efficient Sulfur Detection in Proteins


"Thermo Fisher Scientific Inc. announced that the University of Oviedo’s analytical spectrometry research group has implemented the Thermo Scientific XSERIES 2 ICP-MS to perform reliable and interference-free sulfur detection in proteins.

The analytical spectrometry research group at the University of Oviedo in Asturias, Spain, aims to solve analytical challenges encountered across science and technology. Within this framework, a small sub-group has been established focusing on the development of inductively coupled plasma-mass spectrometry (ICP-MS) based analytical methods for the quantification of biopolymers such as DNA and proteins. One of the principal issues faced by the group is interference from gas-based polyatomics, such as oxygen species, in the determination of sulfur when using a low-resolution instrument. To eliminate these problems, the group selected the Thermo Scientific XSERIES 2 ICP-MS with collision/reaction cell (CRC) technology.

Quantitative protein analysis is currently one of the most demanding applications in analytical chemistry. Mass spectrometric techniques such as electrospray ionization-mass spectrometry (ESI-MS) and matrix assisted laser desorption ionization-mass spectrometry (MALDI-MS) have traditionally played a key role in protein analysis. However, the potential of ICP-MS has recently been recognized for the determination of proteins. Although ICP-MS detection does not provide any structural information, its outstanding capability to quantify most of the elements proves valuable for accurate protein quantification. Keeping pace with the latest technological developments, the University of Oviedo’s research group has coupled the XSERIES 2 ICP-MS with a reversed-phase capillary liquid chromatography (μLC) system to facilitate precise determination of sulfur isotopes in standard proteins.

Dr. Jörg Bettmer of the University of Oviedo’s analytical spectrometry research group comments, “The Thermo Scientific XSERIES 2 ICP-MS was chosen because no other quadrupole-based system matches its capabilities in terms of accuracy, reliability and overall efficiency. The implementation of the instrument has allowed us to achieve reliable, interference-free detection of sulfur isotopes. It has enabled us to determine sulfur-containing standard proteins in an accurate and efficient manner that had not been previously possible.”

The XSERIES 2 offers outstanding productivity in a quadrupole ICP-MS for both routine and high-performance analytical applications. By using the system, laboratories can achieve their analytical objectives faster, with greater confidence and less hands-on time from the operator. The innovative ion lens design of the instrument enables simple field upgrade to collision cell technology (CCT) performance without affecting the normal (non-CCT mode) sensitivity or background. The cell is also compatible with a range of reactive gases, such as pure oxygen for interference suppression in challenging matrices."

Global Biotechnology Instrumentation Market to Reach US$5.8 Billion by 2015, According to a New Report by Global Industry Analysts, Inc.


The Biotechnology Instrumentation sector continues to display healthy growth, supported by biotech research expansion in pharmaceutical operations, forensics, agriculture, environmental monitoring, and animal husbandry. Demand for faster and more economical ways to produce drugs for the pharmaceutical industry, together with proteomics, genomics, functional genomics and combinatorial chemistry, has driven the advancement of bioinstrumentation in recent years. The use of bioinstrumentation techniques has become simpler and more accessible. Routine biological tests in forensic laboratories and food technology facilities are performed with tools such as off-the-shelf PCR primer kits, automated analytical equipment, and automated synthesizers.
Universities, particularly medical schools, are the largest end-users of biotech instruments. Product enhancements, such as automation and better reproducibility, have led to increased demand from pharmaceutical companies. Emerging sectors in the drug discovery arena, such as genomics, proteomics, DNA chips, combinatorial chemistry, and high throughput screening, are fueling the demand for bioinstrumentation by the life science industry. Rapid advancements in biotechnology and pharmaceutical research require complicated analysis and purification methods. Unprecedented interest in these fields has led to a significant increase in the use of analytical techniques such as HPLC, Gas Chromatography, and Mass Spectrometry.

A single blood drop could detect heart disease, cancer


A University of Victoria researcher hopes to change the nature of testing for heart disease, cancer and drug toxicity using a highly sensitive and fast machine that would require only a single drop of blood from a patient.
Called a mass spectrometer, this machine determines the weight of proteins in the blood, and would allow researchers to determine if key marker proteins related to heart disease or cancer are present. The mass spectrometer being used in this research is among the most sensitive spectrometers commercially available, and is currently the only one of its kind in Canada.
Dr. Christoph Borchers at the University of Victoria-Genome BC Proteomics Centre will use the Agilent ion funnel 6490 mass spectrometer to develop methodologies for early diagnostic tests. These tests will detect and measure biomarkers, which are proteins in a patient’s blood that can signal early and subtle health changes. Dr. Borchers hopes to apply the technology to develop inexpensive, fast, and reproducible biomarker tests for early diagnosis of cardiovascular disease (CVD), the leading cause of death in the Western hemisphere.

Wednesday, January 26, 2011

Hope Offered For New Diagnostics Following Research Into Synthetic Antibodies



Antibodies are watchdogs of human health, continuously prowling the body and registering minute changes associated with infection or disease with astonishing acuity. They also serve as biochemical memory banks, faithfully recording information about pathogens they encounter and efficiently storing this data for later use. 

Stephen Albert Johnston, Neal Woodbury and their colleagues at the Biodesign Institute at Arizona State University have been exploring mechanisms of antibody activity, particularly the ability of these sentries to bind - with high affinity and specificity - to their protein targets. A more thorough understanding of the antibody universe may lead to a new generation of rapid, low-cost diagnostic tools and speed the delivery of new vaccines and therapeutics. 

Borrowing a script from nature, the group has been working to construct synthetic antibodies or synbodies, through a simple method developed in Johnston's Center for Innovations in Medicine. They have also examined the broad portrait of antibody activity revealed in a sample of blood, harnessing this information for the presymptomatic diagnosis of disease. These immunosignatures, as Johnston has named them, provide a dynamic report card on human health. 

In a pair of new papers, the group demonstrated a simple means of improving the binding affinity of synbodies, which are composed of chains of 20 amino acids strung together in random order. They also used random peptide sequences spotted onto glass microarray slides to mine information concerning the active regions, or epitopes, of naturally occurring antibodies. These two projects recently appeared in the journals PLoS ONE and Molecular and Cellular Proteomics, respectively.

While antibodies have long been used in biomedical research, conventional techniques for producing them have been time-consuming and expensive. Normally, antibodies used for research are produced in animals, which respond to a given injected protein by producing a protein-specific antibody, which may then be extracted.

In earlier work, Johnston's group showed that high-affinity antibody mimics can be produced synthetically by simple means. Their technique turns the traditional production approach on its head. Rather than beginning with a given protein and trying to generate a corresponding antibody, the new method involves building a synthetic antibody first, later determining the protein it effectively binds with, by screening it against a library of potential protein mates. 

The first step in this process is to generate random strings of 20 amino acids. Roughly 10,000 such random peptides are then spotted onto a glass microarray slide. The protein one is seeking an antibody to is screened against this random sequence array, and peptides with high binding affinity are identified. Two such peptides can be linked together to form a synbody, whose binding affinity is effectively the product of the affinities of the separate peptides. In this way, two weakly binding peptides join forces to form a high-affinity unit, useful for investigations into the proteome, the vast domain of proteins essential to virtually all biological processes.
read more
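To see why linking two weak binders can yield a high-affinity synbody, note that if the two binding events are independent, their association constants multiply (binding free energies add), up to an effective-concentration factor set by the linker. A toy calculation, with entirely made-up Kd and tether values:

# Toy illustration of avidity from linking two weak binders. All numbers
# below are invented for illustration; they are not from the papers.
K_A = 1.0 / 10e-6   # peptide A: Kd = 10 uM  -> Ka = 1e5 M^-1
K_B = 1.0 / 50e-6   # peptide B: Kd = 50 uM  -> Ka = 2e4 M^-1
C_EFF = 1e-3        # assumed effective concentration of the tether (M)

# Combined association constant in the ideal-avidity limit:
K_AB = K_A * K_B * C_EFF                      # M^-1
print(f"combined Kd ~ {1.0 / K_AB:.1e} M")    # ~5e-7 M, i.e. sub-micromolar

Two peptides that individually bind only in the tens-of-micromolar range thus combine into a sub-micromolar binder.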

Saturday, January 22, 2011

Blurring contact maps of thousands of proteins: what we can learn by reconstructing 3D structure



Abstract (provisional)

Background

The present knowledge of protein structures at the atomic level derives from some 60,000 molecules. Yet the ever-growing set of hypothetical protein sequences already comprises some 10 million chains, and this makes protein structure prediction one of the most challenging goals of bioinformatics. In this context, the representation of proteins as contact maps is an intermediate step of fold recognition and constitutes the input of contact map predictors. However, contact map representations require fast and reliable methods to reconstruct the specific folding of the protein backbone.

Methods

In this paper, using GRID technology, our 3D reconstruction algorithm FT-COMAR is benchmarked on a large set of 1716 non-redundant proteins, taking random noise into consideration; this makes our computation the largest ever performed for this task.

Results

We can observe the effects of introducing random noise on 3D reconstruction and derive some considerations useful for future implementations. The size of the protein set also allows statistical analysis after grouping by SCOP structural class.

Conclusions

Altogether, our data indicate that the quality of 3D reconstruction is unaffected by deleting up to an average of 75% of the real contacts, while only a small percentage of randomly generated contacts in place of non-contacts is sufficient to hamper 3D reconstruction.
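For readers unfamiliar with the representation: a contact map is obtained by thresholding pairwise residue distances, and the "blurring" studied here can be simulated by deleting true contacts and adding false ones. A minimal sketch, assuming C-alpha coordinates and the common 8 A cutoff (FT-COMAR itself solves the much harder inverse problem, recovering 3D structure from such noisy maps):

import numpy as np

def contact_map(coords, cutoff=8.0):
    # Pairwise distances thresholded into a binary, symmetric contact matrix.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (d < cutoff).astype(int)

def blur(cmap, del_frac, add_frac, rng):
    # Delete a fraction of true contacts and add a fraction of false ones.
    noisy = cmap.copy()
    iu = np.triu_indices_from(cmap, k=1)
    flat = noisy[iu]
    contacts = np.flatnonzero(flat)
    noncontacts = np.flatnonzero(flat == 0)
    drop = rng.choice(contacts, int(del_frac * len(contacts)), replace=False)
    gain = rng.choice(noncontacts, int(add_frac * len(noncontacts)), replace=False)
    flat[drop], flat[gain] = 0, 1
    noisy[iu] = flat
    noisy.T[iu] = flat  # keep the map symmetric
    return noisy

rng = np.random.default_rng(0)
ca = rng.normal(size=(60, 3)) * 5            # fake C-alpha coordinates
noisy = blur(contact_map(ca), del_frac=0.75, add_frac=0.02, rng=rng)

Note how the noise settings mirror the abstract's finding: reconstruction tolerates large del_frac values but is hampered by even a small add_frac.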

Cell biology animations

Below is a collection of biology animations. I found them useful and intuitive for bioinformatics researchers with a mainly quantitative background, like me :(
http://bio-alive.com/animations/,
http://highered.mcgraw-hill.com/sites/dl/free/0072437316/120060/ravenanimation.html

Wednesday, January 19, 2011

Stable isotope shifted matrices enable the use of low mass ion precursor scanning for targeted metabolite identification



We describe a method to identify metabolites of exogenous proteins that eliminates endogenous background by using stable isotope labeled matrices. This technique allows selective screening of the intact therapeutic molecule and all metabolites using a modified precursor ion scan that monitors low molecular weight fragment ions produced during MS/MS. This distinct set of low mass ions differs between isotopically labeled and natural isotope containing species, allowing excellent discrimination between endogenous compounds and target analytes during the precursor scanning experiments. All compounds containing amino acids that consist of naturally abundant isotopes can be selected using this scanning technique for further analysis, including metabolites of the parent molecule. The sensitivity and selectivity of this technique are discussed with specific examples of insulin-derived peptides being screened from a complex matrix using a range of different validated target ions.
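The discrimination rests on predictable mass shifts of low-mass fragment ions between labeled and unlabeled species. As an illustrative example only (this assumes a uniformly 13C/15N-labeled matrix and is not necessarily one of the validated target ions of the paper), consider the leucine immonium ion:

# Standard isotope mass differences.
DELTA_13C = 13.003355 - 12.000000   # mass shift per 13C atom
DELTA_15N = 15.000109 - 14.003074   # mass shift per 15N atom

leu_immonium = 86.0964              # C5H12N+, natural isotopes
# In a uniformly 13C/15N-labeled background this fragment retains 5 carbons
# (the carbonyl carbon is lost as CO) and 1 nitrogen:
shift = 5 * DELTA_13C + 1 * DELTA_15N
print(f"labeled immonium: {leu_immonium + shift:.4f} (shift {shift:.4f})")

A precursor scan gated on the natural m/z therefore selects only unlabeled, i.e. exogenous, species.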


more

Monday, January 17, 2011

Global Bioinformatics Market to Reach US$5.0 Billion by 2015



Bioinformatics, currently one of the most vibrant industries, is at the forefront of the biotech revolution. The market is projected to transform into a major industry within the next few years. Driving the revolution is genomics, a set of advanced tools designed for large-scale data acquisition and analysis. New tools are not only pushing the development of drug discovery, but also fundamentally changing the nature of biological research. The success of the Human Genome Project and breakthrough technologies in drug discovery initiatives spell significant investment opportunities for the industry. As new players enter the market and existing companies grow in size and revenue, competition in the bioinformatics industry is likely to intensify significantly.
Globally, pharmaceutical companies are increasingly seeking help from biotechnology to alleviate concerns related to the increasing number of blockbuster drugs going off-patent, narrow product pipelines, and high drug development costs. Bioinformatics can be applied at every stage of the R&D process in both the biotechnology and pharmaceutical sectors. The emergence of genomics and its ever-growing application in research and development have created a soaring amount of data, creating significant opportunities for bioinformatics. Data management tools based on bioinformatics have helped companies ease the task of R&D analysis, thereby enhancing their productivity by identifying new biomarkers for toxicity and drug efficacy, diagnostic biomarkers, as well as new drug targets. Bioinformatics helps utilize gene and protein data to construct interactive models that aid in identifying disease pathways and the effects of compounds.
Going forward, the penetration of genomics in drug discovery is expected to increase further, which promises tremendous market prospects for bioinformatics. In addition, pharmaceutical companies are expected to spend more and more on research and development, of which a major chunk is expected to end up in the area of bioinformatics. Bioinformatics is a multifaceted market, characterized by a host of licensing and research and development collaborations. The worldwide bioinformatics market is primarily concentrated in the United States and Europe. The industry represents one of the fastest growing fields, offering economic opportunities in various areas.
The US represents the largest regional market for bioinformatics worldwide, followed by Europe, as stated in the new market research report on Bioinformatics. However, Asia-Pacific is projected to record the fastest growth over the analysis period, with the Japanese market projected to post a CAGR of 13.5%. By segment, Biocontent represents the largest product segment in the global bioinformatics market, while Bioinformatics Software is the fastest growing. Demand for Bioinformatics Hardware is projected to rise by 9.5% during the analysis period.
The global bioinformatics industry is highly fragmented, with several companies offering only specific services and very few companies delivering comprehensive solutions for their clients' R&D needs. The large presence of smaller companies and high fragmentation are due to low entry barriers and the existence of large IT companies in the sector. Major players profiled in the report include 3rd Millennium, Inc, Accelrys, Inc., Affymetrix, Agilent Technologies, BioWisdom Ltd, Celera Group, Gene Logic, IBM Life Sciences, Life Technologies Corporation, and Rosetta Inpharmatics.
The research report titled "Bioinformatics: A Global Strategic Business Report" announced by Global Industry Analysts Inc., provides a comprehensive review of the bioinformatics market, current market trends, key growth drivers, recent product introductions, recent industry activity, and profiles of major/niche global as well as regional market participants. The report provides annual sales estimates and projections for bioinformatics market for the years 2007 through 2015 for the following geographic markets - US, Canada, Japan, Europe, Asia-Pacific, and Rest of World. Key segments analyzed include Software (Bioinformatics), Biocontent (Bioinformatics), and Hardware (Bioinformatics). Also, a seven-year (2000-2006) historic analysis is provided for additional perspective.
For more details about this comprehensive market research report, please visit -
www.strategyr.com/Bioinformatics_Market_Report.asp



Read more: http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2011/01/10/prweb8052230.DTL#ixzz1BJeau2Po

Sunday, January 16, 2011

Scientists find the 'master switch' for key immune cells in inflammatory diseases



Scientists have identified a protein that acts as a "master switch" in certain white blood cells, determining whether they promote or inhibit inflammation. The study, published in the journal Nature Immunology, could help researchers look for new treatments for diseases such as rheumatoid arthritis that involve excessive inflammation.
Inflammatory responses are an important defence that the body uses against harmful stimuli such as infections or tissue damage, but in many conditions, excessive inflammation can itself harm the body. In rheumatoid arthritis, the joints become swollen and painful, but the reasons why this happens are not well understood.
Cells of the immune system called macrophages can either stimulate inflammation or suppress it by releasing chemical signals that alter the behaviour of other cells. The new study, by scientists from Imperial College London, has shown that a protein called IRF5 acts as a molecular switch that controls whether macrophages promote or inhibit inflammation.
The results suggest that blocking the production of IRF5 in macrophages might be an effective way of treating a wide range of autoimmune diseases, such as rheumatoid arthritis, inflammatory bowel disease, lupus, and multiple sclerosis. In addition, boosting IRF5 levels might help to treat people whose immune systems are compromised.

Friday, January 14, 2011

Comparison of Different Signal Thresholds on Data Dependent Sampling in Orbitrap and LTQ Mass Spectrometry for the Identification of Peptides and Proteins in Complex Mixtures



We evaluate the effect of ion-abundance threshold settings for data dependent acquisition on a hybrid LTQ-Orbitrap mass spectrometer, analyzing features such as the total number of spectra collected, the signal-to-noise ratio of the full MS scans, the spectral quality of the tandem mass spectra acquired, and the number of peptides and proteins identified from a complex mixture. We find that increasing the threshold for data dependent acquisition generally decreases the quantity but increases the quality of the spectra acquired. This is especially true when the threshold is set above the noise level of the full MS scan. We compare two distinct experimental configurations: one where full MS scans are acquired in the Orbitrap analyzer while tandem MS scans are acquired in the LTQ analyzer, and one where both full MS and tandem MS scans are acquired in the LTQ analyzer. We examine the number of spectra, peptides, and proteins identified under various threshold conditions, and we find that the optimal threshold setting is at or below the respective noise level of the instrument regardless of whether the full MS scan is performed in the Orbitrap or in the LTQ analyzer. When comparing the high-throughput identification performance of the two analyzers, we conclude that, used at optimal threshold levels, the LTQ and the Orbitrap identify similar numbers of peptides and proteins. The higher scan speed of the LTQ, which results in more spectra being collected, is roughly compensated by the higher mass accuracy of the Orbitrap, which results in improved database searching and peptide validation software performance.
Keep reading
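The acquisition logic under study can be pictured in a few lines: from each full MS scan, only peaks above the intensity threshold are eligible, and the most intense eligible peaks are selected for fragmentation. A minimal sketch with made-up peak values (function and variable names are illustrative, not instrument firmware):

def select_precursors(peaks, threshold, top_n=5):
    """peaks: list of (mz, intensity); returns precursors chosen for MS/MS."""
    eligible = [p for p in peaks if p[1] >= threshold]
    return sorted(eligible, key=lambda p: p[1], reverse=True)[:top_n]

scan = [(445.12, 1200.0), (502.34, 80.0), (613.88, 9500.0), (721.45, 300.0)]
noise_level = 150.0                    # assumed noise estimate for this scan
print(select_precursors(scan, threshold=noise_level))

Raising the threshold above the noise level trades spectrum count (quantity) for spectral quality, which is the trade-off the abstract quantifies.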

Tuesday, January 11, 2011

peptide retention time



Is peptide RT charge specific?
"Not in the sense that the ESI charge influences the retention time, but in the sense that longer peptides tend to elute later in the chromatogram and are also more likely to have higher charge states, so there is a slight correlation between charge state and retention time. However, it would only work as a prediction tool if you consider predicting the charge state of a peptide, which is actually not that difficult to do." by Hannes.

Monday, January 10, 2011

Model predicts a drug's likelihood of causing birth defects

Bioinformatic analysis crunches data on drug effects on genes involved in fetal development

Boston, Mass. – When pregnant women need medications, there is often concern about possible effects on the fetus. Although some drugs are clearly recognized to cause birth defects (thalidomide being a notorious example), and others are generally recognized as safe, surprisingly little is known about most drugs' level of risk. Researchers in the Children's Hospital Boston Informatics Program (CHIP) have created a preclinical model for predicting a drug's teratogenicity (tendency to cause fetal malformations) based on characterizing the genes that it targets.
The model, described in the March 2011 issue of Reproductive Toxicology (published online in November), used bioinformatics and public databases to profile 619 drugs already assigned to a pregnancy risk class, and whose target genes or proteins are known. For each of the genes targeted, 7426 in all, CHIP investigators Asher Schachter, MD, MMSc, MS, and Isaac Kohane, MD, PhD, crunched databases to identify genes involved in biological processes related to fetal development, looking for telltale search terms like "genesis," "develop," "differentiate" or "growth."
The researchers found that drugs targeting a large proportion of genes associated with fetal development tended to be in the higher risk classes. Based on the developmental gene profile, they created a model that showed 79 percent accuracy in predicting whether a drug would be in Class A (safest) or Class X (known teratogen).
For example, the cholesterol-lowering drugs cerivastatin, lovastatin, pravastatin and fluvastatin are all in Class X. All of these drugs also targeted very high proportions of high-risk genes (98 to 100 percent). The anti-coagulant warfarin, also in Class X, had a proportion of 88 percent.
When Schachter and Kohane applied the model to drugs across all risk classes, the proportion of developmental genes targeted roughly matched the degree of known risk (see graph). However, the model needs further validation before Schachter is willing to share actual predictions for specific drugs. "We don't want to risk misleading pregnant women from taking necessary medicines," he says.
One difficulty in validating the model is that the "known" teratogenicity it's being tested against often isn't known. Between Class A and Class X are Classes B, C and D, with increasing amounts of risk, but the boundaries between them are based on minimal data. Teratogenic effects may be difficult to spot, since most drugs are taken relatively rarely in pregnancy, some may be taken along with other drugs, and any effects tend to be rare or too subtle to be noted in medical records. Moreover, data from animal testing doesn't necessarily apply to humans.
"A lot of drugs in the middle of the spectrum, and maybe even some in Class A, may cause subtle defects that we haven't detected," says Schachter. "We can't provide a yes/no answer, but we found a pattern that can predict which are riskier."
Given the degree of uncertainty, Schachter and Kohane believe their model may be of interest to drug developers and prescribing physicians, and might provide useful information to incorporate in drug labeling.
"We can now say to patients, 'This drug targets a ton of genes that are involved in developmental processes,'" says Schachter.
Or, conversely, if a young pregnant woman has a heart condition and needs to be treated, physicians may be reassured by a cardiac drug's profile, he adds. "Instead of saying, 'we don't know,' we can now say that the drug is more likely to be safe in pregnancy."
"We have here a prismatic example of the utility of a big-picture, macrobiological approach," says Kohane, director of CHIP. "By combining a comprehensive database of protein targets of drugs and a database of birth defects associated with drugs, we find a promising predictive model of drug risk for birth defects."
###
The study was funded by a grant from the National Institute of General Medical Sciences.
Children's Hospital Boston is home to the world's largest research enterprise based at a pediatric medical center, where its discoveries have benefited both children and adults since 1869. More than 1,100 scientists, including nine members of the National Academy of Sciences, 12 members of the Institute of Medicine and 13 members of the Howard Hughes Medical Institute comprise Children's research community. Founded as a 20-bed hospital for children, Children's Hospital Boston today is a 392-bed comprehensive center for pediatric and adolescent health care grounded in the values of excellence in patient care and sensitivity to the complex needs and diversity of children and families. Children's also is the primary pediatric teaching affiliate of Harvard Medical School. For more information about research and clinical innovation at Boston Children's visit: Vector Blog.

O18 labeling

"If digestion of proteins by trypsin is performed in a solution that contains 50% O16 water and 50% O18 water, then for most peptides there should be approximately a 1 part unlabeled : 2 parts singly labeled : 1 part doubly labeled ratio.

To understand this, one must think about the process of digestion itself. Trypsin cleaves proteins through hydrolysis. The newly created C-terminal carboxyl acquires one of its oxygen atoms from the water. So, at this point, half of the C-termini will be singly labeled, and half will be unlabeled. (I'm ignoring the low probability of O18 atoms already being present at the site of cleavage.)

However, in most cases, trypsin will continue to interact with the C-termini, swapping out C-terminal oxygens to create new water molecules while at the same time swapping in oxygens from existing water molecules to replace them. After this process has proceeded for some time, the chance of a given C-terminal oxygen atom being O16 approaches the level of its presence in the water: i.e. 50%. The same is true for its chance of being an O18 atom.

So, a simple binomial distribution applies. Since the C-terminal carboxyl has two oxygen atoms:

Chance of both being unlabeled = 0.5 times 0.5 = 0.25.
Chance of both being labeled = 0.5 times 0.5 = 0.25
Chance of one being labeled and the other being unlabeled = the remainder of the probability = 1.0 - 0.25 - 0.25 = 0.50.

So, one quarter of the peptides are unlabeled, one half singly-labeled, and one quarter doubly-labeled. A 1:2:1 ratio should be evident.

That being said, different peptides show different rates of back exchange, with a few being totally resistant to it. So while the majority of peptides should display a 1:2:1 ratio pattern, some exceptions might be noted."

From internal communications.
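A minimal sketch of the binomial argument above, generalized to an arbitrary O18 fraction f in the digestion buffer (the quoted text assumes f = 0.5 and complete back exchange):

def o18_label_distribution(f):
    """Return (unlabeled, singly, doubly labeled) fractions after full
    back-exchange of both C-terminal carboxyl oxygens."""
    return (1 - f) ** 2, 2 * f * (1 - f), f ** 2

print(o18_label_distribution(0.5))   # (0.25, 0.5, 0.25) -> the 1:2:1 ratio
print(o18_label_distribution(0.95))  # nearly pure doubly labeled species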

Friday, January 7, 2011

Strategy for purification and mass spectrometry identification of SELDI peaks corresponding to low-abundance plasma and serum proteins.


Abstract

Analysis by SELDI-TOF-MS of low-abundance proteins makes it possible to select peaks as candidate biomarkers. Our aim was to define a purification strategy to optimise identification by MS of peaks detected by SELDI-TOF-MS from plasma or serum, regardless of any treatment by a combinatorial peptide ligand library (CPLL). We describe 2 principal steps in purification. First, choosing the appropriate sample containing the selected peak requires setting up a databank that records all the m/z peaks detected from samples in different conditions. Second, the specific purification process must be chosen: separation was achieved with either chromatographic columns or liquid-phase isoelectric focusing, both combined when appropriate with reverse-phase chromatography. After purification, peaks were separated by gel electrophoresis and the candidate proteins analyzed by nano-liquid chromatography-MS/MS. We chose 4 m/z peaks (9400, 13571, 13800 and 15557), selected for their differential expression between two conditions, as examples to explain the different strategies of purification, and we successfully identified 3 of them. Despite some limitations, our strategy to purify and identify peaks selected from SELDI-TOF-MS analysis was effective.

Thursday, January 6, 2011

Sigma Product Interview



Good morning! 

I'm working with a team to understand what researchers need to manipulate gene transcription and regulation.  We would like to talk to people who are working in an area related to gene regulation and would like to learn about their research.  
 
The goal of this 30-45 minute conversation is simply for us to better understand what researchers are doing and what they would like to do, so we can determine what tools and technologies are missing. It is not a sales call, but a conversation aimed at better understanding what is needed to be successful in their research. This will help Sigma develop products that solve unmet needs.
 

The interviews will be taking place through 1/15/11 and will be conducted by Research Biotech Marketing and R&D staff.
 
Because you are close to customers, we request your help in identifying people who would be interested in participating.  As a token of appreciation, we will be offering participants a $50 Amazon gift card.  Simply send me the contact info of those who are interested in participating, and we will arrange to set up the interview.   



Wednesday, January 5, 2011

The complexity of gene expression dynamics revealed by permutation entropy


Background

High complexity is considered a hallmark of living systems. Here we investigate the complexity of temporal gene expression patterns using the concept of Permutation Entropy (PE) first introduced in dynamical systems theory. The analysis of gene expression data has so far focused primarily on the identification of differentially expressed genes, or on the elucidation of pathway and regulatory relationships. We aim to study gene expression time series data from the viewpoint of complexity.

Results

Applying the PE complexity metric to abiotic stress response time series data in Arabidopsis thaliana, genes involved in stress response and signaling were found to be associated with the highest complexity not only under stress, but surprisingly, also under reference, non-stress conditions. Genes with housekeeping functions exhibited lower PE complexity. Compared to reference conditions, the PE of temporal gene expression patterns generally increased upon stress exposure. High-complexity genes were found to have longer upstream intergenic regions and more cis-regulatory motifs in their promoter regions, indicative of a more complex regulatory apparatus needed to orchestrate their expression, and to be associated with a higher connectivity degree in correlation networks. Arabidopsis genes also present in other plant species were observed to exhibit decreased PE complexity compared to Arabidopsis-specific genes.

Conclusions

We show that Permutation Entropy is a simple yet robust and powerful approach to identify temporal gene expression profiles of varying complexity that is equally applicable to other types of molecular profile data.
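For readers new to the metric: Permutation Entropy reduces a time series to the distribution of ordinal patterns (rank orders) in sliding windows of length d, and takes the normalized Shannon entropy of that distribution. A minimal sketch; the window length and example series are arbitrary:

import math
from collections import Counter

def permutation_entropy(series, d=3):
    # Count the ordinal pattern (argsort) of every length-d window.
    patterns = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k]))
        for i in range(len(series) - d + 1)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(d))  # 0 = fully ordered, 1 = maximal

print(permutation_entropy([1, 2, 3, 4, 5, 6, 7, 8]))        # 0.0 (monotone)
print(permutation_entropy([4, 7, 9, 10, 6, 11, 3, 5, 8]))   # ~0.71 (less ordered)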

Monday, January 3, 2011

bioinformatics ion motion


Some resources for animations of bioinformatics algorithms; the site also contains image and video libraries.
Click here to hit the board

Sunday, January 2, 2011

Improved performance on high-dimensional survival data by application of Survival-SVM


Motivation: New application areas of survival analysis, for example those based on microarray expression data, call for novel tools able to handle high-dimensional data. While classical (semi-)parametric techniques based on likelihood or partial likelihood functions are omnipresent in clinical studies, they are often inadequate when there are fewer observations than features in the data. Support vector machines (SVMs) and their extensions are in general found particularly useful for such cases, conceptually (a non-parametric approach), computationally (boiling down to a convex program that can be solved efficiently), theoretically (for their intrinsic relation with learning theory), and empirically. This article discusses such an extension of SVMs tuned towards survival data. A particularly useful feature is that this method can incorporate additional structure such as additive models, positivity constraints on the parameters, or regression constraints.
Results: Besides a discussion of the proposed methods, an empirical case study is conducted on both clinical and microarray gene expression data in the context of cancer studies. Results are expressed based on the logrank statistic, concordance index and the hazard ratio. The reported performances indicate that the present method yields better models for high-dimensional data, while giving results comparable to those of classical proportional hazards techniques on clinical data.
Supplementary information: Supplementary data are available at Bioinformatics online. Full article
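The core idea can be sketched as a pairwise ranking problem: for every comparable pair of patients (the shorter survival time observed, not censored), a hinge loss penalizes predictions that rank the pair in the wrong order. A generic sketch of this ranking loss (one common formulation of survival SVMs, not necessarily the exact model proposed in the article):

import numpy as np

def pairwise_hinge_loss(w, X, times, events):
    """X: (n, p) features; times: survival times; events: 1 = event observed."""
    scores = X @ w   # higher score = predicted longer survival
    loss, n_pairs = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            # pair is comparable if j's event was observed before i's time
            if events[j] == 1 and times[i] > times[j]:
                loss += max(0.0, 1.0 - (scores[i] - scores[j]))
                n_pairs += 1
    return loss / max(n_pairs, 1)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5)); w = rng.normal(size=5)
times = rng.exponential(size=20); events = rng.integers(0, 2, size=20)
print(pairwise_hinge_loss(w, X, times, events))

Minimizing this loss plus a ||w||^2 regularizer yields a convex program, which is the computational advantage the abstract alludes to.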

Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation



The performance of level set segmentation is subject to appropriate initialization and optimal configuration of controlling parameters, which require substantial manual intervention. A new fuzzy level set algorithm is proposed in this paper to facilitate medical image segmentation. It is able to evolve directly from the initial segmentation obtained by spatial fuzzy clustering. The controlling parameters of level set evolution are also estimated from the results of fuzzy clustering. Moreover, the fuzzy level set algorithm is enhanced with locally regularized evolution. Such improvements facilitate level set manipulation and lead to more robust segmentation. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.
read more
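The clustering step that seeds the level set is a variant of standard fuzzy c-means (FCM). A minimal sketch of plain FCM on pixel intensities (the paper's spatial regularization and its coupling to the level set parameters are omitted; names and data are illustrative):

import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    """x: 1D array of pixel intensities; returns (memberships, centroids)."""
    rng = np.random.default_rng(0)
    v = rng.choice(x, size=c)                          # initial centroids
    for _ in range(iters):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12    # (n, c) distances
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        v = (u**m).T @ x / (u**m).sum(axis=0)          # weighted centroid update
    return u, v

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 200),   # dark region
                         rng.normal(0.8, 0.05, 200)])  # bright region
u, v = fcm(pixels)
print(np.sort(v))   # centroids near 0.2 and 0.8

The resulting memberships give an initial region estimate, and statistics of the clusters can be used to set the evolution parameters, as the abstract describes.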

Clinical proteomics for liver disease: a promising approach for discovery of novel biomarkers


Hepatocellular carcinoma (HCC) is the fifth most common cancer, and advanced hepatic fibrosis is a major risk factor for HCC. Hepatic fibrosis, including liver cirrhosis, and HCC are mainly induced by persistent hepatitis B or C virus infection, with approximately 500 million people infected with hepatitis B or C virus worldwide. Furthermore, the number of patients with non-alcoholic fatty liver disease (NAFLD) has recently increased, and NAFLD can progress to cirrhosis and HCC. These chronic liver diseases are major causes of morbidity and mortality, and the identification of non-invasive biomarkers is important for early diagnosis. Recent advancements in quantitative and large-scale proteomic methods could be used to optimize the clinical application of biomarkers. Early diagnosis of HCC and assessment of the stage of hepatic fibrosis or NAFLD can also contribute to more effective therapeutic interventions and an improved prognosis. Furthermore, advancements in proteomic techniques contribute not only to the discovery of clinically useful biomarkers, but also to clarifying the molecular mechanisms of disease pathogenesis using body fluids such as serum, tissue samples, and cultured cells. In this review, we report recent advances in quantitative proteomics and several findings focused on liver diseases, including HCC, NAFLD, hepatic fibrosis and hepatitis B or C virus infections.