Posts by Johannes Goll

JCVI Supports Human Microbiome Body Site Experts with Shotgun Data Analysis

Members of the Human Microbiome Project (HMP) Consortium (see http://commonfund.nih.gov/hmp and http://www.hmpdacc.org for more information on the project and partners), including human microbiome body site experts, gathered for a virtual Jamboree on January 19th. The fully online Jamboree was set up to communicate initial data products and the tools best suited for their analysis, primarily to make the data accessible in a user-friendly way for body site experts. Sixty-one participants followed the Jamboree agenda, with presenters given access to a common desktop that was shared via the internet using an online collaboration tool. Results from the Data Analysis Working Group (DAWG) were presented in the areas of 16S rRNA gene sequence analysis (16S DAWG) and metagenomic whole-genome shotgun analysis (WGS DAWG). The efforts of the 16S DAWG focus on marker-gene based approaches to estimate biological diversity and on how marker variability is associated with patient metadata. The WGS DAWG complements results from the 16S marker-based analysis with comprehensive sequencing of random pieces of genomic DNA from the collection of microorganisms which inhabit a particular site on, or in, the human body (the microbiome). These analyses allow researchers to investigate, among other questions, which microorganisms are present at a particular body site, and the nature and extent of their collective metabolism. Ultimately, researchers want to relate this information to healthy versus diseased states in humans.

METAREP tutorial presented as part of the HMP Virtual Jamboree

The current survey comprises more than 700 samples from hundreds of individuals taken from up to 16 distinct body sites. Illumina sequencing has yielded more than 20 billion reads, and the annotation data produced from the sequences exceeds 10 terabytes. In anticipation of such data volumes, we developed JCVI Metagenomics Reports (METAREP), an open-source tool for high-performance comparative analysis, in 2010. The tool enables users to slice and dice data using a combination of taxonomic and functional/pathway signatures. To demonstrate how the tool can be used by body site experts, we loaded data from 17 oral samples and presented a quick tutorial on how users can view, search, and browse individual samples and compare multiple samples (see video). The functionality was very well received, and body site experts asked JCVI to make all 700+ samples available. As a result of the Jamboree, JCVI, in collaboration with the HMP Data Analysis and Coordination Center and the rest of the HMP consortium, will soon set up a dedicated HMP METAREP instance that will allow body site experts, and eventually other users, to analyze the DAWG data in a user-friendly way via the web.

Lucene Revolution Conference 2010

I arrived late in Boston after my plane from Washington DC was delayed. On the agenda for the next four days: the Lucene Revolution conference and a Solr application development workshop organized by Lucid Imagination. The conference promised a unique venue (the first of its kind in the US) to meet developers who all share the same challenge: to enable users to find relevant information in growing bodies of data quickly and intuitively. I was looking forward to hearing many interesting talks given by experts in the field, to learning how to build intuitive search interfaces, and to getting an idea of where things are heading in the next few years. As the developer of JCVI’s Metagenomics Reports (METAREP), I was especially looking forward to the Solr workshop to learn some tricks from the experts for tweaking the search engine behind this open-source metagenomics analysis tool.

The Early Revolution

But before the revolution could happen and I could enjoy some splendid time at the Washington Dulles airport, Doug Cutting had to start developing a Java-based full-text search engine called Lucene in 1997. Lucene became an open-source project in 2000 and an Apache Software Foundation project one year later. In 2004, Solr emerged as an internal CNET project created by Yonik Seeley to serve Lucene-powered search results to the company’s website. It was donated by CNET to the Apache Software Foundation in 2006.

Google Trend for Solr

Early this year, both projects merged, and development since then has been carried out jointly under the umbrella of the Apache Software Foundation. Meanwhile, many companies use Solr/Lucene, among them IBM, LinkedIn, Twitter, and Netflix. How did this happen?

The Lucid Imagination Solr Application Development Workshop

In search of an answer, I made my way from my hotel to the conference venue, the Hyatt Hotel located along the beautiful Boston harbor bay. The two-day workshop was a brute-force tour of Solr features, configuration, and optimization. It also touched on the mathematical theory behind Lucene’s search result scoring and on evaluating result relevance. The workshop covered enough material to warrant a third day; given this optimistic agenda, there was not much time for the labs (exercises), and the trainer had to focus more on breadth than on depth. Having used Solr for a year, I was familiar with many of the general concepts and was more interested in the details. A comprehensive handbook and an excellent exercise compilation came to the rescue and provided me with the detail needed to follow up on subjects that were only touched on. There were two parallel Solr classes; in mine, 25 participants followed the training. The mix included developers working for media, defense, and other corporations. Academia was represented by several libraries and universities.

Solr Application Development Workshop

A powerful feature I had not heard of before is the DisMax RequestHandler. The handler abstracts away complex queries: users can enter a simple query without complex syntax or specifying a search field, and behind the scenes the handler does its magic. It searches across a set of specified fields which (among other things) can be weighted by importance. Additional information about this handler and other snippets I collected during the class can be found in my Solr workshop notes.
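
As a minimal sketch of how such a query might look from a client’s perspective (assuming a local Solr instance with a core named metarep; the field names and boost factors are illustrative, not taken from the workshop):

```python
import requests

# Send a bare user query through the DisMax handler. The user types only
# "kinase"; the qf parameter tells DisMax which fields to search and how
# to weight them (here, protein-name matches count four times as much as
# taxonomy matches). Core and field names are illustrative assumptions.
params = {
    "q": "kinase",                        # plain keywords, no fielded syntax
    "defType": "dismax",                  # route through the DisMax handler
    "qf": "com_name^4.0 blast_tree^1.0",  # fields to search, with boosts
    "wt": "json",
    "rows": 10,
}
response = requests.get("http://localhost:8983/solr/metarep/select", params=params)
for doc in response.json()["response"]["docs"]:
    print(doc)
```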

The Lucene Revolution Conference

After a mediocre coffee brewed in my hotel room, I headed to the conference venue on the second floor of the Hyatt Hotel. The first day of the conference started with a panel discussion on the cutting edge of search that included Michael Busch (Twitter), John Wang (LinkedIn), Joshua Tuberville (eHarmony), and Bill Press (Salesforce.com). The discussion went back and forth, showcasing each search platform and the experience of developing it. When asked what he would do differently in retrospect, John Wang from LinkedIn ironically mentioned that he would “ban recruiters” – if I remember correctly, he mentioned that they “spam up” the system.

Lightning Talk “Using Solr/Lucene for High-Performance Comparative Metagenomics”

Joshua Tuberville from eHarmony provided valuable advice to developers: “Avoid pet queries for benchmarking a system – use a random set of queries instead.” He also suggested tracking the queries that web site users actually enter and using those for optimization, adding “it surprises me every day that the world is not made up from engineers, but it is a fact.” His further advice: avoid unnecessary complexity and duplicated effort, and use open source where available. For example, instead of implementing their own Lucene wrapper, eHarmony made use of the open-source project Solr. Bill Press added: “Do not be afraid to tear things down, rebuild it many times if needed.”
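
That benchmarking advice is easy to follow in practice. A minimal sketch, assuming user queries have been logged to a plain-text file and a Solr instance is running locally (both are placeholder assumptions):

```python
import random
import time
import requests

# Replay a random sample of real, logged user queries instead of a few
# hand-picked "pet" queries. Log path and Solr URL are placeholders.
with open("query_log.txt") as f:
    queries = [line.strip() for line in f if line.strip()]

latencies = []
for q in random.sample(queries, k=min(100, len(queries))):
    start = time.time()
    requests.get("http://localhost:8983/solr/docs/select",
                 params={"q": q, "wt": "json"})
    latencies.append(time.time() - start)

latencies.sort()
print("median %.3fs, p95 %.3fs" % (latencies[len(latencies) // 2],
                                   latencies[int(len(latencies) * 0.95)]))
```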

“Companies do not have time to debug code.” Eric Gries (CEO Lucid Imagination)

Eric Gries, CEO of Lucid Imagination, presented ‘The Search Revolution: How Lucene & Solr Are Changing the World’. In the introduction, he pointed out that Solr/Lucene is the 10th-largest community project and the 5th-largest Apache Software Foundation project. “Open-source projects need a commercial entity behind them to help them grow.” “Companies need no errors, they do not have time to debug.” The main part of his talk focused on his company’s LucidWorks Enterprise software, which is based on the open-source Solr/Lucene project. Features that separate it from the open-source version include smart defaults, additional data sources, a REST API that allows programmatic access via Perl/Python/PHP code, standardized error messages, and click-based relevance boosting. Later, Brian Pinkerton, also from Lucid Imagination, presented additional details. He revealed that their software is based on elements of the upcoming Solr 4.0 version and is fully cloud-enabled (it incorporates the SolrCloud patch). It uses ZooKeeper to manage node configuration and failover. All website communication is done in JSON. The enterprise version supports field collapsing for distributed search.

“A picture communicates a thousand words but a video communicates a thousand pictures.” Satish Gannu (Cisco)

Satish Gannu from Cisco stressed the increasing prevalence of video data and how such data is changing the world. More and more video-enabled devices are being pushed onto the market. Collaboration is increasingly done across the world. Meetings are recorded and shared globally. Videos are replacing manuals. Corporate communication/PR via video is increasing. He related the popularity of video to the fact that “A picture communicates a thousand words, but a video communicates a thousand pictures” and that “60% of human communication is non-verbal.” Satish went on to highlight Cisco’s video solutions that use automatic voice and face recognition software to store metadata about speakers and enrich the user experience. For example, users can filter out certain speakers when watching recorded meetings. More can be found here.

View of Boston

“Mobile application development will be the driver of open-source innovation.” Bill McQuaide (Black Duck Software)

One of the highlights that morning was Bill McQuaide’s talk on open-source trends. Based on diverse sources, including his company Black Duck Software, he showed that IT spending on software is down, that 22% of software is open source, and that 40% of software projects use open source. There is an enormous number of new open-source projects targeting the cloud, with a lot of competition. Among the top open-source licenses are the GNU General Public Licenses (GPL 2.0 and 3.0) and the BSD licenses. The three predominant programming languages used by open-source developers are C, C++, and Java. Mobile development will be the driver of innovation in the open-source community, especially developments around Google’s Android operating system. Managing licenses for projects that integrate dozens of open-source components, such as Android, and shipping the bundled software to customers can become very complex. For this and other reasons, McQuaide recommends that companies and institutions have policies for implementing open source, integrating third-party tools, and identifying and cataloging all open-source software used.

Distributed Solr/Lucene using Hadoop

An excellent talk, “Real-Time Searching of Big Data with Solr and Hadoop,” was presented by Rod Cope from OpenLogic. The search infrastructure centers on Hadoop’s distributed file system, on top of which several other technologies are cleverly arranged. For example, the Hadoop-based HBase database provides fast lookups by key but does not provide the power of Lucene text searches, while Solr/Lucene is less optimized for returning stored document contents. Their solution is to use Solr/Lucene to search indexed text fields while storing and returning only the document ID; the returned document ID is then used to fetch the full record from the HBase database. OpenLogic uses the open-source software Katta to integrate Lucene indices with Hadoop and increases fault tolerance by replicating Solr cores across different machines. Corresponding master and slave servers are also set up on different machines for indexing and searching, respectively. The setup he described runs completely on commodity hardware, and new machines can be added on the fly to scale out horizontally.
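
A rough sketch of this division of labor, with happybase as one possible Python client for HBase’s Thrift gateway (hostnames, core, table, and field names are assumptions, not details from the talk):

```python
import requests
import happybase  # one possible Python client for HBase's Thrift gateway

# Two-step lookup: Solr/Lucene answers the text query but returns only
# document IDs; HBase then serves the full stored records, which it is
# fast at fetching by row key. All names here are illustrative.
def search(query):
    params = {"q": query, "fl": "id", "wt": "json", "rows": 100}
    resp = requests.get("http://solr-host:8983/solr/docs/select", params=params)
    ids = [doc["id"] for doc in resp.json()["response"]["docs"]]

    connection = happybase.Connection("hbase-host")
    table = connection.table("documents")
    return [table.row(doc_id.encode()) for doc_id in ids]
```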

“It surprises me every day that the world is not made up from engineers but it is a fact.” Joshua Tuberville (eHarmony)

Next on the agenda were seven-minute lightning talks. I opened the lightning talk session by describing our Solr/Lucene-based open-source web project METAREP for high-performance comparative metagenomics (watch). Next was Stefan Olafsson from TwigKit presenting ‘The 7-minute Search UI‘, a presentation which I thought was another gem of this conference. In contrast to other talks, it focused on user experience and intuitive user interfaces. TwigKit has developed a framework that provides well-designed search widgets that can be integrated with several search engines.

“If nobody is against you in open source then you are not right.” Marten Mickos (CEO Eucalyptus)

The keynote presentation on the second day was given by Marten Mickos, the CEO of Eucalyptus and former CEO of MySQL. He opened by advocating his philosophy of making money from open-source projects. “Innovation is a change that creates a new dimension in performance,” he said, mentioning the open-source Apache web server that allows anyone to run a powerful web server. He added, “Market disruption is a change that creates a new level of efficiency,” and referred to MySQL, which was originally designed to scale horizontally. While in 1995 such a design was a drawback compared to other market solutions, scale-out has become the dominant design today. Now, within the cloud, horizontal scaling is key – a fact that has made MySQL the most used database in the cloud.

He observed that “while most successful open-source projects are related to building infrastructure software, servers and algorithms, there are only a few open-source projects centered around human behavior, user experience and user interfaces. The latter projects are mainly developed in closed-source environments.” He then went on to praise open source as a driver of innovation: “Open source is so effective because you are not protected. Code can be scrutinized by everybody. In a closed-source company, your only competition is within the company, while in open source you compete with everybody.” Open source is a way to innovate, and it is more productive. It usually takes a stubborn individual to drive things; innovation mostly stems from single individuals who are supported by the community.

When asked how to maintain intellectual property rights as a company when running an open-source model, he responded: “keep things that keep the business going proprietary but open up others. The key is to be very transparent with your model.”

What’s next?

In a panel session, the core Solr/Lucene committer team discussed future features. The team is working on rapid front-end prototyping using the Apache Velocity template engine and Ajax; the prototyping code can be found in the current trunk of the Solr/Lucene code repository under the /browse directory. A cloud-enabled Solr/Lucene version is being developed, and Twitter’s real-time search functionality will be integrated. Other open-source projects that are being integrated are Nutch, web-search software, and Mahout, a machine-learning library (http://mahout.apache.org). New features will include pivot tables (table matrices), a hierarchical data type, spatial searching, and flexible indexing.

The above represents a subset of the talks that took place. There were many other interesting talks – some took place in parallel sessions. Individual presentations can be downloaded from the Lucid Imagination conference page. A selection of videos is available here. The next Lucene Revolution conference will take place in San Francisco in May 2011.

After four days of Solr/Lucene, many coffees, talks, and discussions, I left inspired by the conference. It dawned on me that the real revolution is not the search technology but the strong community spirit that has emerged around it and drives developers to work jointly towards a common goal.

Virtual Comparative Metagenomics

We have created an Open Virtualization Format (OVF) package of JCVI’s Metagenomics Reports (METAREP), a high-performance comparative metagenomics analysis tool. The software runs on a web server, retrieves data from two different database systems, and uses R for statistical analysis. The new OVF package bundles all of these third-party tools and is configured to run out of the box in a virtual machine.

Screenshot of the VirtualBox appliance import wizard. The wizard allows you to specify the CPU and memory usage of the virtual machine on which METAREP will run.

To run a virtual version of METAREP on your machine, follow these steps:

  1. Download the METAREP OVF package from our ftp site [download].
  2. Unzip the OVF package.
  3. Download and install Oracle’s VirtualBox, an OVF-compatible virtualization software [download].
  4. Start VirtualBox.
  5. Click File/Import Appliance and select the OVF file.
  6. Adjust RAM/CPU usage using the Appliance Import Wizard (see image).
  7. Start the VM.
  8. Double-click on the METAREP Firefox link on the VM desktop.
  9. Log into METAREP with username=admin and password=admin.
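
If you prefer to script the import instead of clicking through the wizard, steps 2 through 7 can also be driven by VirtualBox’s VBoxManage command-line tool. A minimal sketch in Python (the archive, OVF, and VM names are placeholders; adjust them to the package downloaded in step 1):

```python
import subprocess
import zipfile

# Headless alternative to steps 2-7 above, using VirtualBox's VBoxManage
# CLI. Archive, OVF, and VM names are placeholders; adjust them to the
# package downloaded in step 1.
zipfile.ZipFile("metarep-ovf.zip").extractall("metarep-ovf")

# Equivalent to File/Import Appliance in the GUI.
subprocess.run(["VBoxManage", "import", "metarep-ovf/metarep.ovf"], check=True)

# Adjust RAM (in MB) and CPU count, then boot the VM.
subprocess.run(["VBoxManage", "modifyvm", "METAREP",
                "--memory", "2048", "--cpus", "2"], check=True)
subprocess.run(["VBoxManage", "startvm", "METAREP"], check=True)
```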

This virtual machine appliance is the first step in developing a fully cloud-enabled analysis platform where users can easily launch the application wherever it is most convenient: on their personal desktop or in the cloud, where they can scale out the appliance to suit their needs.

Future virtual machine images will be certified to run on other virtualization software platforms. Stay tuned.

If you would like to learn more about METAREP and talk to the developers, join us at the Lucene Revolution Conference in Boston (October 7-8, 2010). We will present a lightning talk about METAREP on the first day of the conference at 5 pm (see agenda).

Links:

JCVI’s METAREP Instance

METAREP Flyer

METAREP Manual

METAREP Source Code

Advance Access JCVI Metagenomics Reports Application Note

A significant JCVI informatics development is JCVI Metagenomics Reports, an open source Web 2.0 application designed to help scientists analyze and compare annotated metagenomics data sets. Users can download the application to upload and analyze their own metagenomics datasets.

METAREP has just been published in Bioinformatics (08/26/2010) as an open access article. The publication is currently accessible under the Bioinformatics Advance Access model. The PDF version can be downloaded at

http://bioinformatics.oxfordjournals.org/cgi/reprint/btq455v1.pdf

Supplementary information, including the METAREP data model and an overview of its search performance, is accessible at

http://bioinformatics.oxfordjournals.org/cgi/content/full/btq455/DC1

One of METAREP’s key features that distinguishes it from other metagenomics tools is its use of a high-performance, scalable search engine that allows users to analyze and compare extremely large metagenomics datasets, e.g. those produced by the Human Microbiome Project.

If you would like to learn more about METAREP and talk to the developers, join us at the Human Microbiome Research Conference in St. Louis, Missouri (August 31 – September 2, 2010). We will present METAREP on the first day of the conference at 10:35 am (see agenda).

Contact Us:

We would like to hear from you. If you have questions or feedback or if you wish to contribute to the METAREP open source project please send an email to metarep-support@jcvi.org

Links:

JCVI’s METAREP Instance

METAREP Flyer

METAREP Manual

METAREP Source Code

High-performance comparative metagenomics

Are you carrying out large-scale metagenomics analyses to identify differences among multiple sample sites? Are you looking for suitable analysis tools?

If you have not yet found the right analysis tool, you may be interested in the latest beta version of JCVI Metagenomics Reports (METAREP) [Test It].

METAREP is a new open-source tool developed for high-performance comparative metagenomics.

It provides a suite of web-based tools to help scientists view, query, browse, and compare metagenomics annotation data derived from ORFs called on metagenomics reads or assemblies.

Users can specify fields, or logical combinations of fields, to filter and refine datasets. They can compare multiple datasets at various functional and taxonomic levels, applying statistical tests as well as hierarchical clustering, multidimensional scaling, and heatmaps (see image gallery).

For each of these features, tab-delimited files can be exported for downstream analysis. The web site is optimized to be user-friendly and fast.

Feature Summary [download Flyer]:

  • Handle extremely large datasets. Uses the scalable, high-performance Solr/Lucene search engine (we have indexed 300 million annotation entries, but much larger volumes can be handled, as shown by HathiTrust).
  • Compare 20+ datasets at the same time. Use various compare options, including statistical tests and plot options, to visualize dataset differences at various taxonomic and functional levels.
  • Apply statistical tests such as METASTATS (White et al.), a modified non-parametric t-test to compare two sample populations (e.g. metagenomics samples from healthy and diseased individuals).
  • Export publication-ready graphics. Export heatmaps, hierarchical clustering, and multi-dimensional scaling plots in PDF format.
  • Analyze KEGG metabolic pathways. Summaries include enzyme highlights on KEGG maps, pathway enzyme distributions, and statistics about pathway coverage at various pathway levels.
  • Search using a SQL-like query syntax. Build your query using 14 different fields that can be combined logically (see the sketch after this list).
  • Drill down into data using METAREP’s NCBI Taxonomy, Gene Ontology, Enzyme Classification, or KEGG Pathway browser.
  • Install your own METAREP version. Flexible central configuration; the METAREP and third-party code base is completely open source.
  • Cross-link function with phylogeny. Slice your data at various taxonomic and/or functional levels. For example, search for all bacteria, exclude eukaryotes, or search for a certain (GO/EC ID)/taxonomic combination.
  • Generic data format. Data types that can be populated include a free-text functional description, best BLAST hit information, as well as GO ID, EC ID, and HMMs.
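
As a small illustration of such a fielded query, here is a hypothetical logical field combination sent to the underlying Solr index over HTTP (the field names are illustrative assumptions; the METAREP manual documents the actual 14 fields):

```python
import requests

# Hypothetical fielded query: ORFs annotated as kinases, assigned a given
# GO ID, and taxonomically placed under Bacteria (NCBI taxon ID 2).
# Field names are illustrative; the manual documents the real schema.
query = 'com_name:kinase AND go_id:"GO:0016301" AND blast_tree:2'

params = {"q": query, "wt": "json", "rows": 20}
resp = requests.get("http://localhost:8983/solr/metarep/select", params=params)
print("matching ORFs:", resp.json()["response"]["numFound"])
```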

How to analyze your own data: You can install your own METAREP version to analyze your metagenomics annotation data [download source]. We have written a comprehensive manual that describes the installation process step by step [download manual]. Since METAREP only operates on annotated data, raw sequences need to be annotated first. Supported data types that can be loaded for each sequence include functional descriptions, best BLAST hit fields (E-value, percent identity, NCBI taxon, percent sequence coverage), and GO, EC, and HMM assignments (a hypothetical record is sketched below). The installation also contains a set of example annotations that can be imported.
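
To make that data model concrete, here is a hypothetical per-ORF record assembled as a tab-delimited line (column names and order are illustrative only; the manual defines the actual load format):

```python
# Hypothetical per-ORF annotation record covering the field types listed
# above. Column names and order are illustrative, NOT METAREP's actual
# load format; see the manual for that.
record = {
    "peptide_id":   "seq_00001",
    "com_name":     "putative serine/threonine kinase",  # functional description
    "blast_evalue": "1e-45",       # best BLAST hit E-value
    "blast_pid":    "78.5",        # percent identity
    "blast_taxon":  "1423",        # NCBI taxon ID of the best hit
    "blast_cov":    "92.0",        # percent sequence coverage
    "go_id":        "GO:0016301",  # Gene Ontology assignment
    "ec_id":        "2.7.11.1",    # Enzyme Commission assignment
    "hmm_id":       "PF00069",     # HMM assignment
}
print("\t".join(record.values()))
```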

Contact Us:

We would like to hear from you. If you have questions or feedback or if you wish to contribute to the METAREP open source project please send an email to metarep-support@jcvi.org

Links:

JCVI’s METAREP Instance

METAREP Flyer

METAREP Manual

METAREP Source Code

New ways to analyze metagenomics data

Are you looking for new tools to analyze your metagenomics data? Are you using MG-RAST, IMG/M or MEGAN for your daily metagenomics work?

JCVI is working on a user-friendly alternative that you might be looking for – a new tool kit for metagenomics data visualization and analysis built using the latest Web 2.0 technologies.

JCVI’s Metagenomics Reports (METAREP) is a user-friendly web interface designed to help scientists browse, compare, view, and query annotation data derived from ORFs called on metagenomics reads. It supports browsing of both functional (Gene Ontology, Enzyme Commission classification) and taxonomic assignments. When performing a search, users can specify fields, or logical combinations of fields, to flexibly filter datasets on the fly. METAREP provides lists and pie charts of top functional and taxonomic categories for browse and search results. Tools are being developed that focus on the comparative analysis of multiple datasets. The system is optimized to be user-friendly and fast.

Currently, an alpha version of METAREP is used and tested internally at JCVI. In April 2010, we will release the beta version to a limited set of interested external users.

If you would like to see the tool in action, join us at the DOE Genomic Science Workshop (February 9-10, 2010) for our web and poster presentation (5:30 – 8:00 pm on each day), or sign up to become part of the beta testing process at www.jcvi.org/metarep.