Long Single-Molecule Reads Can Resolve the Complexity of the Influenza Virus Composed of Rare, Closely Related Mutant Variants

RNA viruses represent the majority of emerging and re-emerging diseases that pose a significant risk to global health, including influenza, hantaviruses, Ebola virus, and Nipah virus. Compared to DNA viruses, RNA viruses are especially adaptable and evolvable due to their high mutation rates and rapid replication cycles. Developing novel medications for the prevention and treatment of these diseases requires an understanding of the mutant variants that drive an RNA virus’s resistance mechanisms. The long read length offered by single-molecule sequencing technologies allows each mutant variant to be sequenced in a single pass. However, complete profiling of all viral genomes within a mutant spectrum is not yet possible due to the high error rate of single-molecule sequencing protocols.

In collaboration with Alexander Artyomenko (Georgia State University), Alex Zelikovsky (Georgia State University), Nicholas Wu (The Scripps Research Institute), and Ren Sun (UCLA), Serghei Mangul and Eleazar Eskin developed a novel method for accurately reconstructing viral variants from single-molecule reads. This approach, two Single Nucleotide Variants (2SNV), tolerates the high error rate of the single-molecule protocol and uses linkage between single nucleotide variations to efficiently distinguish true mutant variants from read errors.

Overview of the 2SNV method. For more information, see our book chapter.

Any method for reconstructing viral variants from single-molecule reads must overcome the low volume and high error rate of sequencing data, combined with the very high similarity and very low frequency of the viral variants. The challenge is akin to extracting an extremely weak signal from a very noisy background, with a signal-to-noise ratio approaching zero. However impossible this task may seem, a satisfactory solution can be based on distinguishing the randomness of the noise from systematic signal repetition. With its high sensitivity and accuracy, 2SNV is anticipated to facilitate not only viral quasispecies reconstruction but also other biological analyses that require detection of rare haplotypes, such as characterizing genetic diversity in cancer cell populations and monitoring B-cell and T-cell receptor repertoires.
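The linkage principle can be illustrated with a toy statistic: sequencing errors hit positions independently, so two SNVs that co-occur on many more reads than independence predicts most likely come from a real variant. The sketch below is hypothetical code, not the actual 2SNV implementation (which uses a more elaborate statistical model and clustering); it scores co-occurrence with a hypergeometric tail probability:

```python
from math import comb

def cooccurrence_pvalue(n_reads, n_a, n_b, n_ab):
    """Probability of seeing >= n_ab reads carrying both SNVs if the
    two SNVs were independent errors (hypergeometric tail).

    n_reads -- total aligned reads covering both positions
    n_a     -- reads carrying SNV a
    n_b     -- reads carrying SNV b
    n_ab    -- reads carrying both a and b
    """
    total = comb(n_reads, n_b)
    p = 0.0
    for k in range(n_ab, min(n_a, n_b) + 1):
        # comb() returns 0 for infeasible overlaps, so no edge cases needed
        p += comb(n_a, k) * comb(n_reads - n_a, n_b - k) / total
    return p

# If 20 of 1000 reads carry each SNV, independence predicts an overlap
# of only ~0.4 reads: an overlap of 10 is overwhelming evidence of
# linkage (a real variant), while an overlap of 1 is consistent with noise.
linked = cooccurrence_pvalue(1000, 20, 20, 10)  # tiny p-value
noise = cooccurrence_pvalue(1000, 20, 20, 1)    # large p-value
```

A low p-value says the two mismatches travel together across reads, which random sequencing errors at two distant positions essentially never do; 2SNV builds on this intuition to separate rare variants from noise.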

We present 2SNV in a chapter of the conference proceedings from the 2016 RECOMB meeting. To benchmark the sensitivity of 2SNV, we performed a single-molecule sequencing experiment on a sample containing titrated levels of known viral mutant variants. We tested 2SNV on a dataset of PacBio reads from 10 independent clones carrying from 1 to 13 mutations each. These 10 clones were mixed at a geometric ratio, with a two-fold difference in frequency between consecutive clones, starting from a maximum frequency of 50% down to a minimum frequency of 0.1%. Our method accurately reconstructed a clone with a frequency of 0.2% and distinguished clones that differed in only two nucleotides located far apart on the genome. 2SNV outperforms existing methods for full-length viral mutant reconstruction.
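As a back-of-the-envelope check, the geometric mixture described above can be written out directly. These frequencies follow from the stated design (a two-fold dilution over 10 clones starting at 50%), not from the paper’s tables:

```python
# Two-fold geometric dilution series for the 10 clones: the most
# abundant clone is at 50%, and each subsequent clone has half the
# frequency of the previous one.
freqs = [50.0 / 2**i for i in range(10)]

for rank, f in enumerate(freqs, start=1):
    print(f"clone {rank:2d}: {f:.4f}%")

# The series runs 50%, 25%, ..., down to ~0.098% (the reported 0.1%
# minimum); the ~0.2% clone that the method still reconstructs is the
# second-rarest, at ~0.195%.
```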

For more information, see our book chapter, which is available for download through Springer Publications: http://link.springer.com/chapter/10.1007%2F978-3-319-31957-5_12.

In addition, the open source implementation of 2SNV, which was developed by Alexander Artyomenko, is freely available for download at http://alan.cs.gsu.edu/NGS/?q=content/2snv.

The full citation to our paper is: 

Artyomenko, Alexander; Wu, Nicholas C; Mangul, Serghei; Eskin, Eleazar; Sun, Ren; Zelikovsky, Alex. Long Single-Molecule Reads Can Resolve the Complexity of the Influenza Virus Composed of Rare, Closely Related Mutant Variants. Book chapter in: Research in Computational Molecular Biology, pp. 164-175, Springer International Publishing, 2016.


Overview of results using the 2SNV method, based on PacBio reads from 10 IAV clones. (a) 2SNV (orange) outperforms existing haplotype reconstruction tools (blue) in viral variant reconstruction. (b) Pairwise edit distances between clones, shown as a heat map. (c) Occurrence frequencies of the clone types.

UCLA Bioinformatics Training Environment: Programs and Philosophy

(This post is jointly authored with Alexander Hoffmann, Hilary Coller, Matteo Pellegrini, and Nelson Freimer.)

UCLA has a rich training environment for Bioinformatics that extends beyond the core academic programs.  For structured academic learning, UCLA offers an Undergraduate Bioinformatics Minor and a Bioinformatics Ph.D. Program.  In addition, UCLA coordinates multiple training programs, several of which are open to researchers from other institutions who are at all stages of their careers.  Many of these programs are either hosted or jointly sponsored by the Institute for Quantitative and Computational Biology (QCB) at UCLA, which is directed by Alexander Hoffmann (UCLA).

Over the past 10 years, driven by the ubiquity of genomics throughout the field, biology has become a data science. Every biomedical research institution has been challenged with supporting the analysis of genomic data generated by groups who traditionally have not cultivated substantial computational expertise. Many of our peer institutions delegate genomic data analyses to a specific Bioinformatics core group that operates on a “fee-for-service” model.

The Bioinformatics core “fee-for-service” model poses many problems.  First, complex issues that arise during analysis of genomic data are difficult to predict in advance.  Projects often require much more effort than anticipated by research groups, leading core groups to struggle with insufficient funds to cover the actual time spent on analysis.  Second, research groups utilizing the core often want to move the project in different directions than what was originally proposed.  In the long term, exploring additional aspects of data can be inefficient when data analysis is delegated to a core group on an as-needed basis.

At UCLA we follow a different approach.  We believe that research groups should receive the training and resources to analyze the genomic data that they generate.  This “training and collaboration” model is the best solution for efficiently completing projects and advancing skills in a research group.  Over the past ten years, UCLA has significantly invested in this training and collaboration model.  For example, UCLA’s Bioinformatics programs are explicitly organized to connect research groups with core groups across campus and provide infrastructure and training to students, faculty, and staff working in many different fields.

Bioinformatics training programs held at UCLA include:

  1. The Collaboratory. The Collaboratory of postdoctoral fellows, directed by Matteo Pellegrini (UCLA), provides an experimental and empirical research environment for bioscientists and computational scientists to collaboratively design and conduct experiments. Most bioscience laboratories have limited capabilities in large-scale data analysis. The Collaboratory’s main mission is to advance genomic data analysis by connecting UCLA bioscience faculty with QCB faculty and fellows.  The Collaboratory fellows are a select group of postdocs funded by the Collaboratory to engage in collaborative projects that leverage their specific expertise.

    The Collaboratory fellows are also responsible for organizing intensive tutorials designed to train UCLA students and postdocs in the latest next-generation sequence analysis techniques. In addition to providing computational expertise to bioscience researchers at UCLA, the Collaboratory also sets up and maintains a next-generation sequence data analysis server, and participants develop methodologies to process new types of data. The Collaboratory has a year-round schedule of workshops open to the Bioinformatics community.
  2. Bruins in Genomics Undergraduate Summer Research Program (B.I.G. Summer). B.I.G. Summer is an integrated undergraduate training and research program in genomics and bioinformatics at UCLA. Participants gain an intensive, practical experience in integrating quantitative and biological knowledge while learning how to pursue graduate degrees in the biological, biomedical or health sciences.  The program begins with two weeks of hands-on tutorial workshops that cover fundamental concepts in genomics critical to participation in today’s research.  The remaining weeks are focused on research.  Students work in pairs under the supervision of UCLA faculty mentors and QCB postdoctoral fellows.

    B.I.G. Summer offers unique opportunities that are often not available to undergraduates, including next generation sequencing analysis workshops, weekly science talks by senior researchers, a weekly journal club, professional development seminars, social activities, concluding poster sessions, and a GRE test prep course.  In addition, a special NIH-funded curriculum in neurogenomics, directed by Nelson Freimer and Eleazar Eskin, provides B.I.G. Summer participants with an intensive exposure to this rapidly growing field, in which UCLA is among the leading centers worldwide. B.I.G. Summer is organized by Alexander Hoffmann, Hilary Coller, Tracy Johnson, and Eleazar Eskin. This year, B.I.G. Summer is held from June 19th to August 11th, 2017.  The B.I.G. Summer Program is sponsored by the following generous institutions:

    UCOP for a UC-HBCU partnership Program in Genomics and Systems
    NIH NIBIB for NGS Data Analysis Skills for the Biosciences Pipeline R25EB022364
    NIH NIMH for Undergraduate Research Experience in Neuropsychiatric Genomics R25MH109172
  3. Undergraduate and MS Research Program. One of the best ways for faculty to provide training to undergraduate and graduate students is through mentorship in research labs. A substantial challenge to this approach is the increasing number of undergraduate students who want to get involved in research.  For example, there are many more Computer Science majors interested in research than can be absorbed by the number of faculty presently in the Department of Computer Science.  In order to meet rising undergraduate demand for research opportunities, we created an Undergraduate and Master’s student research program.

    This program connects researchers across campus with interested students from a variety of majors.  In doing so, we leverage UCLA’s strength in Bioinformatics to offer a greater number of research opportunities to undergraduates within and outside of the Department of Computer Science.  Each research opportunity posted on the webpage has a list of requirements, ranging from “one course in Bioinformatics or programming” to “a full year of coursework in programming.”  For students who have completed relevant coursework or are planning their academic schedule, this program provides a clearly defined path to becoming involved in research projects on campus.
  4. Informatics Center for Neurogenetics and Neurogenomics (ICNN). As with other areas of biomedical science, the post-genome era raises the prospect of transformational advances in neuroscience research. However, neuroscience faces special challenges in analysis, interpretation, and management of the vast quantities of information generated by genetic and genomic technologies. The phenotypic and organizational complexity of the nervous system calls for distinct analytical and informatics strategies and expertise.

    The ICNN, directed by Nelson Freimer and Giovanni Coppola, provides advanced analysis and informatics support to a highly interactive group of neuroscientists at UCLA who conduct basic, clinical, and translational research.  These investigators have access to excellent facilities for genetics and genomics experimentation, but the lack of corresponding resources in analysis and informatics constitutes a bottleneck in their research; ICNN fills this gap.  ICNN faculty are experts in statistical genetics, gene expression analysis, and bioinformatics, and they oversee the activities of highly trained staff members in accomplishing three goals: (1) providing expert consultation and analyses for neurogenetics and neurogenomics projects; (2) developing and maintaining a shared computing resource, incorporated within the large campus-wide computational cluster, for computation-intensive analyses, web servers, and state-of-the-art software tools for a wide range of applications (including user-friendly versions of public databases, as well as workstations on which ICNN users are trained to employ these tools); and (3) providing hands-on training in analysis and informatics to group users.
  5. Computational Genomics Summer Institute (CGSI). In 2015, Profs. Eleazar Eskin (UCLA), Eran Halperin (UCLA), John Novembre (The University of Chicago), and Ben Raphael (Princeton University) created CGSI. In collaboration with the Institute for Pure and Applied Mathematics (IPAM), led by Russ Caflisch, CGSI is developing a flexible program for improving education and enhancing collaboration in Bioinformatics research. The goal of this summer research program is to bring together mathematical and computational scientists, sequencing technology developers in both industry and academia, and the biologists who use the instruments for particular research applications.

    CGSI is a unique opportunity for junior and senior scholars in Bioinformatics to foster collaborative relationships, accelerate problem-solving, and unleash the full potential of their projects.  The program facilitates interdisciplinary collaboration and training with a mix of formal and informal events. For example, senior scholars present traditional research talks and tutorials, while junior scholars present mini-presentations and organize journal clubs.  CGSI fosters interactions over an extended period of time and is laying crucial groundwork to advance the mathematical foundations of this exciting field.  This year, CGSI will be held from July 6th-26th, 2017. CGSI is made possible by National Institutes of Health grant GM112625.

“Give a Man a Fish, and You Feed Him for a Day. Teach a Man to Fish, and You Feed Him for a Lifetime.”

Writing Tips: Why we publish Methods Papers

by Eleazar Eskin

Computational genomics is a field where many diverse academic groups collaborate, each bringing to a project their own distinct academic culture.  In particular, each academic discipline involved in computational genomics has its own publication strategy in terms of the types of papers it publishes and how it packages methods and results in those papers.  Publishing papers is extremely important to careers in academia and science, because scientists are reviewed for tenure or promotion based on their publication records.  An important factor in this review (unfortunately) is the impact factor of the journals that we publish in.  Here, we describe our lab’s publication strategy and the reasoning behind it.

Our lab is a computational lab, and the main contribution of our lab to Bioinformatics is the development of methods for solving important biological problems, particularly in the area of genetics.  These new methods are implemented in software packages that (hopefully) are used by others to enable biological discovery.  Naturally, the key papers our group produces are papers that describe and explain potential applications of these new methods.

Roughly speaking, there are two strategies for publishing methods in our field.  The first is to focus on writing methods papers that are primarily dedicated to describing the computational advances.  The second is to focus on publishing our novel methods as part of more comprehensive papers that present a biological contribution; in this case, the method is primarily described in the supplementary materials. Over the span of my career, I have seen computational researchers come under increasing pressure to follow the second strategy in order to have papers published in high-impact journals.  Unfortunately, following the second strategy often delays publication (sometimes for years), because peer review often involves applying the method to a new dataset and/or performing extensive functional validation.

Our group primarily follows the first strategy.  In addition, we work with other groups and, as collaborators, publish papers focused on biological contributions.  This strategy works out well for us, and we feel that writing methods-focused papers is the best way for us to make a contribution to science.  We hope that other computational biology groups will follow our example and publish more methods papers.

Here are some of the reasons we feel this is a good strategy:

  1. Doing Justice to our Work. We can fully explain our methods only in papers dedicated to methodology. Since our contribution is methods, the best way to push the science forward is to clearly describe our method and the context of its development and application. In a dedicated paper, we are most likely to have enough space to fully describe the method and explain how the approach works.  Methods papers also have the space (and are typically required) to compare the proposed method with previous methods. This comparison puts the method’s performance in perspective relative to the work of others.  Methods papers ideally provide enough detail that other groups can build upon our method and compare their results to our published results. Sharing authorship on these papers also allows students who were involved in the development of these methods to demonstrate their strong technical skills.  In my view, computational biologists should be evaluated by the quality and impact of their methodology development, and departments should consider this impact when making hiring decisions.  The impact can be measured by the number of users of the software implementing the methods, the number of citations of the papers describing the methods, and the discoveries that these methods have enabled.  These factors are more important than the impact factor of the journals where the methods are published.
  2. Self-Determination of Publishing. There are no outside bottlenecks preventing us from finishing our papers quickly, and we can control the publication process of our papers. A methods paper is primarily written by members of our lab, and the authors evaluate the method using both simulated and established datasets.  This structure means we need not wait for outside collaborators or experiments to finish.  Finishing a paper faster means that we have more time to work on new papers.
  3. Increased Number and Improved Quality of Collaborations. A methods paper is a widely distributed, often freely available, finished product, and many prospective collaborators approach us after reading a paper from our group. More importantly, in our collaborations, we have very little competition over authorship.  Students in the group are happy to work hard on a project as middle authors on the collaborative paper, because they are already first authors on their own methods papers.  Our methods-development students are not competing for credit with the students in the collaborator’s group.
  4. Project Longevity. Writing a methods paper forces the method to be finished, evaluated, and documented, and publishing the paper forces us to release the software. This process gives the project more longevity. Once the method is fully developed, new students can easily pick it up and build upon it.  Once a student leaves the lab, the method can persist with new lab members because it is stable, well-documented, and debugged.  Long after they have left the lab, many of the students who wrote methods papers in our group continue to author papers related to applications of their methods.

In full disclosure, we do identify one negative aspect of the methods-paper publishing strategy.  High-impact papers require collaborations, and it is less likely that methods developers will publish in high-impact journals as senior or corresponding authors.  While it occurs less often, members of our lab do occasionally gain senior authorship in high-impact journals through collaboration.  We have found that the combination of methods papers, where you are the senior or first author, and high-impact papers, where you have middle authorship and it is clear that your role was the application of the method, is overall a positive outcome and looks good in your publication record.

For example, Eran Halperin and I published a 2004 paper in the lower-impact journal Bioinformatics that described the HAP haplotype phasing method.  The HAP method was later used in a Perlegen-led paper that was published, with Halperin and me as co-authors, in the notably high-impact journal Science. The 2005 Science paper helped me get my job at UCLA; my contribution was clear because I had also authored the methods paper in Bioinformatics.

Our lab has produced several other examples of methods papers paired with high-impact collaborations. Kang et al. (2008) presents the EMMA method in Genetics (impact factor of 5.963), and a collaboration with the Jake Lusis group on the HMDP presents results in Genome Research (impact factor of 11.351) (Bennett et al. 2010).  More recently, we published the CAVIAR method (Hormozdiari et al., 2014) in Genetics and collaborated with Dan Geschwind’s group to apply the method in a Nature paper (Won et al. 2016).

Citations of papers mentioned in this post:

Won, Hyejung; de la Torre-Ubieta, Luis; Stein, Jason L; Parikshak, Neelroop N; Huang, Jerry; Opland, Carli K; Gandal, Michael J; Sutton, Gavin J; Hormozdiari, Farhad; Lu, Daning; Lee, Changhoon; Eskin, Eleazar; Voineagu, Irina; Ernst, Jason; Geschwind, Daniel H. Chromosome conformation elucidates regulatory relationships in developing human brain. In: Nature, 538 (7626), pp. 523-527, 2016, ISSN: 1476-4687.

Hormozdiari, Farhad; Kostem, Emrah; Kang, Eun Yong; Pasaniuc, Bogdan; Eskin, Eleazar. Identifying causal variants at Loci with multiple signals of association. In: Genetics, 198 (2), pp. 497-508, 2014, ISSN: 1943-2631.

Bennett, Brian J; Farber, Charles R; Orozco, Luz; Kang, Hyun Min; Ghazalpour, Anatole; Siemers, Nathan; Neubauer, Michael; Neuhaus, Isaac; Yordanova, Roumyana; Guan, Bo; Truong, Amy; Yang, Wen-Pin; He, Aiqing; Kayne, Paul; Gargalovic, Peter; Kirchgessner, Todd; Pan, Calvin; Castellani, Lawrence W; Kostem, Emrah; Furlotte, Nicholas; Drake, Thomas A; Eskin, Eleazar; Lusis, Aldons J. A high-resolution association mapping panel for the dissection of complex traits in mice. In: Genome Res, 20 (2), pp. 281-90, 2010, ISSN: 1549-5469.

Kang, Hyun Min; Ye, Chun; Eskin, Eleazar. Accurate discovery of expression quantitative trait loci under confounding from spurious and genuine regulatory hotspots. In: Genetics, 180 (4), pp. 1909-25, 2008, ISSN: 0016-6731.

Hinds, David A; Stuve, Laura L; Nilsen, Geoffrey B; Halperin, Eran; Eskin, Eleazar; Ballinger, Dennis G; Frazer, Kelly A; Cox, David R. Whole-genome patterns of common DNA variation in three human populations. In: Science, 307 (5712), pp. 1072-9, 2005, ISSN: 1095-9203.

Halperin, Eran; Eskin, Eleazar. Haplotype reconstruction from genotype data using Imperfect Phylogeny. In: Bioinformatics, 20 (12), pp. 1842-9, 2004, ISSN: 1367-4803.