Festival of Genomics: SD

July 26, 2017

By: Stephanie Allen

The 2017 Festival of Genomics in San Diego was a great event. The cast of speakers was top notch and the topics fascinating. Front Line Genomics is an impressive operation whose mission is to deliver the benefits of genomics faster. That mission is being realized through their website, magazine and the multiple Festivals of Genomics that they organize. It was my first time attending one of their Festivals and I didn’t know what to expect. I went without a strict agenda and decided to take the approach of going where the day would lead me. Little did I realize that the agenda was jam-packed! On many occasions it was hard to decide which talk to attend with so much happening in parallel. I missed the talk on “How to genetically engineer a human in your garage.” Perhaps I could have finally learned how to clone myself and then would have been able to attend more than one talk at a time.

The main themes throughout the two-day festival included precision therapies, research & development, enabling data and personalizing medicine. Where was I to start? As a newbie to concepts like machine learning and artificial intelligence in genomics, I’ve heard from colleagues and read from multiple sources that access to data is a major bottleneck. This makes sense, especially when we think about it in the context of patient data. There are privacy issues, and issues around how to make large volumes of data available. Resolving these bottlenecks is critical for future advancements in medicine, so attending the talk on “Open Collaborative Data in Science is here to stay: What next for medicine?” was a good start.

As with all of the talks I attended, the panelists were impressive and had amazing insights and vision. I especially liked hearing not just about where things are now but about where we are heading. Nazneen Aziz, the executive director of the Research Bank at Kaiser Permanente, discussed their national biobanking program. She told us about their goal to make available de-identified data detailing years of health histories for 500,000 members, along with DNA samples, to help researchers better understand both the genetic and environmental factors that determine disease and response to medicine. They now have an active program for researchers to access data and for volunteers to anonymously donate samples and allow access to their de-identified health histories. They are already more than halfway to their goal, having banked data and samples for over 250,000 Kaiser Permanente members.

Brendan Keating, an assistant professor in the departments of surgery and pediatrics at the University of Pennsylvania, talked about his work using meta-analysis to improve treatments for organ transplant patients. He was critical to the successful launch of the International Genetics & Translational Research in Transplantation Network (iGeneTRAiN). The consortium has made DNA samples and outcomes data for more than 30,000 patients available to researchers for large-scale studies that aim to minimize chronic graft rejection. Dr. Keating is also active in the Electronic Medical Records and Genomics (eMERGE) network, a National Institutes of Health (NIH)-organized and funded consortium of U.S. medical research institutions that focuses on biorepositories linked to electronic medical record (EMR) systems for biomedical research. The consistent message from all speakers on the panel was that databases such as these are either already available at their respective institutions or are being brought online. What seems to be the next order of business for biobanks is the process around returning data to patients. As Dr. Aziz explained, of the 110,000 volunteers they have genotyped so far, 1-5% have genetic diseases. Some of the critical questions are still unanswered: Should data be returned to the patients? What data should be returned? What are the implications of the data for the patient? What support can and should be provided along with it?

The next talk I attended was one that no molecular biologist could have resisted: “Effective target hunting and validation for the genomic explorer.” The themes explored here got right to the point of precision medicine. James Christensen, Senior Vice President and Chief Scientific Officer at Mirati Therapeutics, discussed the advances that have been made in cancer diagnostics and targeted therapies. At Mirati, they have developed diagnostic tests that identify the dangerous genomic alterations that cause cancer, and drugs are now being developed to target those specific mutations. It was not so long ago that we depended solely on tissue tests to diagnose cancer, and most current standards of care for cancer are not at all specific. Many of the current treatments involve poisoning both healthy and tumor tissue with radiation and chemotherapy. Cancer is a genetic disease, and these treatments seem outdated when you consider all of the advances in genomics made in the past couple of decades. I think we can all agree that seeing targeted therapies on the horizon is encouraging.

Francesca Milletti, a Senior Principal Scientist at Roche in New York, talked about how to optimize the search for effective targets. In her discussion she brought up many good points we should all consider. She asked, “How do we better understand the target gene? We need to think about the environment of the gene and the bigger networks. How do we find a needle in a haystack? Sometimes just part of the mutation is the target we seek.” At Roche, she is working on informative algorithms that map the functions of gene targets to streamline the process of analyzing them. The algorithms help to better identify new drug targets and biomarkers, prioritize drug indications, interpret data from preclinical studies, and elucidate drug mechanisms of action. The type of work she is doing has the potential to shave years off the drug development process. For many patients, the only hope is a novel drug that has yet to be developed. It is reassuring to hear that a big pharma like Roche is taking this smart approach to drug development. It is a good day when all of us can celebrate shortened drug development cycles.

With so much competing content it was hard for me to choose the topic that topped my list. Although it was challenging to decide, hands down the lunch and learn on “Mapping the Economic DNA of a Genomics Cluster, a San Diego Case Study,” with some of San Diego’s local genomics superstars, was my favorite. As a San Diego local engaged in the genomics industry, the panel and topic had me smiling all the way through the discussion. Where do I start with so much to say? This deserves its own post! Ok, real-time brainstorming: that content is coming soon! Back to the superstar panel. The amazing people I have the pleasure of knowing personally include Dawn Barry, Vice President of Applied Genomics at Illumina; Michael Heltzen, serial tech entrepreneur; and Ashley Van Zeeland, Chief Technology Officer of Human Longevity. They are always an inspiration. All panelists had unique insights on the recent case study performed by the San Diego Regional Economic Development Corporation, which provided hard data on the state of the genomics industry here in San Diego. Among top life science U.S. metros, San Diego’s genomics industry ranks #2 overall. San Diego is leading the charge with more than 115 genomics-related firms, more than 10,000 jobs and a $5.6 billion annual economic impact. When the data is stacked up, it is clear that San Diego is the epicenter of genomics (1). There has always been a buzz around town about San Diego being a hub for genomics. Illumina and Craig Venter have put us on the map. This study confirms what a big deal we are. Our larger combined mission to usher in a new era of healthcare will be the real impact. It is beyond exciting to be a part of it!

And now the heady stuff. Just reading the title of the next talk I attended sent my head reeling: “How are data science and machine learning transforming biology?” An entire separate industry has merged with biology in recent years, and to say that industry is transforming biology is putting it lightly! Mark DePristo, head of deep learning for genetics and genomics at Google Brain; Christoph Lippert, a bioinformatics scientist at Human Longevity; and Jill Mesirov, associate vice chancellor for computational health sciences at UCSD, delivered an intense and informative discussion on life science problems and how we can apply deep learning technologies to solve them. Take, for example, the fact that genetics alone is only partially informative; more insight can be gained within an environmental context. Electronic medical records can serve as a proxy for environmental factors, and sensors can be used to collect the data. The issue of complexity will only grow as more data becomes available. Consider also that many of the biological models we have now are too simplistic. Another issue in biology is that context is critically important, as is the case in gene expression: a detrimental genetic characteristic could be expressed in the liver but not in the heart, for example. Machine learning is good at dealing with these types of problems because it relaxes the constraint that patterns be linear. Imagine now that these issues of collection, complexity and context of data are not only solvable, but solutions are around the corner. The times they are a-changin’.

It is good to know that machine learning applications are now considered a viable solution to real-world problems in the life sciences. The final talk I attended, Mark DePristo’s “Deep learning in medicine: an introduction and applications to next-generation sequencing and disease diagnostics,” was intriguing. His talk really put in context for me the type of problems that machine learning technology can solve. He discussed a study on inaccuracies in the diagnosis of diabetic retinopathy. The standard practice is for an ophthalmologist to analyze photographs of the retinal fundus and assign a standardized label indicating where on the spectrum from minimal to severe the patient’s condition falls. It was found that when different ophthalmologists look at the same photograph, the severity label varies. Even more alarming, the same ophthalmologist will look at the same photograph at different times on the same day and label it differently. When a deep learning algorithm was applied to analyze the images, the error rate fell from 25% to 6%. Tools like this will not only help patients but can also relieve physicians of some of their more routine work, reducing their workload and saving time in their busy schedules.

I thought that was pretty cool. But what really brought it home for me was his discussion of neural networks and how they work. He did a brilliant job of breaking it down, and it helped me make sense of it. I suppose having some fundamental knowledge of neuroscience and how information is processed in our brains might have helped, but it is not necessary. Computational neural networks are information highways that essentially take information through an iterative process of elimination, based on predetermined inputs, to categorize unknowns and place them in appropriate buckets. Mark described the process as similar to the neural networks in our brains that are the foundation of how we know a cat is a cat and a dog is a dog. At a very young age we started cataloguing images of cats and dogs with information we received from our environment until eventually we learned how to distinguish one from the other. The machine is doing the same thing. In the case of retinal fundus photos, the machine looks for things like color variation to categorize each image. That’s it. Easy peasy. Well, that’s neural networks, and only one of the many machine learning methods. Do you want to know what’s even more cool? Anybody, anywhere can tinker with a free, open source neural network to really dig in and find out what makes it tick. Visit playground.tensorflow.org and go play!
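If you want to go one step past the playground, here is a minimal sketch of the same idea, written with TensorFlow’s Keras API. This is my own illustration, not something from the talk: a tiny network learns to sort points into two “buckets,” much like the toy datasets on playground.tensorflow.org. The dataset, layer sizes and training settings are arbitrary choices for demonstration.

```python
# A toy "bucketing" network, in the spirit of playground.tensorflow.org.
# Points inside a circle are one class, points outside are the other --
# a boundary no straight line could draw.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(1000, 2)).astype("float32")
labels = (points[:, 0] ** 2 + points[:, 1] ** 2 < 0.5).astype("float32")

# Two small hidden layers iteratively transform each input;
# the final sigmoid places it in the 0 or 1 bucket.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# The "cataloguing" phase: show the network labeled examples
# and let it adjust its internal weights.
model.fit(points, labels, epochs=50, verbose=0)

# It can now bucket points it has never seen before.
print(model.predict(np.array([[0.0, 0.0], [0.9, 0.9]], dtype="float32")))
```

Swap in more layers or a different activation and watch how the decisions change; that kind of tinkering is exactly what the playground lets you do right in the browser.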

There you have it. That was my day at the Festival. It was a great experience; I learned a great deal and will definitely be back. I had family in town and missed the second day, which I imagine was chock-full of more amazing content. The next Festival is in Boston, and then London. My advice to you is to definitely attend and chart out a plan to best maximize your time. The content will not disappoint; it’s only a matter of what to prioritize. Don’t follow my lead: go to the “genetically engineer a human in your garage” talk so you can send your army of clones to attend all of the talks!