The upcoming RegenBase meeting (www.RegenBase.org), held just before the Society for Neuroscience meeting, will work on minimal information standards for the regeneration field. If you want to see the draft standards, please subscribe to this blog. As you can see, it is not super active, so you will not be spammed. Stay tuned.
Last week's CHI HCA meeting in San Francisco was interesting for a couple of reasons: 1) the emergence of small, personal HCA machines suitable for individual labs, or for rapid expansion of throughput in core facilities, allowing HTS with HCA; 2) expanded use of machine learning to detect complex features, which requires heavy-duty image analysis (clusters?) after image acquisition. Steve Altschuler's lab always has a novel take on analyzing image data; look for "PhenoRipper" in the near future in an HCA lab near you.
NICHD is launching a vision process to guide planning for the next ten years. They are running workshops and writing white papers. The first workshop was on "plasticity". Although this could have been writ large, it was pretty focused on neural plasticity. The workshop was brilliantly organized, I have to say: six focus groups tackling a small set of questions in three areas: basic science, clinical science, and translational science. Two focus groups worked in parallel in each area to ensure that each problem was well vetted. Equally important, each focus group was composed of both basic and clinical scientists, including a large percentage of assistant professors. At the start there were some keynote talks to warm up the crowd.
Gordon Fishell of NYU said something really interesting: "The fundamental element that drives development of the brain is the cell and not the gene. So to understand brain disease, you need to understand the role of the cells."
I really liked this and will use it henceforth, with attribution. It seems like you can extend this idea in many ways. Just swap out "brain disease" for your favorite topic: development, axon guidance, cell migration, map formation, whatever.
What does it mean when applied to how the "intrinsic state" of a mature neuron is maintained? We have found that it is really, really, really hard to overexpress transcription factors in neurons, especially KLFs. We are starting to think that there is some very extreme negative feedback system that prevents altering the transcriptional state of our neurons. Until we understand this, we probably won't be able to use this entry point to enhance axon regeneration. But it seems like a corollary of Gord's axiom.
December’s Cold Spring Harbor meeting on “Automated Imaging & High-Throughput Phenotyping” provided a very different take on the future of the high content analysis field than is seen at other meetings that focus on using HCA for medium and high throughput screening. It was amusing how proud the geeks at this meeting were to be geeks!
The majority of the talks focused on four model systems: zebrafish, C. elegans, Drosophila, and Arabidopsis. Perhaps because all of these models are used to study development, the imaging, analysis, and feature-recognition problems and solutions were very similar. Huge 4-D data sets are acquired, often multiple terabytes, requiring registration of adjacent tiles and planes in ways that do not propagate errors across the image stack. Pavel Tomancak (MPI-CBG, Dresden) gave an amazing talk describing a global solution, with examples from light and electron microscopy. Zhirong Bao (SKI) and John Murray (Penn) showed how microfluidics can be used to allow high-throughput imaging of living worms.
Investigators using off-the-shelf HCA systems are forced to use relatively simple analysis tools that measure minimal cell features, such as width and length or translocation of markers from one compartment to another. These measurements are blind to many features that are obvious even to the untrained eye, such as curved versus straight cell processes or striped patterns of mRNAs or proteins in embryos. Many investigators at the meeting are using open-source analysis packages like CellProfiler (Broad), CellCognition (ETH), Fiji (MPI/ETH/EMBL), Micro-Manager (UCSF), and Bisque (UCSB) to identify more complex features.
The enormous data sets require methods to reduce the analytical burden. One approach is to scan at low magnification, identify cells or features of interest, and then automatically zoom in to acquire higher-magnification images. On the analysis side, cell or feature classifiers, principal component analysis, and support vector machines are all widely used to analyze HCA data, and variations on the Tanimoto coefficient are then used to help cluster the data. One beautiful example came from Yolanda Chong (Toronto), who used 69 SVM classifiers to define the morphology of budding yeast. Another was Uwe Ohler (Duke), who used a small set of classifiers to describe expression patterns in fly embryos.
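To make the clustering step concrete, here is a minimal Python sketch of the Tanimoto (Jaccard) coefficient applied to binary classifier outputs. The per-cell feature vectors below are hypothetical illustrations, not data from any of the talks; real pipelines would feed in the outputs of many SVM classifiers per cell.

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient between two binary feature vectors:
    shared positive features divided by total features present in either."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 0.0

# Hypothetical per-cell classifier calls (1 = classifier fired, 0 = not)
cell_a = [1, 0, 1, 1, 0, 1]
cell_b = [1, 0, 1, 0, 0, 1]
cell_c = [0, 1, 0, 0, 1, 0]

print(tanimoto(cell_a, cell_b))  # 0.75 -- similar phenotypes, cluster together
print(tanimoto(cell_a, cell_c))  # 0.0  -- no shared features
```

Pairwise coefficients like these can then be handed to any standard clustering routine (hierarchical, k-medoids, etc.) to group cells by phenotype.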
Talks by Phil Keller (HHMI) and Thai Truong (Scott Fraser’s group at Caltech) provided spectacular examples of how light sheet microscopy can give very high-resolution 4D movies of developing embryos that cover very large volumes while minimizing bleaching and toxicity. These types of microscopes open a new quadrant of the time/resolution/image volume domain. This approach will likely revolutionize the analysis of organisms, organs and tissues for studies in development, mechanisms of disease and drug discovery.
In the July Biotechniques, Jeff Lichtman said most biologists “hope to fail.” He said, “because they like their hypothesis; being unable to disprove it gives them confidence that they’re on the right track.”
He prefers asking questions and looking at big data sets, i.e., what happens in animals. My kind of guy. He went on to paraphrase Viktor Hamburger, who said “of all his teachers, the only one who was always right was the chick embryo.” I wonder what Viktor would think about HCA?
This week I went to an HTS meeting in DC. Most of the participants were refugees from pharma research groups that have cratered as the drug industry has begun outsourcing research to academics. Don’t the MBAs who now run big pharma know that you get what you pay for? The goal of the meeting was to continue to build a manual for HTS (http://assay.nih.gov). The manual is an incredible resource for people trying to develop assays for screening, regardless of the scale of the project. Check it out.
Jessica and Dario have been leading our efforts to analyze data from different deep sequencing projects in the lab. This has been much more challenging than I imagined, and I thought it was going to be difficult! After many months, Arpit Mehta in the HIHG and Frank Kuo, a UM medical student, solved our pipeline problems. It is pretty alarming to me that it takes days of supercomputer time to run a basic analysis using TopHat, Bowtie, and Cufflinks from the UC Berkeley Center for Bioinformatics and Computational Biology. The most interesting findings have been the many novel or unexpected alternatively spliced forms of genes expressed in neurons. It looks like this will turn the dogma about many signaling pathways upside down. Time to get to work with the VTI again.
We are using “Pegasus”, a new Linux-based supercomputer that belongs to the Center for Computational Science, to do RNAseq analysis. Pegasus has 5,000 CPUs and lives at the Terremark Network Access Point (NAP) of the Americas in downtown Miami. The NAP has a ten gigabit HPC network connection to the UM campuses. The RNAseq analysis is slow – typically taking 12 hours per sample on Pegasus – kind of scary.
Yesterday I was listening to Ubbo, Saminda and Stephan argue about classes and instances for the BAO. It made my head hurt. But it also made me think.
Ontologies are all the rage: gene ontology, wine ontology, and even a tick gross anatomy ontology. There are many essays, papers, and web sites explaining why ontologies are useful. But I think the basic reason is that areas of knowledge are so complex and rich that it is impossible for people and computers to think about them without someone taking the time to carefully describe a knowledge area’s concepts and their relationships, along with specific examples (instances).
Axon regeneration is an area that could benefit from an ontology. What are the concepts (classes)? What are their properties (features, attributes)? What are the instances? Are they genes? What else would be an instance in the axon regeneration domain? How could an ontology dynamically incorporate changing information about molecular pathways?
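The class/instance distinction that sparks these questions can be sketched in a few lines of Python. Everything here is illustrative: the class names, the instance names, and the properties are hypothetical examples, not a proposed axon regeneration ontology.

```python
class OntologyClass:
    """A concept (class) in the ontology, with a single is-a parent."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # is-a relationship

    def ancestors(self):
        """Walk the is-a chain up to the root."""
        node, chain = self.parent, []
        while node:
            chain.append(node.name)
            node = node.parent
        return chain

class Instance:
    """A specific example of a class, with its own property values."""
    def __init__(self, name, cls, properties=None):
        self.name = name
        self.cls = cls  # instance-of relationship
        self.properties = properties or {}

# Hypothetical concepts (classes)
entity      = OntologyClass("Entity")
gene        = OntologyClass("Gene", parent=entity)
neuron_type = OntologyClass("NeuronType", parent=entity)

# Hypothetical instances
klf4 = Instance("KLF4", gene, {"role": "axon growth suppressor"})
rgc  = Instance("RetinalGanglionCell", neuron_type, {"regenerates": False})

print(klf4.cls.name)         # Gene
print(klf4.cls.ancestors())  # ['Entity']
```

Real ontology work would use OWL or a similar language rather than ad hoc code, but the questions are the same: which things are classes, which are instances, and which relationships (is-a, instance-of, has-property) connect them.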
Once we build an axon regeneration ontology, how would we exploit it? Would it allow us to most efficiently design combination therapies? Would it explain why some neurons can regenerate and others can’t?