InterMine 2017 Fall Workshop – Biological Data Analysis using InterMine

The University of Cambridge is hosting an InterMine workshop on 27 October 2017.

The course is aimed at bench biologists and bioinformaticians who need to analyse their own data against large biological datasets, or who need to search against several biological datasets to gain knowledge of a gene/gene set, biological process or function. The exercises will mainly use the fly, human and mouse databases, but the course is applicable to anyone working with data for which an InterMine database is available.

The workshop is composed of two parts:

Part 1 (2.5 – 3 hours) will introduce participants to all aspects of the user interface, starting with some simple exercises and building up to more complex analyses that combine several tools and include comparative analysis across organisms. No previous experience is necessary for this part of the workshop.

The following features of the InterMine web interface will be covered:

  • Search interfaces and the advanced query builder
  • Automated analysis of sets, e.g. gene sets, including enrichment statistics
  • Analysis workflows
  • Tools for cross-organism analysis between InterMine databases
  • Web services

Part 2 (1 hour) will focus on the InterMine API and introduce running InterMine searches through Python and Perl scripts. Complete beginners are welcome, but some basic knowledge of Perl and/or Python would be an advantage. The InterMineR package will also be introduced. Those not interested in this part of the workshop are welcome to leave; alternatively, a more advanced exercise using the web interface will be available.
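
For a flavour of what Part 2 covers, here is a minimal sketch using the intermine Python client (installable with pip install intermine); the mine URL and gene symbol below are just examples, so substitute the mine and identifiers you care about.

from intermine.webservice import Service

# Connect to a mine's web service root; swap in the URL of the mine you use.
service = Service("https://www.flymine.org/flymine/service")

# Build a simple query: choose the columns to return and add a constraint.
query = service.new_query("Gene")
query.add_view("primaryIdentifier", "symbol", "organism.name")
query.add_constraint("symbol", "=", "zen")  # example fly gene symbol

# Print the matching rows.
for row in query.rows():
    print(row["primaryIdentifier"], row["symbol"], row["organism.name"])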

See here for details: https://www.gen.cam.ac.uk/events/intermine-training


InterMine 2.0 – Summer update

InterMine 2.0 is a large, disruptive release scheduled for this autumn, before the Xmas holidays.

There are lots of exciting features, but they will require InterMine maintainers to update their mines. Usually devs can update their mines with a simple git pull. This time, they will need to take some specific actions to make sure their software is up to date.

Model changes

Several changes and additions to the core InterMine data model were discussed and approved by the community. See here for specific details on the new core data model.

This means an InterMine 2.0 webapp will most likely require a database built with InterMine 2.0 code.

Blue Genes

InterMine 2.0 will come with detailed instructions on how to deploy the new InterMine user interface.

Come to the next InterMine community call to see a demo of the latest features!

Gradle

We’ve got a new software build system in the works. This will change the commands you use to build a data source and deploy your webapp. See a previous blog post for details.

Closer to the time, we'll release detailed instructions on how to update your build system to work with the new tools. As always, the InterMine team will be on hand on the community calls, the dev list and chat to answer questions and help with any issues.

We hope to make the transition as easy as possible!

Software Dependencies

All software dependencies will need to be updated to at least the following versions:

  • Java 8
  • Tomcat 8.5.x
  • Postgres 9.3+

API Changes

We are making some non-backwards compatible changes to our API.

/user/queries will be moved to /queries

These three end points have a parameter called xml which holds the XML query. We are going to rename this parameter to query (as we now accept JSON queries!) to match the syntax of all the other end points:

/query/upload
/template/upload
/user/queries (POST)
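
As a hedged illustration of the change for these end points, here is roughly what an upload might look like before and after the rename; the query XML and token below are placeholders, so check the web service documentation for the exact request format.

import requests

SERVICE = "https://www.flymine.org/flymine/service"  # example mine
QUERY_XML = '<query name="example" model="genomic" view="Gene.primaryIdentifier Gene.symbol"/>'

# Before InterMine 2.0, the query was passed in a parameter named "xml":
# requests.post(SERVICE + "/query/upload", data={"xml": QUERY_XML, "token": "YOUR-TOKEN"})

# From InterMine 2.0, the same payload goes in a parameter named "query":
resp = requests.post(SERVICE + "/query/upload",
                     data={"query": QUERY_XML, "token": "YOUR-TOKEN"})
print(resp.status_code, resp.text)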

If this update is going to cause you any trouble at all, please let us know ASAP!


If you have any questions or concerns about any of these changes, please contact us or come along to the community calls.


Toxygates: exposing toxicogenomics datasets and linking with InterMine

This is a guest post from our colleague Johan Nyström-Persson, who works on Toxygates at NIBIOHN in Japan.

Toxygates (http://toxygates.nibiohn.go.jp) has been developed as a user-friendly toxicogenomics analysis platform at the Mizuguchi Lab, National Institutes of Biomedical Innovation, Health and Nutrition (NIBIOHN) in Osaka since 2012. The first public release was in 2013. At this time, the main focus of Toxygates was exposing the Open TG-GATEs dataset, a large, systematically organised toxicogenomics dataset compiled during more than a decade by the Japanese Toxicogenomics Project (http://toxico.nibiohn.go.jp). This dataset consists of over 24,000 microarray samples. To make use of such a large dataset without time-consuming data manipulation and programming, it is necessary to have a rich user interface and access to many kinds of secondary data.

Toxygates allows anyone with a web browser to explore and analyse this data in context. Various kinds of filtering and statistical testing are available, allowing users to discover and refine gene sets of interest, with respect to particular compounds. For a reasonably sized data selection, hierarchical clustering and heat-maps can be displayed directly in the browser. Through TargetMine (http://targetmine.nibiohn.go.jp) integration (based on the InterMine framework), enrichment of various kinds is possible. Compounds can also be ranked according to how they influence genes of interest.

To support all of these functions, we came up with the concept of a “hybrid” data model which recognises that, while gene expression values by themselves may be viewed as a large matrix with a flat structure, secondary annotations of genes and samples, such as
proteins, pathways, GO terms or pathological findings, have an open-ended structure. Thus, we combine an efficient key-value store (for gene expressions) with RDF and linked data (for gene and sample annotations) to allow for both high performance and a flexible data structure.

Today, the project continues to evolve in new directions as a general transcriptomics data analysis platform. We have integrated Toxygates not only with TargetMine, but also with HumanMine, RatMine and MouseMine. More recently, users have also been able to upload their own transcriptomics data and analyse it in context alongside Open TG-GATEs data. We may also add more datasets in the future.

The current project members are Kenji Mizuguchi (project leader) and Chen Yi-An (NIBIOHN), Johan Nyström-Persson and Yuji Kosugi (Level Five), and Yayoi Natsume-Kitatani and Yoshinobu Igarashi (NIBIOHN).

InterMine 2.0 – Gradle

NB: To upgrade to InterMine 2.0 you must not have custom code in the core InterMine repository.

We have been planning out the tasks for future InterMine, and there are a lot of exciting projects on the horizon: making InterMine more FAIR, putting InterMine in Docker and the cloud, our beautiful new user interface, the Semantic Web and so on.

However, a prerequisite for these exciting features is updating our build system. We are still using Ant, and the build has grown, let's say, "organically" over the years, making updates and maintenance expensive and tedious.

After careful consideration and a very serious look at other build and dependency management systems, we've decided on Gradle. Gradle is hugely popular, has a great community, and is used by projects such as Android, Spring and Hibernate. We were really impressed with Gradle's power and flexibility; being able to script our build in Gradle will give us what we need to accomplish all our lofty goals.

Our goals for moving to Gradle

Managed dependencies

Our dependencies are currently managed by hand: if we need a third-party library, we copy the JAR manually into our /lib directory. This is unsustainable for modern software and has resulted in lots of duplication and general heartache. With Gradle we can instead fetch dependencies automatically from online repositories.

A smaller repository

Implementing Gradle will allow us to replace many of our custom Ant-based facilities with Gradle infrastructure and widely-supported plugins. Our codebase will become smaller and more maintainable as a result.

A faster build

Currently, because InterMine implements its own project dependency system in Ant, every InterMine JAR is compiled on every build and every time a webapp is deployed. This is unnecessary and wastes developer time. We will use Gradle's sophisticated dependency management to make the InterMine build more robust and efficient.

Maintainable, extensible, documented

The current Ant-based InterMine build system has been extended over the years in an ad-hoc manner, and unfortunately no documentation exists. Adding a new Ant task is a challenge, and debugging the current build process is time-consuming and difficult. Moving to Gradle will base InterMine on a well-maintained, extensible, documented and widely-used build system.

Simpler to run test suite

Currently, developers have to create property files and databases to run the full system tests, steps that are not straightforward. With Gradle's help we hope to make this much easier, so that the wider InterMine community can benefit from running the InterMine test suite on their installations and code patches.

Simplicity

Finally, Gradle keeps tests in the same project as the main source, so the number of separate projects will be cut in half. In addition, tests will be run automatically as part of the build.

As an example, here is a new standard Gradle directory layout:

src/main/java
src/main/resources
src/test/java
src/test/resources

Currently our main and test projects are in different packages, but in InterMine 2.0 these will be unified under single projects, as per standard practice.

What does this mean for you and your InterMine?

If you are currently maintaining an InterMine, moving to InterMine 2.0 is going to require a bit of effort on your part.

Operationally, tasks such as database building and web application publishing are very likely to use Gradle commands rather than Ant targets or custom scripts. Users who have scripts to manage InterMine installations will need to adjust them appropriately. This shouldn't require too much work.

InterMine users who have custom projects in the bio/sources directory to load data sources will need to make more adjustments. Project structures in InterMine 2.0 will not be the same as in earlier versions, since they will follow Gradle conventions rather than custom InterMine ones. However, the changes will not be major and we will provide a script to do as much automatic updating of custom sources as possible.

The greatest migration work will come for the most sophisticated operators who have directly patched core InterMine code. In this case, there are two options. Firstly, they can continue to patch and build core InterMine JARs themselves, though they will need to make adjustments for the Gradle build process. Secondly, we can work with them to add new configuration parameters to core InterMine to make such patching unnecessary, wherever possible. In both cases work will be required but the effort should not be large, since it is largely the structure of code that is changing rather than core logic or functionality.


This is a significant transition but one that should put InterMine on a solid base that lowers long-term maintenance costs and makes lots of exciting stuff possible in the future. As ever, please contact us if you have any concerns and we look forward to discussing this and any other subjects on community calls, blog comments, in our Discord chat and on the mailing list!

InterMine 2.0: Proposed Model Changes (III)

We have several new additions and changes to the InterMine core data model coming in InterMine 2.0 (due Fall 2017).

We had a great discussion on Thursday about the proposed changes. Below are the decisions we made.

Multiple Genome Versions, Varieties / Subspecies / Strains


We were not able to come to an agreement, but everyone still felt there could be a core data model that supports both single and multiple genomes and is useful for all InterMines.

The fundamental question is whether we want one Organism per genome version, or to allow an Organism to have several genome versions. In the latter case, we'd also need a helper class, e.g. "Strain", that would hold information about each genome.

This topic is sufficiently complex that we’ve agreed to try a more formal process here, listing our different options, their potential impact etc. More information on this process soon!

Syntenic Regions

Proposed addition to the InterMine core data model

<class name="SyntenicRegion" extends="SequenceFeature" is-interface="true">
  <reference name="syntenyBlock" referenced-type="SyntenyBlock" reverse-reference="syntenicRegions"/>
</class>

<class name="SyntenyBlock" is-interface="true">
  <collection name="syntenicRegions" referenced-type="SyntenicRegion" reverse-reference="syntenyBlock" />
  <collection name="dataSets" referenced-type="DataSet" />
  <collection name="publications" referenced-type="Publication" />
</class>

  • We decided against making SyntenyBlock a bio-entity, even though it would benefit from inheriting some references.
  • We also decided against the SyntenicRegion1 / SyntenicRegion2 format; instead, the regions will be held in a collection.

GO Evidence Codes

Currently the GO evidence codes are only a controlled vocabulary and are limited to the code abbreviation, e.g. IEA. However, UniProt and other data sources have started to use ECO ontology terms to represent the GO evidence codes instead.

We decided against changing the GO Evidence Code to be an ECO ontology term.

  • The ECO ontology is not comprehensive
  • Some mines have a specific data model for evidence terms

Instead we are going to add attributes to the GO Evidence Code:

  • Add a link to more information on the GO evidence codes
  • Add the full name of the evidence code
  • Rename GOEvidenceCode to OntologyAnnotationEvidenceCode

We decided against loading a full description of each evidence code: the description on the GO site runs to a full page, shortened versions didn't really add much information, and there is no text file of the descriptions available to load.

We are also going to move evidence to Ontology Annotation.

GOEvidenceCode will be renamed OntologyAnnotationEvidenceCode:

<class name="OntologyAnnotationEvidenceCode" is-interface="true">
 <attribute name="code" type="java.lang.String" />
 <attribute name="name" type="java.lang.String" />
 <attribute name="URL" type="java.lang.String" />
</class>

GOEvidence will be renamed OntologyEvidence:

<class name="OntologyEvidence" is-interface="true">
 <reference name="code" referenced-type="OntologyAnnotationEvidenceCode"/>
 <collection name="publications" referenced-type="Publication"/>
</class>

Evidence will move to OntologyAnnotation from GOAnnotation:

<class name="OntologyAnnotation" is-interface="true">
 <collection name="evidence" referenced-type="OntologyEvidence"/>
</class>
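
For orientation, here is a hedged sketch of how the renamed evidence classes might be reached from the Python client, assuming the existing field names (goAnnotation, evidence, code) are kept and the new name attribute is present; the exact paths will depend on your mine's model.

from intermine.webservice import Service

service = Service("https://www.flymine.org/flymine/service")  # example mine
query = service.new_query("Gene")
query.add_view("symbol",
               "goAnnotation.ontologyTerm.name",
               "goAnnotation.evidence.code.code",
               "goAnnotation.evidence.code.name")  # "name" is the proposed new attribute
query.add_constraint("symbol", "=", "zen")  # example gene
for row in query.rows():
    print(row)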


Ontology Annotations – Subject

Currently an annotation can only reference BioEntities, e.g. proteins and genes. This is too restrictive, as other objects in InterMine, e.g. protein domains, can also be annotated. To solve this problem, we will add a new data type, Annotatable.

<class name="Annotatable" is-interface="true">
 <collection name="ontologyAnnotations" referenced-type="OntologyAnnotation" reverse-reference="subject"/>
</class>

<class name="OntologyAnnotation" is-interface="true">
 <reference name="subject" referenced-type="Annotatable" reverse-reference="ontologyAnnotations"/>
</class>

<class name="BioEntity" is-interface="true" extends="Annotatable"/>

This will add some complexity to the data model, but it will be hidden from casual users by templates.
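
As a purely illustrative sketch of the kind of query this enables, once a non-BioEntity class such as ProteinDomain becomes Annotatable (the class and path names here are assumptions, not the final model):

from intermine.webservice import Service

# Assumes ProteinDomain extends the proposed Annotatable class and therefore
# gains an ontologyAnnotations collection; illustrative only.
service = Service("https://www.flymine.org/flymine/service")
query = service.new_query("ProteinDomain")
query.add_view("primaryIdentifier", "name", "ontologyAnnotations.ontologyTerm.name")
for row in query.rows(size=10):
    print(row)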

Protein molecular weight

Protein.molecularWeight is going to be changed from an integer to a float.

Timeline

October

  • Julie makes changes to core InterMine data model and parsers
  • On ‘model-changes’ branch

November

  • Release beta FlyMine with new model changes for community review
    • Sam will help test Synteny changes
  • Finalise changes. Move changes from ‘model-changes’ branch to ‘release-candidate’ branch
  • InterMine 2.0 will be tested on a staging branch (‘release-candidate’) because the changes are so disruptive:
    • New software build system – Gradle
    • Require updated software dependencies, e.g. Java 8, Tomcat 8, Postgres 9.x
    • Model changes

December

  • “Code freeze”
    • All 2.0 changes tested on ‘release-candidate’ branch
    • Need help testing!
  • InterMine 2.0 release
    • Move changes from dev branch to master branch
    • Before Xmas

If you would like to be involved in these discussions, please do join our community calls or add your comments to the GitHub tickets. We want to hear from you!

New Branding Parameters – Mine Update Needed

We have written several applications outside the core InterMine webapp that need mine-specific display settings. For example:

  • iOS app needs a colour and logo to distinguish between mines
  • Blue Genes app needs config from the mine to brand the site
  • InterMine home page
  • Registry UI
  • InterMineR Shiny app
  • Friendly Mines tool

And there may be more applications in the future!


To make your logo and mine colour available to these applications, please set these properties in your web.properties file:

  • branding.images.logo: an image, 45px by 45px; defaults to the InterMine logo
  • branding.colors.header.main: the main colour for your mine; defaults to grey, #595455
  • branding.colors.header.text: a text colour readable against your main colour; defaults to white, #fff

You will have to restart your webapp for these to take effect. You can view these parameters at the /branding API end point, e.g. flymine.org/flymine/service/branding
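
For example, a quick way to check what a mine currently advertises is to fetch that end point directly; here is a minimal sketch using the FlyMine URL above (the shape of the response may differ between mines and versions).

import requests

# Fetch the branding block from a mine's web service and print whatever
# JSON the /branding end point returns.
resp = requests.get("https://www.flymine.org/flymine/service/branding")
resp.raise_for_status()
print(resp.json())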

Here are the docs on the web.properties file, and here is FlyMine’s web.properties file. There’s also an example on Codepen.

If you need help finding the right colour, we can help, or try a colour picker!


InterMine 2.0: Proposed Model Changes (II)

We have several new additions and changes to the InterMine core data model coming in InterMine 2.0 (due Fall 2017).

We had a great discussion on Thursday about the proposed changes. Below are the decisions we made.

Multiple Genome Versions

Many InterMine instances have several different genome versions.

Proposed addition to the InterMine core data model

  <class name="Organism" is-interface="true">
    <attribute name="annotationVersion" type="java.lang.String"/>
    <attribute name="assemblyVersion" type="java.lang.String"/>
  </class>
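
For example, a query could then distinguish assemblies along these lines; this is a sketch against the proposed model only, since organism.assemblyVersion does not exist in current mines and the value shown is made up.

from intermine.webservice import Service

# Sketch against the proposed Organism attributes; "assemblyVersion" and the
# "GRCh38" value are illustrative, not available in current mines.
service = Service("https://www.humanmine.org/humanmine/service")
query = service.new_query("Gene")
query.add_view("primaryIdentifier", "symbol",
               "organism.shortName", "organism.assemblyVersion")
query.add_constraint("organism.assemblyVersion", "=", "GRCh38")
for row in query.rows(size=10):
    print(row)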

Multiple Varieties / Subspecies / Strains

We were going to add a variety attribute to the Organism data type to indicate subtypes that share the same taxon ID; however, some people were concerned that this term isn't generic enough.

Proposed addition to the InterMine core data model

  <class name="Organism" is-interface="true">
    <attribute name="variety" type="java.lang.String"/>
  </class>

Other suggestions:

  1. Strain
  2. Subspecies
  3. Stock
  4. Line
  5. Accession
  6. Subtype
  7. Ecotype
  8. Isolate
  9. Others? …

It was suggested that we take a vote to choose the name. Please note that you can override attribute names locally, but it would be better if we could all (mostly) agree!

User Interface

Both the above changes will require updates to the core InterMine code where it is assumed that Organism.taxonID is the unique field. This assumption will be replaced so that the new fields in Organism, where present, are used for the primary key.

For user friendliness, it will be necessary to assign unique organism names. Users will then be able to easily identify distinct versions in template queries and widgets.

Syntenic Regions

Proposed addition to the InterMine core data model

<class name="SyntenicRegion" extends="SequenceFeature" is-interface="true">
  <reference name="syntenyBlock" referenced-type="SyntenyBlock" reverse-reference="syntenicRegions"/>
</class>

<class name="SyntenyBlock" is-interface="true">
  <collection name="syntenicRegions" referenced-type="SyntenicRegion" reverse-reference="syntenyBlock" />
  <reference name="dataSet" referenced-type="DataSet" />
  <reference name="publication" referenced-type="Publication" />
</class>

  • We decided against making SyntenyBlock a bio-entity, even though it would benefit from inheriting some references.
  • We also decided against the SyntenicRegion1 / SyntenicRegion2 format; instead, the regions will be held in a collection.

GO Evidence Codes

Currently the GO evidence codes are only a controlled vocabulary and are limited to the code abbreviation, e.g. IEA. However, UniProt and other data sources have started to use ECO ontology terms to represent the GO evidence codes instead.

We decided against changing the GO Evidence Code to be an ECO ontology term.

  • The ECO ontology is not comprehensive
  • Some mines have a specific data model for evidence terms

Instead we are going to add attributes to the GO Evidence Code:

  • Add full description of the GO Evidence Code
  • Add a link to more information on the GO evidence codes
  • (Optional) add a link to the ECO term, if available.

<class name="GOEvidenceCode" is-interface="true">
 <attribute name="code" type="java.lang.String" />
 <attribute name="description" type="java.lang.String" />
 <attribute name="URL" type="java.lang.String" />
</class>

IEA evidence code example

Ontology Annotations – Subject

Currently an annotation can only reference BioEntities, e.g. proteins and genes. This is too restrictive, as other objects in InterMine, e.g. protein domains, can also be annotated. To solve this problem, we will add a new data type, Annotatable.

<class name="Annotatable" is-interface="true">
 <collection name="ontologyAnnotations" referenced-type="OntologyAnnotation" reverse-reference="subject"/>
</class>

<class name="OntologyAnnotation" is-interface="true">
 <reference name="subject" referenced-type="Annotatable" reverse-reference="ontologyAnnotations"/>
</class>

<class name="BioEntity" is-interface="true" extends="Annotatable"/>

This will add some complexity to the data model, but it will be hidden from casual users by templates.


If you would like to be involved in these discussions, please do join our community calls or add your comments to the GitHub tickets. We want to hear from you!