Talks and Workshops: Sharing our materials for re-use

Would you like to grab some ready-made slides or InterMine training workshop materials? We’ve rounded up some of our recent talks and workshops below. Feel free to remix materials for your own talks and outreach efforts. If you do use them, we’d love to see the result!


You should have permissions to make a copy; if not, please contact us / tweet us / pop by chat to poke us with a stick.

3-min lightning talk at GSoC Mentor Summit: Citable version on Figshare | Google Drive (editable) version

Better Science Through Better Data: Citable version on Figshare | Google Drive (editable) version | Featured image above was live-scribed during the talk. Licence is CC-BY from Springer Nature, and the image is available from

Blank InterMine-branded slides: Get ’em here.


BlueGenes Poster: This poster was presented at BOSC 2017. Citable version on F1000 | Inkscape editable version (download Inkscape here:

InterMine Poster for Elixir UK All Hands 2017: PDF version | Inkscape editable version 

Workshop learning materials

We run an InterMine training workshop every term, covering the basics of using the webapp, as well as discussing how to draw data from the API. If you’re near Cambridge, keep your eyes open on the blog or twitter feed, as we’ll always announce them well in advance.

Workshop training materials in PDF: Workshop Exercises – handouts with answers | Workshop slides – note that these exercises were all correct against HumanMine data in October 2017. Result counts may change if we add or update data sources in the future, but apart from those counts the materials should remain generally correct.

You can download the original OpenOffice files as well if you’d like to adapt the materials for your own workshops, or feel free to contact us if you’d like to coordinate some training with us.

Side note: We’re also delivering a half-day workshop training session as part of the EBI’s 4-day Introduction to Multiomics Data Integration course – applications are open now until 01 December 2017.


Scientific Data (2017): Better Science through Better Data 2017 (#scidata17) scribe images. figshare. Retrieved: 15:48, Nov 06, 2017 (GMT).


InterMine 2017 Fall Workshop – Biological Data Analysis using InterMine

The University of Cambridge is hosting an InterMine workshop on 27 October 2017.

The course is aimed at bench biologists and bioinformaticians who need to analyse their own data against large biological datasets, or who need to search against several biological datasets to gain knowledge of a gene/gene set, biological process or function. The exercises will mainly use the fly, human and mouse databases, but the course is applicable to anyone working with data for which an InterMine database is available.

The workshop is composed of two parts:

Part 1 (2.5 – 3 hours) will introduce participants to all aspects of the user interface, starting with some simple exercises and building up to more complex analysis encompassing several analysis tools and comparative analysis across organisms. No previous experience is necessary for this part of the workshop.

The following features of the InterMine web interface will be covered:

  • Search interfaces and advanced query builder
  • Automated analysis of sets, e.g. gene sets, including enrichment statistics
  • Analysis workflows
  • Tools for cross-organism analysis between InterMine databases
  • Web services

Part 2 (1 hour) will focus on the InterMine API and introduce running InterMine searches through Python and Perl scripts. While complete beginners are welcome, some basic knowledge of Perl and/or Python would be an advantage. The InterMineR package will also be introduced. Those not interested in this part of the workshop are welcome to leave; alternatively, a more advanced exercise using the web interface will be available.
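To give a flavour of what Part 2 covers, here is a minimal sketch (ours, not taken from the workshop materials) of assembling the kind of PathQuery XML that InterMine’s web services accept. The attribute set is simplified, so check the API documentation for the full schema; the class and field names (Gene, organism.shortName) follow HumanMine.

```python
import xml.etree.ElementTree as ET

def build_pathquery(root_class, views, constraints):
    """Assemble a simplified PathQuery XML document.

    views: attribute names relative to root_class.
    constraints: (path, operator, value) triples relative to root_class.
    """
    query = ET.Element(
        "query",
        model="genomic",
        view=" ".join(f"{root_class}.{v}" for v in views),
    )
    for path, op, value in constraints:
        ET.SubElement(query, "constraint",
                      path=f"{root_class}.{path}", op=op, value=value)
    return ET.tostring(query, encoding="unicode")

xml_query = build_pathquery(
    "Gene",
    ["primaryIdentifier", "symbol"],
    [("organism.shortName", "=", "H. sapiens")],
)
```

The resulting string is what a script would send to a mine’s query endpoint (or build for you via the official Python client).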

See here for details:



InterMine 2.0 – Summer update

InterMine 2.0 is a large, disruptive release scheduled for this autumn, before the Xmas holidays.

There are lots of exciting features, but they will require InterMine maintainers to update their mines. Usually devs are able to update their mines with a simple git pull. This time, they’ll have to take specific actions to make sure their software is up to date.

Model changes

Several changes and additions to the core InterMine data model were discussed and approved by the community. See here for specific details on the new core data model.

This means that it’s likely that an InterMine 2.0 webapp will require a database built by InterMine 2.0 code.

Blue Genes

InterMine 2.0 will come with detailed instructions on how to deploy the new InterMine user interface.

Come to the next InterMine community call to see a demo of the latest features!


Build system

We’ve got a new software build system in the works. This will change the commands you use to build a data source and deploy your webapp. See a previous blog post for details.

Closer to the time, we’ll release detailed instructions on how to update your build system to work with the new tools. And as always the InterMine team will be on hand to answer any questions or issues on the community calls and the dev list and chat.

We hope to make the transition as easy as possible!

Software Dependencies

All software dependencies will need to be updated to these minimum versions:

  • Java 8
  • Tomcat 8.5.x
  • Postgres 9.4+

API Changes

We are making some non-backwards compatible changes to our API.

/user/queries will be moved to /queries

These three end points have a parameter called xml which holds the XML query. We are going to rename this parameter to query (as we now accept JSON queries!) to match the syntax of all the other end points.

/user/queries (POST)
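For script maintainers, the change amounts to renaming one form parameter. A hypothetical helper (the function name is ours, not part of any InterMine client) makes the before/after explicit:

```python
def query_payload(query_text, legacy=False):
    """Build the form payload for the query-upload endpoints.

    Before InterMine 2.0 these endpoints expect the query under the
    key 'xml'; from 2.0 the key becomes 'query', and the value may be
    either an XML or a JSON query.
    """
    return {"xml" if legacy else "query": query_text}

old_style = query_payload("<query ... />", legacy=True)  # {'xml': ...}
new_style = query_payload("<query ... />")               # {'query': ...}
```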

If this update is going to cause you any trouble at all, please let us know ASAP!


If you have any questions or concerns about any of these changes, please contact us or come along to the community calls.




Toxygates: exposing toxicogenomics datasets and linking with InterMine

This is a guest post from our colleague Johan Nyström-Persson, who works on Toxygates at NIBIOHN in Japan.

Toxygates ( has been developed as a user-friendly toxicogenomics analysis platform at the Mizuguchi Lab, National Institutes of Biomedical Innovation, Health and Nutrition (NIBIOHN) in Osaka since 2012. The first public release was in 2013. At that time, the main focus of Toxygates was exposing the Open TG-GATEs dataset, a large, systematically organised toxicogenomics dataset compiled over more than a decade by the Japanese Toxicogenomics Project ( The dataset consists of over 24,000 microarray samples. Making use of such a large dataset without time-consuming data manipulation and programming requires a rich user interface and access to many kinds of secondary data.

Toxygates allows anyone with a web browser to explore and analyse this data in context. Various kinds of filtering and statistical testing are available, allowing users to discover and refine gene sets of interest, with respect to particular compounds. For a reasonably sized data selection, hierarchical clustering and heat-maps can be displayed directly in the browser. Through TargetMine ( integration (based on the InterMine framework), enrichment of various kinds is possible. Compounds can also be ranked according to how they influence genes of interest.

To support all of these functions, we came up with the concept of a “hybrid” data model which recognises that, while gene expression values by themselves may be viewed as a large matrix with a flat structure, secondary annotations of genes and samples, such as
proteins, pathways, GO terms or pathological findings, have an open-ended structure. Thus, we combine an efficient key-value store (for gene expressions) with RDF and linked data (for gene and sample annotations) to allow for both high performance and a flexible data structure.
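The division of labour can be illustrated with a toy sketch (ours, not Toxygates code): a flat key-value matrix holds expression values, while open-ended RDF-style triples hold annotations that don’t fit a fixed schema.

```python
# Flat, matrix-like data: (sample, gene) -> expression value,
# naturally served by an efficient key-value store.
expressions = {
    ("sample1", "geneA"): 2.4,
    ("sample1", "geneB"): 0.7,
}

# Open-ended annotations as subject-predicate-object triples,
# naturally served by RDF / linked data.
triples = [
    ("geneA", "participatesIn", "pathwayX"),
    ("geneA", "annotatedWith", "GO:0008150"),
    ("sample1", "pathologicalFinding", "necrosis"),
]

def annotations(subject, predicate=None):
    """Look up triples for a subject, optionally filtered by predicate."""
    return [t for t in triples
            if t[0] == subject and (predicate is None or t[1] == predicate)]
```

New annotation types can be added by appending triples, without any change to the expression store — which is the flexibility the hybrid model is after.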

Today, the project continues to evolve in new directions as a general transcriptomics data analysis platform. We have integrated Toxygates not only with TargetMine, but also with HumanMine, RatMine and MouseMine. Users can now also upload their own transcriptomics data and analyse it in context alongside Open TG-GATEs data. We may also add more datasets in the future.

The current project members are Kenji Mizuguchi (project leader) and Chen Yi-An (NIBIOHN), Johan Nyström-Persson and Yuji Kosugi (Level Five), and Yayoi Natsume-Kitatani and Yoshinobu Igarashi (NIBIOHN).

InterMine 2.0 – Gradle

NB: To upgrade to InterMine 2.0 you must not have custom code in the core InterMine repository.

We have been planning out the tasks for the future of InterMine, and there are a lot of exciting projects on the horizon: making InterMine more FAIR, putting InterMine in Docker and the cloud, our beautiful new user interface, the Semantic Web and so on.

However, a prerequisite for these exciting features is updating our build system. We are still using Ant, and the build has grown, let’s say, “organically” over the years, making updates and maintenance expensive and tedious.

After careful consideration, and after looking very seriously at other build and dependency management systems, we’ve decided on Gradle. Gradle is hugely popular, has a great community, and is used by projects such as Android, Spring and Hibernate. We were really impressed with Gradle’s power and flexibility; being able to run scripts in Gradle gives us the power we need to accomplish all our lofty goals.

Our goals for moving to Gradle

Managed dependencies

Our dependencies are currently managed by hand: if we need a third-party library, we copy the JAR manually into our /lib directory. This is unsustainable for modern software and has resulted in lots of duplication and general heartache. With Gradle we can instead fetch dependencies automatically from online repositories.
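In a Gradle build, a third-party library becomes a one-line coordinate that Gradle resolves from an online repository. The artifact name below is a made-up placeholder, just to show the shape:

```groovy
repositories {
    mavenCentral()
}

dependencies {
    // Resolved and downloaded automatically at build time,
    // instead of a JAR checked into /lib by hand.
    compile 'org.example:some-library:1.2.3'
}
```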

A smaller repository

Implementing Gradle will allow us to replace many of our custom Ant-based facilities with Gradle infrastructure and widely-supported plugins. Our codebase will become smaller and more maintainable as a result.

A faster build

Currently, because InterMine implemented a custom project dependency system in Ant, every InterMine JAR is compiled on every build and every time a webapp is deployed. This is unnecessary and wastes developer time. We will use Gradle’s sophisticated dependency management to make the InterMine build more robust and efficient.

Maintainable, extensible, documented

The current Ant-based InterMine build system has been extended over the years in an ad-hoc manner as needed, and unfortunately no documentation exists. Adding a new Ant task is a challenge, and debugging the current build process is time-consuming and difficult. Moving to Gradle will put InterMine on a well-maintained, extensible, documented and widely-used build system.

Simpler to run test suite

Currently, developers have to create property files and databases to run the full system tests, steps that are not straightforward. With Gradle’s help we hope to make this much easier, so that the wider InterMine community can benefit from running the InterMine test suite on their own installations and code patches.


Finally, Gradle keeps tests in the same project as the main sources, cutting the number of separate projects in half. In addition, tests will run automatically when building.

As an example, here is a new standard Gradle directory layout:
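For reference, Gradle’s documented default layout for a Java project keeps main and test sources side by side in a single project:

```
build.gradle
src/main/java/        production sources
src/main/resources/
src/test/java/        test sources (same project)
src/test/resources/
```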


Currently our main and test projects live in separate packages, but in InterMine 2.0 these will be unified into single projects, as per standard practice.

What does this mean for you and your InterMine?

If you are currently maintaining an InterMine, moving to InterMine 2.0 is going to require a bit of effort on your part.

Operationally, commands such as database building and webapp publishing are very likely to become Gradle commands rather than Ant targets or custom scripts. Users who have scripts to manage InterMine installations will need to adjust them accordingly. This shouldn’t require too much work.

InterMine users who have custom projects in the bio/sources directory to load data sources will need to make more adjustments. Project structures in InterMine 2.0 will not be the same as in earlier versions, since they will follow Gradle conventions rather than custom InterMine ones. However, the changes will not be major and we will provide a script to do as much automatic updating of custom sources as possible.

The greatest migration work will come for the most sophisticated operators who have directly patched core InterMine code. In this case, there are two options. Firstly, they can continue to patch and build core InterMine JARs themselves, though they will need to make adjustments for the Gradle build process. Secondly, we can work with them to add new configuration parameters to core InterMine to make such patching unnecessary, wherever possible. In both cases work will be required but the effort should not be large, since it is largely the structure of code that is changing rather than core logic or functionality.


This is a significant transition but one that should put InterMine on a solid base that lowers long-term maintenance costs and makes lots of exciting stuff possible in the future. As ever, please contact us if you have any concerns and we look forward to discussing this and any other subjects on community calls, blog comments, in our Discord chat and on the mailing list!

Where to find InterMiners: September-December 2017 edition

We’re busy as ever, and Gos is away at the #biohack2017 in Japan right now – you can spot him in a gold shirt sitting towards the back of the room here:

Other places to find InterMiners over the next few months include:


12 September: FAIR in practice focus group – Research support professionals. Daniela will be at the British Library participating in this consultation. You may also be interested in the researchers focus group on the 13th. It looks like tickets are still available! (More)

21 September: **Cancelled** – the usual community call is cancelled this week. We’ll be back as normal with updates in October, though!

25-27 September: Justin and Yo will be attending the Cambridge Bioinformatics Hackathon.


2-3 October: You’ll be able to find Justin at the Bioschemas Elixir implementation meeting in Hinxton.

5 October: InterMine dev community call – back to our normally scheduled calls. Agenda.

13-15 October: Find Yo at the 2017 GSoC mentors summit in Sunnyvale, California

19 October: It’s another community developer call, yay! 

21-25 October: Justin will be representing us at ISWC in Vienna.

25 October: Better Science through Better Data in London – we’ll be sharing the story of InterMine in a lightning talk. Open data is awesome and InterMine couldn’t exist without it!

27 October: We’ll be delivering an InterMine training course in Cambridge, including an all-new API training section. Please spread the word about this one!


November 1-2: You’ll be able to spot Justin at the Elixir UK all hands in Edinburgh.


December 4-7: Get your Semantic Web on with Daniela at SWAT4LS in Rome!

Phew, that’s a lot!




InterMine 2.0: Proposed Model Changes (III)

We have several new additions and changes to the InterMine core data model coming in InterMine 2.0 (due early 2018).

We had a great discussion on Thursday about the proposed changes. Below are the decisions we made.

Multiple Genome Versions, Varieties / Subspecies / Strains


We were not able to come to an agreement, but everyone still felt there might be a core data model that can accommodate both single and multiple genomes and be useful for all InterMines.

The fundamental question is: do we want an organism to be per genome version, or do we allow an organism to have several genome versions? In the latter case, we’d also need a helper class, e.g. “Strain”, to hold information about the genome.

This topic is sufficiently complex that we’ve agreed to try a more formal process here, listing our different options, their potential impact etc. More information on this process soon!

Syntenic Regions

Proposed addition to the InterMine core data model

<class name="SyntenicRegion" extends="SequenceFeature" is-interface="true">
  <reference name="syntenyBlock" referenced-type="SyntenyBlock" reverse-reference="syntenicRegions"/>
</class>

<class name="SyntenyBlock" is-interface="true">
  <collection name="syntenicRegions" referenced-type="SyntenicRegion" reverse-reference="syntenyBlock"/>
  <collection name="dataSets" referenced-type="DataSet"/>
  <collection name="publications" referenced-type="Publication"/>
</class>
  • We decided against making a SyntenyBlock a bio-entity, even though it would benefit from inheriting some references.
  • We also decided against the SyntenicRegion1 / SyntenicRegion2 format; instead, regions will be held in a collection.

GO Evidence Codes

Currently the GO evidence codes are only a controlled vocabulary and are limited to the code abbreviation, e.g. IEA. However, UniProt and other data sources have started to use ECO ontology terms to represent the GO evidence codes instead.

We decided against changing the GO Evidence Code to be an ECO ontology term.

  • The ECO ontology is not comprehensive
  • Some mines have a specific data model for evidence terms

Instead we are going to add attributes to the GO Evidence Code:

  • Add a link to more information on the GO evidence codes
  • Add the full name of the evidence code
  • Rename GOEvidenceCode to OntologyAnnotationEvidenceCode

We decided against loading a full description of the evidence code: the description on the GO site runs to a full page, and when we tried shortening it the result didn’t add much information. There is also no text file with the descriptions available.

We are also going to move evidence to Ontology Annotation.

GOEvidenceCode will be renamed OntologyAnnotationEvidenceCode:

<class name="OntologyAnnotationEvidenceCode" is-interface="true">
  <attribute name="code" type="java.lang.String"/>
  <attribute name="name" type="java.lang.String"/>
  <attribute name="URL" type="java.lang.String"/>
</class>

GOEvidence will be renamed OntologyEvidence:

<class name="OntologyEvidence" is-interface="true">
  <reference name="code" referenced-type="OntologyAnnotationEvidenceCode"/>
  <collection name="publications" referenced-type="Publication"/>
</class>

Evidence will move to OntologyAnnotation from GOAnnotation:

<class name="OntologyAnnotation" is-interface="true">
  <collection name="evidence" referenced-type="OntologyEvidence"/>
</class>


Ontology Annotations – Subject

Currently you can only reference BioEntities, e.g. Proteins and Genes, in an annotation. This is too restrictive, since other objects in InterMine, e.g. Protein Domains, can also be annotated. To solve this problem, we will add a new data type, Annotatable.

<class name="Annotatable" is-interface="true">
  <collection name="ontologyAnnotations" referenced-type="OntologyAnnotation" reverse-reference="subject"/>
  <collection name="publications" referenced-type="Publication" reverse-reference="bioEntities"/>
</class>

<class name="OntologyAnnotation" is-interface="true">
  <reference name="subject" referenced-type="Annotatable" reverse-reference="ontologyAnnotations"/>
</class>

<class name="BioEntity" is-interface="true" extends="Annotatable"/>

This will add complexity to the data model, but it will be hidden from casual users via templates.

Publications will also move from BioEntity to Annotatable. This change will allow us to have both publications and annotations on things that are not BioEntities.

Protein molecular weight

Protein.molecularWeight is going to be changed from an integer to a float.

Sequence Ontology update

The sequence ontology underpins the core InterMine data model. We will update to the latest version available.

If you would like to be involved in these discussions, please do join our community calls or add your comments to the GitHub tickets. We want to hear from you!