The short answer is – more photos than they know what to do with. Researchers might not be snapping celebrities, but they do generate thousands of images of animals, cells, proteins, and countless other weird and wonderful biological phenomena. Whilst perhaps not quite as visually appealing as Brad Pitt or Beyoncé, these images do have one thing in common: they all need to be stored, organised and analysed, and new software developed by Steve Taylor at the WIMM promises to do just that. Bryony Graham explains more.
As computers become increasingly powerful, scientists are generating more and more ‘big data’ – vast collections of information on biological and physical systems that are hugely diverse in nature. Scientists are collecting data on the genetic code of thousands of humans, and using computers to simulate what might have happened in the big bang.
Images are another form in which this data is collected. It’s all very well using computers to mimic what MIGHT happen in situations that we can’t look at (like the big bang), but what about looking at the stuff that we can actually see? Like a developing embryo, or a sample from a patient’s tumour, or the structure of a cell. If scientists look at enough images, they might be able to detect patterns that are extremely useful for working out what is going on in a particular biological situation.
But this isn’t easy. Firstly, you have to generate enough images of a suitable quality to get statistically significant results. Then, you have to sort them into groups with similar patterns. And after that, you might need to quantify features within the image itself: for example, how long or bright a particular object is. All of these steps present their own challenges, but new software developed by Steve Taylor in the CBRG at the WIMM, together with collaborators Coritsu Labs in Australia, holds the potential to revolutionise how we store, organise, and detect patterns in images.
Here you can see a collection of 1000 images of frog embryos at different stages of development. At first glance, it looks pretty intimidating and fairly dull – lots of banana-like shapes with different black bits distributed throughout the curve. But, using the toolbar and selection options on the left-hand side of the screen, you can choose specific genes that you might want to look at (in this case, Hex, Runx1 or Tie2), and if you click on each individually, you can see which part of the embryo these genes are active in. Even a non-biologist can see that these patterns of activity are very different.
It would take a scientist hours and hours to go through each of these images, work out when and where each of these genes is active, and log this information. PivotViewer does it in three clicks. Steve Taylor has just been awarded a grant by The Oxford Invention Fund to develop a new version of the software, called Zegami, which can manage not just thousands of images, but hundreds of thousands – and will also have the capability to work on touch screen devices. Steve hopes that this unprecedented capacity for image storage, organisation and analysis will extend the use of the software beyond biological samples: it could also potentially be used to sort and analyse satellite images, data from websites like Netflix and YouTube, or MRI brain scanning data. Watch this space…
A YouTube video which gives an overview of the features of the software is shown below:
Post edited by Steve Taylor.