
NREN Experiences

Tracking moose and salmon with GPS

2016

Biotelemetry technology for wildlife and fish monitoring has evolved tremendously since the 1990s: from moose and reindeer wearing collars with primitive radio transmitters weighing several kilos, to salmon with minuscule bio-sensors, and eagles wearing solar-powered GPS backpacks containing cameras and accelerometers.

This giant leap in monitoring technology has opened up exciting possibilities for researchers. But it also produces torrents of data, opening a gap between the volume of data available and the tools for handling it. The people at WRAM, the Umeå Center for Wireless Remote Animal Monitoring, are closing this gap, not only for their own benefit but hopefully for the benefit of the global wildlife research community.

Researchers sharing data

Based at the Swedish University of Agricultural Sciences in Umeå, they have created the WRAM data management system and e-infrastructure for the automatic reception, long-term storage, sharing and analysis of biotelemetry sensor data from animals. One of the key features of the system is a true data federation layer, enabling researchers around the world to cooperate and share data easily with each other through high-capacity research and education networks.
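To picture how such a federation layer works, here is a minimal sketch in which a portal fans a single query out to several independent telemetry databases and merges the results. The endpoint URLs, the /positions route and its parameters are invented for illustration and are not WRAM's actual API.

```python
import requests

# Hypothetical endpoints of three independent biotelemetry databases,
# reachable over high-capacity research and education networks.
SITES = [
    "https://telemetry.example-se.org/api",
    "https://telemetry.example-no.org/api",
    "https://telemetry.example-fi.org/api",
]

def federated_positions(species, start, end):
    """Send one query to every federated site and merge the answers."""
    merged = []
    for base in SITES:
        try:
            resp = requests.get(
                f"{base}/positions",
                params={"species": species, "from": start, "to": end},
                timeout=10,
            )
            resp.raise_for_status()
            merged.extend(resp.json())  # assume each site returns a JSON list
        except requests.RequestException as err:
            # One unreachable site should not break the whole query.
            print(f"skipping {base}: {err}")
    return merged

fixes = federated_positions("moose", "2016-01-01", "2016-01-14")
print(f"retrieved {len(fixes)} GPS fixes from up to {len(SITES)} sites")
```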

Head of WRAM, Holger Dettki, explains:

“Twenty years ago, when very high frequency (VHF) collars were standard, a field study could require numerous research assistants recording a few positions every week. Now researchers can amass more sensor readings in a fortnight than VHF allowed in five years.”

“In 2003, the first animal collars with a GPS sensor and integrated data transfer over mobile phone networks hit the market. Since then, the technology has matured: the cost of tags, sensors and transmission methods has fallen, and many new biosensors have been developed. This gives today’s ecologists a huge opportunity to research an individual animal’s behavior and physiology in the wild, which was not possible before.”

Too much data

But according to Holger Dettki, the sheer volume of data now available means that single research groups are often unable to analyze their data in a timely fashion. They also want to share their data with similar research projects to obtain synergy effects. WRAM makes this possible, enabling fish and wildlife researchers and managers to access and analyze similar animal sensor data from different database systems throughout the world in a unified, efficient way through a single web portal. Because it unlocks data that previously sat inaccessible in isolated local databases and facilitates collaboration on an international scale, Dettki has high hopes for the new data management system:

The Big Data challenge

“I foresee WRAM becoming one of the most important scientific infrastructures for receiving, storing and sharing biotelemetry sensor data. Big Data is a huge challenge for ecologists, as it will be for many other research areas in the future.”

“WRAM is a response to that challenge. If you are an ecologist with an ambition to do high-end research, you can no longer rely on spreadsheets or simple text files. Regrettably, some funding agencies and some members of the research community still believe those are enough. But with the staggering amount of biotelemetry data coming in, we need tools that are much more powerful.”


Solving endocrine disorders without borders

2016

While some of our most common chronic diseases are endocrine disorders, including diabetes, obesity and thyroid conditions, there are also a number of rare conditions, such as adrenal tumours and disorders of sex development, that result from problems with the endocrine system.

Because of the rarity of many of these disorders, it is often challenging for researchers and clinicians to gather sufficient amounts of patient data to support more meaningful and statistically powerful clinical trials.

Only one in two million people will develop adrenocortical carcinoma, for example, and most clinical trials recruit only small numbers of patients for each study: certainly not enough people to confidently compare the effectiveness of treatments.
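To make the numbers concrete, the short sketch below uses the power-analysis routines in the statsmodels library to estimate how many patients per arm a two-arm trial needs in order to detect a modest treatment effect. The effect size and thresholds are conventional illustrative choices, not figures from any actual endoVL trial.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions: a modest standardized effect (Cohen's d = 0.3),
# the conventional 5% significance level and 80% statistical power.
n_per_arm = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)

print(f"patients needed per arm: {n_per_arm:.0f}")  # roughly 175
```

For a disease that strikes one person in two million, no single centre can come close to recruiting numbers like these, which is precisely the gap a pooled, cross-border registry closes.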

Established and led by Professor Richard Sinnott, Director of eResearch at The University of Melbourne, the endocrine genomics virtual laboratory (endoVL), funded through the National eResearch Collaboration Tools and Resources project (Nectar), is changing the face of endocrinology.

The endoVL utilises Australia’s world-class research infrastructure, comprising AARNet’s very high-speed network, massive data storage, advanced software tools, high-performance computing and federated access. By providing secure online access to extensive data sets and analysis tools, the endoVL allows researchers to work together and break new ground in the understanding and treatment of endocrine disorders.

With more than 8,500 adrenal tumour cases currently registered on endoVL, researchers can draw on cohorts large enough to conduct studies with real statistical power.

“It changes the robustness of the science and also the power of what researchers can do,” Professor Richard Sinnott said. “We have more than 25 large-scale clinical trials running right now, both in diabetes and in adrenal tumours, and they involve research groups globally,” he said. “Diseases don’t know boundaries or country codes, so we have to build systems that allow researchers to collaborate across borders.”


Building a long-term archive for cultural data

2016

Digitisation projects are making large amounts of data available to the public online. The DigiBern project, for example, is an online portal for information on the history and culture of the city and canton of Bern. It was set up by the University Library Bern. Even in such an exemplary case, however, it has become clear that libraries face further tasks after a digitisation project is complete in order to ensure that the data remain accessible over the long term. This is because the data are spread across different storage media with only a single, manual backup, and they must undergo quality checks to verify that they are readable and intact.

Long-term security and usability

The E-Rara project, a platform for digitised printed works from Swiss libraries, is a similar case. The University Library Bern has an obligation to archive the data for the long term but was previously unable to guarantee this. The expansion of the institutional repository BORIS for the university’s publications also brought with it a need to find a solution to ensure that these data remain secure and usable for years to come.

With all this in mind, the library included the new challenge of long-term digital archiving for the first time in its strategy for the period from 2013 to 2016.

Outsource construction and operation?

In the planning phase, various options for implementing the digital archive were evaluated, one of the key questions being how feasible it would be to outsource as much of the construction and operation as possible. The decision was eventually made to build an in-house archive one step at a time. With resources tight and the library having to acquire the necessary know-how as it went along, it was decided that a pilot archive would first be installed in an initial phase lasting three years. Lengthy discussions followed regarding the requirements for a digital archive and potential solutions, and the implementation work began in 2015.

The library’s tasks following the digitisation projects mentioned above led to two different mandates: SWITCH is tasked with providing a central platform for data storage, while Baden-based docuteam is responsible for ensuring the quality and long-term accessibility of the data.

Scalability is crucial

One of the most important requirements for the archive from the library’s point of view is that it must be geared to continuity and firmly embedded in the library’s infrastructure and services. This guarantees that it will be reliable and sustainable. Even though only a limited body of data is being archived in the early stages, the system should be scalable and able to cope with future demands such as curating research data. Another goal is to secure and document the university’s research output so as to ensure that it can be used and quoted well into the future.

The University Library Bern’s digital archive is based on an internationally recognised reference model, the open archival information system (OAIS, ISO 14721). The OAIS standard includes an information model specifying the technical data and metadata that need to be added to digital resources so that they can be kept and used over the long term. It also includes a functional model outlining the technical and organisational tasks that must be performed for a digital archive. The standard was very helpful in designing the digital archive and is also serving as the basis for its technical implementation and future operation.
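As a rough sketch of what the OAIS information model asks for in practice, the snippet below bundles a digital object with descriptive, representation and fixity metadata into a toy archival information package (AIP). The class, field names and identifier scheme are invented; OAIS defines the roles this information plays, not a concrete schema.

```python
from dataclasses import dataclass, field
import hashlib

@dataclass
class ArchivalInformationPackage:
    """Toy OAIS-style AIP: the content plus the metadata an archive
    needs to keep it understandable and verifiable over the long term."""
    identifier: str                                   # persistent identifier
    content: bytes                                    # the digital object itself
    mime_type: str                                    # representation information
    descriptive: dict = field(default_factory=dict)   # e.g. title, creator, date
    checksum: str = ""                                # fixity information

    def __post_init__(self):
        if not self.checksum:
            self.checksum = hashlib.sha256(self.content).hexdigest()

    def verify(self) -> bool:
        """Fixity check: is the stored content still intact?"""
        return hashlib.sha256(self.content).hexdigest() == self.checksum

aip = ArchivalInformationPackage(
    identifier="bern:digibern:0001",           # invented identifier scheme
    content=b"...scanned page bytes...",
    mime_type="image/tiff",
    descriptive={"title": "DigiBern sample page", "date": "2016"},
)
assert aip.verify()  # quality check: readable and intact
```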

Minimising risks

The system uses open-source software from docuteam to collect and prepare data, together with the repository software Fedora Commons, which is also covered by an open-source licence. SWITCHengines is used to operate the necessary servers and for data storage. In fact, it is this new infrastructure offering that made the step-by-step approach to constructing the archive possible in the first place. Minimal resources are being used for testing at the moment, and the archive can be scaled up over the next few months without the library needing to worry about infrastructure issues. This new, flexible SWITCH service also makes it possible to minimise risks in IT projects.
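As a flavour of how ingest into such a stack could look, the snippet below stores one object in a Fedora repository over plain HTTP; Fedora 4 and later expose resources as Linked Data Platform containers, where a POST creates a child resource. The hostname, credentials and file are placeholders, and the library’s production ingest runs through docuteam’s tooling rather than raw HTTP calls like these.

```python
import requests

# Placeholder repository URL and credentials (illustrative only).
FEDORA = "https://archive.example.unibe.ch/rest"
AUTH = ("ingest-user", "secret")

# Create a container for the archived object...
meta = requests.post(FEDORA, auth=AUTH, headers={"Slug": "digibern-0001"})
meta.raise_for_status()
container = meta.headers["Location"]  # URI of the newly created container

# ...then attach the digitised file as a binary child resource.
with open("page-0001.tif", "rb") as fh:
    binary = requests.post(
        container,
        auth=AUTH,
        data=fh,
        headers={
            "Content-Type": "image/tiff",
            "Content-Disposition": 'attachment; filename="page-0001.tif"',
        },
    )
binary.raise_for_status()
print("stored at", binary.headers["Location"])
```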


Unravelling the mysteries of our immune system

2016

Hundreds of millions of people suffer from autoimmune diseases, which include rheumatoid arthritis, diabetes, multiple sclerosis and Crohn’s disease. Many academic labs, biomedical research institutions, and pharmaceutical companies are working hard to better understand autoimmune disorders and infectious diseases that attack the immune system. Five to six years ago, researchers were able to sequence hundreds of immune-system molecules (like antibodies) in the human body. Today they can sequence tens of millions.

These data are making the human immune system less of a black box as they reveal its construction, along with how, why and when our body responds to various diseases. This is critical for studying autoimmune diseases and developing medical techniques that augment or use our immune system, such as vaccines, therapeutic antibodies and cancer immunotherapies, to name a few. However, storing, organizing and analyzing these data has become a rapidly escalating big-data challenge.

To facilitate this study of immunogenetics, researchers at Simon Fraser University in Vancouver, Canada, have created a research software platform called iReceptor. A secure, distributed database, this tool enables researchers to share and analyze huge datasets via National Research and Education Networks. What makes this tool particularly exciting is its ability to include metadata (such as gender, ethnicity, treatment and outcome), allowing researchers to understand which conditions activate or suppress various immune-system genes.
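A toy illustration of why that metadata matters: once each sequence record carries clinical annotations, gene usage can be compared across conditions rather than merely counted, as in the sketch below. The records, field names and values are all invented.

```python
import pandas as pd

# Invented, metadata-annotated receptor records of the kind a federated
# repository could return; real records carry many more fields.
records = pd.DataFrame([
    {"v_gene": "IGHV1-69", "condition": "rheumatoid arthritis", "treatment": "anti-TNF"},
    {"v_gene": "IGHV1-69", "condition": "rheumatoid arthritis", "treatment": "none"},
    {"v_gene": "IGHV3-23", "condition": "rheumatoid arthritis", "treatment": "anti-TNF"},
    {"v_gene": "IGHV3-23", "condition": "healthy control",      "treatment": "none"},
])

# Because each sequence carries metadata, usage of each variable-region
# gene can be broken down by condition.
usage = records.groupby(["condition", "v_gene"]).size().unstack(fill_value=0)
print(usage)
```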

Because iReceptor pools scarce data in a secure way, research that approaches a specific condition from multiple angles can illuminate immune-system failures, pointing to clinical treatments for rare conditions as well as commonly occurring diseases. That opens up a whole new era of better treatments and even cures for the 80-plus autoimmune diseases known to us.


The genomics revolution in Africa is well underway

2016

African scientists have begun to study genomic influences on disease across their continent, from differences in the progression of HIV in children, to developing new sequencing methods for the Ebola virus, to collecting more than 35,000 nasal swabs from children to show how concentrations of nose and throat microorganisms may play a role in pneumonia.

The Human Heredity and Health in Africa (H3Africa) Initiative supports much of this research. It was launched to facilitate the study of genomics and environmental determinants of common diseases with the goal of improving the health of African populations.

32 research groups

An important part of H3Africa is the Pan African bioinformatics network H3ABioNet, comprising 32 bioinformatics research groups distributed across 15 African countries, plus two American partner institutions. The network aims to develop bioinformatics capacity across the continent, connecting researchers and students over vast distances through web-based conferencing.

H3ABioNet engages both in education and training and in developing bioinformatics and genomics tools for research. Members of the network are spread all over Africa, from Tunisia in the north to South Africa, and from Morocco in the west to Sudan in the east. Accordingly, the network uses video conferencing extensively to bring members together on a regular basis. For example, H3ABioNet recently held two video seminars focusing on Big Data, in which two distinguished scientists working in the Big Data arena presented talks on the current challenges, strategies, infrastructure and initiatives for dealing with big data in genomics.

The web conferencing tool used is the recently launched Mconf, built with the research and innovation community in mind. With it, researchers and innovation teams can collaborate seamlessly over vast distances. It uses Web Real-Time Communication (WebRTC), the latest technology in this area. Mconf is the brainchild of the University of Rio Grande do Sul, Brazil, which also bankrolled its development.
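For a feel of what WebRTC involves, the sketch below uses the open-source aiortc library to run the offer/answer handshake between two peers in a single process and pass a message over a data channel. A real deployment such as Mconf exchanges these session descriptions through a signalling server and carries live audio and video rather than a toy message.

```python
import asyncio
from aiortc import RTCPeerConnection

async def main():
    pc1, pc2 = RTCPeerConnection(), RTCPeerConnection()
    done = asyncio.Event()

    channel = pc1.createDataChannel("chat")

    @channel.on("open")
    def on_open():
        channel.send("hello over WebRTC")

    @pc2.on("datachannel")
    def on_datachannel(ch):
        @ch.on("message")
        def on_message(msg):
            print("peer 2 received:", msg)
            done.set()

    # Offer/answer exchange; in production these descriptions travel
    # through a signalling server, here both peers share one process.
    await pc1.setLocalDescription(await pc1.createOffer())
    await pc2.setRemoteDescription(pc1.localDescription)
    await pc2.setLocalDescription(await pc2.createAnswer())
    await pc1.setRemoteDescription(pc2.localDescription)

    await done.wait()
    await pc1.close()
    await pc2.close()

asyncio.run(main())
```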

SANReN, the team responsible for building and rolling out South Africa’s national research and education network, has been tasked with extending the service to local researchers and innovators through its network.

Since its launch in 2013, the H3ABioNet network has held numerous workshops, trained 456 people and placed 25 fellows in one- to two-month internships, besides driving the research and development of bioinformatics and genomics tools and of standardised workflows and pipelines for H3Africa research projects.

Empowering research

Web conferencing is a vital part of the infrastructure needed to build a pan-African bioinformatics capacity to manage, store, process and interpret the H3Africa research data. Historically, researchers in Africa have sent their samples overseas and had somebody else analyse the data. The H3ABioNet network wants to change that mentality and empower African scientists to do their own research, data interpretation and analysis.

