Development of the institute's database for catch, sampling and survey data collected in the Baltic Sea
During our surveys and other investigations we collect a variety of data. To store these data so that we can find and work with them, we need a database, and one that fits our needs cannot simply be bought off the shelf. The Institute of Baltic Sea Fisheries is therefore developing its own database in collaboration with the two other fisheries institutes within the Thünen Institute.
IT experts and scientists of the three institutes are building a database that stores in one place all scientific data collected in our fish- and fisheries-related research. This database allows us to answer even broad and complex information requests swiftly.
If, for example, somebody asks whether the occurrence of a fish species in our surveys has changed over the last ten years, we can answer with a fast database query. In addition, the database facilitates an easy, automated transfer of our data to the databases of the International Council for the Exploration of the Sea (ICES).
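Such a question can be answered with a single aggregating query. The following sketch uses Python's built-in sqlite3 as a stand-in for the actual system; the table and column names (`catches`, `species`, `year`, `count`) are illustrative assumptions, not the institute's real schema.

```python
import sqlite3

# Hypothetical, simplified catch table; names are assumptions for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE catches (species TEXT, year INTEGER, count INTEGER)")
con.executemany(
    "INSERT INTO catches VALUES (?, ?, ?)",
    [("cod", 2004, 120), ("cod", 2013, 45), ("herring", 2013, 800)],
)

# Occurrence of one species per year over a ten-year window, in one query.
rows = con.execute(
    """SELECT year, SUM(count) FROM catches
       WHERE species = 'cod' AND year BETWEEN 2004 AND 2013
       GROUP BY year ORDER BY year"""
).fetchall()
print(rows)  # [(2004, 120), (2013, 45)]
```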
During our research we collect a variety of data. At survey stations we record the geographic position, water depth and weather conditions, while hydrographic probes measure temperature, salinity and oxygen saturation at different water depths.
Different nets are used for the survey catches, from fine-meshed bongo nets to large bottom otter trawls. The mesh sizes and many other parameters describing these nets must be recorded.
From the catch we collect data on the size, weight and length distribution of individual fish of different species. For many individual fish, sex and age are determined, among many other data.
To conduct meaningful analyses, all of these data need to be linked to each other. This requires a complex relational database structure.
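A minimal sketch of such linking might look as follows, again using sqlite3 as a simplified stand-in. The tables (`station`, `haul`, `fish`) and their columns are illustrative assumptions, not the institute's actual schema; the point is that foreign keys let a query trace each fish back to the haul and station where it was caught.

```python
import sqlite3

# Illustrative relational structure: station -> haul -> individual fish.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE station (
    station_id INTEGER PRIMARY KEY,
    latitude REAL, longitude REAL, water_depth_m REAL
);
CREATE TABLE haul (
    haul_id INTEGER PRIMARY KEY,
    station_id INTEGER REFERENCES station(station_id),
    gear TEXT, mesh_size_mm REAL
);
CREATE TABLE fish (
    fish_id INTEGER PRIMARY KEY,
    haul_id INTEGER REFERENCES haul(haul_id),
    species TEXT, length_cm REAL, weight_g REAL, sex TEXT, age INTEGER
);
""")
con.execute("INSERT INTO station VALUES (1, 54.5, 13.1, 42.0)")
con.execute("INSERT INTO haul VALUES (1, 1, 'bottom otter trawl', 20.0)")
con.execute("INSERT INTO fish VALUES (1, 1, 'cod', 38.5, 520.0, 'F', 3)")

# Joining the tables links each fish back to where it was caught.
row = con.execute(
    """SELECT f.species, s.latitude, s.longitude
       FROM fish f
       JOIN haul h ON f.haul_id = h.haul_id
       JOIN station s ON h.station_id = s.station_id"""
).fetchone()
print(row)  # ('cod', 54.5, 13.1)
```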
The IT experts and scientists who develop the database stay in close contact, both to avoid re-inventing the wheel and to ensure a coherent structure of all tables and data fields in the database.
As a first step, we developed a database structure that can store the data already held at the institute. If necessary, the database can easily be extended with additional tables, columns and fields. We can thus meet future data storage requirements without having to develop a new database.
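Such an extension does not require a redesign: a new measurement type can be accommodated by adding a column (or a new table) to the existing structure. A sketch, with illustrative names and a hypothetical new measurement (Secchi depth):

```python
import sqlite3

# Extending an existing table without redesigning the database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE station (station_id INTEGER PRIMARY KEY, latitude REAL)")

# A newly collected measurement only needs a new column:
con.execute("ALTER TABLE station ADD COLUMN secchi_depth_m REAL")

cols = [r[1] for r in con.execute("PRAGMA table_info(station)")]
print(cols)  # ['station_id', 'latitude', 'secchi_depth_m']
```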
To test the database structure, existing data were imported step by step.
A database is only as good as the means it gives scientists to retrieve their data. They must be able to select and export data for use in subsequent calculations and analyses. The presentation of data, routines for searching and selecting data, and export functions are therefore essential parts of our database, developed in close exchange with the institute's scientists.
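An export routine can be as simple as selecting the relevant rows and writing them out in a format that analysis tools can read, such as CSV. A sketch with illustrative table and column names:

```python
import csv
import io
import sqlite3

# Select data from the database and write them as CSV for later analysis.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fish (species TEXT, length_cm REAL)")
con.executemany("INSERT INTO fish VALUES (?, ?)",
                [("cod", 38.5), ("herring", 21.0)])

buf = io.StringIO()  # in a real export this would be a file on disk
writer = csv.writer(buf)
writer.writerow(["species", "length_cm"])
writer.writerows(con.execute("SELECT species, length_cm FROM fish"))
print(buf.getvalue())
```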
In a further, particularly laborious step, our IT experts are developing and programming input forms and routines so that data can be entered directly into the database.
As database management system we use the object-relational database PostgreSQL together with the PostGIS extension, which adds geographic objects and functions. This system is well suited to managing data that must be stored together with the geographic position at which they were collected. That it is free, open-source software is a further advantage.
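A typical spatial question is "which stations lie inside this region?". PostGIS answers this with true geometry types and functions such as `ST_Within` and `ST_MakeEnvelope`; the sketch below imitates the idea with plain latitude/longitude columns and a bounding-box filter in sqlite3, since it cannot run PostGIS itself. All names and coordinates are illustrative.

```python
import sqlite3

# In PostGIS, a bounding-box query could look like:
#   SELECT station_id FROM station
#   WHERE ST_Within(geom, ST_MakeEnvelope(12.0, 54.0, 15.0, 56.0, 4326));
# Here we imitate it with plain lat/lon columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE station (station_id INTEGER, lat REAL, lon REAL)")
con.executemany("INSERT INTO station VALUES (?, ?, ?)",
                [(1, 54.5, 13.1), (2, 57.9, 11.4)])

# All stations inside an illustrative Baltic Sea bounding box.
inside = con.execute(
    "SELECT station_id FROM station "
    "WHERE lat BETWEEN 54.0 AND 56.0 AND lon BETWEEN 12.0 AND 15.0"
).fetchall()
print(inside)  # [(1,)]
```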
The database structure, in place since the end of 2013, allows the organized storage of all data sets that exist at our institute. The first data entry forms should be ready by autumn 2014; technicians and scientists will then test them so that they can be improved further.
Permanent task 1.2006 - 12.2020
Project status: ongoing