Query Answering for Animal Tracking
The study of the animal world is immensely diverse, generating and collecting information on the natural history, distribution, and classification of animals in order to establish an inventory of the different species in the world.
An animal database/tracker usually has:
Thousands of records about individual animal species. These may include text, pictures of living animals, photographs and movies of specimens, and/or recordings of sounds.
Descriptions of taxa above the species level, especially phyla, classes, orders, and families.
Online databases and trackers such as Movebank and eBird provide information about animals ranging from the common to the extinct. They help groups of researchers who want to know more about animal behaviour and movement to manage, share, analyze, and archive their animal data by:
Collecting animal data for future use, as controlled by the data owners.
Enabling collaborations between researchers and individuals who are interested in animal movement.
Helping to address new questions by combining datasets to test ideas related to ecological patterns, evolutionary processes, and disease spread.
Sharing data with the public or with other registered users.
A key example is using eBird to map and track birds in a region: eBird documents bird distribution, abundance, habitat use, and trends through checklist data collected within a simple, scientific framework.
Here is an example of a collection of resources that could be used to answer some questions about animal tracking.
The underlay contains collections of CSV files, metadata, meta-schemas, dataset locations, graphs, and maps, all linked to the dataset; it also contains versioning of collections.
The interlay maps out the locations of the required resources (or identifies where they don't exist), thereby providing support to the overlay. It could be APIs, READMEs, links to different sources, etc.
The overlay breaks a question down into the multiple ways it can be asked and determines how to get relevant answers to them.
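The three layers above can be sketched in code. This is a minimal illustration, not the actual Underlay architecture: the dataset name, fields, and registry structure are all hypothetical, chosen only to show how an overlay question is resolved through the interlay against versioned underlay data.

```python
# Underlay: versioned collections of data, linked to a schema.
# All names and rows here are hypothetical.
underlay = {
    "bird_sightings": {
        "schema": ["species", "region", "count"],
        "versions": [
            {"version": 1, "rows": [
                {"species": "American Robin", "region": "Texas", "count": 12},
                {"species": "Blue Jay", "region": "Texas", "count": 5},
            ]},
        ],
    },
}

# Interlay: maps resource names to where they live, or notes that they don't exist.
interlay = {
    "bird_sightings": {"location": "underlay"},
    "mammal_sightings": {"location": None},  # identified as missing
}

def overlay_query(species, region):
    """Overlay: turn one question ("is this species in this region?")
    into lookups against whatever the interlay says is available."""
    resource = interlay.get("bird_sightings")
    if not resource or resource["location"] != "underlay":
        return None  # the interlay reports that the data does not exist
    latest = underlay["bird_sightings"]["versions"][-1]["rows"]
    return [r for r in latest
            if r["species"] == species and r["region"] == region]

print(overlay_query("Blue Jay", "Texas"))
```

The point of the sketch is the separation of concerns: the overlay never touches raw data directly, and the interlay is the only layer that knows whether (and where) a resource exists.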
An example would be checking the availability of a specific animal in a specific region on a graph/map. The graph/map takes into consideration which queries the overlay would require and which interlay resources would support them, along with other relevant sources.
This graph/map may provide information about:
The population of an animal in a specific region
The availability of different species of animals in that region
Comparing and contrasting animals, e.g. the map/graph of dogs against the map/graph of iguanas in Mexico
The probability of finding one group of animals rather than another.
Trying to find out how many elephants are found in Africa is a hard question to answer with a graph, as there are different estimates based on different researchers.
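The kinds of graph/map queries listed above can be sketched over a tiny sightings table. The CSV below is invented for illustration; real Movebank or eBird data would have far richer fields (timestamps, coordinates, observer IDs), but the query shapes are the same.

```python
import csv
import io

# Hypothetical sightings data: species, region, reported count.
DATA = """species,region,count
dog,Mexico,40
iguana,Mexico,15
dog,Texas,25
elephant,Kenya,300
"""

rows = list(csv.DictReader(io.StringIO(DATA)))

def population(species, region):
    """Total reported count of one species in one region."""
    return sum(int(r["count"]) for r in rows
               if r["species"] == species and r["region"] == region)

def species_in(region):
    """Which species have been reported in a region."""
    return sorted({r["species"] for r in rows if r["region"] == region})

# Contrast two species in the same region, e.g. dogs vs iguanas in Mexico.
print(population("dog", "Mexico"), "vs", population("iguana", "Mexico"))
print(species_in("Mexico"))
```

Note that `population` simply sums reported counts; with real data, different researchers' estimates would conflict (as in the elephant example), so a production overlay would have to surface the competing versions rather than a single number.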
A second example would be using a voice assistant more interactively to ask questions about the animals around you.
A question that could be asked of the voice assistant might be: "I saw an animal with white stripes or red spots; what could it be?"
The voice assistant would process the query and match the keywords to information such as:
— what animals have white stripes and red spots
— what kind of ecosystem the person asking the question lives in
— seasonal migration patterns and habits of such animals
To give a more specific result, the voice assistant further narrows down the query until it gets to the right answer. It could ask questions like:
What sounds does it make?
Does it have two feet or four feet? etc.
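The narrowing loop described above can be sketched as keyword matching against a trait database followed by filtering on follow-up answers. The animals and traits below are made up for illustration; a real assistant would use speech recognition and a much larger taxonomy.

```python
# Hypothetical trait database: animal name -> set of trait keywords.
ANIMALS = {
    "zebra":   {"white stripes", "four feet", "neigh"},
    "ladybug": {"red spots", "six feet", "silent"},
    "okapi":   {"white stripes", "four feet", "silent"},
}

def match_keywords(question):
    """Return the animals whose traits appear in the question text."""
    q = question.lower()
    return {name for name, traits in ANIMALS.items()
            if any(trait in q for trait in traits)}

def narrow(candidates, answers):
    """Filter candidates by answers to follow-up questions,
    e.g. 'What sounds does it make?' -> 'silent'."""
    for trait in answers:
        candidates = {name for name in candidates if trait in ANIMALS[name]}
    return candidates

candidates = match_keywords("I saw an animal with white stripes")
print(candidates)                      # zebra and okapi both match so far
print(narrow(candidates, ["silent"]))  # the follow-up answer narrows to okapi
```

The assistant keeps asking follow-up questions exactly when `narrow` still returns more than one candidate, which is the behaviour the text describes.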
Answering questions about animal availability, types, behaviour, etc. requires different datasets combined together to provide answers to the different questions.
This looks into what questions to expect or might be asked, what answers would be given, and how this is interpreted at each layer.
Underlay: collections of versioned data that are part of a schema.
Interlay: could be an API which reads and interprets the contextual data.
The overlay works in a series of steps to give an output: first it receives the voice input with the query and processes it, looking for keywords; then it matches the keywords in the query against the interlay, which checks through various sources, reads and interprets the data, and outputs the results of the query. The underlay contains versions of the various results obtained.
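The end-to-end flow just described can be sketched in a few lines: the overlay extracts keywords, the interlay resolves them against a source, and the underlay appends a versioned copy of each result. Everything here is hypothetical, including the sources; note the elephant answer deliberately reports that estimates differ, matching the earlier point.

```python
underlay_versions = []   # underlay: versioned history of query results

SOURCES = {              # interlay: keyword -> region -> data source
    "elephants": {"africa": "estimates differ by researcher"},
    "zebras":    {"africa": "widespread in savanna regions"},
}

def answer(query):
    """Overlay: extract keywords, resolve them via the interlay,
    and store each result as a new version in the underlay."""
    words = query.lower().split()
    for keyword, regions in SOURCES.items():
        if keyword in words:
            for region, result in regions.items():
                if region in words:
                    underlay_versions.append({"query": query, "result": result})
                    return result
    return "no matching source"

print(answer("how many elephants are in africa"))
print(len(underlay_versions))
```

Keeping every result in `underlay_versions` rather than overwriting it is what lets later users see how answers to the same query changed as the underlying datasets were revised.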
Who maintains this system?
The datasets are maintained by collaborators, individuals, different organizations, crowdsourcing, etc.
Common sources of knowledge