Property:Has Description

From Openresearch

This is a property of type Text.

Showing 16 pages using this property.
Q
In this section, we evaluate the performance of the DARQ query engine. The prototype was implemented in Java as an extension to ARQ. We used a subset of DBpedia; DBpedia contains RDF information extracted from Wikipedia, and the dataset is offered in different parts.
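For illustration only, a minimal sketch of issuing a SPARQL query to the public DBpedia endpoint with Apache Jena ARQ. This is not the DARQ prototype itself; the endpoint URL, the query, and a recent Jena version are assumptions.

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class DBpediaQueryExample {
    public static void main(String[] args) {
        // Hypothetical query: fetch a few labels from the public DBpedia endpoint.
        String queryString =
            "SELECT ?s ?label WHERE { ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label } LIMIT 5";
        Query query = QueryFactory.create(queryString);
        // sparqlService() sends the query to a remote SPARQL endpoint.
        try (QueryExecution qexec =
                 QueryExecutionFactory.sparqlService("https://dbpedia.org/sparql", query)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("s") + "  " + row.get("label"));
            }
        }
    }
}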
We deploy 6 SPARQL endpoints (Sesame 2.4.0) on 5 remote virtual machines. About 400,000 triples (generated by BSBM) are distributed to these endpoints following a Gaussian distribution. We follow the metrics presented in (23). For each query, we calculate the number of queries executed per second (QPS) and the average result count. For the whole test, we record the overall runtime, CPU usage, memory usage and network overhead. We perform 10 warm-up runs and 50 testing runs for each engine. The timeout is set to 30 seconds. In each run, only one instance of each engine is used for all queries, but the cache is cleared after each query. Warm-up runs are not counted in the query-time-related metrics, but are included in the system and network overhead.
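As a small illustration of the QPS metric described above, a sketch in Java; the timing values and the helper method are hypothetical placeholders, not measurements from this benchmark.

import java.util.List;

public class QpsExample {
    // QPS for one query: number of executions divided by the total wall-clock time in seconds.
    static double queriesPerSecond(List<Double> runTimesSeconds) {
        double total = runTimesSeconds.stream().mapToDouble(Double::doubleValue).sum();
        return runTimesSeconds.size() / total;
    }

    public static void main(String[] args) {
        // Hypothetical timings; a real run would collect one value per testing run.
        List<Double> times = List.of(0.8, 1.1, 0.9);
        System.out.printf("QPS = %.2f%n", queriesPerSecond(times));
    }
}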
S
We investigated how the information from the VoID descriptions affects the accuracy of the source selection. For each query, we look at the number of sources selected and the resulting number of requests to the SPARQL endpoints. We tested three different source selection approaches, based on 1) the predicate index only (no type information), 2) the predicate and type index, and 3) the predicate and type index combined with grouping of sameAs patterns, as described in Section 4.2.
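A minimal sketch, in Java, of the first variant: a predicate-only index mapping each predicate to the endpoints that advertise it. The class name, predicate, and endpoint URLs are invented for illustration.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PredicateIndexExample {
    // Maps each predicate IRI to the endpoints whose VoID description lists it.
    private final Map<String, Set<String>> predicateIndex = new HashMap<>();

    void add(String predicate, String endpoint) {
        predicateIndex.computeIfAbsent(predicate, p -> new HashSet<>()).add(endpoint);
    }

    // Predicate-only selection: every endpoint advertising the predicate is a candidate source.
    Set<String> selectSources(String predicate) {
        return predicateIndex.getOrDefault(predicate, Set.of());
    }

    public static void main(String[] args) {
        PredicateIndexExample index = new PredicateIndexExample();
        // Hypothetical VoID-derived entries.
        index.add("http://xmlns.com/foaf/0.1/name", "http://example.org/endpoint1/sparql");
        index.add("http://xmlns.com/foaf/0.1/name", "http://example.org/endpoint2/sparql");
        System.out.println(index.selectSources("http://xmlns.com/foaf/0.1/name"));
    }
}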
T
We followed these steps:
– A set of 10 predefined natural language queries was prepared for evaluation (Table 4). Participants were then asked to try to answer these queries using their own tools and services. The queries were chosen in increasing order of complexity.
– We implemented SPARQL queries corresponding to each of these queries to enable non-expert participants, who are not familiar with SPARQL, to query the knowledge graph.
– We asked researchers to review the answers to the predefined queries that we formulated based on the SemSur ontology, and to tell us whether they consider the provided answers and the way the queries are formulated comprehensive and reasonable.
– We finally asked the same researchers to fill in a satisfaction questionnaire with 18 questions.
No data available now.
Z
No data available now.