1. Imagining the thinking machine: technological myths and the rise of Artificial Intelligence
2. New York Yankees and Hollywood Anglos: the persistence of anglo-conformity in the American motion picture industry
3. A Structural-Lexical Measure of Semantic Similarity for Geo-Knowledge Graphs
In: Ballatore, Andrea; Bertolotto, Michela; & Wilson, David C. (2015). A Structural-Lexical Measure of Semantic Similarity for Geo-Knowledge Graphs. ISPRS International Journal of Geo-Information, 4(2). UC Santa Barbara. Retrieved from http://www.escholarship.org/uc/item/9zx1b95k
4. An evaluative baseline for geo-semantic relatedness and similarity
5. An evaluative baseline for geo-semantic relatedness and similarity
6. The semantic similarity ensemble
In: Journal of Spatial Information Science (2013)
Abstract:
Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility depending on how closely it mimics human behavior. Thus, selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and the practice of soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE) as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of their members. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
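The ensemble idea in the abstract can be sketched in a few lines: several similarity measures act as a panel of experts whose judgments are combined into one score. The toy lexical measures and the mean-combination rule below are illustrative assumptions, not the paper's actual components.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap similarity between two terms (one toy 'expert')."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def trigram_similarity(a: str, b: str) -> float:
    """Character-trigram overlap, a cruder lexical 'expert'."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def ensemble_similarity(a: str, b: str, experts) -> float:
    """Combine the panel's judgments; averaging is one possible rule."""
    scores = [expert(a, b) for expert in experts]
    return sum(scores) / len(scores)

experts = [jaccard_similarity, trigram_similarity]
print(ensemble_similarity("mountain peak", "mountain summit", experts))
```

In contexts where it is unknown which single measure best matches human judgments, the averaged panel hedges against picking a poor one, which is the abstract's core claim.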
Keywords:
Computer Sciences; ensemble modeling; expert disagreement; geo-semantics; Geographic Information Sciences; Geography; lexical similarity; semantic similarity; semantic similarity ensemble; SSE; WordNet
URL: https://digitalcommons.library.umaine.edu/josis/vol2013/iss7/3
URL: https://digitalcommons.library.umaine.edu/cgi/viewcontent.cgi?article=1051&context=josis
7. Computing the semantic similarity of geographic terms using volunteered lexical definitions
8. The Similarity Jury: Combining expert judgements on geographic concepts