Title: Common-sense knowledge for natural language understanding: Experiments in unsupervised and supervised settings
Recognized authors:
Authors: Di Caro, Luigi; Ruggeri, Alice; Cupi, Loredana; Boella, Guido
Publication date: 2015
Abstract: Research in Computational Linguistics (CL) has been growing rapidly in recent years in terms of novel scientific challenges and commercial application opportunities. This is due to the fact that a very large part of the Web content is textual and written in many languages. Apart from linguistic resources (e.g., WordNet), the research trend is moving towards the automatic extraction of semantic information from large corpora to support on-line understanding of textual data. A direct outcome of this trend is the creation of common-sense semantic resources. The main example is ConceptNet, the final result of the Open Mind Common Sense project developed by MIT, which collected unstructured common-sense knowledge by asking people to contribute over the Web. Despite its promising size and broad semantic coverage, few applications have appeared in the literature so far, due to a number of issues such as inconsistency and sparseness. In this paper, we present the results of applying this type of knowledge in two different (supervised and unsupervised) scenarios: the computation of semantic similarity (the keystone of most Computational Linguistics tasks), and the automatic identification of word meanings (Word Sense Induction) in simple syntactic structures. © Springer International Publishing Switzerland 2015.
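The abstract names semantic similarity as the keystone task addressed with common-sense knowledge. As a minimal, hedged illustration of what such a computation typically involves (and not the authors' actual method), the sketch below compares concepts via cosine similarity over toy feature vectors; the vectors and their values are purely hypothetical stand-ins for common-sense feature weights:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0  # treat zero vectors as maximally dissimilar
    return dot / (norm_u * norm_v)

# Hypothetical feature weights for three concepts (illustrative only).
dog = [1.0, 0.8, 0.0, 0.3]
cat = [0.9, 0.7, 0.1, 0.2]
car = [0.0, 0.1, 1.0, 0.9]

print(cosine_similarity(dog, cat))  # higher: semantically related concepts
print(cosine_similarity(dog, car))  # lower: unrelated concepts
```

In practice, resources such as ConceptNet provide the relational knowledge from which such concept representations can be derived; this sketch only shows the final comparison step.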
Publisher: Springer Verlag
Book title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9336
First page: 233
Last page: 245
Conference name: 14th International Conference of the Italian Association for Artificial Intelligence, 2015
Conference location: Ferrara, Italy
Conference dates: 23-25 September 2015
Digital Object Identifier (DOI): 10.1007/978-3-319-24309-2_18
ISBN: 9783319243085
URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84983435488&doi=10.1007%2f978-3-319-24309-2_18&partnerID=40&md5=7f5e5a9beb5998ae6bf460d594456818
Keywords: Artificial intelligence; Automation; Linguistics; Natural language processing systems; Semantics; Syntactics, Automatic extraction; Automatic identification; Commercial applications; Commonsense knowledge; Linguistic resources; Natural language understanding; Semantic information; Word sense inductions, Computational linguistics
Appears in type: 04A-Conference paper in volume