Abstract:
Most text-mining algorithms use the vector space model of knowledge representation. The vector space model uses the frequency (weight) of a term to determine its importance in a document. Terms can be semantically similar but lexicographically different, so classification based purely on term frequency does not give the desired result.
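To make the frequency-based weighting concrete, here is a minimal sketch (illustrative only, not the paper's implementation; the vocabulary and documents are hypothetical). Note how the synonyms "car" and "automobile" occupy disjoint dimensions, which is exactly the weakness described above:

```python
from collections import Counter

def tf_vector(document_tokens, vocabulary):
    """Term-frequency weight of each vocabulary term in one document."""
    counts = Counter(document_tokens)
    total = len(document_tokens)
    return [counts[term] / total for term in vocabulary]

vocab = ["car", "automobile", "road"]
doc = ["car", "car", "road"]
# "automobile" gets weight 0.0 even though it is a synonym of "car"
print(tf_vector(doc, vocab))
```

Under a purely frequency-based model, a document about "automobiles" and one about "cars" share no weight on these dimensions, so their vectors look dissimilar.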
Analysis of low-quality results shows that errors occur because characteristics of natural language were not taken into account. Neglecting these features, namely synonymy and polysemy, increases the dimensionality of the semantic space, which in turn determines the performance of the final software product built on the algorithm. Furthermore, many complex algorithms require a domain expert to prepare the training sample, which also affects the quality of the algorithm's output.
We propose a model that, in addition to the weight of a term in a document, also uses the semantic weight of the term. The semantic weight of a pair of terms is higher the closer the terms are to each other semantically.
To calculate the semantic similarity of terms, we propose to use an adaptation of the extended Lesk algorithm. The method calculates semantic similarity as follows: for each sense of the word in question, we count the number of words that occur both in the dictionary definition of that sense (assuming the dictionary describes several senses of the word) and in the immediate context of the word. The sense with the largest such overlap is selected as the most probable meaning of the word. A vector model based on the semantic proximity of terms solves the problem of the ambiguity of synonyms.
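The gloss-overlap step can be sketched as follows. This is a simplified illustration of the basic Lesk idea, not the authors' extended adaptation; the mini-dictionary for "bank" and the function name are hypothetical:

```python
def lesk_sense(context_words, sense_glosses):
    """Pick the sense whose dictionary gloss shares the most words
    with the word's immediate context (simplified Lesk overlap).

    context_words: tokens surrounding the ambiguous word
    sense_glosses: dict mapping sense label -> gloss token list
    """
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_glosses.items():
        overlap = len(context & set(gloss))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Hypothetical two-sense mini-dictionary for "bank"
glosses = {
    "bank_finance": ["institution", "money", "deposit", "loan"],
    "bank_river": ["sloping", "land", "beside", "river", "water"],
}
print(lesk_sense(["he", "sat", "by", "the", "river", "water"], glosses))
# → bank_river (overlap of 2 words: "river", "water")
```

The extended Lesk algorithm additionally compares glosses of related senses (e.g. hypernyms and hyponyms), which enlarges the overlap signal for short definitions.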
Keywords: text mining, vector space model, semantic relatedness.