Abstract:
Crowdsourcing is used for a wide variety of tasks on the Internet. From the point of view of knowledge extraction, it helps leverage domain knowledge by gathering individual judgments from experts on specific subjects. Despite crowdsourcing’s proven effectiveness in tackling diverse problems, researchers have not converged on a standard framework for representing and modelling this approach. In this work, a multiagent system (MAS) is presented as a method for modelling crowdsourcing processes intended to elicit expert knowledge. The system, exemplified by a corpus annotation process, includes a formulation of its goals in terms of uniqueness, value and temporality, and comprises a dynamic reward scheme that yields a real measure of inter-annotator agreement (IAA) while constraining the model to a time window and a reward limit.