
Social Norms for Self-Policing Multi-agent Systems and Virtual Societies



Daniel Villatoro Segura

Affiliation: Consejo Superior de Investigaciones Científicas. Institut d'Investigació en Intel·ligència Artificial (Bellaterra, Spain)

Biography: Not available


Year of publication: 2012

Language: English

Subjects: Science and Technology

Collection: Monografies de l'Institut d'Investigació en Intel·ligència Artificial

Free eBook

Abstract:

Social norms are one of the mechanisms by which decentralized societies achieve coordination amongst individuals. Such norms are conflict-resolution strategies that emerge from the interactions of the population rather than being dictated by a centralized entity prescribing agent protocols. One of the most important characteristics of social norms is that they are imposed by the members of the society themselves, who are also responsible for their fulfillment and defense. By allowing agents to manage (impose, abide by and defend) social norms, societies achieve a higher degree of freedom, since no authority is needed to supervise every interaction amongst agents. In this thesis we approach social norms as a malleable concept, understanding norms as dynamic and dependent on environmental situations and agents' goals. By equipping agents with the mechanisms needed to handle this concept of norm, we obtain an agent architecture able to self-police its behavior according to the social and environmental circumstances in which it is located.

First of all, we have grounded the difference between conventions and essential norms from a game-theoretical perspective. This distinction is essential, as the two favor coordination in games with different characteristics. With respect to conventions, we have analyzed the search space of convention emergence when approached with social learning. This exploration led us to discover the existence of Self-Reinforcing Structures that delay the emergence of global conventions. In order to dissolve these structures and accelerate the emergence of conventions, we have designed socially inspired mechanisms (rewiring and observation) that agents can use by accessing only local information. These social instruments represent a robust solution to the problem of convention emergence, especially in complex networks such as scale-free networks.
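
As a loose illustration of the convention-emergence setting described above (not the thesis's actual experimental protocol), the sketch below places agents on a ring network, has them play a pairwise coordination game with their neighbours, and lets each imitate the most successful action it can observe locally (social learning). The network, payoffs and update rule are assumptions made for this sketch; frozen clusters of the minority action that survive many rounds are loosely analogous to the Self-Reinforcing Structures mentioned above.

import random

# Minimal convention-emergence sketch: agents on a ring network repeatedly play a
# pairwise coordination game and imitate the most successful action they can
# observe locally (social learning). All parameters are illustrative assumptions.

N = 100                   # number of agents
K = 4                     # each agent is linked to its K nearest neighbours on a ring
ACTIONS = ("A", "B")      # two competing conventions
ROUNDS = 200

# Ring lattice: agent i is connected to K/2 neighbours on each side.
neighbours = {
    i: [(i + d) % N for d in range(-K // 2, K // 2 + 1) if d != 0]
    for i in range(N)
}
action = {i: random.choice(ACTIONS) for i in range(N)}

def play_round():
    """Payoff 1 for every neighbour that chose the same action, 0 otherwise."""
    payoff = {i: 0 for i in range(N)}
    for i in range(N):
        for j in neighbours[i]:
            if action[i] == action[j]:
                payoff[i] += 1
    return payoff

for _ in range(ROUNDS):
    payoff = play_round()
    # Social learning: copy the action of the best-scoring agent in the local view.
    action = {
        i: action[max([i] + neighbours[i], key=lambda a: payoff[a])]
        for i in range(N)
    }

counts = {a: sum(1 for i in range(N) if action[i] == a) for a in ACTIONS}
print("final action counts:", counts)   # one count equal to N means a global convention emerged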

For essential norms, on the other hand, we have focused on the "Emergence of Cooperation" problem, as it contains the characteristics of any essential-norm scenario. In this type of game there is a conflict between the self-interest of the individual and the interest of the group, with the social norm fixing a cooperative strategy. In this thesis we study different decentralized mechanisms by which cooperation emerges and is maintained. An initial set of experiments on Distributed Punishment led us to discover that certain types of punishment have a stronger effect on decision making than a pure benefit-cost calculation. Based on this result, we hypothesize that punishment (a utility detriment) has a weaker effect on the cooperation rates of the population than sanction (a utility detriment combined with normative elicitation). This hypothesis has been integrated into the agent architecture we have developed (EMIL-I-A). We validate the hypothesis by performing experiments with human subjects and observing that the architecture behaves consistently with human subjects in similar scenarios representing the emergence of cooperation. We have further exploited this architecture, proving its efficiency in in-silico scenarios that vary a number of important parameters which are unfeasible to reproduce in laboratory experiments with human subjects because of the economic and time resources required.

Finally, we have implemented an Internalization module, which allows agents to reduce their computation costs by linking norm compliance to their own goals. Internalization is the process by which an agent abides by the norms without taking into consideration the punishments or sanctions associated with defection. We show through agent-based simulation how Internalization has been implemented in the EMIL-I-A architecture, obtaining efficient performance of our agents without sacrificing their adaptation skills.
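
The punishment/sanction/internalization distinction above can be made concrete with a toy agent model. The sketch below is a heavily simplified assumption of how such concepts might be encoded, not the actual EMIL-I-A architecture: a punishment only reduces utility, a sanction additionally raises a "norm salience" value, and once salience crosses a threshold the agent complies without weighing the deterrent at all (internalization).

from dataclasses import dataclass

# Toy model of the punishment / sanction / internalization distinction.
# The update rules and numbers are assumptions for this sketch only; they are
# not the EMIL-I-A architecture described in the thesis.

@dataclass
class NormativeAgent:
    utility: float = 0.0
    norm_salience: float = 0.1   # perceived importance of the cooperation norm
    threshold: float = 0.9       # salience at which the norm counts as internalized

    def decide(self, expected_loss_if_caught: float) -> str:
        """Cooperate if the norm is internalized, or if its salience outweighs
        the temptation to defect; otherwise defect."""
        if self.norm_salience >= self.threshold:
            return "cooperate"   # internalized: deterrence is no longer considered
        temptation = 1.0 - expected_loss_if_caught
        return "cooperate" if self.norm_salience >= temptation else "defect"

    def punished(self, fine: float) -> None:
        # Pure punishment: a utility detriment with no normative message,
        # so norm salience is left unchanged.
        self.utility -= fine

    def sanctioned(self, fine: float, salience_boost: float = 0.2) -> None:
        # Sanction: the same utility detriment plus normative elicitation,
        # which raises the perceived importance of the norm.
        self.utility -= fine
        self.norm_salience = min(1.0, self.norm_salience + salience_boost)


agent = NormativeAgent()
print(agent.decide(expected_loss_if_caught=0.3))   # low salience, weak deterrence: "defect"
for _ in range(5):
    agent.sanctioned(fine=1.0)                      # repeated sanctions raise salience
print(agent.norm_salience, agent.decide(0.0))      # internalized: "cooperate"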

View table of contents (0.17 MB)

Preview (0.17 MB)

Bibliographic information

Physical description: 184 p. : graphs ; 24 cm

ISBN: 978-84-00-09600-7

eISBN: 978-84-00-09601-4

Publication: Bellaterra (Spain) : Consejo Superior de Investigaciones Científicas, 2012

CSIC reference: 12195

Other details: Thesis. Universidad Autónoma de Barcelona (Spain), 2011


This eBook is available as a free download.

Free downloads

Download eBook (5.75 MB)

This title has been in our electronic catalogue since Monday, 8 July 2013.