We are delighted to announce that Luis Gustavo Nardin has received the Best Student Paper Award at SSC 2014 for the paper “From Anarchy to Monopoly: How Competition and Protection Shaped Mafia’s Behavior”.
We’re glad to announce a new paper from LABSS as a contribution to the debate on the approach to computational social science: On Agent-Based Modelling and Computational Social Science.
The first part of the paper discusses the field of Agent-Based Modelling, focusing on the role of generative theories, which aim to explain phenomena by growing them. After a brief analysis of the field’s major strengths, some crucial weaknesses are examined. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and overshadowed the application of rich cognitive models. The second part of the paper focuses on the renewed interest in Computational Social Science, identifying and describing several of its variants, such as deductive, generative, and complex CSS. In the concluding remarks, an interdisciplinary variant that takes after ABM while reconciling it with the quantitative variant is proposed as a fundamental requirement for a new CSS research programme.
New paper on peer review from LABSS and UniValencia: Mechanism change in a simulation of peer review: from junk support to elitism.
Our honest, totally unbiased, objective evaluation of this work is: reading it will change your life. You will sleep better. A sense of clarity will ensue. The pictures will fire your imagination. It is the only paper you really need to read this year.
Ahem. Well maybe we’re a little bit overplaying it. Ok, here’s the abstract:
Peer review works as the hinge of the scientific process, mediating between research and the awareness and acceptance of its results. While it might seem obvious that science would regulate itself scientifically, the consensus on peer review is eroding; a deeper understanding of its workings and potential alternatives is sorely needed. Employing a theoretical approach supported by agent-based simulation, we examined computational models of peer review, performing what we propose to call redesign, that is, the replication of simulations using different mechanisms. Here, we show that we are able to reproduce the high sensitivity to rational cheating that is reported in the literature. In addition, we show how this result appears to be fragile against small variations in mechanisms. Therefore, we argue that exploration of the parameter space is not enough if we want to support theoretical statements with simulation: exploration at the level of mechanisms is also needed. These findings support prudence in the application of simulation results based on single mechanisms, and endorse the use of complex agent platforms that encourage experimentation with diverse mechanisms.