After writing this post about economics, physics and econophysics, I was poking around the web, looking for Philip Ball’s articles. Ball is the author of the piece that I linked to in my post, and has written quite enthusiastically about “sociophysics”, which seems, to me, to be mostly simulations in which independent entities (particles, people, institutions) act and react according to specific rules. From statistical physics simulations of interacting particles, we know that complex behaviour can emerge even from simple interactions among the particles, and I guess the hope in sociophysics is to show a similar correspondence between simple interactions among entities (‘agents’ seems to be the preferred term in sociophysics) and the emergence of complex behaviour in the aggregate.
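To make the “simple rules, complex aggregate behaviour” idea concrete, here is a minimal sketch of a Schelling-style segregation model (Schelling’s work comes up again below). All parameter values here are illustrative assumptions, not anything from Ball’s book: agents of two types sit on a grid and move to a random vacant cell whenever fewer than 30% of their neighbours are of their own type — a very mild preference that nevertheless produces markedly segregated neighbourhoods.

```python
import random

def neighbours(grid, w, h, i):
    """Contents of the (up to 8) cells adjacent to cell i on a w x h grid."""
    x, y = i % w, i // w
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                out.append(grid[ny * w + nx])
    return out

def like_fraction(grid, w, h, i):
    """Fraction of occupied neighbouring cells holding an agent of i's type."""
    me = grid[i]
    ns = [n for n in neighbours(grid, w, h, i) if n is not None]
    if not ns:
        return 1.0  # an isolated agent is counted as content
    return sum(n == me for n in ns) / len(ns)

def run(w=20, h=20, vacancy=0.1, threshold=0.3, steps=20_000, seed=0):
    """Toy Schelling model: unhappy agents (like_fraction < threshold)
    jump to a random vacant cell. Returns the mean like-neighbour
    fraction over all agents -- a crude segregation index."""
    rng = random.Random(seed)
    n = w * h
    n_vac = int(n * vacancy)
    agents = [0, 1] * ((n - n_vac) // 2)          # two types, equal numbers
    grid = agents + [None] * (n - len(agents))    # None marks a vacant cell
    rng.shuffle(grid)
    for _ in range(steps):
        i = rng.randrange(n)
        if grid[i] is None:
            continue
        if like_fraction(grid, w, h, i) < threshold:
            j = rng.choice([k for k, c in enumerate(grid) if c is None])
            grid[i], grid[j] = None, grid[i]      # move to a vacant cell
    occupied = [i for i, c in enumerate(grid) if c is not None]
    return sum(like_fraction(grid, w, h, i) for i in occupied) / len(occupied)
```

On a random initial grid the segregation index starts near 0.5 and, even with the tolerant 30% threshold, typically ends up well above it — the aggregate pattern looks far more intolerant than any individual rule.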
Philip Ball has a huge footprint on the web, a testimony to his prolific output, not only as a regular columnist for the Nature group of publications, but also as the author of quite a few books. Check out his website. One of his recent books, Critical Mass: How One Thing Leads to Another, is specifically about sociophysics. Some of its ideas appeared earlier in a short article with the catchy title Physics of Institutions (pdf); see also this rather nice popular science piece titled Utopia Theory in PhysicsWeb.
Here are some of the reviews of this book: Bruce Edmonds, James Buchan for the Guardian, Steven Strogatz for Nature, and Tamás Vicsek for PhysicsWeb. The ‘Reviews’ section of Ball’s website has links to more of them.
Let me quote from Bruce Edmonds’ review:
… It is, in its way, the first “popular science” book covering a substantial section of social simulation, and talks about many of the main figures up to about 1990 (it does cover later work but not so comprehensively, which is understandable). Thus the work of Thomas Schelling, Ilya Prigogine, Brian Arthur, Alan Kirman, Robert Axtell, Joshua Epstein, Robert Axelrod, Paul Ormerod, Martin Nowak, Per Bak, Duncan Watts, are all discussed.
In all of this the book is quite careful as to matters of fact – in detail all its statements are cautiously worded and filled with subtle caveats. However its broad message is very different, implying that abstract physics-style models have been successful at identifying some general laws and tendencies in social phenomena. It does this in two ways: firstly, by slipping between statements about the behaviour of the models and statements about the target social phenomena, so that it is able to make definite pronouncements and establish the success and relevance of its approach; and secondly, by implying that it is as well-validated as any established physics model but, in fact, only establishing that the models can be used as sophisticated analogies – ways of thinking about social phenomena. The book particularly makes play of analogies with the phase transitions observed in fluids since this was the author’s area of expertise.
This book is by no means unique in making these kinds of conflation – they are rife within the world of social simulation. The culture of physics is a complex of different attitudes, norms, procedures, tools, bodies of knowledge and social structures that are extremely effective at producing useful knowledge in some domains – it is not for nothing that physicists have gained status within our society. However when this culture is transported into new domains, such as that of modelling social phenomena, the culture does not travel uniformly. Thus we have seen (and Critical Mass documents) an influx of simple, physics-style simulation models into sociology but they have arrived without the usual physicists’ insistence that models predict unseen data. It is part of the culture of physics to aspire to the simplest possible model of phenomena but a model which only acted as a sort of vague analogy with respect to its phenomena would get short shrift in traditional physics domains. Yet frequently one reads social simulation work which takes the form of physics-style models and yet uses only vague, hand-waving justifications to justify its relevance (and, at best, a rough fitting of known, aggregate data). Models need to be constrained by the subject matter they are supposed to be about – there are two main ways of doing this: by ensuring the model is designed to behave as we know it should do (typically the parts of the model); and by checking the resulting behaviour against corresponding observed behaviour (often in aggregate). Sociophysics models tend to avoid either: they impose over-simple behaviour onto the design and don’t validate strongly against unseen data. Thus, whilst such models may have interesting behaviour, there is little reason to suppose that they do in fact represent observed social behaviour.
A point Edmonds makes is this:
[C]omplex behaviour can result from the interaction of lots of simple parts. This is now well established, but the implied corollary that the complexity we observe is a result of lots of simple interactions (or that it is useful to model this in this way) does not, of course, follow. Grounds for hope does not make it a reality.
This seems to be an intensely difficult ‘inverse’ problem, no? A related problem, which seems common to many ‘emergence’ phenomena, is the following: suppose you rig up a model with a certain set of rules for interactions among the agents, and suppose that this model exhibits some complex behaviour. You are certainly within your rights to feel satisfied. However, how can we be sure that this is the only set of interaction rules that will lead to this ‘complex’ behaviour? If two (or more) sets of rules give rise to (broadly) the same complex behaviour in the aggregate, which one should we choose? And even then, how can we be sure that the chosen one governs the real interactions among the agents?
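The identifiability worry can be made concrete with a toy example (my own, not from the book, and with all parameters chosen purely for illustration): here are two quite different micro-rules for binary ‘opinions’ — copy a randomly chosen agent, versus adopt the majority view of a random sample of three. Both rules drive the population to the same aggregate outcome, consensus, so observing consensus alone cannot tell us which rule the agents actually follow.

```python
import random

def run_until_consensus(update, n=30, max_steps=50_000, seed=0):
    """Repeatedly apply `update` to a random agent until all n agents
    hold the same binary opinion (or max_steps is exhausted).
    Returns the final opinion list and the number of steps taken."""
    rng = random.Random(seed)
    opinions = [i % 2 for i in range(n)]  # start evenly split
    for step in range(max_steps):
        i = rng.randrange(n)
        opinions[i] = update(rng, opinions)
        if len(set(opinions)) == 1:
            return opinions, step
    return opinions, max_steps

def voter_rule(rng, opinions):
    # Rule A (voter-model style): copy a randomly chosen agent.
    return opinions[rng.randrange(len(opinions))]

def majority_rule(rng, opinions):
    # Rule B: adopt the majority opinion of three randomly sampled agents.
    sample = [opinions[rng.randrange(len(opinions))] for _ in range(3)]
    return max(set(sample), key=sample.count)
```

Run both and each population ends up unanimous; the aggregate data (consensus reached) radically underdetermines the micro-rule, which is exactly the ‘inverse’ problem above.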