Monthly Archives: July 2006

The expert mind

In the latest issue of Scientific American, Philip E. Ross presents an overview of what we know about the Expert Mind, culled from decades of research on chess (which he calls the Drosophila of cognitive science). Here are some of the key conclusions:

The better players did not examine more possibilities, only better ones…

… [T]he expert relies not so much on an intrinsically stronger power of analysis as on a store of structured knowledge …

… [E]xperts rely more on structured knowledge than on analysis …

… [A]bility in one area tends not to transfer to another.

… [I]t takes enormous effort to build these structures in the mind. [Herbert] Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field. Even child prodigies, such as Gauss in mathematics, Mozart in music and Bobby Fischer in chess, must have made an equivalent effort, perhaps by starting earlier and working harder than others. …

… [K. Anders] Ericsson [whose views on expertise were linked to here] argues that what matters is not experience per se but “effortful study,” which entails continually tackling challenges that lie just beyond one’s competence. …

… [M]otivation appears to be a more important factor than innate ability in the development of expertise. It is no accident that in music, chess and sports–all domains in which expertise is defined by competitive performance rather than academic credentialing–professionalism has been emerging at ever younger ages, under the ministrations of increasingly dedicated parents and even extended families. …

… [S]uccess builds on success, because each accomplishment can strengthen a child’s motivation.

All of which lead to the ultimate conclusion:

The preponderance of psychological evidence indicates that experts are made, not born.

* * *

See also After the Bell Curve by David Kirp in the NYTimes Magazine, and the comments on this article on the blogs of Mark Thoma and Brad DeLong.


Just how competitive can scientists get?

Put yourself in the shoes of a young, hot-shot post-doc who has got several offers for a faculty position, including one from a Great University in your field. Naturally, you are keen on joining GU, except for one small glitch. GU also has a leading senior researcher — a Nobel laureate, no less! — with research interests that overlap yours considerably; the glitch is that this senior researcher is not keen on having you as a colleague. He says so in so many words in his e-mails (doc):

… I am afraid that accommodating your lab would be difficult.

… [As] you are very aware, two competing labs in the same building is something we should avoid by all means. Some people who are promoting your arrival here are ignoring this basic principle, but I don’t believe that they are doing a service to you.

I am sorry, but I have to say to you that at present and under the present circumstances, I do not feel comfortable at all to have you here as a junior faculty colleague. … I am most happy to support you if you and I are going to work with some distance between us.

What would you do? How would you react?

* * *

After thinking this over, do read these two reports in the Boston Globe about the sordid saga that played itself out at MIT, involving a star neuroscientist (Alla Karpova) and a Nobel laureate (Susumu Tonegawa). Links via Inside Higher Ed (1, 2).

* * *

Update: Do read the commentary by Janet Stemwedel and Pinko Punko.

* * *

You must read the follow-up post by Janet Stemwedel, who ends her post with: “it may be wise for the tribe of science to look at whether these competitive situations are really the best way to build better scientific knowledge.”

An explosive commentary on the status of women in science

When I was 14 years old, I had an unusually talented maths teacher. One day after school, I excitedly pointed him out to my mother. To my amazement, she looked at him with shock and said with disgust: “You never told me that he was black”. I looked over at my teacher and, for the first time, realized that he was an African-American. I had somehow never noticed his skin colour before, only his spectacular teaching ability. I would like to think that my parents’ sincere efforts to teach me prejudice were unsuccessful. I don’t know why this lesson takes for some and not for others. But now that I am 51, as a female-to-male transgendered person, I still wonder about it, particularly when I hear male gym teachers telling young boys “not to be like girls” in that same derogatory tone.


Here are a few examples of bias from my own life as a young woman. As an undergrad at the Massachusetts Institute of Technology (MIT), I was the only person in a large class of nearly all men to solve a hard maths problem, only to be told by the professor that my boyfriend must have solved it for me. I was not given any credit. I am still disappointed about the prestigious fellowship competition I later lost to a male contemporary when I was a PhD student, even though the Harvard dean who had read both applications assured me that my application was much stronger (I had published six high-impact papers whereas my male competitor had published only one). Shortly after I changed sex, a faculty member was heard to say “Ben Barres gave a great seminar today, but then his work is much better than his sister’s.”

This explosive commentary in Nature [1] by Ben Barres, a neurobiologist at Stanford, is going to be discussed quite widely. There will be a lot of spin on either side, but there’s nothing like the original. Do read Ben Barres’ very personal commentary; you will learn and understand a lot more about ‘innate differences’ (see another quote below) and ‘discrimination’ from just this one source than from the tons of spin-filled meta-commentary that’s sure to follow.

In an accompanying piece (in a side bar, I think), this is what Barres says:

As a transgendered person, no one understands more deeply than I do that there are innate differences between men and women. I suspect that my transgendered identity was caused by fetal exposure to high doses of a testosterone-like drug. But there is no evidence that sexually dimorphic brain wiring is at all relevant to the abilities needed to be successful in a chosen academic career. I underwent intensive cognitive testing before and after starting testosterone treatment about 10 years ago. This showed that my spatial abilities have increased as a consequence of taking testosterone. Alas, it has been to no avail; I still get lost all the time when driving (although I am no longer willing to ask for directions). There was one innate difference that I was surprised to learn is apparently under direct control of testosterone in adults — the ability to cry easily, which I largely lost upon starting hormone treatment. Likewise, male-to-female transgendered individuals gain the ability to cry more readily. By far, the main difference that I have noticed is that people who don’t know I am transgendered treat me with much more respect: I can even complete a whole sentence without being interrupted by a man.

* * *

[1] The link is to Nature‘s website, and if it’s pay-walled, take a look at this story [via].

* * *

Cross-posted at nanopolitan, my main blog.


There’s a wonderful survey article in New York Magazine on happiness. It features the research of such key figures as Martin Seligman (Authentic Happiness), Daniel Gilbert (Stumbling on Happiness) and Barry Schwartz (Paradox of Choice). Some excerpts:

Smarter people aren’t any happier, but those who drink in moderation are. Attractive people are slightly happier than unattractive people. Men aren’t happier than women, though women have more highs and more lows. Surprisingly, the young are not happier than the elderly; in fact, it’s the other way round, with older people reporting slightly higher levels of life satisfaction and fewer dark days.

Money doesn’t buy happiness—or even upgrade despair, as the playwright Richard Greenberg once wrote—once our basic needs are met. In one well-known survey, Ed Diener of the University of Illinois determined that those on the Forbes 100 list in 1995 were only slightly happier than the American public as a whole; in an even more famous study, in 1978, a group of researchers determined that 22 lottery winners were no happier than a control group (leading one of the authors, Philip Brickman, to coin the scarily precise phrase “hedonic treadmill,” the unending hunger for the next acquisition).

As a general rule, human beings adapt quickly to their circumstances because all of us have natural hedonic “set points,” to which our bodies are likely to return, like our weight. This is true whether our experiences are marvelous—like winning the lottery—or shattering. Not only did Brickman and his colleagues look at lottery winners but also at 29 people who’d recently become paraplegic or quadriplegic. It turned out the victims of these accidents reported no more unhappy moments than a control group. (This exceptionally counterintuitive finding, however, has not been replicated in a published paper—and subsequent studies have certainly shown that the loss of a spouse or a child can dramatically depress our happiness thermostats, as can sustained unemployment.)

There’s surprisingly little in the happiness literature about raising children, which in and of itself is odd. Odder still is that most of it suggests children don’t make parents any happier. Gilbert wrote only three scant pages about this in Stumbling on Happiness. But he says he’s been asked about it on his book tour more than almost anything else. “It really violates our intuition,” he says. “Yet every bit of data says children are an extreme source of negative affect, a mild source of negative affect, or none at all. It’s hard to find a study where there’s one net positive.” (One possible explanation, he says, is that children are sources of transcendent moments, and those highs are what people remember.)

There’s even an accompanying piece suggesting 20 strategies for a happier life!

Nikola Tesla

Couturnix has a great post — no, make that an absolutely great post — on Nikola Tesla in celebration of the latter’s 150th birthday on July 10. You’ve got to check out that post to see why I’m amazed …

Still, at least the first two commenters on that post were left rather underwhelmed by Couturnix’s link-fest. 😉 Hmmm, such is life …

Los Alamos

It has been quite a while since I noted the Los Alamos scientists’ revolt (through a blog!) that forced the then director to resign. The Economist updates us on what’s happening at Los Alamos.

… At the beginning of June the University of California, which had run Los Alamos since the days of the Manhattan Project, ceded control to a consortium known as Los Alamos National Security. Though the university remains one of the consortium’s members, it will now share what bouquets and brickbats come Los Alamos’s way with three firms that make a lot of their money as military contractors. These are Bechtel and Washington Group International, two large engineering and construction companies, and BWX technologies, a concern that specialises in managing nuclear facilities.

Unlike the university, the new consortium will be aiming to make a decent profit from its activities. It is also thought likely to change the emphasis of the laboratory from research (in a wide range of subjects, not all of them to do with defence, let alone nuclear weapons), to the more mundane business of making the detonators of nuclear warheads.

The consortium is making reassuring noises. According to Jeff Berger, its director of communications, “There is a popular misconception that we’re out to change the lab’s mission.” Nevertheless, many of Los Alamos’s researchers sense a shift of direction. Indeed, quite a few have left. …

Nature’s experiment with ‘open’ peer review

Nature, a leading science journal, is conducting an interesting experiment wherein a paper’s authors can have it reviewed ‘openly’ — like comments in a blog! This would be in addition to the regular (anonymous) review process.

While I haven’t given it much thought, others have. Arunn Narasimhan (Mechanical Engineering, IIT-Madras) has a link-ful post examining this experiment from many angles.

Nanotech research in India

[Even] with the NSTI [the Nano Science and Technology Initiative] in place, the level of funding has been sub-critical as compared to China with which India inevitably tends to be compared. In 2002, for example, compared to China’s $200 million, India spent a mere Rs.15 crores. Over the four and a half years of the NSTI, a total of about Rs.120 crores has been spent, much of which has gone towards basic research projects and related infrastructure, the implementation of which is overseen by a National Expert Committee headed by C.N.R. Rao. …

Besides funding about 100 basic science projects to date (worth about Rs.60 crores), part of the money (about Rs.20 crores) has gone towards establishing six centres for nanoscience at institutions such as the Indian Institute of Science (IISc), Bangalore, and the different IITs, six centres for nanotechnology each aimed at producing a product or a device within a reasonable time-frame and two national instrumentation/characterisation facilities. In all, 14 national institutions, including seven IITs, and 10 universities have been supported under the NSTI.

Pay no attention to the howler in that last sentence, and do read this Frontline article [Update: the link is broken; try this link] by R. Ramachandran on the state of nanoscience and nanotechnology research in India. [Thanks to Pradeepkumar for the e-mail alert.]

Annals of academic put-downs

Prayer may not be very efficient when compared to celestial mechanics, but it surely holds its own vis-a-vis some parts of economics.
— Paul Feyerabend

Quote taken from this post by John Horgan, who reads Philip Ball’s Nature article on econophysics, and asks “can economics ever be like physics?” Horgan is convinced that the answer is “no”. Do read Horgan’s post; it has an interesting discussion on the utility of physics ideas and models in social sciences.

Academic put-downs

When it comes to put-downs (often good-natured, and sometimes nasty) about other people and their calling, and sometimes, about the human condition itself, academics are certainly among the best! Here is a partial list. If you happen to know of other such put-downs, do please let me know through your comments.

  • Study Materials Science: everything else is immaterial. (slogan on our Department T-shirts)
  • There is Physics; then there is stamp collecting. (Rutherford)
  • God gave all the easy problems to the physicists. (found in Stephen Robbins’ text on Organizational Behaviour)
  • It is evolution by jerks. (attributed to biologists who hold a dim view of the theory of “punctuated equilibrium” due to Stephen Jay Gould and Niles Eldredge; quote taken from a talk by Paul Krugman, an economics professor now at Princeton)
  • Mathematical physics is that which hasn’t enough rigour and generality to be math, nor enough contact with reality to be physics. (attributed to unnamed cynics by Cosma Shalizi and William Tozier in their paper on a simple model of the evolution of simple models of evolution)
  • … philosophy is the misuse of a terminology which was invented just for this purpose. (attributed to W. Dubislav, and found in Eugene Wigner’s famous article on the unreasonable effectiveness of mathematics in the natural sciences)
  • A mathematician may say anything he pleases, but a physicist must be at least partially sane. (J.W. Gibbs)
  • Mathematics is an interesting intellectual sport but it should not be allowed to stand in the way of obtaining sensible information about physical processes. (Richard Hamming)
  • As I understand it, the claim is that the less you use Homeopathy, the better it works. Sounds plausible to me. (David Deutsch)
  • Two things are infinite: the universe and human stupidity; and
    I’m not sure about the universe. (Albert Einstein)
  • Genius may have its limitations, but stupidity is not thus handicapped. (Elbert Hubbard)
  • Make it idiot-proof, and someone will make a better idiot. (Anonymous)
  • In theory, there is no difference between theory and practice. In practice, there is…. (Yogi Berra,
    a well-known academic …)

* * *

This was posted on a now defunct blog on 6 January 2005. I will add more as and when I find new ones.

In the meantime, here are some others: 1, 2, 3 and 4.

Hwang Woo Suk admits wrongdoing

Finally! Here’s the Guardian:

For a paper in the journal Science, Hwang said he had told researchers to make it look as if they were basing their results on 11 cloned embryonic stem cell lines, rather than the two lines he believed they had.

Here’s the Telegraph:

“I admit to the suspicion of fabrication,” he told prosecutors, who asked him to admit he had altered data to make it look as if he and his team had created more stem cell lines than they actually had for a research paper. “It was clearly my wrongdoing,” Dr Hwang said. “I admit it.”

The hearing in Seoul also revealed the extraordinary secret attempts by him and his staff to deceive the world about their achievements.

He admitted to telling his researchers to go along with the fraud, to make it look as if data taken from two stem cell lines came from 11 cell lines.

In fact, prosecutors also believe that even these two cell lines were fake, brought into the lab from outside by Dr Hwang’s team without telling him. They were not cloned stem cell lines, but lines from uncloned cells.


After writing this post about economics, physics and econophysics, I was poking around the web, looking for Philip Ball’s articles. Ball is the author of the piece that I linked to in my post, and has written quite enthusiastically about “sociophysics” which seems, to me, to be mostly simulations in which independent entities (particles, people, institutions) act and react according to specific rules. From statistical physics simulations of interacting particles, we know that complex behaviour could emerge even with simple interactions among the particles, and I guess the hope in sociophysics is to show a similar correspondence between simple interactions among entities (‘agents’ seems to be the preferred term in sociophysics) and (emergence of) complex behaviour in the aggregate.
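To make the flavour of such simulations concrete, here is a minimal sketch of an agent-based model in the spirit described above, loosely modeled on Thomas Schelling’s well-known segregation model: agents of two types sit on a ring, each follows a simple local rule (move if fewer than half of your neighbours are like you), and a sorted pattern emerges in the aggregate. All names and parameter values here are illustrative choices of mine, not taken from any particular paper.

```python
import random

def step(grid, tolerance=0.5, radius=2):
    """One sweep: relocate every agent unhappy with its neighbourhood.

    grid is a list over ring positions; each cell holds an agent type
    (0 or 1) or None for an empty cell.  An agent is unhappy when the
    fraction of like-typed occupied neighbours falls below `tolerance`.
    """
    n = len(grid)
    for i in range(n):
        agent = grid[i]
        if agent is None:
            continue
        neighbours = [grid[j % n] for j in range(i - radius, i + radius + 1)
                      if j % n != i and grid[j % n] is not None]
        if neighbours and sum(x == agent for x in neighbours) / len(neighbours) < tolerance:
            empties = [j for j in range(n) if grid[j] is None]
            grid[i] = None
            grid[random.choice(empties)] = agent   # move to a random empty cell
    return grid

def segregation(grid, radius=2):
    """Average fraction of same-type neighbours: a crude aggregate measure."""
    n, scores = len(grid), []
    for i in range(n):
        if grid[i] is None:
            continue
        neighbours = [grid[j % n] for j in range(i - radius, i + radius + 1)
                      if j % n != i and grid[j % n] is not None]
        if neighbours:
            scores.append(sum(x == grid[i] for x in neighbours) / len(neighbours))
    return sum(scores) / len(scores)

random.seed(0)
grid = [0] * 20 + [1] * 20 + [None] * 10   # two agent types plus empty cells
random.shuffle(grid)
before = segregation(grid)
for _ in range(50):
    step(grid)
after = segregation(grid)
# Mild individual preferences typically produce strongly sorted aggregates.
```

The point of the exercise is exactly the one in the paragraph above: nothing in the rule mentions large-scale sorting, yet the aggregate pattern emerges from the repeated application of a simple local rule.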

Philip Ball has a huge footprint on the web, a testimony to his prolific output, not only as a regular columnist for the Nature group of publications, but also as an author of quite a few books. Check out his website. One of his recent books, Critical Mass: How One Thing Leads to Another has specifically been about sociophysics. Some of the ideas appeared earlier in the form of a short article with a catchy title Physics of Institutions (pdf); see also this rather nice popular science piece titled Utopia Theory in PhysicsWeb.

Here are some of the reviews of this book: Bruce Edmonds, James Buchan for the Guardian, Steven Strogatz for Nature, and Tamás Vicsek for PhysicsWeb. The ‘Reviews’ section of Ball’s website has links to more of them.

Let me quote from Bruce Edmonds’ review:

… It is, in its way, the first “popular science” book covering a substantial section of social simulation, and talks about many of the main figures up to about 1990 (it does cover later work but not so comprehensively, which is understandable). Thus the work of Thomas Schelling, Ilya Prigogine, Brian Arthur, Alan Kirman, Robert Axtell, Joshua Epstein, Robert Axelrod, Paul Omerod, Martin Nowak, Per Bak, Duncan Watts, are all discussed.

In all of this the book is quite careful as to matters of fact – in detail all its statements are cautiously worded and filled with subtle caveats. However its broad message is very different, implying that abstract physics-style models have been successful at identifying some general laws and tendencies in social phenomena. It does this in two ways: firstly, by slipping between statements about the behaviour of the models and statements about the target social phenomena, so that it is able to make definite pronouncements and establish the success and relevance of its approach; and secondly, by implying that it is as well-validated as any established physics model but, in fact, only establishing that the models can be used as sophisticated analogies – ways of thinking about social phenomena. The book particularly makes play of analogies with the phase transitions observed in fluids since this was the author’s area of expertise.

This book is by no means unique in making these kinds of conflation – they are rife within the world of social simulation. The culture of physics is a complex of different attitudes, norms, procedures, tools, bodies of knowledge and social structures that are extremely effective at producing useful knowledge in some domains – it is not for nothing that physicists have gained status within our society. However when this culture is transported into new domains, such as that of modelling social phenomena, the culture does not travel uniformly. Thus we have seen (and Critical Mass documents) an influx of simple, physics-style simulation models into sociology but they have arrived without the usual physicists’ insistence that models predict unseen data. It is part of the culture of physics to aspire to the simplest possible model of phenomena but a model which only acted as a sort of vague analogy with respect to its phenomena would get short shrift in traditional physics domains. Yet frequently one reads social simulation work which takes the form of physics-style models and yet uses only vague, hand-waving justifications to justify its relevance (and, at best, a rough fitting of known, aggregate data). Models need to be constrained by the subject matter they are supposed to be about – there are two main ways of doing this: by ensuring the model is designed to behave as we know it should do (typically the parts of the model); and by checking the resulting behaviour against corresponding observed behaviour (often in aggregate). Sociophysics models tend to avoid either: they impose over-simple behaviour onto the design and don’t validate strongly against unseen data. Thus whilst such models may have interesting behaviour there is little reason to suppose that they do in fact represent observed social behaviour.

A point Edmonds makes is this:

[C]omplex behaviour can result from the interaction of lots of simple parts. This is now well established, but the implied corollary that the complexity we observe is a result of lots of simple interactions (or that it is useful to model this in this way) does not, of course, follow. Grounds for hope does not make it a reality.

This seems to be an intensely difficult ‘inverse’ problem, no? A related problem, which seems to be common to many ‘emergence‘ phenomena is the following: suppose you rig up a model with a certain set of rules (for interactions among the agents). And suppose that this model exhibits some complex behaviour. You are certainly within your rights to feel satisfied. However, how can we be sure that this is the only set of interaction rules that will lead to this ‘complex’ behaviour? If there are two (or more) sets of rules that give rise to (broadly) the same complex behaviour in the aggregate, which one should we choose? Even then, how can we be sure that that is the one that governs the real interactions among the agents?
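The non-uniqueness worry can be made concrete with a toy sketch. Below, two quite different microscopic update rules (a copy-your-neighbour “voter” rule and an adopt-the-majority rule) both drive a population of binary opinions to the same aggregate endpoint, full consensus, so observing only the aggregate outcome cannot tell you which rule was actually at work. Both rules and all names here are illustrative constructions of mine, not taken from Edmonds’ review.

```python
import random

def voter_step(opinions):
    """Voter rule: a random agent copies another random agent's opinion."""
    i = random.randrange(len(opinions))
    j = random.randrange(len(opinions))
    opinions[i] = opinions[j]

def majority_step(opinions):
    """Majority rule: a random agent adopts the current majority opinion.

    The population size is kept odd so a strict majority always exists.
    """
    i = random.randrange(len(opinions))
    opinions[i] = 1 if sum(opinions) * 2 > len(opinions) else 0

def reaches_consensus(step_fn, n=51, max_steps=1_000_000, seed=1):
    """Run one dynamic from a random start; report whether consensus occurs."""
    random.seed(seed)
    opinions = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_steps):
        if len(set(opinions)) == 1:   # everyone agrees
            return True
        step_fn(opinions)
    return False

voter_consensus = reaches_consensus(voter_step)
majority_consensus = reaches_consensus(majority_step)
# Different micro-rules, identical macro-outcome: consensus either way.
```

If all we could measure were the aggregate (“the population reached consensus”), the data would be equally consistent with either rule, which is precisely the ‘inverse’ problem: matching aggregate behaviour does not identify the underlying interactions.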