Are the subjects of Asch’s experiments conformists?

There is a handful of psychology studies that have the burdensome honour of informing our view of human behaviour, usually with spectacular results that tend to satisfy some of our preconceptions about what this behaviour should look like. Two of them, the Stanford prison experiment (showing that stable adults quickly become sadistic prison guards, given the right situational cues) and the Milgram experiment (showing that stable adults are willing to obey an authority, in this case the experimenter, to the point of administering painful electric shocks to strangers who were in fact confederates of the experimenter), while still fascinating the general public, tend to be considered cautiously by scientists, and have recently been criticised in mainstream or quasi-mainstream media (see, for example, here, here, and here).

Another one, the Asch experiment, seems to enjoy better health (but see here). The experiments show that subjects disregard their own information in a simple perceptual task (matching two lines of equal length) in order to conform to a majority of individuals (again, confederates of the experimenter) who give a clearly wrong answer. I cannot resist quoting a website that provides the full text of one of Asch’s articles, which, supposedly:

highlighted the fragility of the person in a mass society when he is confronted with the contrary opinion of a majority, and the tendency to conform even if this means to go against the person’s basic perceptions. This is a chilling text that should be carefully read and remembered whenever we think we are swayed by the mass, against our deepest feelings and convictions.

[Incidentally, it would be interesting to study whether (and, if so, why) the idea that we are easily influenced by others is so culturally attractive. I am not talking about mundane cultural transmission, which is of course important (and is what I study in my professional life), but about phenomena like, for example, subliminal messages and advertising, and how they captivate people of various educational backgrounds, at least in my daily experience.]

Going back to Asch, I found yet another unproblematised reference to the experiments in yesterday’s flight reading, topped with the bonus of a recent fMRI experiment showing that the brain area activated during the subjects’ wrong responses was linked to spatial awareness, not to conscious decision making (i.e. “subjects were calling it like they saw it” – insert sarcastic face). So here is a post to try to bring some clarity.

First, what is conformism?

Unfortunately, in this case, the scientific definition is as precise as the one we use in common language, that is: not at all. This is a pity, because a quantitative and appropriate (at least for some usages) definition of conformism was developed in the field of cultural evolution more than thirty years ago. In Culture and the Evolutionary Process, Robert Boyd and Pete Richerson define conformist frequency-dependent bias as requiring that “naive individuals be disproportionally likely to acquire the more (or less) common variant” (p. 206, italics in the original). The critical point here is obviously the “disproportionally” part. Imagine you enter a café and, of 10 clients, 7 are drinking wine and 3 coffee (I just travelled in the south of France). A conformist bias does not simply require that you will be more likely to drink wine, but that your probability of drinking wine will be higher than 70%. Why is this important? As Boyd and Richerson note, “almost any time there is cultural transmission” (ibidem) you will be more likely to drink wine. In fact, if you choose completely at random, you will have exactly a 70% probability of choosing wine (imagine you are blindfolded and touch a client to decide what to order – do not do this in the south of France). So conformism, if we use this precise definition, is not “doing what the majority does”, because to do what the majority does one does not need any bias, only copying at random.

[Figure: probability of adopting a behaviour as a function of its frequency; conformist transmission (red, sigmoid) versus unbiased copying (dotted black)]

When individuals are conformists, one can visualise the relationship between the frequency of a certain behaviour (say, wine drinking in the café) and the probability of performing that behaviour (say, actually ordering wine) with a sigmoid curve (see the red line in the graph above). The dotted black line is unbiased, or random, copying.
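For the quantitatively minded, here is a minimal sketch of the two curves (not the code behind the figure above): the cubic expression below is one common way to write conformist transmission, and the conformity strength D is an assumption of this example rather than the exact function used for the plot.

```python
# A minimal sketch, not the code behind the figure above. The cubic expression is one
# common way to write conformist transmission; D (the strength of conformity) is an
# assumption of this example.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 101)                          # frequency of the behaviour
D = 0.8                                             # hypothetical conformity strength

p_unbiased = x                                      # random copying: probability = frequency
p_conformist = x + D * x * (1 - x) * (2 * x - 1)    # disproportionate adoption of the common variant

plt.plot(x, p_conformist, "r-", label="conformist bias")
plt.plot(x, p_unbiased, "k--", label="unbiased copying")
plt.xlabel("frequency of behaviour in the population")
plt.ylabel("probability of adopting the behaviour")
plt.legend()
plt.show()

# With 70% of clients drinking wine (x = 0.7), unbiased copying gives exactly 0.7,
# while the conformist rule gives 0.7 + 0.8 * 0.7 * 0.3 * 0.4 = 0.7672, i.e. more than 70%.
```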

Now we can go back to Asch. In the classic set-up (see the various descriptions in Asch 1955), a subject is supposedly participating in an experiment involving a simple perceptual task. She is shown a card with a line (see figure below, left) and is then asked to pick the line of the same length on a second card (see figure below, right). The twist is that there are seven other participants in the experiment and they are, unknown to the subject, instructed by the experimenter on how to respond to the test. Because of the way they are seated in the room, the subject is always the last to answer and can hear what the others say. The confederates give the “true” answer for the first two trials, and then give wrong, unanimous answers on 12 of the 16 remaining trials, on which the subjects’ answers are actually tested.

[Figure: the two cards used in Asch’s experiments, with the reference line on the left and the comparison lines on the right]

What are the actual, in fact not so chilling, results? 25% of the subjects never deferred to the majority opinion, and kept giving, for all trials, the “correct” answer, impermeable to any social influence, while 5% of the subjects always gave the answer of the confederates. Over all subjects and all trials, 36.8% of answers were influenced by the majority opinion.

In a variation of the experiment, interesting from our perspective, Asch replaced a confederate with a “true” participant (or instructed a confederate, this time, to give the correct answer) and found that “subjects answered incorrectly only one fourth as often as under the pressure of a unanimous majority” (Asch 1955), that is, slightly less than one in ten times.

What if we look at these results from the point of view of Boyd and Richerson’s definition of conformism? The plot below uses the same logic as the first one (i.e. plotting the frequency of a behaviour versus the probability of performing it), using the data from Asch’s experiments.

[Figure: frequency of the wrong answer versus probability of giving it, using Asch’s data (red) compared with unbiased copying (dotted black)]

In the classic experiment, where all seven confederates chose the wrong line, subjects did the same “only” 36.8% of the time, and, when 6 confederates chose the wrong line, only ~10% of the time. The red trend is dotted because we do not have data for lower frequencies, but the only alternative I can imagine is that the probability would drop to zero even sooner. One has to conclude not only that subjects in the Asch experiment were not conformist, but that they were not even particularly socially influenced (remember that the dotted black trend represents unbiased, random, copying).
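Here is a small sketch of what this comparison amounts to (not the original plot code; for the x-axis I use the share of the other seven participants giving the wrong answer, which is my own reasonable choice, and the y-values are the approximate figures reported above).

```python
# A small sketch of the comparison in the plot above (not the original code). The
# x-values are the share of the other seven participants giving the wrong answer,
# a choice of mine; the y-values are the approximate shares of wrong answers
# reported by Asch (1955).

import matplotlib.pyplot as plt

freq_wrong = [6 / 7, 7 / 7]      # 6 or 7 of the other 7 participants giving the wrong answer
prob_wrong = [0.10, 0.368]       # subjects' wrong answers in the two conditions

plt.plot([0, 1], [0, 1], "k--", label="unbiased copying")
plt.plot(freq_wrong, prob_wrong, "ro-", label="Asch's results")
plt.xlabel("frequency of the wrong answer in the group")
plt.ylabel("probability that the subject gives the wrong answer")
plt.legend()
plt.show()

# Both points lie well below the unbiased-copying line: the subjects were less
# influenced by the majority than random copying would predict.
```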

And here is an xkcd, just because it is the 10th of August.

Update 11.08.2015. Alex Mesoudi pointed me to Efferson et al. 2008, Evolution and Human Behavior (pdf here); see in particular section 3.3. They reach (ehm… 7 years before me) similar conclusions, but argue that “the joint effect of conflicting biases means that we cannot isolate the response to frequency information” and so, if I understand correctly, that we can claim neither conformity nor its absence from Asch’s results. This sounds sensible to me, but is there any real empirical data where we actually can isolate the response to frequency information?

Cultural attraction, “standard” cultural evolution, and language.

[Below is my contribution to a “Book Club” event, hosted by the International Cognition and Culture Institute website and dedicated to Thom Scott-Phillips’ recent book, “Speaking Our Minds”.]

“Speaking Our Minds” was a great pleasure to read. The slim book provides even a non-expert like myself with an accessible but, at the same time, in-depth treatment of language evolution. Scott-Phillips offers a coherent and, according to him, exhaustive picture of the origins and evolution of language. The big questions are answered: we can proceed to the next topic.

I wonder how the community of linguists will feel about this bold attitude (by the way, I am all for bold attitudes). As for myself, I can comment on a particular aspect of the book, that is, the role assigned to cultural attraction in explaining some of the features of language. The basic idea behind the concept of cultural attraction is spelt out with remarkable clarity in “Speaking Our Minds”. In short, cultural transmission, differently from biological transmission, is a mainly reconstructive process. Each time we “copy” a cultural trait we are in fact reconstructing it, starting from some pieces of information we gather from others. Individual modifications are not rare, and they are not errors. They are the crux of the cultural transmission process and, importantly, they tend to be oriented in non-random ways (hence the notion of attractor).

One example – which I discovered reading this book – is tonal languages. In tonal languages (like Mandarin Chinese), the pitch with which one pronounces a word can make a difference to its meaning. It has been discovered that the distribution of tonal languages is associated with the distribution of two genes that regulate neural development. These are not genes for tonal languages, as individuals without the genes can learn them (and vice versa), but they may represent a factor of cultural attraction if, for example, the genes make it easier to detect or produce pitch differences. Imagine a population in which few individuals have the variants. Modifications of language giving more importance to pitch will, in this population, be rare and generally not re-constructed, and the population will converge on a non-tonal language. The opposite will happen in a population in which the majority of individuals have the variants in question.

Scott-Phillips gives a few other examples of factors of attraction that may shape language attributes, some related to biological or cognitive features (like the example above), and others related to communicative needs, drawing mainly on the research of the Edinburgh language evolution group. Overall, the case is convincing: cultural attraction is likely to have an important role in determining the features of the languages we speak today, and the details of their evolution. But is that all? What about all the research that uses a more “standard” evolutionary framework to study language, that is, that considers it as a culturally transmitted replicator?

The idea that languages evolve like biological species, through a process of descent with modification, has a long and successful history, as the famous endorsement by Darwin witnesses. Phylogenetic analyses are today used routinely in cultural evolution, and while their application to different domains is far from uncontroversial, their success is at least partly due to the fact that they have been productively applied to language evolution, providing stimulating results. If phylogenetic analysis works for languages, what does this tell us about the feasibility of using standard evolutionary tools to understand their historical dynamics?

Recent research has shown that the rate of change of words is correlated with their frequency of usage. Words that are similar in related languages (a classic example is terms for numbers: think about “one” in English, “un” in French, “uno” in Italian, etc.) are also words that evolve at a very slow rate and, interestingly, are the words that are used with high frequency in daily life. This suggests a classic evolutionary pattern, one of generally faithful transmission with random modifications. Frequency of use would affect rates of replacement by reducing the “mutation rate”, as words used frequently would be, for example, remembered more easily than words used only rarely.

My general perspective is that different domains of human culture are characterised by different degrees of reconstruction and preservation in the transmission of their traits, and that when domains are close to the “preservative extreme” it is useful, for practical reasons, to treat them as standard evolutionary systems. Moreover, within the same cultural macro-domain, like language in this case, different aspects may be situated in different regions of the preservation/reconstruction continuum. Rather than asking which aspects are in general more important, it may be more productive to ask when and why transmission is preservative or reconstructive, and what the consequences are for the resulting cultural dynamics. For example, one may wonder whether the contemporary widespread use of media favouring strongly preservative transmission (such as “sharing” something on Facebook, or “re-tweeting” it) may play a role in contemporary language evolution.

In sum, I strongly believe that the cluster of ideas surrounding the notion of cultural attraction (the importance of individual reconstruction in cultural transmission, the fact that modifications to cultural items are generally not random, the importance of universal, or at least relatively stable, factors of attraction), developed in the past years by anthropologists like Dan Sperber and others, is one of the most important contributions to the contemporary study of cultural evolution. I am also open to considering whether cultural attraction forces are responsible for the most interesting attributes of languages, as one could infer from Scott-Phillips’ book. A further step would be to identify which features of languages are due to cultural attraction forces and which are due to processes included in “standard” cultural evolution models, such as random modification of words, simple contextual learning biases, and the like, and how the various processes interact. The material presented in “Speaking Our Minds” may be an excellent starting point for this endeavour.


[Comments are of course welcome, but please post them on the International Cognition and Culture Institute website. Here is my contribution, with Thom’s excellent answer.]

“If we are all cultural Darwinians what’s the fuss about?” – uncorrected proofs

Following the discussion in these two posts, and various conversations after Pascal Boyer’s plenary talk at the Human Behavior and Evolution Society conference last summer, I decided, together with Alex Mesoudi, to write a paper comparing some aspects of cultural attraction and “standard” cultural evolution. (This is, by the way, my current main research interest, and I hope to have more to say about it in the future.)

The paper reviews the two theories, analysing in particular one aspect: the fact that cultural attraction proponents see cultural transmission mainly as a reconstructive process, in which cultural traits are re-created each time by the individuals involved (think about the oral transmission of a story), while “standard” cultural evolution proponents see cultural transmission mainly as a preservative process, faithful enough to consider cultural evolution as a process of selection between variants (think about the choice of a baby name). We tried to clarify the two positions, and our main message (I hope I can speak for Alex) is that they are not in contradiction, but focus on different aspects of the process of cultural transmission. The disagreement, “far from representing a deadlock for cultural evolution studies, can inspire new empirical studies and draw attention to details of transmission not yet explored” (from the paper).

The paper was accepted by Biology & Philosophy on the 12th of February but, for mysterious reasons, is having a very tormented production process (I will spare you the details, but I want to express my disappointment with Springer), so here you can find the uncorrected proofs. I do not think the published version will change much (apart from adding, for example, my new affiliation, the Philosophy & Ethics group at the Eindhoven University of Technology). Any comment is more than welcome!

Update 3.06.2015. The final paper is now online:

Acerbi, A., Mesoudi, A. (2015). If we are all cultural Darwinians what’s the fuss about? Clarifying recent disagreements in the field of cultural evolution. Biology & Philosophy.

Interesting regularities in human behaviour: older authors write happier books

[Second post of the series “Things that I probably will not develop into a proper paper, but find interesting enough to write about here”. The first is on the XX century decrease in popular culture turnover rates.]

In the last couple of years, part of my research has been dedicated to exploring the emotional content of published books, using the material present in the Google Books Ngram Corpus. Our analysis produced some interesting results. While analyses like ours need to be carefully weighed and possibly reproduced with various samples (but this should always be the case…), I think that tools like the Google Books Corpus represent an extraordinary opportunity, given that my goal is to study human culture in a scientific/quantitative framework.

Keeping this in mind, there are a few reasons to be cautious (see for example here), mainly due to the fact that we do not know which books are inside the Google Books Corpus. It is well known, for example, that the share of scientific and technical literature greatly increases in the XX century sample, generating potential distortions (on the other hand: the share of scientific and technical literature did increase in reality in the XX century). In one of my first posts, I analysed how different normalisations seem to create different biases in the trends, with the frequencies of the same set of random words (which are supposed to be stable through time) decreasing when normalised by the total count of words in the sample (as Google does in the Ngram Viewer) and increasing when normalised by the count of “the” in the sample (assuming that the word “the” is a good proxy for “real” writing and “real” sentences).
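As an illustration of the point, this is the kind of comparison I have in mind; the counts below are entirely made up, not taken from the actual Ngram data.

```python
# An illustration of the two normalisations, with made-up counts (the real analysis
# uses the Google Books Ngram data). For each year: occurrences of a target word,
# total words in the sample, and occurrences of "the".

word_counts  = {1900: 1_200, 1950: 1_500, 2000: 1_800}
total_counts = {1900: 1_000_000, 1950: 2_500_000, 2000: 6_000_000}
the_counts   = {1900: 60_000, 1950: 130_000, 2000: 250_000}

for year in sorted(word_counts):
    by_total = word_counts[year] / total_counts[year]   # Google's normalisation (total word count)
    by_the   = word_counts[year] / the_counts[year]     # alternative: normalise by the count of "the"
    print(f"{year}: by total words = {by_total:.2e}, by 'the' = {by_the:.2e}")

# If the share of "real" running text in the corpus shrinks (e.g. more technical
# literature), the two normalisations can produce diverging trends for the same word.
```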

As a consequence, I have lately been trying to back up and extend the Google Ngram results with a less distant reading analysis, that is, to repeat the same automatic analysis, but on specific books for which we know the author, the time of publication, etc. An interesting side-result of the analysis I am working on, one that keeps appearing practically everywhere, is that books tend to become more “positive” with authors’ age. I calculated the ratio of the number of words associated with negative and positive emotions (using LIWC), so that higher values represent a preponderance of negative emotions over positive ones, and lower values the opposite. The “King of Horror” Stephen King (see the plot below), for example, seems in fact to get milder with time (the “outlier” in the bottom right of the plot is “The Colorado Kid”, considered indeed “a true diversion from King’s normal horror fare”).

[Figure: negative/positive emotion-word ratio versus author age for Stephen King’s books]

Analysing a quasi-random sample of contemporary best-seller authors (which includes 354 books, by authors like Terry Pratchett, Dean Koontz, Michael Crichton, etc.), there is the same strongly significant correlation between authors’ age and the negative/positive emotion ratio (see the plot below, p<.001). The same analysis on another sample of 200 books from the Gutenberg project (mainly XIX century best-sellers, including the likes of Charles Dickens and Robert Louis Stevenson) shows an analogous (significant, but weaker, with Spearman’s rho=-.17 and p<.05) trend.

[Figure: negative/positive emotion-word ratio versus author age for the contemporary best-seller sample]
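For the curious, this is roughly the kind of computation involved. It is only a toy sketch, not the actual pipeline: the real analysis uses the LIWC dictionaries (which are proprietary) and the full book texts, while here a tiny word list and invented data stand in for both.

```python
# A toy sketch of the kind of computation involved, not the actual pipeline: the real
# analysis uses the LIWC dictionaries (which are proprietary) and the full book texts.
# The word lists and the "books" below are invented.

from scipy.stats import spearmanr

NEGATIVE = {"sad", "fear", "hate", "dead", "pain"}     # toy stand-ins for LIWC categories
POSITIVE = {"happy", "love", "good", "joy", "hope"}

def neg_pos_ratio(text):
    """Ratio of negative to positive emotion words (higher = more negative)."""
    words = text.lower().split()
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    return neg / pos if pos else float("inf")

books = [                                              # hypothetical (author age, text) pairs
    (30, "fear and pain and hate but some hope"),
    (45, "love and fear good and sad"),
    (60, "happy love joy and a little pain"),
]

ages   = [age for age, _ in books]
ratios = [neg_pos_ratio(text) for _, text in books]

rho, p = spearmanr(ages, ratios)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")
```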

This result is in fact quite well known. James Pennebaker (the developer of LIWC) reported a similar study, where the same effect was found using written or spoken text samples from more than 3,000 subjects participating in various disclosure studies (i.e. “the common feature of all studies was that the investigators were studying individuals who were disclosing emotional events or experiences in their lives”). In the same paper, Pennebaker and colleagues also analysed a sample from 10 published authors, somewhat similar to my Gutenberg sample, but they did not find significant trends.

While quite incomplete (I would need a bigger sample, I should compare different ways of extracting the emotional content, what happens in other languages?, etc.), the results are quite interesting to me. First, they tell us that we get happier (or, well, that we use a more positive language…) with age, which goes against the stereotype of grumpy grandpas and screaming-with-a-pasta-rolling-pin-in-hand grandmas (this is the Italian version, which is, in any case, better than the lonely/sad “seniors” of contemporary mainstream western culture). Incidentally, they resonate with the hugely publicised finding that well-being would follow a U-shaped trend through life, with the lowest point in the 40s and an increase after that (I cannot really say much about this. Just for balance, here is a partly sceptical view).

Second, the majority of anthropologists tend to think that general regularities in human behaviour (i) do not exist (as local “cultures” will mainly act towards differentiation) or (ii) when they do exist, they are very abstract and hence not informative (say, all humans need to eat). If we can predict that, with age, the balance between negative and positive emotion words changes, and that it changes in a specific direction, this seems quite specific and informative to me.

Decrease in popular culture turnover rate

[This post is the first in a series I’d call “Things that I probably will not develop into a proper paper, but find interesting enough to write about here”.]

One way to quantify change in cultural dynamics is to measure the turnover rate of a particular domain. The turnover (z) is the number of new items that enter, after a certain amount of time t, an ordered list of size N. What does this mean? A straightforward example of turnover is the new entries in a top-list chart. This week, for example, 4 new singles entered the BBC UK Top-40 Singles Chart (see here – of course the actual number of new entries will change from week to week). So, for the week starting 8 March 2015, z=4 (the number of new items that enter…), t=1 week (…after a certain amount of time…), N=40 (…an ordered list of size N). Notice that, with this information, one can calculate the turnover for all N from 1 to 40 (for example, this week, the 4 new entries are at the 1st, 7th, 13th, and 18th positions, so, for, say, N=10, z would be equal to 2).
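In code, the calculation is simple; here is a minimal sketch with made-up chart entries (the function and the example data are mine, not taken from the BBC chart).

```python
# A minimal sketch of the turnover calculation, with made-up chart entries.
# `previous` and `current` are ordered lists (rank 1 first) from two consecutive weeks.

def turnover(previous, current, N):
    """Number of items in the current top-N that were not in the previous top-N."""
    return len(set(current[:N]) - set(previous[:N]))

previous_week = ["song_a", "song_b", "song_c", "song_d", "song_e"]   # hypothetical top-5
current_week  = ["song_a", "song_f", "song_b", "song_g", "song_c"]

for N in range(1, 6):
    print(f"N = {N}: z = {turnover(previous_week, current_week, N)}")
```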

These top lists are ubiquitous today, so it is relatively easy to calculate turnover for many cultural domains (here, for example, are the best-selling hardcover fiction books from the New York Times. While there is no explicit way to filter the new entries, one can easily check, from the “weeks on list” information, which books are in the list for the first time, and then get z). In fact, with slightly more effort, one can calculate the turnover of plenty of cultural domains, provided that it is possible to extract the frequencies of traits through time.

Last year, together with Alex Bentley, I published a paper where we showed that the turnover profile (i.e. how z varies for different N) of a cultural domain is informative about the selective forces acting on that domain (I talk about it in this post). The turnover profile is an aggregate measure that considers an average of the turnover rate through time. So, for example, the turnover profile of the BBC UK Top-40 Singles for 2014 would give, for each N (from 1 to 40), how many new singles, on average, entered the corresponding top-N in each week of 2014.
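And here is a sketch of the turnover profile itself, again with made-up weekly lists: for each N, the weekly turnover is averaged over the whole series.

```python
# A sketch of the turnover profile: for each N, the weekly turnover is averaged over a
# series of consecutive charts (three made-up weekly top-5 lists here).

def turnover(previous, current, N):
    return len(set(current[:N]) - set(previous[:N]))

charts = [
    ["a", "b", "c", "d", "e"],
    ["a", "f", "b", "g", "c"],
    ["f", "a", "h", "b", "g"],
]

def turnover_profile(charts, N_max):
    profile = {}
    for N in range(1, N_max + 1):
        zs = [turnover(prev, curr, N) for prev, curr in zip(charts, charts[1:])]
        profile[N] = sum(zs) / len(zs)
    return profile

print(turnover_profile(charts, N_max=5))
```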

Another way to look at the same information is to consider the time dimension of the turnover rate, without aggregating. One could check, for example, whether, during 2014, there were “turbulent” periods for the UK Top-40, with many new entries, and “stable” periods with few changes. Different cultural domains (say, books versus songs) could be characterised by different regimes. Finally, long-term turnover rates can suggest more general changes in popular culture dynamics.

On the last point, I calculated the turnover rate through time for two datasets. The first one (see figure below) is the Billboard Top-100 weekly singles chart from 1946 to 2007 (data from Alex Bentley). Our N is now equal to 100, and the y-axis shows z through time, with the weekly turnover averaged for each year.

[Figure: yearly-averaged weekly turnover of the Billboard Top-100 singles chart, 1946–2007]

The second one is the top-10 yearly fiction books in the United States from 1900 to 2000 (data from various sources, via a project of John M. Unsworth). In this case I plotted the author turnover, averaged for each decade. For example, for the 1940s, z=7 means that, each year, on average, 7 new authors entered the top-10 with respect to the previous year.

[Figure: decade-averaged yearly author turnover in the US top-10 fiction books, 1900–2000]

A striking feature of the two series is the decrease, starting around the 60s, in the turnover rate. This means that, in the last part of the century, the same best-selling authors and musicians tended to be comparatively more successful than in the first part of the period covered by the data, when change in the top-N was faster. For example, in the 90s, the three most successful authors (Danielle Steel, Stephen King, and John Grisham) occupied 39 of the 100 possible positions in the top-10!

If this decrease is common in other popular culture domains (which I suspect, but do not know), it is interesting to wonder what kind of mechanisms could have produced it. One of my favourite hypotheses is that the cause is exactly the fact that public top lists became widespread (it is not unreasonable to think that today the phenomenon is even more prominent, almost farcical, with the online diffusion of top-n lists of virtually everything). Below, as an example, is a plot from the Google Books Ngram Viewer showing that references to the term “Top 10” were basically absent in popular culture (to be precise: in the English-language books present in the Google sample) until the 60s.

[Figure: Google Books Ngram frequency of “Top 10”, essentially absent until the 1960s]

Top lists provide a way to know what other, unrelated individuals prefer, and to avoid choosing by yourself. Why go to the bookstore and choose a book by myself, a book that could turn out to be bad, when I can just check the “what’s hot” section and rely on the judgement of (millions of) other people? Of course, one could consider both the decrease in turnover and the increase in the popularity of top lists as effects of some other, more general mechanism (call it “consumerism”, “globalisation”, or whatever), but this does not change the fact that top lists are perfect artefacts to support a conformist bias (in cultural evolutionary terms: a disproportionate preference for common traits).

Another hypothesis is that Danielle Steel’s books are actually better (i.e. more effective spreaders) than Mary Johnston’s books (the author of To Have and To Hold, the American bestseller of 1900, according to my data). While this may sound a little crazy, one can imagine that, as the number of books and the number of readers increased, probably exponentially, during the century, higher competition generated better and better (in the sense above) books, so that it is now more difficult to write something more effective than what is already in the top list, compared with the beginning of the century. I was reminded of this idea when some friends recently described to me how their daughter was caught up in an “epidemic” of Harry Potter in a primary school class in Edinburgh, where in around a month all the pupils (the majority of whom did not know about it before) read the first book of the series. This does not mean that we reached the highest peak of literature, or of “effectiveness”, with J. K. Rowling or Danielle Steel, but that, perhaps, to go back to a higher turnover, new authors would need to explore the “design space” of narrative in other directions.

Is culture a “scientific idea ready for retirement”?

[cross-posted, with minor changes, at the International Cognition and Culture Institute’s blog]

Every year, the website edge.org asks a remarkable number – 175 this time, if I read correctly – of important exponents of the “Third Culture” a general question on science and/or society. The 2014 question was: “What scientific idea is ready for retirement?”

I have not (yet) read all the answers, but I was surprised to see that two of them, from Pascal Boyer and John Tooby, were one and the same: culture. One could take the answers as a provocation by two evolutionary-psychology-minded scholars against mainstream cultural anthropology (a provocation I would subscribe to) and just skip to the others. However, knowing the work of the two and, especially, because when people ask me what my research is about I tend to answer “human culture” or “cultural evolution”, I think I have to take this challenge quite seriously.

On one level, I agree completely with the answer: “culture” cannot be considered an unproblematic explanation of phenomena. I was recently reflecting on the fact that, while I consider myself an atheist, I often find it unpleasant to hear – and especially to pronounce – profanities. Rationally, I know that they are simply a series of sounds, but still I cannot avoid being annoyed. The imaginary naive anthropologist would say: of course, it is your culture! (I am Italian, and I received a then-standard Catholic education.) But this is exactly what we want to explain: why is this specific “cultural stuff” (being bothered by profanities) still present, and not other stuff (say, going to church or praying)?

I think that everybody who reads this blog would agree that it is not useful to use culture as an explanation: we cannot explain X (my problematic relationship with profanities, the readiness to perceive interpersonal threats in the southern USA, etc.) with “culture”. As Boyer writes in his answer, “that such processes could lead to roughly stable representations across large numbers of people is a wonderful, anti-entropic process that cries out for explanation”. However, I feel this is just a starting point: I would be interested in X as “cultural stuff”, and would then try to explain it. Boyer and Tooby do not seem to agree: “culture”, in their view, is not only mistakenly used as an explanation, but is not a scientific concept at all. Tooby writes:

Attempting to construct a science built around culture (or learning) as a unitary concept is as misguided as attempting to develop a robust science of white things (egg shells, clouds, O-type stars, Pat Boone, human scleras, bones, first generation MacBooks, dandelion sap, lilies…)

This is quite a serious accusation. Trying to build a unitary explanatory framework for egg shells and first-generation MacBooks (who is Pat Boone?) seems indeed a desperate endeavour. Are we in such a situation? An accepted working definition of culture, for people interested in a naturalistic explanation of it, is usually something like “socially transmitted information”. I know this will not satisfy everybody but, for the sake of discussion, let’s assume that one can mostly agree with it (I do).

Now, if we use this working definition to decide what belongs to culture, we need to acknowledge that the set has somewhat fuzzy boundaries. “Social” transmission does not precisely separate some information/behaviours from others: one of the clearest and most important messages of recent cognitive anthropology is that socially transmitted information is in general not simply copied from one head to another, but reconstructed using previous individual knowledge. Or, even if we want to give more importance to the “copying” aspect, some information will be more likely to spread because of certain common features of the human mind. The same Pascal Boyer has convincingly argued, for example, that minimally counter-intuitive agents, i.e. agents that mainly conform to our intuitive, universal expectations of how an agent should behave and appear, but present a few violations, are more memorable than completely intuitive ones as well as completely counter-intuitive ones. Superman can fly and has problems with kryptonite, but his behaviour is understandable (he feels lonely, he has a strong sense of justice, etc.). Shall we consider Superman (or religion, which, according to Boyer, is successful – partly – for the same reason) less “cultural” than other domains where individual predispositions are less important? This is clearly not very satisfying.

“Classic” cultural evolution research has also emphasised the importance of social and individual learning being intertwined. There is even a name for this: Rogers’ paradox. The anthropologist Alan Rogers (here the original paper) showed, with a simple model, a counter-intuitive result: in a changing environment, in a population in which individuals are either individual learners or social learners, the fitness of the latter, at equilibrium, is equal to the fitness of the former, so that there would be no selection for social learning. In short, this is due to the fact that social learners are “information scroungers” who save the cost of individual learning but cannot track changes in the environment. While the fitness of individual learners is constant (the benefit of performing the correct behaviour minus the cost of tuning to the environment), the fitness of social learners depends on the composition of the population: the more social learners there are, the less reliable the information, and the lower the fitness. At equilibrium, Rogers shows, the composition of the population is such that the fitness of social and individual learners is the same. As social learning is everywhere, this has been called a paradox. The “solutions” to Rogers’ paradox (see, for example, here and here) all basically involve the possibility that individuals are both social and individual learners.
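To see the logic at work, here is a minimal numerical sketch of a Rogers-style model; the parameterisation is a simplification of mine and not the exact equations of the original paper.

```python
# A minimal numerical sketch of a Rogers-style model (a simplified parameterisation of
# mine, not the exact equations of the original paper). The environment changes with
# probability u each generation; individual learners always acquire the currently
# correct behaviour at cost c; social learners copy a random member of the previous
# generation for free.

import numpy as np

b, c, u = 1.0, 0.3, 0.2        # benefit, cost of individual learning, environmental change rate
w0 = 1.0                       # baseline fitness

def social_fitness(q, generations=1000):
    """Fitness of social learners when a fraction q of the population learns socially."""
    p = 1.0                    # probability that a copied behaviour is still correct
    for _ in range(generations):
        # copy an individual learner (correct) with prob. 1-q, or a social learner
        # (correct with prob. p) with prob. q; with prob. u the environment has
        # changed since, and the copied behaviour is now wrong
        p = (1 - u) * ((1 - q) + q * p)
    return w0 + b * p

w_individual = w0 + b - c      # individual learners: always correct, but pay the cost

for q in np.linspace(0.1, 0.9, 9):
    print(f"q = {q:.1f}  w_social = {social_fitness(q):.3f}  w_individual = {w_individual:.3f}")

# Social learners do better than individual learners when rare and worse when common;
# the population settles where the two are equal, which is also the fitness of a
# population made only of individual learners: no net gain from social learning.
```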

It seems, then, that it is quite difficult to use “social” transmission to isolate what culture is, as individual learning, as well as universal features of human psychology, are likely to play a role in all instances of social transmission. One could answer: yes, of course we know this is important; culture is “socially transmitted information (in which individual learning, etc. have an important part)”. However, the problem with this definition is that, as in John Tooby’s white-things science, anything goes. Indeed, the diffusion of first-generation MacBooks is a good topic for cultural evolution studies, as is, I suppose, the diffusion of possible uses for egg shells (I checked Pat Boone on Wikipedia: definitely a topic for us).

Is the situation for “culture” as a scientific concept really that bad? I think it is quite interesting to take this criticism seriously and to ponder the possible problems of the “socially transmitted information” definition. However, I am not so pessimistic. In a next post (as this one became way too long) I will propose a couple of alternatives. One is to drop the “socially transmitted” part (as I suppose anthropologists like Dan Sperber would suggest), and one – which I prefer – involves the idea that studying “culture” does not imply defining a specific domain, but defining how “cultural stuff” is studied, what kind of questions are asked, and what kind of properties we are interested in. Other scientific disciplines, say physics or chemistry, study not only all white things, but all things, of all colours, and I do not think this would be an argument to retire them.

Of course, all this is quite speculative, so any comment is more than welcome. And, by the way, a great 2015 to everybody!

Dog movie stars and dog breed popularity

A new paper I co-authored with Stefano Ghirlanda and Hal Herzog has just been published in PLOS ONE (the paper is open access and can be found here). In this paper we continue our analysis of dog breed popularity as a particularly interesting (and data-rich) cultural domain. We had already shown that the choice of which puppy to buy seems largely driven by fashion, i.e. social influence, more than by functional considerations (see my post from last year). Now we looked explicitly at one source of social influence, i.e. movies featuring dogs.

We found that there is indeed a strong effect of movies on the popularity of dog breeds. While this is probably not a shocking result, it is quite interesting to have precise quantitative data. We used the AKC database, totalling over 65 million dog registrations from 1926 to 2005, and analysed a total of 87 movies featuring dogs. The impact of movies has been large. We found, for example, that movies have an influence that can last up to 10 years from the initial release. The 10 movies with the strongest 10-year effect are associated with 800,000 more registrations in the AKC than would have been expected from pre-release trends. A striking example is the 1959 Disney movie The Shaggy Dog. Registrations of Old English Sheepdogs were stable at around 100 dogs per year in the ten years preceding the release of the movie. In 1969 alone, ten years later, 4,226 Old English Sheepdogs were registered.

We also found that the more successful a movie was (we estimated the number of viewers from opening-weekend earnings), the more it impacted the popularity of the breed of the dog featured. Another interesting finding is that the influence of movies has decreased during the century. Earlier movies are in general associated with larger trend changes than later movies. This suggests that movies – perhaps because of increased competition with other media, such as television and, more recently, the internet – have gradually lost their influence on pop culture.
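As an illustration of the general idea of measuring a movie's effect against pre-release trends (only a naive sketch with invented registration numbers, not the method used in the paper):

```python
# A naive illustration of measuring a movie's effect against pre-release trends (not
# the method used in the paper; the registration numbers are invented). Fit a linear
# trend to the ten pre-release years and compare the ten post-release years with its
# extrapolation.

import numpy as np

pre  = np.array([95, 102, 98, 100, 105, 99, 101, 97, 103, 100])             # hypothetical yearly registrations
post = np.array([150, 300, 600, 1100, 1800, 2500, 3100, 3600, 4000, 4200])

years_pre, years_post = np.arange(-10, 0), np.arange(1, 11)

slope, intercept = np.polyfit(years_pre, pre, 1)        # pre-release linear trend
expected = intercept + slope * years_post               # what the trend alone would predict

excess = (post - expected).sum()                        # registrations above the trend
print(f"Excess registrations over the ten post-release years: {excess:.0f}")
```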

Stefano made a nice figure showing some of the trends (click on it to see a larger version). Both the figure (here) and the data (here) are publicly available on figshare.com.

[Figure: registration trends for some of the dog breeds featured in the movies we analysed]

Together with our previous results, which showed that behavioural characteristics, longevity, and health are not correlated with breed popularity, this new analysis provides a quite clear picture: we do not choose dogs, on average, because they are healthier or, for example, more trainable, but because we see them in the neighbour’s garden, or in the latest blockbuster. While this is not bad per se (copying what others do is a quite effective strategy in many situations), it is definitely bad for dogs. The only feature that we found to correlate positively with popularity is indeed the presence of genetic diseases. Of course this does not mean that dog owners actively look for breeds with genetic diseases but, at a minimum, that they do not take this into consideration when choosing a puppy and, more worryingly, that the huge differences in popularity and the rapid increases of some breeds provoke over-breeding, which results, in turn, in an increase in genetic disorders. My take-home message here is: don’t follow fashions when choosing a puppy!

Now, a couple more cultural-evolution-related questions. First: how important, for the effect on the popularity of the breed, is the way the dog is presented in a movie? We excluded a few movies in which the dog was clearly a negative character (Cujo, for example), but we did not analyse this issue in detail. My feeling is that it is not so important. While our data end in 2005, the AKC provides the rankings for more recent years. Hal noted the steady increase of French bulldogs (they were at the 54th position in 2003, and at the 11th last year). It happens that, in a famous scene of the hugely popular movie The Hangover, Mike Tyson holds a French bulldog in his arms (see the video below, from around 1:30). Since the movie is from 2009 (and we do not have the data…), it is not clear whether the movie had an influence on the increase in popularity or whether it is the other way around (i.e. the authors capitalised on the growing popularity of the breed and used it for the movie), but the dog is no more than a prop in the scene in question. The idea is that the mere presence of a dog in a popular movie makes it accessible and that, as the features of different breeds are, in a sense, neutral (see below), a simple advantage in accessibility generates a cascade of effects (e.g. people will talk about that breed more than about other breeds) that may greatly influence popularity.

(The case of French bulldogs is also relevant to the previous point, as French bulldogs carry – in Hal’s words – a huge load of genetic disorders.)

Second: does this result suggest that we are copy-machines, easily influenced by evil Hollywood producers? I do not think so. As I mentioned above, I consider the choice of a breed to be, by and large, neutral. This does not mean that all breeds are equal (of course they are not). However, given the choice of owning a dog (this, I think, is a non-neutral choice; it would be interesting to see whether movies influence the total number of dogs over time), the features of different breeds can be reasonably adapted to one’s own habits (or the other way around), so that the choice has, in the majority of cases, no enormous effects on the owners. Many people who are buying French bulldogs now would probably have bought poodles fifty years ago. Something similar happens, for example, with baby names (of course they are different; yes, some of them may have an effect on your life [but see here], but, in the majority of cases, being called Alberto or Stefano does not make much difference). For this kind of cultural trait I would indeed expect social influence and the media to have a strong effect on popularity, but less so for traits that imply more important outcomes. We are not blind copy-machines, but selective copy-machines.

Ghirlanda S, Acerbi A, Herzog H (2014) Dog Movie Stars and Dog Breed Popularity: A Case study on Media Influence on Choice. PLoS ONE 9(9): e106565
