Cyberball!

* K.D. Williams and B. Jarvis, ‘Cyberball: A Program for Use in Research on Interpersonal Ostracism and Acceptance’, Behavior Research Methods, 38 (2006), 174-80.
* C.H.J. Hartgerink, I. van Beest, J.M. Wicherts, and K.D. Williams, ‘The Ordinal Effects of Ostracism: A Meta-analysis of 120 Cyberball Studies’, PLoS ONE, 10 (2015), e0127002. doi: 10.1371/journal.pone.0127002

This at last is a follow-up to the conference on mind-reading in Durham, which I mentioned here. There I heard Hannah Wojciehowski of the University of Texas give a talk about rejection; her previous mention on this blog, noting her important conversations with neuroscientist Vittorio Gallese, is worth a bump. She cited interesting work on ostracism, and in particular Kipling Williams’s use of the ‘Cyberball’ game to engineer the experience of exclusion in the lab. The game looks like this:

By clicking with the mouse, you throw the ball from person to person. The trick is that after a while the other players stop including you, and this gives the experimenters the chance to learn about ostracism. You can find a lot more information about the game on this website.
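The exclusion manipulation described above is simple enough to sketch in a few lines of code. This is a hypothetical toy simulation, not the actual Cyberball program (which is configurable in ways this ignores); the function name and parameters are invented for illustration. Two computer ‘players’ include the participant for an opening stretch of throws, then throw only to each other:

```python
import random

def cyberball_schedule(n_throws=30, inclusion_phase=10, players=("A", "B", "you")):
    """Toy sketch of a Cyberball-style throw schedule.

    For the first `inclusion_phase` throws the confederate players
    may include the participant ("you"); after that they throw only
    to each other, producing the ostracism manipulation.
    Returns a list of (holder, target) pairs.
    """
    holder = "A"
    schedule = []
    for t in range(n_throws):
        if holder == "you":
            # The participant chooses freely; here we pick at random.
            target = random.choice([p for p in players if p != "you"])
        elif t < inclusion_phase:
            # Inclusion phase: any other player may receive the ball.
            target = random.choice([p for p in players if p != holder])
        else:
            # Exclusion phase: confederates never target the participant.
            target = random.choice([p for p in players if p not in (holder, "you")])
        schedule.append((holder, target))
        holder = target
    return schedule
```

The point the toy makes concrete is how small the manipulation is: a single conditional turns an inclusive game into an ostracizing one.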
      The two articles at the top are, respectively, the paper in which the use of Cyberball was first discussed, and a reflection from a decade later, analyzing 120 studies making use of the game. There are many things of interest in the field and in the measuring of intense real-world experiences by means of an ingenious, apparently simple game. One thing in particular struck me, which is an instruction given to participants.
      Obviously it would be a bad idea to tell people that the point of Cyberball is to test their feelings about ostracism. Instead, they are told that the game is there to test their abilities at visualization: they are meant to flesh out the game in their minds, filling in shapes and colours, backgrounds and more. Here is the instruction:

Welcome to Cyberball, the Interactive Ball-Tossing Game Used for Mental Visualisation!
      In the upcoming experiment, we test the effects of practising mental visualisation on task performance. Thus, we need you to practise your mental visualisation skills. We have found that the best way to do this is to have you play an on-line ball tossing game with other participants who are logged on at the same time.
      In a few moments, you will be playing a ball tossing game with other students over our network. The game is very simple. When the ball is tossed to you, simply click on the name of the player you want to throw it to. When the game is over, the experimenter will give you additional instructions.
      What is important is not your ball tossing performance, but that you MENTALLY VISUALISE the entire experience. Imagine what the others look like. What sort of people are they? Where are you playing? Is it warm and sunny or cold and rainy? Create in your mind a complete mental picture of what might be going on if you were playing this game in real life.

This intrigues me. No particular significance is attributed to the visualization; the paper notes that users of the program can devise their own cover stories. And yet surely there is an interplay between effortful visualization and emotional effect: intuitively we might think that an attempt to imagine this as a realistic scene will make rejection more acute. Whether or not that’s true, I wonder whether this scene created in the mind necessarily replicates the real world. In imaginary fictions, settings and characters and emotions might satisfy some criteria of lifelikeness, but might, to interesting effect, elude others. Are these visualizations like fictions, in this respect or any other? I haven’t got far into the 120 studies but perhaps it’s covered there. Anyway, an interesting twist for those interested in the products of the imagination.

E-mail me at rtrl100[at]cam.ac.uk

Theory of Animal Minds

Caroline E. Spence, Magda Osman, and Alan G. McElligott, ‘Theory of Animal Mind: Human Nature or Experimental Artefact?’, Trends in Cognitive Sciences, 21 (2017), 333-43.

This has something in common with an earlier post about the use of the word ‘fear’. It’s about the terms used in psychology, and the care required to understand the question before heading for an answer. I’ve mentioned an interest in animal minds before (in this post a year ago). Spence et al. are interested in whether animals are ‘capable of empathy, problem-solving, or even self-recognition’. But here they consider the problem of understanding our own mechanisms for theorizing about the content of animal minds. Is this ‘a natural consequence of Theory of Mind (ToM) capabilities’, i.e. do we just ‘mentalize’ them as if they were like humans? Much of the article is concerned with structural, methodological questions, about how to go through the steps required to establish a good conceptual framework. Frans de Waal, whose work featured in the post mentioned above, used the term ‘anthropodenial’ as an alternative to ‘anthropomorphism’ (in an essay cited below). How do we steer a sure course between what may be a slack habit of mind, and what may be an over-scrupulous avoidance of what might be rather significant evolved resources?
      The thing that caught my eye was the ethical dimension of the process by which we do, or don’t, attribute certain kinds of mental life to animals. Being really rigorous about avoiding anthropomorphic fallacies might make it less likely that animals will seem to deserve rights (which would be a bad outcome, to my mind), but there are many complexities and subtleties.

Philosophical Topics, 27 (1999), 255-80 … and in a book called The Ape and the Sushi Master

Reality Monitoring

Jon S. Simons, Jane R. Garrison, and Marcia K. Johnson, ‘Brain Mechanisms of Reality Monitoring’, Trends in Cognitive Sciences, 21 (2017), 462-73.

I’m back from the excellent conference I mentioned in my last post. There will be some follow-up posts, but I’ll delay them a bit because this one’s been hanging around for a while. It’s a big topic, potentially very interesting, perhaps too big to come to much of a literature-related point.
      Simons et al. are interested in how we distinguish between ‘internally generated information and information that originated in the outside world’. We need to be able to tell what comes from our imaginations, what we’ve actually experienced, what we’ve been told about, and so on. This is ‘fundamental for maintaining an understanding of the self as a distinct, conscious agent interacting with the world’. Errors can have serious consequences; impairments likewise, and are associated with major mental illnesses (e.g. schizophrenia).
      Their work is aimed at finding where in the brain this is handled (‘a network of brain regions involved in the recollection of source information, which include prefrontal, medial temporal, and parietal cortices’). They are also interested in showing and developing the link between reality monitoring and hallucinations. What catches my interest is the sense that patrolling the boundary between images and voices from inside and outside is a significant, precarious job for our minds. I’ve already mentioned an excellent book on the voices part by Charles Fernyhough, back here.

Does this relate to literature in any interesting way? Are this capacity and its vulnerability things that could usefully be thought about in relation to the way we enter the worlds of novels or plays or poems? Do writers themselves have anything to tell us about the ways they see their readers (or their characters) navigating the real and the unreal?
      Two possibilities at the moment, and a third thing:
(i) Absorption: is there any interaction between our tendency to become absorbed in fictions, and the reality monitoring mechanisms? Is there a limit to absorption, or can we switch off our scruples in some way? Is the origin of a literary world always clear enough, and not really part of this inside / outside dynamic?
(ii) Realism and Poetics: from Aristotle onwards, there is a tradition that values literature as an imitation of the world that doesn’t break certain rules of realism. For Aristotle, a good play had to keep to limits and proportions, and it couldn’t strain credulity too far. Is there any relationship between this and reality monitoring? What’s missing is the internal generation, but there’s still a sense that we’re sensitive to the boundaries of reality, things that could have come from it, and things that couldn’t.
(iii) A little story, related to the previous point, that I may have told before. A few years ago I was in the Monterey Bay Aquarium in California, with my family. We were a long way from home. There we bumped into some friends who live in Michigan, thousands of miles away from us, and who were themselves thousands of miles from home. We all exclaimed at this coincidence, but I said ‘well, I suppose this sort of thing happens all the time’, and one friend said ‘maybe, in novels!’. Only later did I work out what I should have said, which was… ‘You wouldn’t dare, if you were writing a novel, would you? Only reality would dare.’
      A reality-monitoring moment, of some kind, I think, though again it wasn’t coming from inside anyone’s head.


Understanding Other People (Or Not)

I am attending a conference this week, indeed I am there now, so this is the nearest I’ve got yet to whatever Live Blogging is. It’s about renaissance literature and ‘theory of mind’. It’s a rich topic and a great line-up, and I may report further. For now, though, I will mention something I noticed recently.
      Via the Human Mind Project website (https://humanmind.ac.uk/) I reached an interesting and related debate filmed by the Institute of Art and Ideas. The debate brings together a literary scholar (Robert Eaglestone), a philosopher (Anita Avramides), and a psychologist (Nicholas Humphrey), and the topic is our knowledge of other minds. Eaglestone piles in counter-intuitively, saying that literature is about not understanding other minds — indeed, we have literature because we don’t understand other minds — and likewise we have ethics and politics and conversations because we don’t know what other people are thinking.
      Nicholas Humphrey says the opposite: we have to give credit to the impressive ‘heuristic tools’ our minds exercise in their efforts to know what others are thinking. Anita Avramides also wants to give credit to our social thinking, warning against solipsistic introspection, and proposing that we should start our study of the human with behaviour and interaction, rather than with the interior world. They are divided by the different things that could be meant by ‘understanding’ others, ‘knowing’ others, and the things we do with people’s minds that aren’t knowing them. They have different ideas as to what should be the threshold for deciding that it’s worth calling it ‘knowledge’.
      I enjoyed this IAI debate too: ‘The Edge of Reason’. Another case where the interaction between different intellectual perspectives outweighs the problem of terms: it’s not always clear they are talking about the same thing, but that’s how some of the thought-provoking suggestions arise.

* Joel Robbins, ‘On Not Knowing Other Minds: Confession, Intention, and Linguistic Exchange in a Papua New Guinea Community’, Anthropological Quarterly, 81 (2008), 421-29.

This is one of the items on the conference reading list, and I found it intriguing. Many of the observations made in cognitive science are implicitly or explicitly species-level: it is taken for granted sometimes that findings in a population of university students in the USA or Finland or wherever apply to all of us. Anthropologists have a different expectation and a different focus for their attention.
      Robbins tells us that in some cultures it is widely agreed that knowing other minds is impossible. Among Melanesians this is related to speech: it is a commentary on the inability of language to convey innermost thoughts. He is arguing partly against those who say that even cultures full of ‘opacity statements’ are making assumptions about speakers’ intentions. Robbins says that in non-Western cultures intentions may be far from crucial to the meaning of speech. For example, there is a correlation between an insistence on opacity, and a lack of (or great difference in) concepts of lying, or thanking, where intentions and meaning are most entwined.
      A large part of the article is based on an absolutely fascinating discussion of the consequences when people brought up in an ‘opacity’ environment convert to Christianity, and thus find themselves in a culture where sincerity is often at issue. I don’t think this turns back necessarily or directly on the idea that mind-reading capacities are universally human, but it’s an interesting angle on a whole range of issues.


Semantics and the Brain

Joseph E. LeDoux, ‘Semantics, Surplus Meaning, and the Science of Fear’, Trends in Cognitive Sciences, 21 (2017), 303-6.

LeDoux is bothered that the name of his own field, the neuropsychology of fear, rests on a misnomer. There are subjective states, like fear, and there are ‘brain circuits that control them nonconsciously’, and there are links between them. However, if you call what the brain circuits are doing ‘fear’, then quite distinct things are being confused. This affects the popular reception of scientific research, and quite understandably scientists sometimes go along with it.
      LeDoux tells his own story: ‘The vernacular meaning of emotion words is simply too strong. When we hear the word “fear”, the default interpretation is the conscious experience of being in danger, and this meaning dominates. For example, although I consistently emphasized that the amygdala circuits operate nonconsciously, I was often described in both lay and scientific contexts as having shown how feelings of fear emerge from the amygdala.’
      Now he advocates using ‘fear’ only to refer to the experience of fear, the thing the word inevitably evokes. The relevant bits of the amygdala are now called ‘a defensive survival circuit’. One of the important things LeDoux pushes against is the idea that the amygdala version of ‘fear’ is a more precise and concrete version of the subjective experience version of ‘fear’. They’re different things, he says, and the general use of ‘fear’ is suited to its task. He goes on to cite other similar terms: ‘motivation, reward, pain, perception, and memory’.
      It feels cheeky to write the phrase, ‘I have thought similar things myself before’, but I have. Especially about popular media reception of psychology, where they leap to tell us where love happens in the brain, and so on. On the other hand, I think that as long as the links between brain, mind, and world are made intelligently and with proper awareness, they are a source of good things too. I like the creative tension.
