Consciousness: The Hard Problem

I enjoyed this article. It’s about how surprisingly little progress cognitive scientists and philosophers are making towards an understanding of consciousness. I like its story of how the Hard Problem of consciousness came to be thought of as such. I like the image of some of the finest scientists and philosophers cruising among the Greenland icebergs, at an oligarch’s expense, and yet failing to agree.
      Not long ago I realised I had missed this talk at Cambridge’s Centre for Research in the Arts, Social Sciences, and Humanities (CRASSH: quality acronym; Oxford’s TORCH is a good effort but not quite on the same level). Luckily there’s video so one prominent philosopher’s point of view can still be heard. John Searle is pretty categorical that consciousness is real, and not just an illusory by-product of cognitive processes, and he also seems optimistic that neuroscientists will pin something useful down soon. (I wonder if a deeper understanding of the Default Mode Network — mentioned more than once in this blog already here and here — and other things like it yet to be discovered, will help. But that may prove to be an over-interpretable red herring in due course…)
      As the Guardian article says, some people think that consciousness may just be something that won’t be explained, any more than certain basic laws of the electromagnetic world can be questioned. I think it seems rather early in the history of neuroscience to go that far. The article also mentions a new play by Tom Stoppard called The Hard Problem, which looks like it’s going to broach the topic. How was this the first I’d heard of it? And now it’s all sold out until April.
      Once at a conference I went to a panel that spent some time discussing ‘cognitive approaches’ to literature — basically any attempt to understand literature in the light of cognitive science, the kind of thing I am doing in this blog. One speaker made two key complaints about the field: (i) that it was not offering anything therapeutic, and (ii) that it was not saying anything about consciousness. I wasn’t sure what to think about either point. The first seems to expect a cognitive criticism to be too much like a psychoanalytic criticism; the second seems a bit harsh, since it’s not cognitive criticism, but cognitive science itself, that is struggling with the topic.
      Literature might be something that depends on consciousness: it seems to rely on reflective awareness, or at least to give us a strong sense of it. It also represents other consciousnesses, and up to a point, it allows us to think we are experiencing them. These consciousnesses may be quite different from our own (psychotic humans, animals, gods, aliens, androids, etc.). My daughter () wrote a story recently from the point of view of a literary character, aware of her own special vulnerability at the hands of an author, and her own particular existence in time, and imagining how she’d shape her life if she could. And if she can do it, Joyce and Proust and all can’t be too far behind. Indeed, one of the very first posts in the blog looked at how Shakespeare’s fairies talk about — and thus by inference we suppose they think about — time in a special way.
      This aspect of literature may not really have much chance of solving the Hard Problem(s), but one of the most remarkable things that consciousness can do is to simulate other consciousnesses and imagine their differences. Literature knows a lot about that, in practice at least.

… her age, not her name… but it’d be a cool name…
E-mail me at rtrl100[at]cam.ac.uk

1 thought on “Consciousness: The Hard Problem”

  1. Emma Firestone

    The Guardian article referred to here really is a good and useful thing! Can’t help but wonder where the study of consciousness would be if the major thinkers and practitioners in the effort all could write so well. (Probably not a great deal farther, but perhaps there wouldn’t be quite so much ‘profound incomprehension’ happening among the parties….)

    I guess I think that it’s a perfectly reasonable position to regard the Hard Problem of consciousness (zombies and so on) as so much empty metaphysics, and to hold that our only hope to advance the study of consciousness is to engage with actual facts about neurology, etc. What one can’t reasonably do, though, is have it both ways: that is, claim both that the Hard Problem is meaningless, and that progress in neuroscience will soon solve that problem, if it hasn’t already. One can’t maintain both that (i) once you account for someone’s observed behavior and the details of their brain organisation, there’s nothing further about consciousness to be explained; and (ii) that, Remarkably!, the ____ Theory of Consciousness can explain the ‘nothing further’, or might be on the verge of doing so. Seems that large swaths of consciousness-theorising can be challenged along these lines, i.e. for trying to have their brain and eat it too.

    Thanks too to the Guardian journalist for bringing up Giulio Tononi’s Integrated Information Theory, an audacious project that nevertheless, thinks I, avoids the snaffle mentioned above. Basically IIT aims ‘merely’ to tell us which physical systems are associated with consciousness, and which are not, purely in terms of that system’s physical organisation. There are loads of solid popular articles about IIT, like this one in the New Yorker and this one in the New York Times. The technical papers, laying out the definition of IIT, are easily accessed online too; see e.g. here.

    I think IIT is awesome, a serious and honorable attempt to offer something concrete enough to move the discussion of consciousness forward. A theory that tells us which physical systems ARE conscious, and provides answers that agree with common sense (as the Dennett/Churchland claim that consciousness is an illusory by-product of neural function does not), is a needed and wonderful thing. What I am not yet sold on is the currently hypothesized link between Integrated Information and consciousness. Like, I could certainly imagine that having lots of information integration — or in IIT’s own terms, regions with a large Φ-value — might be one necessary condition, among others, for a physical system to be conscious. But can the value Φ possibly capture in total what makes a physical system conscious, or even what makes a system look conscious to an external observer? I don’t know. And happily, nobody else does!

