Kasparov on the Brain

* Garry Kasparov, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (London: John Murray, 2017)
* Chris Beckett, The Holy Machine (London: Corvus, 2010)

Two of the random things I read on holiday featured questions about artificial intelligence. I read The Holy Machine because I’d enjoyed Chris Beckett’s Dark Eden trilogy. It’s about a man who falls in love with a robot, and I thought it did an interesting job of portraying how a computer mind might come to something like consciousness, not least because it presented this as both very limited and fractured, and yet also Messianically profound.
      I think I have already let on that I am a bit of a chess fan. I am not a strong player at all, but I find the game and its players fascinating. Recently, in spite of the raucous mockery of my family, I was absorbed in the live internet feed from the St Louis Rapid and Blitz tournament, in which former world champion Garry Kasparov came out of retirement. Kasparov vs Navara in rapid chess (approximately 30 minutes each side) was particularly dramatic.
      Kasparov’s book is good on how chess has been the testing ground for the progress of computer thinking and artificial intelligence, and thus for the definition of thinking and intelligence themselves. The battle over the relative importance of brute-force calculation (white-does-this then black-does-that then…) and of imagination and intuition (trying to teach the computer more abstract concepts of chess positioning, and how to be creative) is not just a practical one for chess. It plays out questions about our own minds, what sort of engines they are and how they operate.
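      For anyone wondering what that brute-force calculation looks like in practice, here is a minimal sketch of the game-tree search at its core. This is not Deep Blue’s algorithm, just an illustration in Python; legal_moves, apply_move, and evaluate are hypothetical stand-ins for what a real chess engine or library would provide.

```python
# A minimal sketch of brute-force look-ahead (minimax): the
# "white-does-this then black-does-that" calculation described above.
# `legal_moves`, `apply_move`, and `evaluate` are hypothetical stand-ins
# for functions a real chess engine or library would supply.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Best score reachable from `position`, looking `depth` half-moves
    ahead and assuming both sides choose their strongest replies."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)   # static judgement of the position
    scores = (minimax(apply_move(position, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)
```

      Real engines pile pruning, opening books, and carefully tuned evaluation functions on top of this skeleton, which is where the attempts to encode more abstract concepts of chess positioning come in.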

*

Kasparov himself is a fascinating case. He is at pains, in the book, to remind us that his years as world champion were the result of intense preparation, excellent memory, and rigorous study of the underlying principles at work in his favourite positions. This, I think, is to counteract his reputation as the Beast of Baku (he says he didn’t like that nickname), who dominated by force of will and a mercurial, aggressive imagination. Surely, though, he did have something other than the power to learn and calculate: an emotional constitution (for example) that spurred him to fight not just at the board but also as he studied.
      Emotions, he points out, can have debilitating effects on a human but never on a computer. He describes some of the shocks and frustrations of his matches against the IBM computer Deep Blue, one of which resulted in a famous defeat. This is the heart of the book, and I’d have enjoyed even more of the detail of that story. However, the book has other things to focus on. For example, Kasparov develops a fairly optimistic vision of the future of artificial intelligence. After all, chess is different now that computer programs (even cheap ones) are better than any human will ever be, but it still seems to be going strong.

E-mail me at rtrl100[at]cam.ac.uk

Collaborative Skill

My holiday reading has been a bit blogworthy (partly by accident), so I am about to get back into action. I was pointed towards the Imperfect Cognitions blog (already mentioned in a post once before) by a well-wisher: it’s here, it’s prolific, it’s full of interesting stuff. The current post (August 17th) is about a workshop organised by John Sutton (homepage; he has also been mentioned in posts more than once), which relates to a big and fascinating project on skill. I’d say more, but I’m still catching up on the last few months of Imperfect Cognitions material!

E-mail me at rtrl100[at]cam.ac.uk

Early Summer Shut-Down

I usually shut down for a month in August to recharge the blog-batteries, but this year I think I need a longer recharge period. Lots to do in the months ahead, a considerable need to spend some time doing nothing much… it all points to a suspension of some routine activities. There may be occasional notes on things, or longer posts if the spirit takes me, and you’ll get updates on those if you sign up to the Feedburner thing (look right, and down). See you in September (-ish).

E-mail me at rtrl100[at]cam.ac.uk

Cognitive Futures 2017

Last year I was able to attend the annual ‘Cognitive Futures in the Humanities’ conference in Helsinki. Briefly mentioned here. This year, no such luck: it happened in the middle of my exam marking period, and I couldn’t see how a trip to Stony Brook, NY, could be managed. Still, belatedly, regretfully, I thought I’d point to the programme info here, which is pretty comprehensive in the modern style. It gives a good idea of what cogs we cog-crankers are cranking at the moment.

E-mail me at rtrl100[at]cam.ac.uk

Reforming Methods

* Romy Lorenz, Adam Hampshire, and Robert Leech, ‘Neuroadaptive Bayesian Optimization and Hypothesis Testing’, Trends in Cognitive Sciences, 21 (2017), 155-67.
* Siobhán Harty, Francesco Sella, and Roi Cohen Kadosh, ‘Mind the Brain: The Mediating and Moderating Role of Neurophysiology’, Trends in Cognitive Sciences, 21 (2017), 2-4.

This is a second post in a row about issues in experimental design. Even though I rarely design experiments myself, it’s important to understand what sort of compromises are accepted, and what sort of changes are underway. These are two essays about how the methodology of cognitive science could improve, not least to address the ‘reproducibility crisis’. Various solutions have been suggested to deal with the current situation, in which a failure to replicate a number of well-known experimental findings has cast general doubt on methods and conclusions. There is a worry that various biases in the system (for example, a bias towards publishing successful rather than unsuccessful experiments) obscure the true shape of the discipline. One suggestion is ‘preregistration’, committing hypotheses, methods, and analysis plans to public record before the data are collected, as part of a broader move towards openness so that everything is open to scrutiny.
      Lorenz et al. are proposing a different solution, which is to take advantage of two technological advances. The first is the development of real-time analysis of brain imaging, which means that experimenters can broaden their hypotheses and designs, and can observe neural functions in a more flexible way. The other advance is in the use of ‘active sampling approaches’, where the selection of samples is made by the computer, which progressively ‘learns’ according to an algorithm defining how to refine and direct the search. More ground can be covered, and the criteria for the choices involved (including the algorithm itself) can be made public.
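      To make the ‘active sampling’ idea a little more concrete, here is a rough sketch of such a loop. It is not the authors’ algorithm, just an illustration using scikit-learn’s Gaussian process regressor; run_trial_and_measure_response is a hypothetical stand-in for running one condition and reading out a neural response in real time.

```python
# A rough illustration of an active-sampling loop: a statistical model
# chooses the next experimental condition to test on the basis of what
# it has learned from the conditions tested so far.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def active_sampling(run_trial_and_measure_response, n_trials=20, kappa=2.0):
    # A one-dimensional 'experiment space': each point is a possible condition.
    candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
    tried, observed = [], []

    # Seed with two arbitrary conditions so the model has something to fit.
    for x in (0.25, 0.75):
        tried.append([x])
        observed.append(run_trial_and_measure_response(x))

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_trials):
        gp.fit(np.array(tried), np.array(observed))
        mean, std = gp.predict(candidates, return_std=True)
        # Upper-confidence-bound rule: favour conditions that look promising
        # or that the model is still uncertain about.
        next_x = float(candidates[np.argmax(mean + kappa * std)][0])
        tried.append([next_x])
        observed.append(run_trial_and_measure_response(next_x))
    return tried, observed
```

      The point, as I understand it, is that the rule for choosing where to look next is explicit, mechanical, and publishable, whereas a scientist’s hunches about where to look next usually are not.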
      They give an example which, I think, helps get this across. Some cognitive tasks ‘recruit a combination of spatially overlapping yet distinct frontoparietal networks’, and ‘understanding their exact functional role remains a challenge’. It seems possible that the method described, involving more flexible testing of brain activity and a search mechanism based on emerging facts rather than on the scientist’s expectations, could help uncover what links to what.
      It’s not the easiest read: there’s a description of ‘experiment space’ in 2D and 3D that (I’ll be honest) I take to be metaphorical, though how, and to what extent, I don’t know. And I suppose this tends to change the point at which the scientist’s choices impinge, rather than eliminating them (as if that were possible). However, people are worrying about cognitive science at a pretty basic level, so maybe there are ways in which machine learning can help.

*

      Harty et al. look at experiments on behaviour and argue that they aren’t designed to take sufficient account of ‘the critical antecedent of behavior, the brain’. It’s time, they say, to take into account the ‘mediating or moderating’ roles that neurophysiology can play. Some variables are taken into account regularly: gender, age, socioeconomic status, level of education, and others. But brains are very complex, and there may be pertinent physical reasons why this individual, and this one, but not that one or that one, respond in a certain way to a behavioural stimulus: ‘We should endeavor to design our studies and analyze our data in ways that can address questions about why, how, and for whom the experimental manipulation is effective’.
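      For readers (like me) who need those statistical terms unpacking: a ‘moderator’ changes the strength of an effect, while a ‘mediator’ carries it. Here is a toy sketch of how each might be tested with ordinary regression; the variable names are invented, and this is an illustration rather than the authors’ analysis.

```python
# A toy sketch of moderation and mediation analyses using ordinary
# least-squares regression. `df` is assumed to hold one row per participant,
# with invented columns: 'treatment' (the experimental manipulation),
# 'alpha_power' (a stand-in neurophysiological measure), and 'outcome'
# (the behavioural result).

import statsmodels.formula.api as smf

def moderation_model(df):
    # Moderation: does the neural measure change the *size* of the treatment
    # effect? Tested via the treatment-by-alpha_power interaction term.
    return smf.ols("outcome ~ treatment * alpha_power", data=df).fit()

def mediation_models(df):
    # Mediation (in the classic Baron-and-Kenny spirit): does the treatment
    # act *through* the neural measure? Check the treatment -> mediator path,
    # and whether the treatment effect shrinks once the mediator is included.
    path_a = smf.ols("alpha_power ~ treatment", data=df).fit()
    paths_b_c = smf.ols("outcome ~ treatment + alpha_power", data=df).fit()
    return path_a, paths_b_c
```

      Either way, the brain measure stops being background noise and becomes part of the model, which is roughly what the authors are asking for.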
      They hope that this will improve ‘prospects for reproducibility’: the more accurately the experiment is designed, and the more parameters that are covered, the better. Also there is the possibility that this could lead to more ‘personalized cognitive interventions’: not just experiments, then, but better applications in general. I have sometimes found myself worrying that distinctions taken by experimenters as stable (e.g. the ‘gender, age, socioeconomic status, level of education’ mentioned above) are a lot more subtle than they are taken to be, but there’s got to be an interplay between categories (can’t do without them) and refinements.

E-mail me at rtrl100[at]cam.ac.uk