There is a lot of talk these days about a fascinating idea called embodied cognition. What is it exactly? In this lively interview I talk with two people who are actively researching this question. We discuss how the body and mind “talk” to each other when baseball players catch fly balls and what role psychology plays in the design of robots.
What is Embodied Cognition?
- First, I recommend reading Andrew Wilson and Sabrina Golonka’s blog, Notes from Two Scientific Psychologists.
- A Brief Guide to Embodied Cognition: Why You Are Not Your Brain, from Scientific American
- Here is the research discussed in this episode: Eerland, A., Guadalupe, T., & Zwaan, R. (2011). Leaning to the Left Makes the Eiffel Tower Seem Smaller: Posture-Modulated Estimation. Psychological Science, 22(12), 1511–1514. DOI: 10.1177/0956797611420731. And here is Andrew and Sabrina’s critique of that paper: Leaning to the left makes you believe odd things about embodied cognition
- Miles, L., Nind, L., & Macrae, C. (2010). Moving Through Time. Psychological Science, 21(2), 222–223. DOI: 10.1177/0956797609359333
- Andrew’s critique of the “Moving Through Time” study: “Moving Through Time” and embodied cognition
- Boston Dynamics’ Big Dog (video)
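The fly-ball catching mentioned at the top is usually unpacked with the ‘optical acceleration cancellation’ strategy from the ecological psychology literature: the fielder never computes where the ball will land, they just watch how the ball’s image rises. Here is a rough sketch of why that works – my own illustration, not anything worked through in the episode, and every number in it is invented:

```python
# Toy demonstration of "optical acceleration cancellation" for fly-ball
# catching (illustrative numbers only). The fielder tracks tan(elevation
# angle) of the ball. If that quantity is accelerating, the ball will land
# behind them; decelerating, in front; rising at a constant rate, they are
# already standing at the catch point.

G = 9.8                  # gravity, m/s^2
VX, VY = 20.0, 22.0      # ball launch velocity, m/s (hit from ground level)
FLIGHT = 2 * VY / G      # time of flight, s
LANDING = VX * FLIGHT    # horizontal landing point, m


def mean_optical_acceleration(fielder_x, t_max=3.0, dt=0.1):
    """Mean second difference of tan(elevation angle) seen by a stationary
    fielder over the early part of the ball's flight."""
    tans = []
    t = 0.0
    while t <= t_max:
        ball_x = VX * t
        ball_y = VY * t - 0.5 * G * t * t
        tans.append(ball_y / (fielder_x - ball_x))  # tan of elevation angle
        t += dt
    rates = [(b - a) / dt for a, b in zip(tans, tans[1:])]
    accels = [(b - a) / dt for a, b in zip(rates, rates[1:])]
    return sum(accels) / len(accels)


for fielder_x in (LANDING - 10, LANDING, LANDING + 10):
    a = mean_optical_acceleration(fielder_x)
    if a > 1e-3:
        advice = "image speeding up -> ball lands behind you, run back"
    elif a < -1e-3:
        advice = "image slowing down -> ball lands in front of you, run in"
    else:
        advice = "image rising steadily -> you're standing at the catch point"
    print(f"fielder at {fielder_x:5.1f} m (ball lands at {LANDING:.1f} m): {advice}")
```

The embodied point is that the quantity being controlled is available directly in the optics; no trajectory prediction, ball-speed estimate or physics model is needed.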
Andrew Wilson
July 3, 2012
Hi Jim
Sorry to be so slow responding – life is fairly hectic just now.
“You accept that studies that do not hypothesize about an underlying causal mechanism are fine as long as they are replicable and have effect sizes big enough to be meaningful.”
Our point is slightly different, namely that the best way to achieve meaningful effect sizes is to do experiments motivated by hypotheses about causal mechanism.
Our comments about ‘disembodied AI failing’ were indeed about understanding cognition – the internet works just fine without embodiment and that’s ok by us! But:
“You say it’s not at all clear that ACT-R is doing it right, and I agree with that, but isn’t it at least as unclear as it is for any *currently implemented* embodied system?”
No, I don’t think so. Big Dog is one example; Rolf Pfeifer’s work is another; so are the robotic insects Barbara Webb builds to test embodied hypotheses about insect behaviour, with great success (http://homepages.inf.ed.ac.uk/bwebb/; see also Louise Barrett’s book ‘Beyond the Brain’ for an excellent overview – linked in the main post). We think these things are doing way better than anything else, which is why we’re such fans of the embodied approach.
“that embodied approaches will prove less useful for modelling what have historically been thought of as more traditionally symbolic processes, such as language, planning, theory of mind, etc”
This is a common objection, and it is true that, as things stand, these topics have not been tackled. There are two reasons: one, these processes go away when you do embodied work (no one would think there is anything like a modular theory of mind, for example), and two, no one’s really gone to town on these problems yet because they are hard. Sabrina’s been working on language recently on the blog (http://psychsciencenotes.blogspot.co.uk/search/label/language) and is making progress, although it really is complicated.
Hard is not the same as impossible, though, and the bet is that it can work. You’re right about Brooks – he delivered only up to a point, but the field has moved on since then. There are new tools (Gibsonian ecological psychology is finally getting the recognition it deserves, for example) and these offer new solutions to old problems.
I don’t know what an embodied research programme about visualisation would look like – but that’s only because it’s not my field. What we think psychology needs to do is commit to trying to apply some consistent, coherent theoretical principles across the board, instead of picking and choosing to suit ourselves, and just stick with it for a while, get a little ‘normal science’ under our belts. We think those ground rules should look like this: http://psychsciencenotes.blogspot.co.uk/2011/11/some-ground-rules-for-theory-of.html. Why? Because we look at just how spectacular the successes of embodiment have been and think that maybe it’s a hint the approach is onto something.
Jim Davies
July 3, 2012
Andrew
Thanks for your thoughtful response.
Of course I agree that small effect sizes that are hard to replicate are problematic, but those issues apply just as much to causal accounts as they do to effect studies. If you accept Newton’s apple, then I think we agree – you accept that studies that do not hypothesize about an underlying causal mechanism are fine as long as they are replicable and have effect sizes big enough to be meaningful.
I agree that the vast majority of AI is not intended to model cognition, and perhaps tells us little to nothing about it. Maybe I was misinterpreting you when you said it “fell flat on its face.” Disembodied AI of all kinds has proven extremely successful and useful in the world, and on that measure I don’t see it as a failure. If the measure of success is telling us how people think, then I think that ACT-R is at least as successful at shedding light on human cognition as any embodied cognitive system out there. You say it’s not at all clear that ACT-R is doing it right, and I agree with that, but isn’t it at least as unclear as it is for any *currently implemented* embodied system?
Are you saying that embodied systems should be expected to fare better in the future? This is a more plausible claim. My opinion is that embodied cognitive approaches are important (I draw on this stuff heavily in my work on imagination), modelling is VERY important (be it embodied or otherwise), but that embodied approaches will prove less useful for modelling what have historically been thought of as more traditionally symbolic processes, such as language, planning, theory of mind, etc.
Rodney Brooks had a plan: start with an embodied robot, get it moving and grasping, and work his way up to symbolic thinking. That was a long time ago. How has that worked out for him? I’m all for new approaches, but by promising the world, embodied approaches risk repeating the overblown promises that plagued early AI.
My comments do not come from a place of wanting compromise for social reasons, but from an earnest belief that symbolic, disembodied approaches are appropriate for certain processes. In my own research I’m trying to understand how people visualize things in their heads, such as when dreaming or reading a novel – do you really think it’s worth my effort to try to get it working on a robot at this stage of the game?
Andrew D Wilson
July 3, 2012
Hi Jim
This is a common objection to our way of going about things. Psychologists love compromise, and looking for that middle ground. It’s not a bad instinct, unless nothing actually operates in that middle ground, which Sabrina and I think is the case.
So a couple of points:
Identifying an effect with no account of why it happens is a real problem when that effect is small and hard to replicate. Newton had the advantage that the apple dropped the same way every time.
Hardly anyone does GOFAI (good old-fashioned AI) any more, and while ACT-R has plenty of successes, it’s not at all clear it does things the way people do.
With regard to the engineering world: the fact that those systems are non-embodied says nothing about whether they are good models of cognition. I don’t want embodied robots routing my internet traffic, but I do want them helping explain how people perceive, act and cognise.
The embodied hypothesis we favour is a bit sneaky; once you let the body and world into cognition, it radically alters how things work all the way up. Cognitive psychology started too far upstream and ended up solving problems that, if embodiment is correct, don’t actually exist. This is why we think that changing the bottom-up base has consequences throughout psychology.
Jim Davies
July 3, 2012
Your guests raise good points when critiquing individual studies, but I disapprove of how dismissive they are of the problems and levels of analysis that other people work with. I applaud their desire to understand cognition by working from the bottom up. It’s a great, necessary approach. But they seem to suggest that anyone who is not doing that is just doing it wrong and not saying anything interesting. Just because a study can only describe an effect, and cannot provide a plausible underlying mechanism, does not mean it isn’t legitimate or interesting. Newton’s observations about gravity also have no underlying mechanism.
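To put the analogy plainly: the law of universal gravitation tells you exactly how strongly two bodies attract while saying nothing about why they attract at all.

$$ F = G\,\frac{m_1 m_2}{r^2} $$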
In particular, I object to their baffling claim that good old-fashioned AI fell flat on its face. I think that’s true for motor control, language, and the old-brain competencies, but symbolic AI has provided countless insights into high-level, deliberative processing. The most respected cognitive architecture out there is ACT-R, which is fundamentally based on one of the oldest AI technologies ever – the production system.
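For readers who haven’t met the term, a production system is just a working memory of facts plus a set of condition–action rules run in a match–select–fire loop. Here is a toy sketch of that bare cycle – my own illustration, not ACT-R itself, which layers declarative memory, buffers, activation and timing on top of it:

```python
# A toy production system (a bare sketch, not ACT-R): working memory holds
# facts, productions are condition -> action rules, and the system repeatedly
# matches, selects one rule, and fires it until nothing new can fire.

working_memory = {("goal", "make-tea"), ("have", "kettle")}

# Each production: (name, conditions that must all hold, facts its action adds)
productions = [
    ("boil-water", {("goal", "make-tea"), ("have", "kettle")},    {("have", "hot-water")}),
    ("steep-tea",  {("goal", "make-tea"), ("have", "hot-water")}, {("have", "tea")}),
    ("done",       {("have", "tea")},                             {("goal", "satisfied")}),
]

while True:
    # Match: rules whose conditions are met and whose action adds something new
    candidates = [(name, adds) for name, conds, adds in productions
                  if conds <= working_memory and not adds <= working_memory]
    if not candidates:
        break
    # Select and fire: naively take the first match (ACT-R resolves conflicts
    # with learned utilities) and apply its action to working memory
    name, adds = candidates[0]
    working_memory |= adds
    print(f"fired {name}: memory is now {sorted(working_memory)}")
```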
On the engineering front, non-embodied AI arguably runs the world, controlling or helping to control everything from which gate your plane departs to the results of your web searches.
Yes, catching a softball is harder than playing chess. But it does not follow that you need an embodied robotic system to model everything in cognition.