Authors

Ed Symes

Abstract

This thesis examines how the objects that we visually perceive in the world are coupled to the actions that we make towards them. For example, a whole-hand grasp might be coupled with an object like an apple, but not with an object like a pea. It has been claimed that the coupling of what we see and what we do is not simply associative, but is fundamental to the way the brain represents visual objects. More than association, it is thought that when an object is seen (even if there is no intention to interact with it), there is a partial and automatic activation of the networks in the brain that plan actions (such as reaches and grasps). The central aim of this thesis was to investigate how specific these partial action plans might be, and how specific the object properties that automatically activate them might be. In acknowledging that perception and action are dynamically intertwined processes (such that in catching a butterfly the eye and the hand cooperate with a fluid and seamless efficiency), it was supposed that these couplings of perception and action in the brain might be loosely constrained. That is, they should not be rigidly prescribed (such that a highly specific action is always and only coupled with a specific object property) but should instead involve fairly general components of actions that can adapt to different situations. The experimental work examined the automatic coupling of simple left and right actions (e.g. key presses) to pictures of oriented objects. Typically a picture of an object was shown and the viewer responded as fast as possible to some object property that was not associated with action (such as its colour). Of interest was how the performance of these left or right responses related to the task-irrelevant left or right orientation of the object. The coupling of a particular response to a particular orientation could be demonstrated by response performance (speed and accuracy): the more tightly coupled a response was to a particular object orientation, the faster and more accurate it was. The results supported the idea of loosely constrained action plans. Thus it appeared that a range of different actions (even foot responses) could be coupled with an object's orientation. These actions were coupled by default to an object's X-Z orientation (i.e. its orientation in the depth plane). Further reflecting a loosely constrained perception-action mechanism, these couplings were shown to change in different situations (e.g. when the object moved towards the viewer, or when a key press made the object move in a predictable way). It was concluded that the components of actions that are automatically activated when viewing an object are not very detailed or fixed, but are initially quite general and can become more specific when circumstances demand it.

Document Type

Thesis

Publication Date

2003
