
dc.contributor.author: SYMES, EDWARD MICHAEL
dc.contributor.other: Faculty of Science and Engineering (en_US)
dc.date.accessioned: 2013-09-16T10:24:40Z
dc.date.available: 2013-09-16T10:24:40Z
dc.date.issued: 2003
dc.identifier: NOT AVAILABLE (en_US)
dc.identifier.uri: http://hdl.handle.net/10026.1/1741
dc.description.abstract (en_US):

This thesis examines how the objects that we visually perceive in the world are coupled to the actions that we make towards them. For example, a whole-hand grasp might be coupled with an object like an apple, but not with an object like a pea. It has been claimed that the coupling of what we see and what we do is not simply associative, but is fundamental to the way the brain represents visual objects. More than mere association, it is thought that when an object is seen (even with no intention to interact with it), there is a partial and automatic activation of the networks in the brain that plan actions (such as reaches and grasps). The central aim of this thesis was to investigate how specific these partial action plans might be, and how specific the object properties that automatically activate them might be. Acknowledging that perception and action are dynamically intertwining processes (such that in catching a butterfly the eye and the hand cooperate with a fluid and seamless efficiency), it was supposed that these couplings of perception and action in the brain might be loosely constrained. That is, they should not be rigidly prescribed (such that a highly specific action is always and only coupled with a specific object property) but should instead involve fairly general components of actions that can adapt to different situations.

The experimental work examined the automatic coupling of simple left and right actions (e.g. key presses) to pictures of oriented objects. Typically a picture of an object was shown and the viewer responded as fast as possible to some object property that was not associated with action (such as its colour). Of interest was how the performance of these left or right responses related to the task-irrelevant left or right orientation of the object. The coupling of a particular response to a particular orientation could be demonstrated by response performance (speed and accuracy): the more tightly coupled a response was to a particular object orientation, the faster and more accurate it was.

The results supported the idea of loosely constrained action plans. A range of different actions (even foot responses) could be coupled with an object's orientation, and these actions were coupled by default to an object's X-Z orientation (i.e. its orientation in the depth plane). Further reflecting a loosely constrained perception-action mechanism, these couplings were shown to change in different situations (e.g. when the object moved towards the viewer, or when a key press made the object move in a predictable way). It was concluded that the components of actions that are automatically activated when viewing an object are not very detailed or fixed, but are initially quite general and can change and become more specific when circumstances demand it.
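
To make the abstract's performance measure concrete, the following is a minimal sketch, in Python, of how a response-orientation compatibility effect of the kind described could be computed: mean reaction time on incompatible trials minus mean reaction time on compatible trials, where a trial counts as compatible when the response side matches the task-irrelevant orientation side of the object. All names and data here are hypothetical illustrations; the thesis itself specifies no code or data format.

    # Hypothetical sketch of a compatibility-effect analysis; not taken
    # from the thesis. Each trial records the side of the task-irrelevant
    # object orientation, the side of the speeded response (e.g. a
    # left/right key press), and the reaction time in milliseconds.
    from statistics import mean

    trials = [
        {"orientation": "left",  "response": "left",  "rt_ms": 412},
        {"orientation": "left",  "response": "right", "rt_ms": 468},
        {"orientation": "right", "response": "right", "rt_ms": 405},
        {"orientation": "right", "response": "left",  "rt_ms": 455},
    ]

    def compatibility_effect(trials):
        """Mean RT on incompatible trials minus mean RT on compatible trials.

        A positive value means responses matching the object's orientation
        side were faster, i.e. evidence of a response-orientation coupling.
        """
        compatible = [t["rt_ms"] for t in trials
                      if t["response"] == t["orientation"]]
        incompatible = [t["rt_ms"] for t in trials
                        if t["response"] != t["orientation"]]
        return mean(incompatible) - mean(compatible)

    print(f"Compatibility effect: {compatibility_effect(trials):.0f} ms")

With the illustrative data above this prints a 53 ms effect; in the logic of the abstract, a larger effect indicates a more tightly coupled response-orientation pairing.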
dc.language.iso: en (en_US)
dc.publisher: University of Plymouth (en_US)
dc.title: THE COUPLING OF PERCEPTION AND ACTION IN REPRESENTATION (en_US)
dc.type: Thesis
plymouth.version: Full version (en_US)
dc.identifier.doi: http://dx.doi.org/10.24382/4129



