AUTONOMY: WHAT IS IT?

Margaret A. Boden
University of Sussex

Autonomy is a buzz-word in current A-Life, and in some other areas of cognitive science too. And it's generally regarded as a Good Thing. However, neither the spreading use nor the growing approval has provided clarity.

This Special Issue of BioSystems is devoted to clarifying the concept, and to showing how it's being used in various examples of empirical research. Given the obscurity that still attends the concept, however, we shouldn't expect to find that all the 'clarifications' are equivalent--or even mutually consistent. Similarly, we shouldn't expect to find that the notion is applied identically in all the research that's reported here. So this preliminary sketch of the conceptual landscape may be helpful.

Very broadly speaking, autonomy is self-determination: the ability to do what one does independently, without being forced so to do by some outside power. The "doing" may be mental, behavioural, neurological, metabolic, or autopoietic: autonomy can be ascribed to a system on a number of different levels.

This doesn't rule out the possibility of the doing's being affected, even triggered, by environmental events. To the contrary: work in 'autonomous' (i.e. situated) robotics, and in most computational neuroethology (CNE), focusses specifically on a creature's reactive responses to environmental cues. Even research that's based on the theory of autopoiesis, which stresses the system's ability to form (and maintain) itself as a functioning unity, allows that a cell, or an organism, is closely coupled with its environmental surround--so much so, that from one point of view they can be regarded as a single system (Maturana and Varela 1980). So autonomy isn't isolation. But it does involve a significant degree of independence from outside influences.

One major "outside influence" which A-Life enthusiasts have in mind to deny is the alien hand of the programmer--and, for autopoietic theorists (though not for situated roboticists), even the engineer/designer. (The Designer in the sky is eschewed too, of course--in favour of biological evolution by natural selection.) The explanatory focus is on the specifics of the system's inherent structure and 'intrinsic' properties, not on any instructions that happen to be imposed on it by an outside agency. Only thus can the system's autonomy be respected, or even posited.

Moreover, the traditions of situated robotics and autopoiesis both deny the role of internal/cerebral representations--not only in computer models, but in living organisms too. Some pioneering, and influential, work in CNE posits such representations (e.g. Arbib 1981, 1987, 2003; Boden 2006: 14.vii). But most, perhaps because it deals with insect behaviour, does not.

As a result, most workers in A-Life and CNE reject GOFAI-based models of mind and behaviour--wherein programs are imposed on general-purpose machines, and internal representations are stressed. All too often, however, this rejection is expressed as scorn, more ideology than science. Indeed, the editor of a professional newsletter in cognitive science has bemoaned the "frankly insulting" names commonly used by researchers for approaches different from their own, complaining that "The lack of tolerance [between different research programmes in AI/A-Life] is rarely positive, often absurd, and sometimes fanatical" (Whitby 2002). If any such insults have crept into the papers presented in this Special Issue, it's to be hoped that the reader will be more intellectually tolerant than the author. For, quite apart from professional etiquette, one important example of autonomy is best understood in largely GOFAI terms (see below).

Autonomy is a problematic concept partly because it can seem to be close to magic, or anyway to paradox. Self-determination is all very well--but how did the "self" (the system) get there in the first place? If the answer we're offered is that it spontaneously generated itself, this risks being seen as empty mystification. A key contribution of some current research on autonomy is that it disarms this paradox. But paradox isn't the only source of difficulty here. The concept of autonomy is problematic also because there are various types, and varying degrees, of independence.

Three aspects of a system's behaviour--or rather, of its control--are crucial here (Boden 1996). (The "system" in question may be a whole organism or a subsystem, such as a neural network or metabolic cycle, or a computer model of either of these.) However, the three aspects don't necessarily run alongside each other, nor keep pace with each other even when they do.

The first is the extent to which response to the environment is direct (determined only by the present state of the external world) or indirect (mediated by inner mechanisms partly dependent on the creature's previous history). The second is the extent to which the controlling mechanisms were self-generated rather than externally imposed. And the third is the extent to which any inner directing mechanisms can be reflected upon, and/or selectively modified in the light of general interests or the particularities of the current problem in the environmental context. In general, an individual's autonomy is the greater, the more its behaviour is directed by self-generated (and idiosyncratic) inner mechanisms, nicely responsive to the specific problem-situation yet reflexively modifiable by wider concerns.

Clearly, then, autonomy isn't an all-or-nothing property. And--even more confusing--the senses in which autopoietic systems or self-organizing networks are autonomous differ from each other, and from the sense in which situated robots are autonomous.

The confusion is compounded because, as the brief remarks above imply, autonomy is closely related to two other notoriously slippery notions: self-organization and freedom. No member of this problematic conceptual trio can be properly understood without also considering the other two.

Let's turn to self-organization first (Boden 2006: 15.i.b). This is the central feature of life. Not only is it commonly listed as a defining characteristic of life, but all the other properties that are so listed are special cases of it. These vital properties are emergence, autonomy (sic), growth, development, reproduction, evolution, adaptation, responsiveness, and metabolism.

Self-organization may be defined as the spontaneous emergence (and maintenance) of order, out of an origin that's ordered to a lesser degree. It concerns not mere superficial change but fundamental structural development, which can occur on successive levels of organization. And it is spontaneous, or autonomous, in that it results from the intrinsic character of the system (often in interaction with the environment) rather than being imposed by some external force or designer.

It's commonly, though not always, assumed that the generation and the functioning of a self-organized system are wholistic. In other words, they can't be explained as being due to interactions between independently definable sub-parts. Whereas a classical AI program, or a car engine, can be analyzed into separate pieces (procedures, mechanical parts), a self-organized system cannot. Each 'part' is to some degree dependent on other 'parts' for its very existence, and for its identity as a 'part' of the relevant type. Theoretical approaches based on autopoiesis are especially likely to stress this aspect of self-organized systems.

In the early days of A-Life, long before the field had received its name, the concept of self-organization was being viewed with mistrust by some pioneers even while they were studying the phenomenon in illuminating ways. William Ross Ashby is a case in point. His Homeostat was a major advance in the theory, and modelling, of self-organizing systems (Ashby 1947, 1948; Boden 2006: 4.viii.c-d, 15.xi.a). Nevertheless, he suggested that the term "self-organization" should be avoided. To be sure, he sometimes used it, and "self-coordinating" too (Ashby 1960: 10). But he also complained that such phrases were "fundamentally confused and inconsistent" and "probably better allowed to die out" (Ashby 1962: 269). He saw them as potentially mystifying because they imply that there's an organizer when in fact there isn't. Some modern researchers agree, carefully avoiding 'self-organization' and referring instead to organisation (or metaorganization): the ability to act as a unified whole (e.g. Pellionisz and Llinas 1985: Sectn. 4.1).

Yet the concept of self-organisation hasn't died out. It's employed today by many workers in A-Life and neuroscience--not to imply some mysterious 'inner organizer,' but to focus on the spontaneous origin and development of organisation at least as much as on its maintenance. As used in the papers collected below, the term normally carries that bias. The mystification, if not the marvelling, has lessened largely because computer models of various types of self-organization--from flocking (Reynolds 1987), through neural networks (von der Malsburg 1973; Linsker 1988, 1990), to the formation of cell-membranes (Zeleny 1977; Zeleny et al. 19)--now exist. Clearly, none of these works by magic.
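Flocking is a convenient illustration of why such models dispel the mystery: global order emerges from purely local rules, with no 'inner organizer' anywhere in the program. The sketch below is a minimal toy in the spirit of Reynolds-style boids, not his published algorithm; the perception radius, rule weights, and data layout are arbitrary choices made for illustration.

```python
import math
import random

def step(boids, radius=10.0, w_coh=0.01, w_ali=0.05, w_sep=0.1):
    """One update applying three local rules: cohesion (steer toward the
    neighbours' centre), alignment (match their mean velocity), and
    separation (steer away from very close neighbours). Weights are
    illustrative, not Reynolds's parameters."""
    new = []
    for b in boids:
        nbrs = [o for o in boids if o is not b
                and math.hypot(o["x"] - b["x"], o["y"] - b["y"]) < radius]
        vx, vy = b["vx"], b["vy"]
        if nbrs:
            n = len(nbrs)
            cx = sum(o["x"] for o in nbrs) / n    # neighbours' centre
            cy = sum(o["y"] for o in nbrs) / n
            avx = sum(o["vx"] for o in nbrs) / n  # neighbours' mean velocity
            avy = sum(o["vy"] for o in nbrs) / n
            vx += w_coh * (cx - b["x"]) + w_ali * (avx - vx)
            vy += w_coh * (cy - b["y"]) + w_ali * (avy - vy)
            for o in nbrs:                        # separation from close boids
                dx, dy = b["x"] - o["x"], b["y"] - o["y"]
                d = math.hypot(dx, dy)
                if 0 < d < radius / 2:
                    vx += w_sep * dx / d
                    vy += w_sep * dy / d
        new.append({"x": b["x"] + vx, "y": b["y"] + vy, "vx": vx, "vy": vy})
    return new

random.seed(0)
flock = [{"x": random.uniform(0, 20), "y": random.uniform(0, 20),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
         for _ in range(30)]
for _ in range(100):
    flock = step(flock)
```

No boid consults a global plan; coherent group movement, when it appears, is a property of the interacting whole--which is exactly the point of such demonstrations.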

As for human freedom, commonly regarded as the epitome of autonomy, this too--like the vital properties listed above--is a special case of self-organization. A-Lifers, who concern themselves with organisms well below Homo sapiens in the phylogenetic scale, rarely mention it explicitly. Occasionally, they admit that their work doesn't cover it (e.g. Bird et al. 2006: 2.1). But sometimes, their words seem to imply that they confuse it with autonomy as such. That's a mistake. The examples of autonomy considered in A-Life show varying degrees of independence from direct outside control. But none has the cognitive/motivational complexity that's required for freedom (remember the third aspect of autonomy listed above).

That's why the often-scorned GOFAI has got closer to an understanding of freedom than A-Life has done. Freedom is best understood in terms of a particular form of complex computational architecture (Dennett 1984; Boden 2006: 7.i.g-i). It requires psychological resources wherein reasoning, means-end planning, motivation, various sorts of prioritizing (including individual preferences and moral principles), analogy-recognition, the anticipation of unwanted side-effects, and deliberate self-monitoring can all combine to generate decisions/actions selected from a rich space of possibilities. (In the paradigm case, the choice is largely conscious. But an action may be termed "free" because, given the computational resources possessed by the person in question, it could have been consciously considered by them, and the decision could have differed accordingly.)

Compromises of freedom occur (for instance) in the clinical apraxias, in hypnosis, and when someone obeys hallucinated instructions from 'saints' or 'aliens'. All these phenomena, wherein a person's autonomy is significantly lessened, have been helpfully theorized and/or modelled in partly GOFAI terms (Boden 2006: 7.i.h-i; 12.ix.b).

In hypnosis and hallucination, for example, a certain type of high-level self-monitoring is inhibited, leaving the person at the mercy of directives imported from outside or internally generated in an unconsidered way (Dienes and Perner in press). As for apraxia, a brain-damaged patient may be unable to plan a simple task, or to perform the relevant sub-tasks in the correct order; or they may be constantly diverted onto a different task while trying to carry out the first one. These debilitating syndromes involve the inappropriate activation and/or execution of hierarchical action-schemas (Norman and Shallice 1980/86; Cooper et al. 1995, 1996). Such schemas may malfunction in various ways, and/or they may be triggered irrelevantly by pattern-recognition mechanisms that divert control of the action onto unwanted paths. In short, apraxias are being modelled by hybrid systems, implementing both GOFAI and connectionist computations.
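The general idea of such schema-based accounts can be caricatured in a few lines. The toy below is emphatically not the Norman-Shallice or Cooper models: the schema names, trigger strengths, biases, and threshold are all invented for illustration. It shows only the qualitative point: when top-down (supervisory) bias is weakened, a schema triggered by the environment can capture control instead of the intended one.

```python
def select_schema(schemas, triggers, supervisory_gain=1.0, threshold=0.5):
    """Pick the most active schema, or None if nothing crosses threshold.
    Activation = top-down goal bias (scaled by supervisory_gain)
               + bottom-up activation from environmental triggers."""
    best, best_act = None, threshold
    for name, s in schemas.items():
        act = s["bias"] * supervisory_gain                       # top-down
        act += sum(s["triggers"].get(t, 0.0) for t in triggers)  # bottom-up
        if act > best_act:
            best, best_act = name, act
    return best

# Invented scenario: intending to make tea while a letter lies on the table.
schemas = {
    "make-tea":    {"bias": 0.6, "triggers": {"kettle": 0.3}},
    "open-letter": {"bias": 0.1, "triggers": {"letter": 0.7}},
}

# With intact supervision the intended schema wins; with supervision
# weakened (as in the toy's analogue of frontal damage), the
# environmentally triggered schema captures the action.
intact  = select_schema(schemas, ["kettle", "letter"], supervisory_gain=1.0)
damaged = select_schema(schemas, ["kettle", "letter"], supervisory_gain=0.1)
```

Here `intact` selects "make-tea" (0.6 + 0.3 = 0.9 beats 0.8), while `damaged` selects "open-letter" (0.71 beats 0.36): a crude analogue of the environmental-capture errors described above.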

These theories/models of human freedom, and of its impairments, are relatively broad-brush. They are joined by a wide variety of A-Life models that seek to show even more precisely how autonomy, of various kinds, can occur.

CNE has provided some highly detailed explanations of certain aspects of insect behaviour, for example. The computer models concerned include 'virtual' simulations (e.g. Cliff 1991a,b) and robots (e.g. Webb 1996; Webb and Scutt 2000; Beer 1990). Research scattered across A-Life, connectionism, and neuroscience has offered many intriguing suggestions about spontaneous self-organization from a random base, and has provided demonstrations of this phenomenon too (see Boden 2006: 12.ii, 12.v, 14.vi, 14.viii.c, and 15.vii-viii). Some of these results are highly counterintuitive (e.g. von der Malsburg 1973; Linsker 1988, 1990). Further examples are described/cited in the new papers that follow.

One thing's for sure: autonomy is marked on our intellectual map. And the ambiguity and unclarity that attend the concept haven't swamped the excitement. There's plenty of that, here.

References:

Arbib, M. A. (1981), 'Visuomotor Coordination: From Neural Nets to Schema Theory', Cognition and Brain Theory, 4: 23-39.


Arbib, M. A. (1987), 'Levels of Modelling of Visually Guided Behavior', Behavioral and Brain Sciences, 10: 407-465.

Arbib, M. A. (2003), 'Rana computatrix to Human Language: Towards a Computational Neuroethology of Language', Philosophical Transactions of the Royal Society of London A, 361: 2345-2379. (Special issue on 'Biologically Inspired Robotics'.)

Ashby, W. R. (1947), 'The Nervous System as a Physical Machine: With Special Reference to the Origin of Adaptive Behaviour', Mind, 56: 44-59.

Ashby, W. R. (1948), 'Design for a Brain', Electronic Engineering, 20: 379-383.

Ashby, W. R. (1960), Design for a Brain: The Origin of Adaptive Behaviour (2nd edn., revd.) (London: Chapman & Hall).

Ashby, W. R. (1962), 'Principles of the Self-Organizing System', in H. von Foerster and G. W. Zopf (eds.), Principles of Self-Organization (New York: Pergamon Press), 255-278.

Beer, R. D. (1990), Intelligence as Adaptive Behavior: An Experiment in Computational Neuroethology (Boston: Academic Press).

Bird, J., Stokes, D., Husbands, P., Brown, P., and Bigge, B. (2006), 'Towards Autonomous Artworks', unpublished working paper: COGS/CCNR, University of Sussex. (Available from jonba@sussex.ac.uk.)

Boden, M. A. (1996), 'Autonomy and Artificiality', in M. A. Boden (ed.), The Philosophy of Artificial Life (Oxford: Oxford University Press), 95-108.

Boden, M. A. (2006), Mind as Machine: A History of Cognitive Science, 2 vols. (Oxford: Oxford University Press).

Cliff, D. (1991a), 'The Computational Hoverfly: A Study in Computational Neuroethology', in J.-A. Meyer and S. W. Wilson (eds.), From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior (Cambridge, Mass.: MIT Press), 87-96.

Cliff, D. (1991b), 'Computational Neuroethology: A Provisional Manifesto', in J.-A. Meyer and S. W. Wilson (eds.), From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior (Cambridge, Mass.: MIT Press), 29-39.

Cooper, R., Fox, J., Farringdon, J., and Shallice, T. (1996), 'Towards a Systematic Methodology for Cognitive Modelling', Artificial Intelligence, 85: 3-44.

Cooper, R., Shallice, T., and Farringdon, J. (1995), 'Symbolic and Continuous Processes in the Automatic Selection of Actions', in J. Hallam (ed.), Hybrid Problems, Hybrid Solutions (Oxford: IOS Press), 27-37.


Dennett, D. C. (1984), Elbow Room: The Varieties of Free Will Worth Wanting (Cambridge, Mass.: MIT Press).

Dienes, Z., and Perner, J. (in press), 'The Cold Control Theory of Hypnosis', in G. Jamieson (ed.), Hypnosis and Conscious States: The Cognitive Neuroscience Perspective (Oxford: Oxford University Press), ch. 16.

Linsker, R. (1988), 'Self-Organization in a Perceptual Network', Computer, 21: 105-117.

Linsker, R. (1990), 'Perceptual Neural Organization: Some Approaches Based on Network Models and Information Theory', Annual Review of Neuroscience, 13: 257-281.

Maturana, H. R., and Varela, F. J. (1980), Autopoiesis and Cognition: The Realization of the Living (Boston: Reidel).

Norman, D. A., and Shallice, T. (1980/86), Attention to Action: Willed and Automatic Control of Behavior. CHIP Report 99, University of California San Diego, 1980. (Officially published in R. Davidson, G. Schwartz, and D. Shapiro (eds.), Consciousness and Self-Regulation: Advances in Research and Theory, Vol. 4 (New York: Plenum), 1986, 1-18.)

Pellionisz, A., and Llinas, R. (1985), 'Tensor Network Theory of the Metaorganization of Functional Geometries in the Central Nervous System', in A. Berthoz and G. Melvill Jones (eds.), Adaptive Mechanisms in Gaze Control (Amsterdam: Elsevier), 223-232.

Reynolds, C. W. (1987), 'Flocks, Herds, and Schools: A Distributed Behavioral Model', Computer Graphics, 21: 25-34.

Von der Malsburg, C. (1973), 'Self-Organization of Orientation Sensitive Cells in the Striate Cortex', Kybernetik, 14: 85-100.

Webb, B. (1996), 'A Cricket Robot', Scientific American, 275(6): 94-99.

Webb, B., and Scutt, T. (2000), 'A Simple Latency-Dependent Spiking-Neuron Model of Cricket Phonotaxis', Biological Cybernetics, 82: 247-269.

Whitby, B. (2002), 'Let's Stop Throwing Stones', AISB Quarterly, no. 109, 1 (one page only).

Zeleny, M. (1977), 'Self-Organization of Living Systems: A Formal Model of Autopoiesis', International Journal of General Systems, 4: 13-28.

Zeleny, M., Klir, G. J., and Hufford, K. D. (19), 'Precipitation Membranes, Osmotic Growths, and Synthetic Biology', in C. G. Langton (ed.), Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems (held September 1987) (Redwood City, CA: Addison-Wesley), 125-139.
