Diana McCarty on Thu, 29 Jan 1998 02:52:20 +0100 (MET)
<nettime> Semiosis, Evolution, Energy - missing interviews.
{Dear nettimers, These two interviews appeared in part in the original posting by Nell (Petronella) Tenhaaf, but were accidentally cut due to technical difficulties (diana at the helm). The introduction is the same, but there is a full extra 17k of text that was missing. The original post contained Nell's interviews with Stuart Kauffman, Claus Emmeche, Arantza Etxeberria, and Roberta Kevelson. Apologies to Nell, and to confused readers. ~ diana}

These interviews took place during "Semiosis, Evolution, Energy: Toward a Reconceptualization of the Sign," a conference held at Victoria College, University of Toronto, October 17 - 19, 1997. This was a cross-disciplinary meeting organized by Edwina Taborsky of Bishop's University and Barry Rutland of Carleton University, to investigate the idea that "all phenomena are energy configurations belonging to one and usually more of three distinct codal orders: physical, biological and conceptual." In particular, theoretical biology crossed paths here with problems that have been formally studied in the field of semiotics: interpretation, meaning and subjectivity.

My own interests as an artist and writer have been located in the territory of crossover between biology and subjectivity for some time, although not from a base in semiotics per se. More recently my work has become focused on issues of representation in the field of Artificial Life and possibilities for engaging with these issues in my own practice. Artificial Life, or Alife, is a set of computer-based practices that took form in the early 1980s in the southwest of the U.S., incorporating ideas from complexity theory, chaos theory, Artificial Intelligence and theoretical biology, especially evolution and genetics. Alife is concerned with synthesizing life-like phenomena in artificial media such as computers or robots.
Currently, it tries to bring understanding to issues of how the real world works, but at its inception Alife programmers as well as theoreticians were committed to the idea of making synthetic life-forms that would literally be successors to biological life-forms. Evolutionary computation, or artificial evolution, which is discussed below, is one of the key methods of Alife.

One issue at the center of current theoretical biology, one that also affects Alife practices, is the tension between the classical Darwinian evolutionary principle of natural selection on the one hand, and the concept of self-organization in nature on the other. The latter is the idea that implicit form emerges spontaneously at all levels in the natural world, from the chemical to the organic. Stuart Kauffman places self-organization at the center of his theory that life is not the result of randomness, but is a probable emergent feature of the universe. Further, he has formulated the theory of the autonomous agent, a construct that enables researchers to study and propose answers to questions of self-organization and the origins of life: what is a basic self-organizing unit, how does it self-perpetuate, what are its sources of energy, and what forms the constraints by which it is bounded? This hypothesis arose from theoretical biology and theories of dynamic systems, but it has since been reverberating and resonating throughout many other material and conceptual practices.

Claus Emmeche is a Research Fellow and head of the Center for the Philosophy of Nature and Science Studies at the Niels Bohr Institute, University of Copenhagen. A theoretical biologist and philosopher of science, his area of research is the semiotics of explaining emergent phenomena.

NT: There are actually two aspects of what you spoke about that I'm particularly interested in.
One is the idea of inner qualities of the organism that one could model, and the other is the modeler's "frames of perception" that evolve in relation to this.

CE: Yes, how to conceive of the mental models that the organism builds up in ongoing interaction with the environment. The first question is about qualitative experience, or qualia as the philosophers call it, which simply means that you feel something, you just experience it directly. You can feel pain or hunger or thirst; this is just the technical term philosophers use for something we all know because we all experience things: what it is like to smell a good cake, what it's like to taste it on your tongue, what it feels like to have a pain in your stomach. These are immediate experiences; afterwards you can try to conceptualize them or put words to them. But the very deep problem of having a biosemiotics, that is, a biological study of semiotic interactions, or sign interactions, is to capture these experiential qualities of living animals. Maybe single-celled organisms could also have these kinds of qualitative experiences, although they may be very primitive. I mean a single bacterium may simply experience how it feels to be in a lower or higher concentration of glucose.

NT: Speaking of that makes me think about a paper that I've read by Lewis Wolpert [a developmental biologist in the UK], concerning his conjecture about the first sense of top and bottom, the first sense of axis in a single-celled organism; and it extends into the drosophila fruitfly and beyond, to mammals. He speaks about the place where this organism touched the floor of the primordial soup -- that point of contact is a kind of inner representation.

CE: And the idea of having an inner representation is of course very important to all of this study. We have the idea that you somehow make an internal scene.
You would like to model the world or make a kind of replica of the world by building up for yourself, or for the animal itself, a scene or a mental map or whatever you'd like to call it. And the problem there is that you can do models, for instance, by using robots or using simulated animals on a computer where you have these small creatures running around on your computer screen. And you also have for each creature a model of how this creature is representing its world, that is, its immediate environment and its neighbors and its predators, enemies and so on. But this is the functional aspect of the representation, this is a question of what the algorithm is or what the program is, what kind of physical tokens this organism is using in order to represent its world, in order to somehow gain knowledge about the world.

But this is just the outer side of the coin, because when we are talking about signs in the universe or signs that we recognize as signifying something for us, we are talking about a coin with two sides to it, the external side and the internal side. And the internal side is really what it feels like for the organism to have these experiences. That is what I see as the really hard problem; I'm not the only one who points to it as a really deep and hard problem to solve.

Part of the problem is that you're also interested in somehow bridging the gap between human minds and animal minds. But animal minds can come in various degrees of complexity. So you can easily imagine chimpanzees having a mind which in many respects is very similar to the human mind; then we can go down in complexity and say what about dogs, what about rats, what about mice, what about insects, bugs, do they have minds in some sense and how should we conceive of that. And this is the problem of the scale of continuity between what we as humans experience, because for our experience we have both the external and internal point of view.
When we as biologists do modelling we always capture these things from the external point of view.

NT: Now is that because of the rules of science, or is that just because it's the only thing that's possible?

CE: I think it's primarily because it's the only thing possible. I mean, we have to start somewhere. As scientists we try to figure things out in a precise way that we can describe to other persons, because if we couldn't do that, it wouldn't be science. We have to be able to describe precisely what we are doing, and when we want to describe precisely what a little animal in an environment is doing, I mean if we make this as a model, we have to describe precisely what the mental representations are in that animal. And we can do that for some simple models where we can, for instance, use neural networks in a physically embodied system like a little robot, a little box that you can follow. You can open it up and analyze its network, what it has learned. So this is the external side. But you are not sure, I mean you cannot know if this little artificial box is really experiencing anything in a qualitative way.

NT: Now just a comment about that. Because one of the notions I came across at ECAL this summer was that if you are modeling in an internal and an emergent way, even if it's initially algorithmic but then it develops properties that you haven't pre-determined, isn't there an additional problem of reading back what has emerged?

CE: That's a big problem, because all this research is really trying to get at emergent properties. For instance, you can have the little bug become better and better at acquiring knowledge so as to avoid obstacles in its environment and so on, finding food, remembering where the good food was and where the bad was. So you can conceive this network as evolving emergent representations. But it's very hard to avoid putting our own conceptions into that network.
So that when we look upon the network -- and this is really just a simple network of nodes, or you could call it neural cells, and the connections between them, which have a certain strength -- but when we look upon it we would like to make sense of this little network, of this little bug. When we do that, we come very close to making the fault of anthropomorphizing the little bug. This is really a hard problem, because of course we would like to figure out how this insect or little animal can go about doing its things, in a way that doesn't involve any mysterious principles. But we cannot know beforehand how this creature somehow chooses to configure its own world.

I can give you an example. The things which are important for us when we walk around in the sunshine are not the same things which are important for the bat when it flies from tree to tree in the dark, depending on its specific sonar systems. The bat's world is radically different from our world. Of course it's the same physical world, but we do not share the same Umwelt as the bat, its internal perceptions; the bat has a bat-specific Umwelt and we have a human-specific Umwelt [a term proposed by Jakob von Uexküll in the early twentieth century in the semiotics of biology, to describe how an organism lives in a subjective universe which is a niche within the environment]. So every time you have a new species you have not only its morphology and its anatomy, but you have also a species-specific Umwelt. That is, how is the world experienced by this animal? We have certain clues to figuring out what the bat's Umwelt is because we can do experiments; we can measure how good it is at using its sonar system in order to detect differences in its environment. But this is only an indirect way; again, only by this indirect way can we figure out its Umwelt. But we can never know how it feels for this bat to be such a creature. So this is the hard problem.
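[Ed.: Emmeche's "little bug" scenario -- an evolved controller that we can open up and inspect only from the external side -- can be caricatured in a few lines of Python. This is a hypothetical toy, not any system discussed at the conference: the "network" is reduced to two weights, one per obstacle sensor, and evolution is plain mutation plus truncation selection.]

```python
import random

random.seed(0)

def act(weights, sensors):
    # A minimal "neural" controller: a weighted sum of the two obstacle
    # sensors decides whether the bug steers right (+1) or left (-1).
    s = sum(w * x for w, x in zip(weights, sensors))
    return 1 if s > 0 else -1

def fitness(weights):
    # External, fully inspectable criterion: does the bug steer away
    # from an obstacle sensed on its left, and on its right?
    score = 0
    for obstacle_left in (True, False):
        sensors = (1.0, 0.0) if obstacle_left else (0.0, 1.0)
        move = act(weights, sensors)
        if (obstacle_left and move == 1) or (not obstacle_left and move == -1):
            score += 1
    return score  # 0, 1 or 2

def evolve(pop_size=30, generations=40):
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill the population with mutated copies of the better controllers.
        pop = survivors + [[w + random.gauss(0, 0.3) for w in random.choice(survivors)]
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # an adapted controller scores the maximum, 2
```

[Ed.: the point of the toy is that `best` is just two numbers we can read off -- the "external side" Emmeche describes -- while whether reading those numbers back as "representations" anthropomorphizes the bug, let alone what it might experience, is untouched by the printout.]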
NT: I guess the other question that I framed is a bit of a gross question, because it takes these ideas about subjectivity and kind of butts them up against the historical objectivity of science. Do you see any problem in that respect?

CE: Yes, I see that many of the attempts in the new fields of complexity studies, artificial life, cognitive science, are really trying to explain subjectivity. But the question is whether we possess methods of explanation which are really good enough at dealing with this specific subject matter, that is, subjectivity, experience, qualitative feeling, what it feels like to be something. Because traditionally in science we want to be very precise, we want to explain things in objective ways, we want to have a sort of behavioristic or external perspective. At this meeting here, there is a discussion going on about whether we can configure some internalist way of having emergent explanations. That is, when we understand a system, this understanding depends on the creation of new ideas or new concepts within our own minds. And I think this is a possibility we should be serious about, that we could enlarge the notion of natural science so that it depends not only on objective modes of thought but also somehow involves methods from the humanistic sciences, like the idea of empathy or the idea of trying to interpret something which makes sense if you can really figure out what this small world of another organism looks like. But of course it's very controversial as to whether we are still doing science when we do these things.

NT: It's a mix of the so-called hard sciences and soft sciences, isn't it?

CE: It is. And most of these scientists still want to keep it within the basic explanatory framework of hard science. This is also what you see in the Santa Fe complexity studies.
They are making attempts to do mathematics of complex systems, physics and biology of complex systems, that is, to use the methods of science in order to understand these very strange phenomena. Because it is very strange that such a thing as a human brain, which is simply just biological cells and their communication, can create mental phenomena. I mean, this is still a mystery.

NT: As Stuart Kauffman said yesterday, there is so much at stake, because you do need to install a sense of synthetic science, instead of a reductive and atomistic science, to even start this work. So I guess that's why the complexity theorists are so insistent on staying within the framework of science.

CE: That's right. In order to be synthetic, you have to have a notion of what this kind of whole you're trying to explain really is. And once you begin to describe that in detail, you somehow get involved in the reductionist method or the decomposition method. So I agree with many of the people from the complex systems sciences that we should try to combine the reductionist method of decomposition with a new way, by simulations or new constructions or maybe new ways of experiencing things, of synthesizing or developing the very objects that we want to explain. And this is what you see in Artificial Life: we create creatures. These are not creatures we can see in nature; these are totally artificial creatures. Basically they are built upon abstract ideas which are put into computer programs. But if we are lucky enough, you sometimes see within these models that new emergent and very surprising phenomena appear, for instance, the ability to self-reproduce, the ability to evolve and so on.

NT: Well I think that the concept of the autonomous agent as Stuart Kauffman proposes it and then, as Jesper Hoffmeyer says more metaphorically, the concept that ideas are autonomous agents, I think this is really rich in itself because it offers a way to start reintegrating things at different levels.
[Jesper Hoffmeyer is a researcher and professor in the Institute of Molecular Biology, University of Copenhagen].

CE: Yes, I think so. And there it's important to walk on two legs. That is, you have to have both the traditional scientific way of explaining it, and then on the other hand, you have to be very aware of the difference between your model and the real world, which is always much more complex than your model. Here I think of Jesper Hoffmeyer's ideas about agency as something which is very much dependent upon complex surfaces of semic interactions, that is, surfaces at which you have signification processes in the semiotic sense going on, and where the various surfaces really define the agents, I mean the inner side of the agents. This is one of the promising notions, I think, in the field.

---------------------

Arantza Etxeberria is in the Department of Logic and Philosophy of Science at the University of the Basque Country, in Donostia, Spain. She spoke about problems in Artificial Life practices, in particular how to build "embodied" agents in artificial worlds through the integration of physical properties of the model's materiality, and how to overcome adaptationism in the design of evolutionary models.

NT: You were saying in your talk that you think there's a place for art in the practices of artificial evolution.

AE: Well, I was talking about sighted evolution. I was making a distinction between the idea that evolution is blind, and sighted evolution. The thing is, if you take natural selection in any context, it is very difficult to make it really blind. We assume that it's blind, but even biologists doing models have a rough idea of what the fitness is. That's assumed to be exerted by the environment, but it's thought out in advance and then seen as just from the outside. When we want to do an artificial evolution, our biggest problem is how not to intervene, how to get the system to use its own proper fitness function.
I think that the most interesting solution for this at the moment is coevolution, because it's so difficult to get the agents interacting with an environment so that they all have a common history, so that the organisms or the simulated agents and the environment will inform each other. Lots of people have tried to do this by inducing coevolution between two kinds of agents. Sometimes it will be predator and prey, or mates for sexual recombination. And it's been very interesting to see the use of games to evolve agents that are playing games with some competitor, so that all the conditions of the game change when the competitors change their strategies, so that the strategies are not fixed because you always have to account for what the other is going to do. Both are evolving; both are going to try to make it more difficult for the other. So that's sort of a common history.

NT: But that just makes me think, before we cycle back to where the art comes in, that there seem to always be incredible presuppositions built into the models; for example, those kinds of models usually assume competition. At ECAL, there was a really interesting paper about the emergence of cooperation from a competition model, without game theory, without preordained rules. One is always backtracking to what the basic assumptions are. I know, and you would probably agree, that in the evolutionary computation world, which you call artificial evolution but we're talking about the same thing, in that world there's a generally accepted metaphorical use of Darwinian theory.

AE: I think that natural selection has a big component of competition. And for example, other ideas of biology, like self-organization, are much more into cooperation. But that's very difficult to achieve if you start out thinking about Darwinism, I think.

NT: You're coming from two different poles there, from the forces of evolution on the one hand and internal self-organization on the other.
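[Ed.: the coevolutionary setup Etxeberria describes -- two populations whose fitness is defined only relative to each other, so that neither faces a fixed, externally supplied fitness function -- can be sketched as a toy. This is a hypothetical caricature, not any model from the conference: predator and prey "strategies" are reduced to single numbers on the unit interval, and each individual is scored only against a sample of the opposing population.]

```python
import random

random.seed(1)

def payoff(predator, prey_pos):
    # Pursuit on the unit interval: the predator wants to match the prey's
    # position; the prey wants to be far away. One number stands in for a
    # whole behavioural strategy.
    return 1.0 - abs(predator - prey_pos)

def breed(pop, fit):
    # Keep the fitter half, refill with mutated copies clipped to [0, 1].
    ranked = sorted(range(len(pop)), key=lambda i: fit[i], reverse=True)
    keep = [pop[i] for i in ranked[: len(pop) // 2]]
    children = [min(1.0, max(0.0, random.choice(keep) + random.gauss(0, 0.05)))
                for _ in range(len(pop) - len(keep))]
    return keep + children

def coevolve(pop_size=20, generations=50, sample=5):
    predators = [random.random() for _ in range(pop_size)]
    prey = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness is purely relative: each side is scored against a random
        # sample of the other, so the "fitness landscape" shifts whenever
        # the opposing population changes its strategies.
        pred_fit = {i: sum(payoff(p, q) for q in random.sample(prey, sample))
                    for i, p in enumerate(predators)}
        prey_fit = {i: sum(1.0 - payoff(p, q) for p in random.sample(predators, sample))
                    for i, q in enumerate(prey)}
        predators = breed(predators, pred_fit)
        prey = breed(prey, prey_fit)
    return predators, prey

predators, prey = coevolve()
print(len(predators), len(prey))
```

[Ed.: nothing in the code says where the prey "should" be; the prey population keeps moving to escape and the predators keep tracking it, which is the shared history, the arms race, that replaces a designer-imposed fitness function.]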
So you're saying that within Alife, coevolution is an interesting modeling strategy. Then you've proposed that there's a potential artistic strategy using intervention in the evolution process.

AE: Well, no, I was saying two different things. I was saying that artificial evolution has been used with several purposes in mind. I think that's very important; in fact it's a very pluralistic field. So with respect to sighted evolution, I was saying that some artists have used models of evolution in which the selection is actually done by the modeler, very directly. And those are interesting models. The final product, in what I was referring to as art, is different. But I think that it's very important to take into account that Artificial Life models can have very different purposes, maybe not only being a contribution to theoretical biology but also, for example, to get art into the picture. And not only art, but also models for understanding education; some people have worked on that.

I think that very broadly we could say Artificial Life productions are either models that try to grasp the nature of certain phenomena in the world, or something that I would call instantiations. That has to do with art; that is what I would call poetic science, in a very appreciative way, because a lot of people say poetic science in a very disparaging way. And I think that it also has to do with the first goal of Artificial Life, of exploring life-as-it-could-be. Because in fact we are not trying to model anything, but we are trying to understand how certain phenomena happen, through artificial models, and that kind of understanding is either scientific, by creating new theory or new models, or even artistic.

NT: But do you think it's interesting artistically because art can always be interested in new ways of creating models of life?
Or do you feel it's because these biological issues, or scientific issues in a larger sense, are the current issues of our time, in the way they shape the material world through biotechnologies, or reproductive technologies? Can we dig a little bit further at why you feel it's so interesting for art?

AE: Well, it's maybe neither of the things you said. From the very beginning, there has been a very big discussion in Artificial Life as to whether the models that people were doing were actually life or were not life. That's the big discussion between what they call "strong artificial life" and "weak artificial life." Well, I think that discussion is sort of stupid, or nonsense. I think that all of them are productions. But it's very interesting to analyze the purposes we have when we are building the models. And those purposes are, of course, understanding phenomena which are complex and for which we don't have good analytical scientific models.

My thing is that, in my opinion, it's very difficult to get good reproductions of life in simulations. I don't believe that computational models can reproduce life, that you can produce something and yet it's living. But you can get some fantasia, as Marcel Danesi [Professor of Italian and Semiotics at U. of T.] was saying yesterday. You can have an understanding through them, and I think that that pluralistic way of understanding models, according to the purposes of the modeler and the kinds of things they want to achieve through the models, can produce an increasing ontology in the world. Actually, artificial systems have given us more ontology, more things that we have to analyze so as to understand what they are. So now there are certain artifacts we don't know. Art can enter into that picture because these are things we interpret once and need to interpret again, which is a source of creativity. Maybe that creativity is also linked with understanding.
I really think that there are very different ways to access these new phenomena, this new ontology that is being created. And it's important to be very pluralistic and leave aside the discussions about whether what we're doing is really life or not. Because that's not going to take us anywhere.

NT: That's a really interesting way of putting it. Within the art practice that I'm familiar with, a recent phenomenon was incredible fascination with media production. Deconstruction is so tied up with that idea, with seeing how all of the real is already mediated. Artists take that up, consider it and communicate it. If we're now involved, as you say, in a growth of the ontological, or a growth of artifacts within the technoscientifically-mediated real, of course artists would also be engaged with this next level of mediation of the natural.

---
# distributed via nettime-l : no commercial use without permission
# <nettime> is a closed moderated mailinglist for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: majordomo@icf.de and "info nettime" in the msg body
# URL: http://www.desk.nl/~nettime/  contact: nettime-owner@icf.de