
Epistemological Emergence

Markos Sellis, Student ID 017/11

Course: Philosophy of Science

Instructor: S. Psillos

In an attempt to cleanse the concept of emergentism of implications such as the controversial notion of downward causation, a new form of emergence has, well, emerged. Epistemological emergence makes no ontological claims, limiting the emergent qualities to an epistemological status. There have been various attempts to refine this concept further, focusing on characteristics of the cognizer such as the feeling of surprise (Ronald et al., 1999) or the inability to grasp the complexity of some phenomena except through the use of computer simulations (Bedau, 1997). However, the subjective nature of surprise makes it of little use, if any, in this context. The concept of simulation has also been strongly criticized, especially with respect to the extent of its purported novel epistemological implications (Frigg & Reiss, 2009). Nevertheless, these versions of emergence, and especially Bedau's concept of "weak emergence", seem to point in a direction that completely avoids downward causation. This alone is a reason to consider their definitions and to pursue the question of whether there is a way to strengthen the concept of epistemological emergence without letting it dissolve into what Nagel's robust reductionism has already described (Nagel, 1998).

Before attempting to answer this question, we must first outline what it means for a property to emerge in an epistemological sense and how this can lead to ontological assumptions. Note that throughout the text we will keep this metaphysical use of the word "ontological" in line with O'Connor & Wong (2012). This metaphysical ontology is not to be confused with the ontology of sciences such as biology or sociology: adding a fact to those ontologies is a matter of whether there is a genuine fact that makes true the propositions of the relevant domain (Psillos, 2009). From a factualist point of view, as Psillos puts it, "one can be a realist about a number of domains, without necessarily taking stance on independent metaphysical issues". This optimistic attitude has more to do with the epistemic thesis (Psillos, 1999) of a realist than with her metaphysical assumptions.

Having said that, let us resume our attempt to distinguish epistemological from ontological emergence. Few people would doubt that we are beings with limited cognitive capacities. These limitations may be straightforward, such as the 7±2 item limit of our working memory (Miller, 1956), or more general limitations of our decision processes, such as those that arise from the heuristics we employ when reasoning (Fiske & Taylor, 2008). The numerous causal chains leading from interactions between too many or too complex entities to the rise of apparently new qualities may overwhelm our cognitive limits. We then tend to attribute to these qualities a new ontological status. Let us not forget that unraveling the causal chains behind a phenomenon is one aspect of explanation. If, for some reason, we fail to do so, the entity of interest remains inexplicable, and the only way to elaborate on it is to add it to our basic ontology. Some philosophers have not hesitated to do exactly this in the case of emergent phenomena.

However, until now, scientific inquiry has brought forward no such exotic emergent causal factors.

On the contrary, whenever these novel causes were assumed, they were eventually explained away.

The “vital force” is one famous example: The suggestion that life is simply made of ordinary matter

seemed implausible. Of course, we now know that the word "simply" in the last sentence could not have been a more inappropriate bedfellow for either "matter" or "made". The complexity of even a single-cell organism is immense: scientists consider phenomena such as the way even tiny proteins fold to be extremely complicated.

Epistemological emergence seems more plausible if we keep in perspective the complexity involved

in even the smallest organisms.

One of the first examples used to illustrate the concept of emergence is that of chemical composition: a new substance is produced that seems to possess properties that could not have been predicted, no matter how hard we studied the initial components. The first to work out a comprehensive emergentist picture were the British Emergentists (O'Connor & Wong, 2012), a tradition that started (Beckermann, 1992) with John Stuart Mill's (1875) book "A System of Logic". In this book we read:

The chemical combination of two substances produces, as is well known, a third substance with properties different from those of either of the two substances separately, or both of them taken together. Not a trace of the properties of hydrogen or of oxygen is observable in those of their compound, water.

Mill used this simple example to illustrate the emergence of new properties; we may, however, think of it as an illustration of how it may appear to us that new properties have emerged. Indeed, at first glance it seems that hydrogen and oxygen are flammable gases, whereas water is a transparent liquid that tends to extinguish fire. It sounds rather obvious that new properties have indeed emerged. On second thought, however, water is not a liquid in the majority of the places where it can be found in our solar system, whereas oxygen and hydrogen can be found abundantly in liquid or even more condensed forms. Furthermore, although Martin Luther King's phrase "there was a certain kind of fire that no water could put out" was used as a metaphor (King, 1968), it is in fact quite true: water can cause some types of fire to ignite, since it can act as a catalyst. Even the transparency of water is relative: should our optical system function in other wavelengths, such as the ultraviolet part of the spectrum, instead of our familiar "visual window", water would appear to us no more transparent than tar.

The above remarks are not meant to postulate some sort of phenomenological argument in support of our incapacity to determine the "actual world". Rather, they are a naive attempt to illustrate how epistemological limitations may lead to ontological claims. Their naivety rests on the fact that we considered the chemical substances in a commonsensical, non-scientific manner. We must stress that the whole discussion on emergence is conducted on the assumption that the answer to the question "emergent for whom?" is not "a lay person" but the scientific and philosophical community. In fact, we brought forward these non-scientific views exactly because the members of these communities are of course "simply" humans, in the sense that they sometimes argue on the basis of their ordinary human experience (although perhaps some might indignantly frown upon this reminder). There is abundant evidence demonstrating how scientists, too, can be deceived by the biases and heuristics that govern our basic modes of reasoning (Kahneman & Tversky, 1973). The relevance of this approach may become more obvious if we consider that some of the arguments supporting the ontological emergence of consciousness are based on the "mysterious first-person perspective", that is, on our subjective impressions of ourselves and the difficulty of connecting these self-referential impressions with the "third-person" scientific perspective.

After this quick look at Mill's water, let us add some more water to the mill by considering three familiar but complex entities that most people would definitely not call emergent: the Sun, my cat, or a meteorite. I am far from knowing every detail about their composition and function. However, their scientific explanation seems the best available, since it is coherent with my previous knowledge and assumptions. Therefore I tend not to attribute to them a completely new ontological status (either as entities or as properties). During human history we did exactly this with these entities (except for my cat, although other cats were involved) by deifying them, that is, by attributing to them a unique and basic ontological property that was not to be further causally explained. There is a long tradition of this type of ontologically inflationary explanation. However, since all indications found through scientific inquiry seem, at least so far, to be pointing in the opposite direction, perhaps we should resist the tendency to follow these deeply rooted pre-scientific traditions that offer explanation through ontological inflation. When phenomena of extraordinary and dense complexity are involved, the causal-chain paths towards the truth may seem barren, and our only escape towards explanation may indeed be to follow these ancient paths. It is in such cases that cautioning against our spontaneous urge may be of some value.

So far we have mainly analyzed how an apparently emergent quality can come to be attributed to ontological emergence. However, there have been only indirect hints at what epistemological emergence is. As mentioned before, there have been attempts to define the concept with accuracy. Ronald et al. (1999) offer the following definition from the perspective of artificial life: "The language of design L1 and the language of observation L2 are distinct, and the causal link between the elementary interactions programmed in L1 and the behaviors observed in L2 is non-obvious to the observer - who therefore experiences surprise". This is a specific and technical definition that does not seem to have broader applications: when we compose an already known substance X from the elements Z and Y, we are not caught by surprise every time we face the result of our experiment. We expected X, and we also expected it to possess some form of emergent properties P(X). They are emergent because, no matter how much we study the properties of Z and Y, we cannot find an explanation of how these properties came into being. It does not therefore come as a surprise that this definition lacks the generality required to adequately address the concept of emergence.

A more concrete definition is given by Bedau: "Macrostate P of S with microdynamic D is weakly

emergent iff P can be derived from D and S's external conditions but only by simulation" (Bedau,

1997). He further explains the notion of simulation as follows:

Although perhaps unfamiliar, the idea of a macrostate being derived "by simulation" is

straightforward and natural. Given a system's initial condition and the sequence of all other

external conditions, the system's microdynamic completely determines each successive microstate

of the system. To simulate the system one iterates its microdynamic, given a contingent stream of

external conditions as input. Since the macrostate P is a structural property constituted out of the

system's microstates, the external conditions and the microdynamic completely determine whether P

materializes at any stage in the simulation. By simulating the system in this way one can derive

from the microdynamic plus the external conditions whether P obtains at any given time after the

initial condition.

In order to clarify further what is involved in using a simulation, it would be useful to describe the alternative (non-simulative) way of calculating the macrostate P. The author does not offer an explicit description of this; however, the word "iterates" seems to play an important role in the concept of simulation. If, for example, the macrostate P can be expressed as a function of time, P(t), then we could represent the iteration as P(t+1) = F(P(t)), where F is a function expressing the calculations that must be performed in order to evaluate the state P at moment t+1, given the state P at the previous moment t. The fact that time is expressed in a discrete rather than the familiar continuous form may appear awkward at first; however, the interval between t and t+1 is arbitrary: we can make it as small as is required to model the phenomenon at the desired level of detail. This recursive formula means, in practice, that in order to calculate the macrostate P at any given moment we first need to calculate all the states up to that moment. There is no shortcut available; we need to go through all the moments up to the specific time of interest. Computer simulations like Conway's Game of Life proceed in a similar manner: each output state P(t+1) is calculated from the previous state P(t). In general, there is no feasible way to compute any state P directly without taking all previous steps into account.
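As a concrete illustration of what derivation "by simulation" amounts to in this iterative sense, the following minimal Python sketch (my own illustration, not code taken from Bedau) steps a small Game of Life grid forward one microdynamic application at a time; whether a given macrostate (here, simply the number of live cells) obtains at some later time can only be settled by computing every intermediate state. The 8x8 toroidal grid, the glider pattern, and the choice of 50 steps are arbitrary.

    # A minimal sketch of "derivation by simulation": each macrostate is reached
    # only by iterating the microdynamic step by step (Conway's Game of Life on a
    # small toroidal grid; the glider pattern is just an illustrative choice).

    def step(grid):
        """Apply the Game of Life microdynamic once, returning the next grid."""
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # Count the eight neighbours, wrapping around the grid edges.
                live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if (dr, dc) != (0, 0))
                nxt[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
        return nxt

    def state_at(initial, t):
        """The state at time t is obtained only by computing every prior state."""
        grid = initial
        for _ in range(t):
            grid = step(grid)
        return grid

    # A glider on an 8x8 grid; the macrostate at, say, t = 50 cannot in general
    # be read off without running all 50 steps.
    glider = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        glider[r][c] = 1
    print(sum(map(sum, state_at(glider, 50))))  # number of live cells after 50 steps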

The alternative, non-simulative method would be written as P(t) = G(t) (with the restriction that G(t) cannot be expressed as a function of P). This equation does not involve the calculation of any previous states: we can compute the state P directly at any given moment t. In other words, it is an analytical expression. It provides us with a shortcut that enables us to evaluate P directly, no matter how far ahead in time it lies.
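The contrast between the two forms can be made concrete with a deliberately trivial Python sketch of my own, using geometric growth as a hypothetical case in which both a recursive rule F and an analytical shortcut G happen to be available; the rate R and the initial value P0 are arbitrary illustrative choices.

    # A toy contrast between the two methods of computing a macrostate P.

    R, P0 = 1.05, 100.0          # illustrative growth rate and initial value

    def p_by_iteration(t):
        """Simulative form P(t+1) = F(P(t)): every intermediate state is computed."""
        p = P0
        for _ in range(t):
            p = R * p            # one application of F
        return p

    def p_analytic(t):
        """Non-simulative form P(t) = G(t): evaluated directly at any moment t."""
        return P0 * R ** t

    # The two agree (up to floating-point rounding), but only the analytical
    # expression "jumps forward in time" at constant cost, however large t is.
    print(p_by_iteration(1000), p_analytic(1000))

The point of the sketch is only that G collapses the entire history of intermediate states into a single evaluation; it is precisely this kind of shortcut that is missing in the weakly emergent case.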

Let us consider what this means for the concept of simulation. It seems to be our most expensive method in terms of computing time. It is also our most generic method: one could, for example, use numerical methods even in the case of a simple pendulum, ignoring the fact that an analytical expression is available. In other words, it is our least efficient method, in the sense that we resort to it when no shortcut to the solution (in the form of an analytical expression) is available.
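The pendulum remark can be illustrated with a small sketch of my own, in which the small-angle closed form stands in for the available analytical expression and the parameters are arbitrary: the same motion can be obtained either by stepping the equation of motion numerically, moment by moment, or by evaluating the closed form directly at the time of interest.

    import math

    # Simple pendulum: theta'' = -(g/L) * sin(theta), small initial angle.
    G_ACC, L = 9.81, 1.0
    THETA0 = 0.1                 # initial angle in radians, released at rest

    def theta_numerical(t, dt=1e-4):
        """Iterative (simulation-style) solution: every step is computed."""
        theta, omega = THETA0, 0.0
        for _ in range(int(t / dt)):
            omega += -(G_ACC / L) * math.sin(theta) * dt   # semi-implicit Euler step
            theta += omega * dt
        return theta

    def theta_analytic(t):
        """Closed-form shortcut (small-angle approximation), evaluated directly at t."""
        return THETA0 * math.cos(math.sqrt(G_ACC / L) * t)

    # For a small initial angle the two agree closely, but only the second
    # costs the same no matter how far ahead in time t lies.
    print(theta_numerical(5.0), theta_analytic(5.0))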

Automated calculations can now be performed by computers, and so we tend to employ such iterative methods more and more often. In the study of complex phenomena, where analytic expressions are a rare companion to the scientist, their use is invaluable. They remain, however, our least efficient method in terms of the ability to jump forward in time given finite resources. If the equation expresses a scientific theory, or, as Duhem might have put it, if it is viewed as the voice of our prophet, then this type of prophet speaks about the future in the slowest (yet most impressive) voice among our prophets. We thus seem to be moving away from a notion of simulation with novel, distinct epistemological properties, capable of carrying the weight of the definition of weak emergence. However, we should also take into consideration that other approaches have moved in exactly the opposite direction.

Davies (2004), for example, has gone so far as to suggest that there is an upper limit to the computing capacity of our universe and, furthermore, he claims to have computed this limit. He found it to be approximately equal to the amount of computation required to calculate the possible ways in which a protein can fold. Davies then conveniently concludes that this is the epistemological limit that renders something weakly emergent, and in particular that it renders life on Earth weakly emergent in a global, literally universal manner. This is a rather bold claim, based on a line of thought which contrasts Laplace's omniscient Demon, able to calculate everything in the universe, with Landauer's definition of a physical law:

“The calculative process, just like the measurement process, is subject to some limitations. A

sensible theory of physics must respect these limitations, and should not invoke calculative routines

that in fact cannot be carried out.”

He then assumes that the Demon "would be obliged to make do with the computational resources of the universe", since it inhabits this universe rather than some Platonic world. The assumption that we already possess the knowledge needed to define an absolute upper limit to the computational capabilities of the universe is a somewhat daring one. A Demon that obeys a rule of this sort does not sound quite demon-like; on the other hand, the coincidence of the two calculated results (protein folding and universe capacity) does seem a demonic coincidence.

There is an interesting example that may shed more light on the concept of simulation by comparing the two methods (simulative and non-simulative), focusing not on their purported epistemological differences but rather on their similarities. It was first brought forward by Srivastava et al. (1990), as mentioned in Frigg and Reiss (2009):

Consider the so-called double pendulum, a pendulum in which the bob is fixed not to a string but a

hinge-like ‘arm’ consisting of two branches that are connected to each other with an axis and that

enclose an angle α. If we block the hinge (and thereby keep α fixed) we have a normal pendulum

whose equation is analytically solvable (even if we take friction into account). If we remove this

constraint and let the angle between the two branches vary, then the equations of motion become

non-integrable and solutions have to be calculated numerically. Does this change our

understanding of how the equation describing the double pendulum relates to the world?

The authors answer negatively and conclude that, although computer simulations may pose

interesting epistemological problems, they have a lot in common with more general problems that

arise from modeling and scientific experimenting. Focusing on these aspects of simulation instead

of “convincing ourselves that simulations are unlike anything else” would be much more beneficial.

This conclusion also seems to further weaken Bedau's definition of weak emergence.
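To make the quoted contrast concrete, here is a minimal numerical sketch of my own (not code from Srivastava et al. or Frigg and Reiss) of what "calculated numerically" amounts to for the unblocked double pendulum: the standard point-mass equations of motion are simply stepped forward in time, in exactly the recursive P(t+1) = F(P(t)) manner discussed earlier. The masses, arm lengths, time step, and initial angles are arbitrary choices, and the plain Euler stepping is used only for illustration (it accumulates error over long runs).

    import math

    # Double pendulum integrated by brute-force stepping; no closed form is available.
    G_ACC = 9.81              # gravitational acceleration (m/s^2)
    M1 = M2 = 1.0             # bob masses (kg)
    L1 = L2 = 1.0             # arm lengths (m)

    def accelerations(th1, th2, w1, w2):
        """Angular accelerations (standard point-mass double-pendulum equations)."""
        delta = th1 - th2
        den = 2 * M1 + M2 - M2 * math.cos(2 * th1 - 2 * th2)
        a1 = (-G_ACC * (2 * M1 + M2) * math.sin(th1)
              - M2 * G_ACC * math.sin(th1 - 2 * th2)
              - 2 * math.sin(delta) * M2
              * (w2 ** 2 * L2 + w1 ** 2 * L1 * math.cos(delta))) / (L1 * den)
        a2 = (2 * math.sin(delta)
              * (w1 ** 2 * L1 * (M1 + M2) + G_ACC * (M1 + M2) * math.cos(th1)
                 + w2 ** 2 * L2 * M2 * math.cos(delta))) / (L2 * den)
        return a1, a2

    def simulate(th1, th2, steps=10000, dt=0.001):
        """Step the system forward; every intermediate state must be computed."""
        w1 = w2 = 0.0
        for _ in range(steps):
            a1, a2 = accelerations(th1, th2, w1, w2)
            w1, w2 = w1 + a1 * dt, w2 + a2 * dt       # semi-implicit Euler step
            th1, th2 = th1 + w1 * dt, th2 + w2 * dt
        return th1, th2

    # Arm angles after 10 seconds, starting with both arms horizontal and at rest.
    print(simulate(math.pi / 2, math.pi / 2))

Blocking the hinge would reduce this to the simple pendulum sketched earlier, whose closed-form (small-angle) solution bypasses the stepping altogether; yet, as Frigg and Reiss note, nothing about how the equations relate to the world changes when we are forced to take the numerical route.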

We use simulations in order to study emergent phenomena because they are complex phenomena. Since no method is available that would yield analytical solutions to such problems, we have no option but to use this slow yet effective one. In large engineering projects, such as the construction of airplanes, the use of simulations is again mandatory, although no emergent phenomena are involved; there are, however, extremely complex problems that can be faced in no other way. We also construct prototypes and models in order to confront these challenges, that is, to simulate complex phenomena.

But what, then, could be an appropriate definition of an epistemologically emergent property? Could it be that, while emergence struggled to rid itself of its ontological weight, it evaporated completely? After all, Nagel's (1998) robust account of reductionism seems to take the epistemological criticism into consideration and, in a sense, admits the epistemological emergence of properties that cannot be reduced to their constituents, at least not without losing their explanatory strength. Without a more concrete definition of emergence this question remains open. The growing number of publications on the subject reflects our eagerness to understand complex emergent phenomena. Among these we find long-standing questions concerning the nature of human intelligence and of life itself. We should not be intimidated by the fact that no straightforward answer has yet emerged.

References

Beckermann, A., Flohr, H., & Kim, J. (Eds.) (1992). Emergence or Reduction? Essays on the Prospects of Nonreductive Physicalism. Berlin: Walter de Gruyter.

Bedau, M. A. (1997). Weak emergence. In J. Tomberlin (Ed.), Philosophical Perspectives 11: Mind, Causation, and World (pp. 375–399). Blackwell.

Davies, P. (2004). Emergent biological principles and the computational properties of the universe. Complexity, 10(2).

Fiske, S. T., & Taylor, S. E. (2008). Social Cognition: From Brains to Culture. Boston: McGraw-Hill Higher Education.

Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.

King, M. L. (1968). I've Been to the Mountaintop. [online] Available at: http://mlk-kpp01.stanford.edu/index.php/encyclopedia/documentsentry/ive_been_to_the_mountaintop/ [Accessed: 20 Sep 2012]

Mill, J. S. (1875). A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation. London: Longmans, Green, Reader, and Dyer.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.

Nagel, T. (1998). Reductionism and antireductionism. Novartis Foundation Symposium.

O'Connor, T., & Wong, H. Y. (2012). Emergent properties. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2012 Edition). URL = <http://plato.stanford.edu/archives/spr2012/entries/properties-emergent/>

Psillos, S. (1999). Scientific Realism: How Science Tracks Truth. London: Routledge.

Psillos, S. (2009). Knowing the Structure of Nature. Palgrave Macmillan.

Ronald, E. M. A., Sipper, M., & Capcarrere, S. (1999). Testing for emergence in artificial life. In D. Floreano, J.-D. Nicoud, & F. Mondada (Eds.), Advances in Artificial Life: Proceedings of the 5th European Conference on Artificial Life (ECAL'99), Lecture Notes in Artificial Intelligence, vol. 1674 (pp. 13–20). Heidelberg: Springer-Verlag.