The concept of complexity seems to be simple and elusive at the same time. Everyone understands what we mean when we call an object "complex," but as soon as we try to define this attribute clearly and distinctly, we encounter many difficulties.
The first difficulty arises when we want to specify which aspect of an object we refer to when calling it "complex." Does this mean that the object has a rich structure that cannot be described easily? Or that it fulfills a difficult function? Or that the object is intricate to generate?
Those three aspects of an object, namely its structure, its function, and its generative history, need not be equivalent with respect to complexity. Let us show this with some examples.
- Most intelligence tests include sequences of numbers that appear to be irregular (i.e., to have a complex structure). The test taker's task is to find the rather simple mathematical formula that produced the sequence in question (a toy sketch of such a sequence follows this list). Here, we have a complex structure but a simple generative process. In addition, this generative process is performed to fulfill a complex function, namely, to contribute to the quantification of intelligence, which is a much-debated psychological concept.
- A geometrically simple artifact like a parabolic mirror for an optical telescope is intricate to make; for example, it requires several time-consuming polishing steps. Here, we have a simple structure but a complex generative process. Moreover, the function of such a mirror is, from a physical point of view, rather simple: it merely has to focus the incoming rays of light.
- Locks, too, have a simple function, namely, to hinder burglars from breaking into a house. But in order to fulfill this function, they must have a complex inner structure so that the lock is not too easy to pick.
- Mathematical set theory has a rather simple structure, which can be defined by a few axioms, but it is used to fulfill complex functions, for example, to talk in a clear way about very abstract philosophical problems, such as: Are there different kinds of infinity?
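To make the first example concrete, here is a minimal Python sketch: the formula is invented for illustration (it is not taken from any actual test), but it shows how a one-line generative rule can produce a sequence that looks irregular at first glance.

```python
# A simple generative process (one short formula) whose output has a
# seemingly irregular structure. The formula is an invented example.

def term(n: int) -> int:
    # Quadratic trend plus an alternating correction.
    return n * n + (-1) ** n * n

print([term(n) for n in range(1, 9)])
# [0, 6, 6, 20, 20, 42, 42, 72]
```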
A common feature of all these examples can now be stated: The more space and time we need (or seem to need) to describe the structure, the function, or the generative history of an object, the more complex this object is with respect to its structure, its function, or its generative history.
The next difficulty lies in finding a good quantitative characterization of the relation between, on the one hand, the time and space needed to describe an object and, on the other hand, the degree of complexity we ascribe to it. Such a characterization must be as abstract as necessary in order to be applicable in principle to any kind of object, but it must also be as concrete as possible in order to be practically applicable to specific objects, their structure, their function, and their generative history.
The best available proposal for an abstract conceptual framework into which all these aspects can be integrated, so that the complexity of a specific object can be concretely measured, is based on the idea of computation. By "computation," we understand an ordered sequence of mathematically describable operations that is effective for solving a problem and that can be executed by a computer if it is formulated as an algorithm in a programming language. Summing two ratios is just as much a computation in this sense as the meteorological modeling of tomorrow's weather.
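As a minimal instance of the first half of that comparison, here is the sum of two ratios written as a (trivially small) program; the sketch uses Python's standard fractions module for exact arithmetic.

```python
from fractions import Fraction

# An ordered sequence of mathematically describable operations that a
# computer can execute: a computation in the sense defined above.
print(Fraction(1, 2) + Fraction(1, 3))  # 5/6
```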
The computational problem to be solved in our complexity-theoretic context is to find an adequate description of the structure, the function, or the generative history of some object. Since the "natural" language of a computer is coded in binary form, "description" from now on refers only to strings of zeros and ones stored in the computer.
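A small illustration of this convention, assuming an ordinary UTF-8 text encoding (any other binary code would serve): a textual description is mapped to the bit string that the computer actually stores.

```python
# Turn a textual description into the string of zeros and ones that a
# computer actually stores (here via the UTF-8 encoding of the text).
description = "a parabolic mirror"
bits = "".join(f"{byte:08b}" for byte in description.encode("utf-8"))
print(bits[:32])  # 01100001001000000111000001100001, i.e. 'a', ' ', 'p', 'a'
```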
To define complexity measures on the basis of the idea of computation, we have to look at the physical resources a computation requires to solve our problem. The first such resource that can be used for measuring complexity is the minimal time a program needs to compute a solution (i.e., to output a description of a chosen object). An important question arises in this context: How much does the running time of a computation depend on the length of the problem's description? Can programs be grouped into classes according to how their running time grows with that length? The theory of computational complexity, a central part of computer science, tackles this problem.
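A toy sketch of this dependence, with two invented algorithms over a bit string, one linear and one quadratic in the input length; basic operations are counted instead of measuring wall-clock time, so the two growth classes are directly visible.

```python
def linear_scan(bits: str) -> int:
    # One basic operation per symbol: running time grows like n.
    ops = 0
    for _ in bits:
        ops += 1
    return ops

def pairwise_compare(bits: str) -> int:
    # One basic operation per ordered pair of symbols: grows like n * n.
    ops = 0
    for _ in bits:
        for _ in bits:
            ops += 1
    return ops

for n in (10, 100, 1000):
    s = "01" * (n // 2)
    print(n, linear_scan(s), pairwise_compare(s))
# 10 10 100
# 100 100 10000
# 1000 1000 1000000
```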
The theory of algorithmic complexity focuses not on time but on space, namely, on the minimal computer storage required for a program that can solve our problem (i.e., that outputs a description of a chosen object). The complexity of the object is then defined as the number of bits of the shortest such program: the longer this shortest program, the more complex the object. Of course, the concrete value also depends on the type of computer on which the program is run.
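The shortest program for a given output is uncomputable in general, so the following sketch uses a standard compressor (Python's zlib) as a crude stand-in: the compressed length is an upper bound on the length of a program that reproduces the string, and it already separates a string generated by a tiny rule from a random-looking one.

```python
import random
import zlib

random.seed(0)  # fixed seed so the example is reproducible
regular = "01" * 5000                                           # tiny generative rule
irregular = "".join(random.choice("01") for _ in range(10000))  # random-looking

# Compressed length serves as a rough upper bound on description length.
print(len(zlib.compress(regular.encode())))    # small: a short program suffices
print(len(zlib.compress(irregular.encode())))  # much larger: no short rule found
```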
Time and space needed for a computation are taken into account together to define the algorithmic depth of an object (Bennett's "logical depth"). This complexity measure is defined as the average running time of all programs that output a description of the chosen object, whereby the contribution of each program to this average is weighted in inverse exponential proportion to its length, so that short programs count for much more than long ones.
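In symbols (our notation, not the source's, assuming the standard weighting of a program p by 2^{-|p|}, where |p| is p's length in bits, U is a fixed universal computer, and t(p) is the running time of p):

$$
D(x) \;=\; \frac{\sum_{p\,:\,U(p)=x} 2^{-|p|}\, t(p)}{\sum_{p\,:\,U(p)=x} 2^{-|p|}}
$$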
The theory of complexity analyzes the above-mentioned and many more measures of complexity. It belongs, like cybernetics, information theory, and semiotics, to the structural sciences. These sciences try, on the one hand, to construct formal systems (like all possible machines in cybernetics and all possible codes in semiotics) without taking into account the specific nature of the objects that might realize those systems. On the other hand, structural sciences have a clear orientation toward the application of their models to a wide variety of empirical phenomena. It is therefore not surprising to find mathematicians (like Kolmogorov), information scientists (like Chaitin), computer scientists (like Bennett), physicists (like Murray Gell-Mann), biologists (like Bernd-Olaf Küppers), cognitive scientists (like Allen Newell), economists (like Herbert A. Simon), and social scientists (like Niklas Luhmann) among those who take a strong interest in complexity. These scientists have contributed to the foundations of the theory of complexity, which today is a network of formal models that help to describe the various aspects of complexity in different empirical contexts.
References:
- Davis, M. (1982). Computability and unsolvability. New York: Dover.
- Gell-Mann, M. (1994). The quark and the jaguar: Adventures in the simple and the complex. New York: W.H. Freeman.
- Kauffman, S. A. (1995). At home in the universe: The search for laws of self-organization and complexity. New York: Oxford University Press.