(Thanks to Troy Shinbrot for contributing to this answer)
Complex systems are spatially and/or temporally extended nonlinear systems characterized by collective properties associated with the system as a whole--and that are different from the characteristic behaviors of the constituent parts.
While chaos is the study of how simple systems can generate complicated behavior, complexity is the study of how complicated systems can generate simple behavior. An example of complexity is the synchronization of biological systems ranging from fireflies to neurons (e.g. Matthews, P.C., Mirollo, R.E. & Strogatz, S.H., "Dynamics of a large system of coupled nonlinear oscillators," Physica 52D (1991) 293-331). In these problems, many individual systems conspire to produce a single collective rhythm.
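The emergence of a collective rhythm can be illustrated with the Kuramoto mean-field model of coupled phase oscillators, a standard simplification (not the specific model of the cited paper); the coupling strengths, oscillator count, and step size below are arbitrary choices for illustration:

```python
import math
import random

def kuramoto_order(K, N=100, steps=4000, dt=0.02, seed=1):
    """Euler-integrate N Kuramoto phase oscillators with coupling K and
    return the final order parameter r = |mean(exp(i*theta))|; r near 1
    means the population has synchronized into one collective rhythm."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(N)]        # natural frequencies
    theta = [rng.uniform(0.0, 2 * math.pi) for _ in range(N)]
    for _ in range(steps):
        # mean field: r * exp(i*psi) = (1/N) * sum_j exp(i*theta_j)
        cx = sum(math.cos(t) for t in theta) / N
        sx = sum(math.sin(t) for t in theta) / N
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / N
    sx = sum(math.sin(t) for t in theta) / N
    return math.hypot(cx, sx)
```

With strong coupling the order parameter rises toward 1 (synchrony); with no coupling the randomly drifting phases keep it near 1/sqrt(N).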
The notion of complex systems has received lots of popular press, but it is not yet clear whether there is really a "theory" here or just a "concept". We are withholding judgment. See
One way to define "fractal" is as a negation: a fractal is a set that does not look like a Euclidean object (point, line, plane, etc.) no matter how closely you look at it. Imagine focusing in on a smooth curve (imagine a piece of string in space)--if you look at any piece of it closely enough it eventually looks like a straight line (ignoring the fact that for a real piece of string it will soon look like a cylinder and eventually you will see the fibers, then the atoms, etc.). A fractal, like the Koch Snowflake, which is topologically one dimensional, never looks like a straight line, no matter how closely you look. There are indentations, like bays in a coastline; look closer and the bays have inlets, closer still the inlets have subinlets, and so on. Simple examples of fractals include Cantor sets (see [3.5]), Sierpinski curves, the Mandelbrot set and (almost surely) the Lorenz attractor (see [2.12]). Fractals also approximately describe many real-world objects, such as clouds (see http://makeashorterlink.com/?Z50D42C16), mountains, turbulence, coastlines, roots and branches of trees, and veins and lungs of animals.
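The coastline picture can be made quantitative with the Koch snowflake: each refinement step replaces every segment by 4 segments of one-third the length, so the perimeter grows without bound even though the figure stays bounded, and the similarity dimension is log 4 / log 3 ≈ 1.26. A small illustrative sketch:

```python
import math

def koch_snowflake_stats(n):
    """Segment count and total perimeter of the Koch snowflake after n
    refinement steps, starting from a triangle with unit sides."""
    segments, length = 3, 1.0          # 3 sides, each of unit length
    for _ in range(n):
        segments *= 4                  # each segment becomes 4 segments
        length /= 3.0                  # each new segment is 1/3 as long
    return segments, segments * length

# similarity dimension: N = 4 copies at scale 1/3
koch_dimension = math.log(4) / math.log(3)
```

The perimeter after n steps is 3*(4/3)**n, which diverges as n grows--the hallmark of a fractal boundary.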
"Fractal" is a term which has undergone refinement of definition by a lot of people, but was first coined by B. Mandelbrot, http://physics.hallym.ac.kr/reference/physicist/Mandelbrot.html, and defined as a set with fractional (non-integer) dimension (Hausdorff dimension, see [3.4]). Mandelbrot defines a fractal in the following way:
"A geometric figure or natural object is said to be fractal if it combines the following characteristics: (a) its parts have the same form or structure as the whole, except that they are at a different scale and may be slightly deformed; (b) its form is extremely irregular, or extremely interrupted or fragmented, and remains so, whatever the scale of examination; (c) it contains "distinct elements" whose scales are very varied and cover a large range." (Les Objets Fractals, 1989, p. 154)
See the extensive FAQ from sci.fractals at
Often chaotic dynamical systems exhibit fractal structures in phase space. However, there is no direct relation. There are chaotic systems that have nonfractal limit sets (e.g. Arnold's cat map) and fractal structures that can arise in nonchaotic dynamics (see e.g. Grebogi, C., et al. (1984). "Strange Attractors that are not Chaotic." Physica 13D: 261-268.)
See the fractal FAQ:
(Thanks to Pavel Pokorny for contributing to this answer)
A Cantor set is a surprising set of points that is both infinite (uncountably so, see [2.14]) and yet diffuse. It is a simple example of a fractal, and occurs, for example, as the strange repellor in the logistic map (see [2.15]) when r>4. The standard example of a Cantor set is the "middle thirds" set constructed on the interval between 0 and 1. First, remove the middle third. Two intervals remain, each one of length one third. From each remaining interval remove the middle third. Repeat the last step infinitely many times. What remains is a Cantor set.
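The middle-thirds construction is easy to carry out exactly with rational arithmetic; a brief sketch (the construction depth is arbitrary):

```python
from fractions import Fraction

def cantor_intervals(n):
    """Intervals remaining after n steps of the middle-thirds construction
    on [0, 1]: each step removes the open middle third of every interval."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        next_level = []
        for a, b in intervals:
            third = (b - a) / 3
            next_level.append((a, a + third))        # keep the left third
            next_level.append((b - third, b))        # keep the right third
        intervals = next_level
    return intervals
```

After n steps there are 2**n intervals of length 3**(-n), so the total length (2/3)**n shrinks to zero--yet uncountably many points survive every step.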
More generally (and abstrusely) a Cantor set is defined topologically as a nonempty, compact set which is perfect (every point is a limit point) and totally disconnected (every pair of points in the set are contained in disjoint covering neighborhoods).
Georg Ferdinand Ludwig Philipp Cantor was born 3 March 1845 in St Petersburg, Russia, and died 6 Jan 1918 in Halle, Germany. To learn more about him see:
To read more about the Cantor function (a function that is continuous, increasing, and non-constant, whose derivative exists and is zero everywhere except on a set with length zero) see
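The Cantor function ("devil's staircase") can be evaluated from the ternary expansion of its argument: read digits until the first 1, replace every 2 by 1, and reinterpret the result in binary. A sketch using exact rational arithmetic (the digit cutoff is an arbitrary choice; the input should lie in [0, 1)):

```python
from fractions import Fraction

def cantor_function(x, digits=40):
    """Devil's staircase on [0, 1): read off ternary digits of x, stop at
    the first digit 1, map digits 2 -> 1, and read the result in binary."""
    x = Fraction(x)
    result = Fraction(0)
    half = Fraction(1, 2)
    for k in range(1, digits + 1):
        x *= 3
        d = int(x)          # next ternary digit (exact, thanks to Fraction)
        x -= d
        if d == 1:
            result += half ** k
            break
        result += (d // 2) * half ** k
    return result
```

For instance, 1/3 = 0.1 (base 3) maps to 0.1 (base 2) = 1/2, and 1/4 = 0.020202... (base 3) maps to 0.010101... (base 2) = 1/3.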
(Thanks to Leon Poon for contributing to this answer)
According to the correspondence principle, there is a limit where classical behavior as described by Hamilton's equations becomes similar, in some suitable sense, to quantum behavior as described by the appropriate wave equation. Formally, one can take this limit to be h -> 0, where h is Planck's constant; alternatively, one can look at successively higher energy levels. Such limits are referred to as "semiclassical". It has been found that the semiclassical limit can be highly nontrivial when the classical problem is chaotic. The study of how quantum systems, whose classical counterparts are chaotic, behave in the semiclassical limit has been called quantum chaos. More generally, these considerations also apply to elliptic partial differential equations that are physically unrelated to quantum considerations. For example, the same questions arise in relating classical waves to their corresponding ray equations. Among recent results in quantum chaos is a prediction relating the chaos in the classical problem to the statistics of energy-level spacings in the semiclassical quantum regime.
Classical chaos can be used to analyze such ostensibly quantum systems as the hydrogen atom, where classical predictions of microwave ionization thresholds agree with experiments. See Koch, P. M. and K. A. H. van Leeuwen (1995). "Importance of Resonances in Microwave Ionization of Excited Hydrogen Atoms." Physics Reports 255: 289-403.
(Thanks to Justin Lipton for contributing to this answer)
How can I tell if my data is deterministic? This is a very tricky problem. It is difficult because in practice no time series consists of pure 'signal.' There will always be some form of corrupting noise, even if it is present only as round-off or truncation error, or as a result of finite arithmetic or quantization. Thus any real time series, even if mostly deterministic, will in part be a stochastic process.
All methods for distinguishing deterministic and stochastic processes rely on the fact that a deterministic system will always evolve in the same way from a given starting point. Thus, given a time series that we are testing for determinism, we can (1) pick a 'test' state, (2) search the time series for a similar or nearby state, and (3) compare their respective time evolutions.
Define the error as the difference between the time evolution of the 'test' state and the time evolution of the nearby state. A deterministic system will have an error that either remains small (stable, regular solution) or increases exponentially with time (chaotic solution). A stochastic system will have a randomly distributed error.
Essentially all measures of determinism taken from time series rely upon finding the closest states to a given 'test' state (i.e., correlation dimension, Lyapunov exponents, etc.). To define the state of a system one typically relies on phase space embedding methods, see [3.14].
Typically one chooses an embedding dimension and investigates the propagation of the error between two nearby states. If the error looks random, one increases the dimension. If increasing the dimension yields a deterministic-looking error, then you are done. Though it may sound simple, it is not! One complication is that as the dimension increases, the search for a nearby state requires much more computation time and much more data (the amount of data required increases exponentially with embedding dimension) to find a suitably close candidate. If the embedding dimension (number of measures per state) is chosen too small (less than the 'true' value), deterministic data can appear to be random, but in theory there is no problem with choosing the dimension too large--the method will still work. Practically, anything approaching about 10 dimensions is considered so large that a stochastic description is probably more suitable and convenient anyway.
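As a crude illustration of the procedure (a toy sketch, not a substitute for proper methods such as correlation-dimension or Lyapunov estimates): embed the series, find each test state's nearest neighbor elsewhere in the series, and compare how the two evolve one step later. Chaotic logistic-map data gives a small one-step error; uniform random data does not. The embedding dimension, series length, and test count below are arbitrary:

```python
import random

def one_step_error(series, m=2, n_test=200):
    """Embed in dimension m; for each test state find the nearest state
    elsewhere in the series and measure how far apart the two evolutions
    are one step later. Small average error suggests determinism."""
    states = [tuple(series[i:i + m]) for i in range(len(series) - m)]
    total = 0.0
    for i in range(n_test):
        best_j, best_d = None, float("inf")
        for j in range(len(states) - 1):
            if abs(i - j) <= m:                 # exclude temporal neighbors
                continue
            d = sum((a - b) ** 2 for a, b in zip(states[i], states[j]))
            if d < best_d:
                best_d, best_j = d, j
        total += abs(series[i + m] - series[best_j + m])
    return total / n_test

x, logistic = 0.4, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)                     # chaotic but deterministic
    logistic.append(x)

random.seed(0)
noise = [random.random() for _ in range(2000)]  # stochastic comparison
```

On the deterministic series the nearest neighbor shadows the test state, so the error is tiny; on the random series the next value is unrelated to the neighbor's, so the error stays near its chance level of about 1/3.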
Control of chaos has come to mean two things: the stabilization of unstable periodic orbits by small feedback perturbations, and the use of sensitive dependence on initial conditions to steer trajectories to a desired state.
The idea that chaotic systems can in fact be controlled may be counterintuitive--after all they are unpredictable in the long term. Nevertheless, numerous theorists have independently developed methods which can be applied to chaotic systems, and many experimentalists have demonstrated that physical chaotic systems respond well to both simple and sophisticated control strategies. Applications have been proposed in such diverse areas of research as communications, electronics, physiology, epidemiology, fluid mechanics and chemistry.
The great bulk of this work has been restricted to low-dimensional systems; more recently, a few researchers have proposed control techniques for application to high- or infinite-dimensional systems. The literature on the subject of the control of chaos is quite voluminous; nevertheless several reviews of the literature are available, including:
It is generically quite difficult to control high dimensional systems; an alternative approach is to use control to reduce the dimension before applying one of the above techniques. This approach is in its infancy; see:
(Thanks to Justin Lipton and Jose Korneluk for contributing to this answer)
There are many different physical systems which display chaos: dripping faucets, water wheels, oscillating magnetic ribbons, etc., but the simplest systems which can be easily implemented are chaotic circuits. In fact, an electronic circuit was one of the first demonstrations of chaos, showing that chaos is not just a mathematical abstraction. Leon Chua designed the circuit in 1983.
The circuit he designed, now known as Chua's circuit, has a piecewise linear resistor as its nonlinearity (making analysis very easy) plus two capacitors, one resistor and one inductor--the circuit is unforced (autonomous). In fact the chaotic aspects (bifurcation values, Lyapunov exponents, various dimensions, etc.) of this circuit have been extensively studied in the literature, both experimentally and theoretically. It is extremely easy to build and exhibits beautiful attractors (see [2.8]), the most famous being the double-scroll attractor, that can be displayed on a CRO.
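The dimensionless Chua equations are also easy to integrate numerically; here is a sketch using a textbook parameter set commonly quoted for the double-scroll regime (the step size and initial condition are arbitrary illustrative choices; component values for a physical build are in the references):

```python
def chua_deriv(state, alpha=15.6, beta=28.0, m0=-8.0 / 7.0, m1=-5.0 / 7.0):
    """Dimensionless Chua equations with the piecewise-linear nonlinearity
    f(x) = m1*x + 0.5*(m0 - m1)*(|x + 1| - |x - 1|)."""
    x, y, z = state
    fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))
    return (alpha * (y - x - fx), x - y + z, -beta * y)

def rk4_step(f, state, dt):
    """One classical Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt * (a + 2 * b + 2 * c + d) / 6.0
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.1, 0.0, 0.0)
xs = []
for _ in range(60000):
    state = rk4_step(chua_deriv, state, 0.002)
    xs.append(state[0])
```

Plotting x against y (or feeding the voltages to an oscilloscope in the physical circuit) reveals the scroll structure; the trajectory stays bounded but never settles down.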
For more information on building such a circuit, see
There are many "chaos toys" on the market. Most consist of some sort of pendulum that is forced by an electromagnet. One can of course build a simple double pendulum to observe beautiful chaotic behavior see
My favorite double pendulum consists of two identical planar pendula, so that you can demonstrate sensitive dependence [2.10]; for a Java applet simulation see http://www.cs.mu.oz.au/~mkwan/pendulum/pendulum.html. Another cute toy is the "Space Circle" that you can find in many airport gift shops. This is discussed in the article:
One of the simplest chemical systems that shows chaos is the Belousov-Zhabotinsky reaction. The book by Strogatz [4.1] has a good introduction to this subject. For the recipe see http://www.ux.his.no/~ruoff/BZ_Phenomenology.html. Chemical chaos is modeled (in a generic sense) by the "Brusselator" system of differential equations. See
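As a sketch, the Brusselator kinetics dx/dt = A + x^2 y - (B+1) x, dy/dt = B x - x^2 y settle onto a limit-cycle oscillation whenever B > 1 + A^2 (the fixed point (A, B/A) is then unstable); the parameter values, step size, and time span below are arbitrary illustrative choices:

```python
def brusselator(state, A=1.0, B=3.0):
    """Brusselator rate equations; the fixed point (A, B/A) loses
    stability when B > 1 + A**2, giving sustained oscillations."""
    x, y = state
    return (A + x * x * y - (B + 1.0) * x, B * x - x * x * y)

# simple Euler integration from near the unstable fixed point (1, 3)
x, y = 1.1, 3.0
dt, xs = 0.001, []
for i in range(60000):
    dxdt, dydt = brusselator((x, y))
    x, y = x + dt * dxdt, y + dt * dydt
    if i >= 30000:                 # discard the transient spiral outward
        xs.append(x)
```

After the transient, x swings over a fixed range each cycle--a caricature of the sustained concentration oscillations seen in the BZ reaction (the real BZ chemistry needs more species to go chaotic).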
The chaotic waterwheel, while not so simple to build, is an exact realization of Lorenz's famous equations. This is nicely discussed in Strogatz's book [4.1] as well.
Billiard tables can exhibit chaotic motion, see http://www.maa.org/mathland/mathland_3_3.html, though it might be hard to see this next time you are in a bar, since a rectangular table is not chaotic!
(Thanks to Serdar Iplikçi for contributing to this answer)
Targeting is the task of steering a chaotic system from any initial point to a target, which can be either an unstable equilibrium point or an unstable periodic orbit, in the shortest possible time by applying relatively small perturbations. In order to control chaos effectively (see [3.8]), a targeting strategy is important. See:
One application of targeting is to control a spacecraft's trajectory so that one can find low energy orbits from one planet to another. Recently targeting techniques have been used in the design of trajectories to asteroids and even of a grand tour of the planets. For example,
(Thanks to Jim Crutchfield for contributing to this answer)
This is the application of dynamical systems techniques to a data series, usually obtained by "measuring" the value of a single observable as a function of time. The major tool in a dynamicist's toolkit is "delay coordinate embedding" which creates a phase space portrait from a single data series. It seems remarkable at first, but one can reconstruct a picture equivalent (topologically) to the full Lorenz attractor (see [2.12]) in three-dimensional space by measuring only one of its coordinates, say x(t), and plotting the delay coordinates (x(t), x(t+h), x(t+2h)) for a fixed h.
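The Lorenz reconstruction just described can be sketched in a few lines: integrate the equations, "measure" only x(t), then build the delay vectors (the lag h and step size below are arbitrary illustrative choices):

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """The Lorenz equations with the classic chaotic parameters."""
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, s, dt):
    """One classical Runge-Kutta step."""
    k1 = f(s)
    k2 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k1)))
    k3 = f(tuple(v + 0.5 * dt * k for v, k in zip(s, k2)))
    k4 = f(tuple(v + dt * k for v, k in zip(s, k3)))
    return tuple(v + dt * (a + 2 * b + 2 * c + d) / 6.0
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

# "measure" only the x coordinate of the trajectory
state, xs = (1.0, 1.0, 1.0), []
for _ in range(6000):
    state = rk4_step(lorenz, state, 0.01)
    xs.append(state[0])

# delay coordinates (x(t), x(t+h), x(t+2h)) with lag h = 10 samples
h = 10
embedded = [(xs[i], xs[i + h], xs[i + 2 * h]) for i in range(len(xs) - 2 * h)]
```

Plotting the `embedded` points in 3-d reproduces the familiar two-winged butterfly, even though only a single scalar observable was recorded.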
It is important to emphasize that the idea of using derivatives or delay coordinates in time series modeling is nothing new. It goes back at least to the work of Yule, who in 1927 used an autoregressive (AR) model to make a predictive model for the sunspot cycle. AR models are nothing more than delay coordinates used with a linear model. Delays, derivatives, principal components, and a variety of other methods of reconstruction have been widely used in time series analysis since the early 50's, and are described in several hundred books. The new aspects raised by dynamical systems theory are (i) the implied geometric view of temporal behavior and (ii) the existence of "geometric invariants", such as dimension and Lyapunov exponents. The central question was not whether delay coordinates are useful for time series analysis, but rather whether reconstruction methods preserve the geometry and the geometric invariants of dynamical systems. (Packard, Crutchfield, Farmer & Shaw)
(Thanks to Bruce Stewart for Contributions to this answer)
In order to address this question, we must first agree what we mean by chaos, see [2.9].
In dynamical systems theory, chaos means irregular fluctuations in a deterministic system (see [2.3] and [3.7]). This means the system behaves irregularly because of its own internal logic, not because of random forces acting from outside. Of course, if you define your dynamical system to be the socio-economic behavior of the entire planet, nothing acts randomly from outside (except perhaps the occasional meteor), so you have a dynamical system. But its dimension (number of state variables--see [2.4]) is vast, and there is no hope of exploiting the determinism. This is high-dimensional chaos, which might just as well be truly random behavior. In this sense, the stock market is chaotic, but who cares?
To be useful, economic chaos would have to involve some kind of collective behavior which can be fully described by a small number of variables. In the lingo, the system would have to be self-organizing, resulting in low-dimensional chaos. If this turns out to be true, then you can exploit the low-dimensional chaos to make short-term predictions. The problem is to identify the state variables which characterize the collective modes. Furthermore, having limited the number of state variables, many events now become external to the system, that is, the system is operating in a changing environment, which makes the problem of system identification very difficult.
If there were such collective modes of fluctuation, market players would probably know about them; economic theory says that if many people recognized these patterns, the actions they would take to exploit them would quickly nullify the patterns. Market participants would probably not need to know chaos theory for this to happen. Therefore if these patterns exist, they must be hard to recognize because they do not emerge clearly from the sea of noise caused by individual actions; or the patterns last only a very short time following some upset to the markets; or both.
A number of people and groups have tried to find these patterns. So far the published results are negative. There are also commercial ventures involving prominent researchers in the field of chaos; we have no idea how well they are succeeding, or indeed whether they are looking for low-dimensional chaos. In fact it seems unlikely that markets remain stationary long enough to identify a chaotic attractor (see [2.12]). If you know chaos theory and would like to devote yourself to the rhythms of market trading, you might find a trading firm which will give you a chance to try your ideas. But don't expect them to give you a share of any profits you may make for them :-) !
In short, anyone who tells you about the secrets of chaos in the stock market doesn't know anything useful, and anyone who knows will not tell. It's an interesting question, but you're unlikely to find the answer.
On the other hand, one might ask a more general question: is market behavior adequately described by linear models, or are there signs of nonlinearity in financial market data? Here the prospect is more favorable. Tests from time series analysis (see [3.14]) have been applied to financial data; the results often indicate that nonlinear structure is present. See e.g. the book by Brock, Hsieh, LeBaron, "Nonlinear Dynamics, Chaos, and Instability", MIT Press, 1991; and an update by B. LeBaron, "Chaos and nonlinear forecastability in economics and finance," Philosophical Transactions of the Royal Society, Series A, vol 348, Sept 1994, pp 397-404. This approach does not provide a formula for making money, but it is stimulating some rethinking of economic modeling. A book by Richard M. Goodwin, "Chaotic Economic Dynamics," Oxford UP, 1990, begins to explore the implications for business cycles.
The process of obtaining a solution of a linear (constant coefficient) differential equation is simplified by the Fourier transform (it converts such an equation to an algebraic equation, and we all know that algebra is easier than calculus!); is there a counterpart which similarly simplifies nonlinear equations? The answer is no. Nonlinear equations are qualitatively more complex than linear equations, and a procedure which gives the dynamics as simply as for linear equations must contain a mistake. There are, however, exceptions to any rule.
Certain nonlinear differential equations can be fully solved by, e.g., the "inverse scattering method." Examples are the Korteweg-de Vries, nonlinear Schrodinger, and sine-Gordon equations. In these cases the real space maps, in a rather abstract way, to an inverse space, which consists of continuous and discrete parts and evolves linearly in time. The continuous part typically corresponds to radiation and the discrete parts to stable solitary waves, i.e. pulses, which are called solitons. The linear evolution of the inverse space means that solitons will emerge virtually unaffected from interactions with anything, giving them great stability.
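For example, the Korteweg-de Vries equation u_t + 6 u u_x + u_xxx = 0 has the one-soliton solution u = (c/2) sech^2((sqrt(c)/2)(x - c t)), a pulse that travels at speed c without changing shape (taller solitons move faster). A finite-difference check that this formula really satisfies the equation (step sizes and sample points are arbitrary):

```python
import math

def soliton(x, t, c=1.0):
    """One-soliton solution u = (c/2) sech^2(sqrt(c)/2 * (x - c*t)) of
    the KdV equation u_t + 6 u u_x + u_xxx = 0."""
    z = 0.5 * math.sqrt(c) * (x - c * t)
    return 0.5 * c / math.cosh(z) ** 2

def kdv_residual(x, t, c=1.0, h=1e-3):
    """Central-difference estimate of u_t + 6 u u_x + u_xxx at (x, t);
    should vanish (up to O(h^2)) if soliton() solves KdV."""
    u = lambda a, b: soliton(a, b, c)
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xxx = (u(x + 2 * h, t) - 2 * u(x + h, t)
             + 2 * u(x - h, t) - u(x - 2 * h, t)) / (2 * h ** 3)
    return u_t + 6.0 * u(x, t) * u_x + u_xxx
```

The residual is at roundoff/truncation level wherever it is evaluated, for any speed c--the balance between the steepening term 6 u u_x and the dispersive term u_xxx is exact.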
More broadly, there is a wide variety of systems which support stable solitary waves through a balance of dispersion and nonlinearity. Though these systems may not be integrable as above, in many cases they are close to systems which are, and the solitary waves may share many of the stability properties of true solitons, especially that of surviving interactions with other solitary waves (mostly) unscathed. It is widely accepted to call these solitary waves solitons, albeit with qualifications.
Why solitons? Solitons are simply a fundamental nonlinear wave phenomenon. Many very basic linear systems, when the simplest possible (lowest-order) nonlinearity is added, support solitons; this universality means that solitons arise in many important physical situations. Optical fibers can support solitons, which because of their great stability are an ideal medium for transmitting information. In a few years long distance telephone communications will likely be carried via solitons.
The soliton literature is by now vast. Two books which contain clear discussions of solitons as well as references to original papers are
Spatio-temporal chaos occurs when a system of coupled dynamical systems gives rise to dynamical behavior that exhibits both spatial disorder (as in rapid decay of spatial correlations) and temporal disorder (as in nonzero Lyapunov exponents). This is an extremely active and rather unsettled area of research. For an introduction see:
An interesting application which exhibits pattern formation and spatio-temporal chaos is to excitable media in biological or chemical systems. See
(Thanks to Pavel Pokorny for Contributions to this answer)
A cellular automaton (CA) is a dynamical system with discrete time (like a map, see [2.6]), discrete state space, and discrete geometrical space (like an ODE, see [2.7]). Thus a CA can be represented by a state s(i,j) for spatial site i at time j, where s is taken from some finite set. The update rule is that the new state is some function f of the old states of neighboring sites, s(i,j+1) = f(s). The following table shows the distinctions between PDEs, ODEs, coupled map lattices (CML) and CA according to whether time, state space, and geometrical space are continuous (C) or discrete (D):
       time   state space   geometrical space
PDE     C          C               C
ODE     C          C               D
CML     D          C               D
CA      D          D               D
Perhaps the most famous CA is Conway's game "life." This CA evolves according to a deterministic rule which gives the state of a site in the next generation as a function of the states of neighboring sites in the present generation. This rule is applied to all sites.
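A minimal sketch of the Life update rule, representing a configuration as a set of live (row, col) cells: a dead cell with exactly 3 live neighbors is born, and a live cell with 2 or 3 live neighbors survives.

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life; `cells` is a set of live (row, col)
    sites. Counting is done only near live cells, so the grid is unbounded."""
    counts = Counter((r + dr, c + dc)
                     for r, c in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}
```

The classic small patterns behave as expected: a row of three ("blinker") flips between horizontal and vertical with period 2, while a 2x2 "block" is a fixed point of the rule.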
For further reading see
(Thanks to Zhen Mei for Contributions to this answer)
A bifurcation is a qualitative change in dynamics upon a small variation in the parameters of a system.
Many dynamical systems depend on parameters, e.g. the Reynolds number, catalyst density, temperature, etc. Normally a gradual variation of a parameter corresponds to a gradual variation of the solutions of the problem. However, there exist a large number of problems for which the number of solutions changes abruptly and the structure of the solution manifolds varies dramatically when a parameter passes through some critical values. Examples include the abrupt buckling of a rod when the stress is increased beyond a critical value, the onset of convection and turbulence when the flow parameters are changed, and the formation of patterns in certain PDEs. This kind of phenomenon is called a bifurcation, i.e. a qualitative change in the behavior of solutions of a dynamical system, a partial differential equation or a delay differential equation.
Bifurcation theory is a method for studying how the solutions of a nonlinear problem and their stability change as the parameters vary. The onset of chaos is often studied by bifurcation theory. For example, in certain parameterized families of one-dimensional maps, chaos occurs through infinitely many period-doubling bifurcations.
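The period-doubling route can be seen directly in the logistic map x -> r x (1 - x): as r increases, the attractor's period doubles (1, 2, 4, ...) on the way to chaos. A small sketch (the transient length and rounding tolerance are arbitrary choices):

```python
def attractor_period(r, max_period=64, transient=10000):
    """Estimate the period of the logistic map attractor at parameter r by
    iterating past the transient and counting distinct orbit values."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(round(x, 6))     # collapse numerically equal values
    return len(set(orbit))
```

Sweeping r through 2.8, 3.2, and 3.5 shows the attractor's period going 1 -> 2 -> 4, the first steps of the period-doubling cascade (the cascade accumulates near r = 3.5699..., beyond which chaos appears).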
There are a number of well constructed computer tools for studying bifurcations. In particular see [5.2] for AUTO and DStool.
The transition to chaos for a Hamiltonian (conservative) system is somewhat different than that for a dissipative system (recall [2.5]). In an integrable (nonchaotic) Hamiltonian system, the motion is "quasiperiodic": that is, motion that is oscillatory but involves more than one independent frequency (see also [2.12]). Geometrically the orbits move on tori, i.e. the mathematical generalization of a donut. Examples of integrable Hamiltonian systems include harmonic oscillators (simple mass on a spring, or systems of coupled linear springs), the pendulum, certain special tops (for example the Euler and Lagrange tops), and the Kepler motion of one planet around the sun.
It was expected that a typical perturbation of an integrable Hamiltonian system would lead to "ergodic" motion, a weak version of chaos in which all of phase space is covered, but the Lyapunov exponents [2.11] are not necessarily positive. That this was not true was rather surprisingly discovered by one of the first computer experiments in dynamics, that of Fermi, Pasta and Ulam. They showed that trajectories in a nonintegrable system may also be surprisingly stable. Mathematically this was shown to be the case by the celebrated theorem of Kolmogorov, Arnold and Moser (KAM), first proposed by Kolmogorov in 1954. The KAM theorem is rather technical, but in essence says that many of the quasiperiodic motions are preserved under perturbations. These orbits fill out what are called KAM tori.
An amazing extension of this result began with the work of John Greene in 1968. He showed that if one continues to perturb a KAM torus, it reaches a stage where the nearby phase space [2.4] becomes self-similar (has fractal structure [3.2]). At this point the torus is "critical," and any increase in the perturbation destroys it. In a remarkable sequence of papers, Aubry and Mather showed that there are still quasiperiodic orbits that exist beyond this point, but instead of tori they cover Cantor sets [3.5]. Percival actually discovered these for an example in 1979 and named them "cantori." Mathematicians tend to call them "Aubry-Mather" sets. These play an important role in limiting the rate of transport through chaotic regions.
Thus, the transition to chaos in Hamiltonian systems can be thought of as the destruction of invariant tori, and the creation of cantori. Chirikov was the first to realize that this transition to "global chaos" was an important physical phenomenon. Local chaos also occurs in Hamiltonian systems (in the regions between the KAM tori), and is caused by the intersection of stable and unstable manifolds in what Poincaré called the "homoclinic trellis."
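The standard (Chirikov) map p' = p + K sin(theta), theta' = theta + p' is the usual sketch of this transition: below the critical coupling K ≈ 0.9716, rotational KAM tori confine the momentum, while well above it orbits wander diffusively through the chaotic sea. The initial conditions, K values, and thresholds below are arbitrary illustrative choices:

```python
import math

def standard_map_orbit(K, theta0, p0, n):
    """Iterate the Chirikov standard map n times, taking theta mod 2*pi
    but leaving p unbounded so that chaotic diffusion is visible."""
    theta, p = theta0, p0
    ps = [p]
    for _ in range(n):
        p = p + K * math.sin(theta)          # kick
        theta = (theta + p) % (2 * math.pi)  # rotation
        ps.append(p)
    return ps
```

For small K a typical orbit's momentum stays trapped between surviving KAM tori (its excursion is bounded by one cell of the map), whereas for large K the momentum performs a random-walk-like diffusion over many cells.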
To learn more: See the introductory article by Berry, the text by Percival and Richards and the collection of articles on Hamiltonian systems by MacKay and Meiss [4.1]. There are a number of excellent advanced texts on Hamiltonian dynamics, some of which are listed in [4.1], but we also mention