Discrete Dynamics Lab
About DDLab
 from chapters 1 and 2 of the new manual 


DDLab is an interactive graphics program for researching the
dynamics of finite binary networks, from Cellular Automata to
random Boolean networks and beyond, including their attractor basins.
The program is relevant to the study of complexity, emergent
phenomena, neural computation and aspects of theoretical biology such
as modeling gene regulatory networks, and to the study of discrete
dynamical networks in general in a variety of fields.
Networks can be created with any architecture, from cellular automata to
random Boolean networks. Cellular automata have a homogeneous rule and
nearest neighbor connections, whereas random Boolean networks have
arbitrary connections, and rules which may be different at each site.
The results in references
[Wuensche 1992-2000]
may be implemented with DDLab.
The DDLab source code is written in C.
The program is maintained for
the following platforms: DOS/PC, UNIX/XWindows (SunOS 4.1 and Solaris),
Linux, and Irix/SGI. DDLab has been ported to hpux/X11/HP by users.
This manual covers all versions, which function in
essentially the same way, though aspects of the graphics presentation
might be slightly different.
A discrete dynamical network in DDLab can be imagined as a software
simulation of a collection of light bulbs which transmit information to
each other about their on/off state, and turn on or off according to the
arriving signals. In more abstract terminology, the network is made up
of elements or ``cells'', connected to each
other by directed links or ``wires'', where a wire has an input and output
terminal. A cell takes on a value of either 0 or 1, and transmits this
value down its output wires. Its value is updated as a function of the
values on its input wires. Updating is usually done in parallel, in
discrete ``timesteps'', but may also be sequential in a predetermined
order.
This is the system in a nutshell. It remains to set up the
network according to the various parameters:
 The number of elements, the system size, n.
 How the elements are arranged in space: a 1d, 2d or 3d array
with axial dimensions i,j,h, or some other arrangement.
This network ``geometry'' may have real meaning (depending
on the ``wiring scheme'' below), or it may simply
allow convenient indexing and representation.
 The ``wiring scheme''. Defining the location of the output
terminals of each cell's input wires, the element's ``neighborhood''.
Cellular automata have a homogeneous
``nearest neighbor'' (local) neighborhood throughout the network.
Random Boolean networks may have a
completely arbitrary wiring scheme (a ``pseudoneighborhood''),
assigned at random, or the wiring scheme may be biased in some
way, for example, by confining an element's pseudoneighborhood
close to itself. The wiring scheme also defines boundary
conditions. Cellular automata wiring requires ``periodic boundary
conditions'', where an array's edges wrap around to their opposite
edges.
 The ``rule scheme'', the rules or Boolean functions in the
network. Each element applies a rule to its inputs to compute its
output. Usually this is made into a lookup table, the ``ruletable'',
listing the outputs of all possible input patterns. Cellular
automata have a homogeneous rule scheme,
the same rule throughout the network. Random Boolean
networks may have a completely arbitrary rule scheme, or again, it
may be biased in some way.
DDLab is able to create these networks, and graphically represent
and analyze both the networks themselves and the dynamics resulting
from the changing patterns of 0s and 1s as the complex feedback web
unfolds.
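In outline, such a network can be sketched in a few lines. This is a minimal illustrative sketch in Python, not DDLab's C implementation; `step`, `wiring` and `ruletables` are names invented here.

```python
# Minimal sketch of a discrete dynamical network: each cell has k input
# wires and a ruletable listing its output for every input pattern.
# (Illustrative only -- not DDLab's actual data structures.)

def step(state, wiring, ruletables):
    """Synchronous update: every cell reads its inputs and looks up its rule."""
    new = []
    for cell, inputs in enumerate(wiring):
        # form the neighborhood pattern as an index into the ruletable
        idx = 0
        for i in inputs:
            idx = (idx << 1) | state[i]
        new.append(ruletables[cell][idx])
    return new

# a 3-cell network, each cell wired to itself and its right neighbor (k=2)
wiring = [[0, 1], [1, 2], [2, 0]]
xor_table = [0, 1, 1, 0]            # outputs for input patterns 00,01,10,11
ruletables = [xor_table] * 3        # homogeneous rule, as in a CA
state = step([1, 0, 0], wiring, ruletables)   # -> [1, 0, 1]
```

Giving each cell its own ruletable and arbitrary wiring list already covers RBN; a CA is just the special case of one shared rule and local wiring.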
The networks, whether cellular automata or random Boolean networks,
may be set up in one, two or three dimensions, with periodic boundary
conditions. The neighborhood (or ``pseudoneighborhood'') size,
k, may vary from 0 to 13, and the network may have a mix
of k sizes. Network wiring can also be set up to correspond to a regular
n-dimensional hypercube, where k=log2(n).
Network updating may
be sequential as well as parallel, noisy as well as deterministic.
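The hypercube option can be illustrated with a short sketch: each of the n=2^d cells takes its k=d inputs from the cells whose index differs by exactly one bit. The function name here is invented for the sketch, not a DDLab routine.

```python
# Sketch: wiring a network as an n-dimensional hypercube.
# n = 2^d cells, each with k = d = log2(n) inputs. (Illustrative only.)

def hypercube_wiring(d):
    n = 1 << d
    # neighbors of cell i are the cells reached by flipping one index bit
    return [[i ^ (1 << b) for b in range(d)] for i in range(n)]

wiring = hypercube_wiring(3)         # 8 cells, k=3
# cell 0 (binary 000) is wired to cells 001, 010, 100:
assert wiring[0] == [1, 2, 4]
```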
``Cellular automata'' is henceforth abbreviated to
``CA'', and
``random Boolean networks'' to ``RBN''.
DDLab has two alternative ways of looking at network dynamics:
local dynamics, running the network forwards,
and global dynamics, which entails running the network backwards.
Running forwards (local dynamics) generates the network's spacetime
patterns from a given initial state. Many alternative graphical
representations (and methods for gathering/analyzing data) of spacetime
patterns are available to illustrate different aspects of local network
dynamics, including ``filtering'' to show up emergent structures more
clearly.
Running ``backwards'' (global dynamics) generates multiple predecessors
rather than a trajectory of unique successors. This procedure
reconstructs the branching subtree of ancestor patterns rooted on a
particular state. States without predecessors may be disclosed, the
so-called ``garden-of-Eden'' states, the ``leaves'' of the
subtree. Subtrees, basins of attraction (with a topology of trees
rooted on attractor cycles), or the entire basin of attraction
field (referred to collectively as
``attractor basins'') can be displayed as directed graphs in real time,
with many presentation options, and methods for gathering/analyzing
data. The attractor basins of ``random maps'' may be generated,
with or without some bias in the mapping.
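The idea of garden-of-Eden states can be demonstrated on a toy example by exhaustively mapping a small CA's state-space forward; states that never appear as a successor have no predecessors. This is illustrative Python, not one of DDLab's reverse algorithms.

```python
# Sketch: finding garden-of-Eden states (states with no predecessors)
# of a small 1d CA by exhaustively mapping every state to its successor.
# Elementary rule 110 on n=4 cells with periodic boundaries; states are
# encoded as n-bit integers. (Illustrative only.)

def ca_step(s, n, rule=110):
    out = 0
    for i in range(n):
        left  = (s >> ((i + 1) % n)) & 1
        self_ = (s >> i) & 1
        right = (s >> ((i - 1) % n)) & 1
        idx = (left << 2) | (self_ << 1) | right   # neighborhood pattern
        out |= ((rule >> idx) & 1) << i            # ruletable lookup
    return out

n = 4
successors = {s: ca_step(s, n) for s in range(2 ** n)}
reached = set(successors.values())
garden_of_eden = [s for s in range(2 ** n) if s not in reached]
```

Because the forward map is many-to-one (for example both 0000 and 1111 map to 0000 under rule 110), some states must be unreachable, so the garden-of-Eden list is non-empty.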
It can be argued that attractor basins represent the network's
``memory'' by their hierarchical categorization of statespace. Each
basin is categorized by its attractor and each subtree by its root.
Learning/forgetting algorithms allow attaching/detaching sets of states
as predecessors of a given state by automatically mutating rules or
changing connections. This allows sculpting the basin of attraction
field to approach a desired scheme of hierarchical categorization. More
generally, preliminary ``inverse problem'' or ``reverse engineering''
algorithms are included to find the network that satisfies a given set of
transitions.
Whereas large networks may be run forward to look at spacetime
patterns, or backward to look at subtrees, the size is limited
when generating the entire basin of attraction field, given that
statespace grows exponentially with system size n. The program's upper
limit for basin of attraction fields is n=31 (lower in practice). Otherwise
the limit is n=65025, based on the maximum size of a square 2d network,
255x255, where n can be represented by an unsigned short int.
This applies to spacetime patterns (in 1d, 2d and 3d),
and to single basins or subtrees, though in practice much smaller
sizes are appropriate for attractor basins. Larger sizes may be tried,
but may impose unacceptable time, memory or display constraints.
Networks with disordered or ``chaotic'' dynamics, with low indegree or
branchiness in their subtrees, allow backwards computation of much larger
networks than for ordered dynamics which have high indegree. For
CA, rules giving chaotic dynamics have a high Z parameter, rules
giving ordered dynamics have a low Z parameter
[Wuensche 1999].
For RBN, running backwards generally imposes a greater computational load
than for CA.
DDLab is an applications program; it does not require writing code.
The network's parameters, and the graphics display and presentation,
can be very flexibly set, reviewed and altered from DDLab's
graphical user interface.
Changes can be made on-the-fly, including changes to rules, connections,
current state, scale, and alternative presentations highlighting
different properties of spacetime patterns. Networks of whatever
dimension can be interchangeably represented in 1d, 2d, and 3d, as well
as in a 3d isometric projection. The latter is especially useful for
showing the spacetime patterns of 2d networks such as Conway's
``game-of-Life'' [Conway 1982].
The network architecture, states, data, and the screen image can be
saved and loaded in a variety of tailor-made file formats.
Various quantitative, statistical and analytical measures and data on
both forward dynamics and attractor basin topology are available in
DDLab, as well as various global parameters for rules and network
architecture. The measures and data, shown graphically as well as
numerically, include the following:
 Rule parameters lambda (or P) and Z.
 The frequency of canalizing inputs. This can be set to any
arbitrary level.
 Various measures on forward dynamics such as pattern density,
frozen islands, damage spread between two networks, the Derrida plot,
ruletable lookup frequency (which allows ``filtering''), input entropy, and
the variance of the input entropy, which allows ordered, complex and chaotic
rules to be discriminated automatically.
 Various global measures on the topology of attractor basins
including garden-of-Eden density and a histogram of indegree
frequency.
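Of the rule parameters listed above, Langton's lambda is the simplest: the fraction of ruletable entries with a non-quiescent output. The sketch below is illustrative (`lambda_parameter` is an invented name, and the P and Z parameters involve more than this).

```python
# Sketch: Langton's lambda parameter for a binary rule -- the fraction
# of ruletable entries whose output is non-quiescent (i.e. 1).
# The rule is encoded as an integer whose bit i is the output for
# neighborhood pattern i. (Illustrative only.)

def lambda_parameter(rule, k):
    entries = 2 ** k                       # ruletable length for k inputs
    ones = sum((rule >> i) & 1 for i in range(entries))
    return ones / entries

# elementary (k=3) rule 110 has five 1s in its 8-entry ruletable
assert abs(lambda_parameter(110, 3) - 5 / 8) < 1e-12
```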
The following is a descriptive summary of DDLab's functions.
For detailed information refer to the new DDLab manual.
DDLab's graphical user interface
allows setting, viewing and
amending network parameters by responding to prompts or accepting
defaults. The prompts present themselves in a main sequence and also
in a number of context dependent prompt windows. You can
backtrack to previous prompts in the sequence, and in some cases
skip forward. A flashing cursor indicates the current
prompt. ``Return'' (or the left mouse button) steps forward through
the prompts, ``q'' (or the right mouse button) backtracks, or
interrupts a run.
The size of the ``neighborhood'', the number of inputs each cell
receives, k, can vary from 0 to 13.
k can be homogeneous, or there
can be a mix of k-values in the network. The k-mix may be set and
modified in a variety of ways, including defining the proportions of
different k's to be allocated at random in the network.
A k-mix may be saved/loaded from a file, but is also implicit in the wiring scheme.
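Allocating a k-mix by proportions can be sketched as follows; the helper name and the particular proportions are invented for this illustration.

```python
# Sketch: allocating a mix of k values at random according to given
# proportions, one way a k-mix might be set. (Illustrative only --
# not DDLab's routine.)
import random

def allocate_k_mix(n, proportions, seed=0):
    """proportions: {k_value: weight}; returns one k value per cell."""
    rng = random.Random(seed)              # seeded for reproducibility
    ks = list(proportions)
    weights = [proportions[k] for k in ks]
    return [rng.choices(ks, weights=weights)[0] for _ in range(n)]

# half the cells get k=3, the rest k=5 or k=9
kmix = allocate_k_mix(100, {3: 0.5, 5: 0.3, 9: 0.2})
```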
The network's wiring scheme (i.e. its connections) has default settings
for regular CA (for 1d, 2d and 3d) for each neighborhood
size, k=0 to 13
(see the manual chapter on the local neighborhood and network geometry).
Wiring can also be set at random (nonlocal wiring), with a wide variety of
constraints and biases, or by hand.
A wiring scheme can be set and
amended just for a predefined subnetwork within the network, and
may be saved/loaded from a file.
In most cases regular 2d wiring defines a square grid on the torus, and
includes the von Neumann and Moore neighborhoods of 5 and 9 cells.
However, the 6 and 7 cell regular 2d neighborhoods are wired to define a
triangular grid. Regular 3d wiring defines a cubic grid with periodic
boundary conditions.
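The 9-cell Moore neighborhood on the torus can be sketched as below; the function name is invented, and the neighborhood includes the cell itself, giving 9 cells in all.

```python
# Sketch: the 9-cell Moore neighborhood of cell (i, j) on a 2d grid
# with periodic boundary conditions (a torus). (Illustrative only.)

def moore_neighborhood(i, j, rows, cols):
    # offsets -1, 0, +1 in each axis, wrapped with modular arithmetic
    return [((i + di) % rows, (j + dj) % cols)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)]

# a corner cell of a 4x4 grid wraps to the opposite edges
hood = moore_neighborhood(0, 0, 4, 4)
assert (3, 3) in hood and len(hood) == 9
```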
Nonlocal wiring can be constrained in various ways,
including confinement within a local patch of cells with a set diameter
in 1d, 2d and 3d. Only part of the network can be designated to accept
a particular type of wiring scheme, for example rows in 2d and layers in
3d. The wiring can be biased to connect designated rows or layers.
The network parameters can be displayed and amended in a 1d, 2d or 3d
graphic format, in a ``spreadsheet'',
or as a network graph which can be
rearranged in various ways, including dragging nodes with the mouse.
A network may have one homogeneous rule, as for CA, or a rule-mix,
as for RBN. The rule-mix can be confined to a subset of selected rules.
Rules (and totalistic codes) may be set and modified in a wide
variety of ways, in decimal, hex, as a ruletable bit pattern (using the
mouse), at random or loaded from a file. The ``game-of-Life'',
``majority'', and other predefined rules or rule biases can be selected.
A rule scheme can be set and
amended just for a predefined subnetwork within the network, and
may be saved/loaded from a file.
Rules may be changed into their equivalents (by reflection and negative
transformations), and transformed into equivalent rules with larger or
smaller neighborhoods.
Rules transformed to larger neighborhoods are
useful to achieve finer mutations. Rule parameters
lambda and Z, and the frequency of canalizing inputs
in a network can be set to any arbitrary level.
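The reflection and negation transformations can be sketched for k=3 rules as below. This is illustrative Python with invented names; rule 110's equivalents by reflection (124) and negation (137) are well known.

```python
# Sketch: two equivalence transformations for a k=3 ruletable --
# reflection (left-right reversal of the neighborhood) and negation
# (complementing both inputs and output). Rules are encoded as
# integers, bit i = output for neighborhood pattern i. (Illustrative.)

def reflect(rule, k=3):
    out = 0
    for idx in range(2 ** k):
        # reverse the k neighborhood bits of idx
        ridx = int(format(idx, '0{}b'.format(k))[::-1], 2)
        out |= ((rule >> idx) & 1) << ridx
    return out

def negate(rule, k=3):
    out = 0
    mask = 2 ** k - 1
    for idx in range(2 ** k):
        # complemented neighborhood gets the complemented output
        out |= (1 - ((rule >> idx) & 1)) << (idx ^ mask)
    return out

# rule 110's well-known equivalents: reflection 124, negation 137
assert reflect(110) == 124 and negate(110) == 137
```

Both transformations are involutions, so applying either twice returns the original rule.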
An initial network state (the seed) is required to run the network
forward and generate spacetime patterns. A seed is also required to run
backwards to generate a subtree or single basin. A basin of attraction
field does not require a seed.
As in setting a rule, there are a wide variety of methods for defining
the seed,
in decimal or hex, as a bit pattern in 1d, 2d or 3d, at random
(with various constraints or biases), or loaded from a file. The bit
pattern method is a mini drawing program, using the mouse (or keyboard)
to set cell values (0,1), particularly useful for 2d and 3d.
It's possible to create a system of independent or weakly coupled
subnetworks within the base network, either directly, or by saving
smaller networks to a file, then loading them at appropriate positions
in the base network.
Thus a 2d network can be tiled with subnetworks,
and 1d, 2d or 3d subnetworks can be inserted into a 3d base network.
The parameters of the subnetworks can be totally different from the
base network, provided the base network is set up appropriately, with
sufficient ``memory'' to accommodate the subnetwork. For example, to load
an RBN into a CA, the CA may need to be set up as if it were an RBN. To
load a mixed-k subnetwork into a single-k base network,
k in the base
network needs to be at least as big as the biggest k in the
subnetwork. Options are available to easily set up networks in this
way. Once loaded, the wiring can be finetuned to interconnect the
subnetworks.
A network can be automatically duplicated to create a total network made
up of two identical subnetworks.
This is useful to see the difference pattern (or damage
spread) between two networks from similar initial states.
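A damage-spread measurement between twin networks can be sketched like this, using elementary rule 30 as the example; all names are invented for the sketch.

```python
# Sketch: damage spread -- the Hamming distance over time between two
# copies of the same 1d CA started from states differing by one bit.
# Elementary rule 30 on n=16 cells, states as n-bit integers.
# (Illustrative only.)

def ca_step(s, n, rule=30):
    out = 0
    for i in range(n):
        idx = ((((s >> ((i + 1) % n)) & 1) << 2) |
               (((s >> i) & 1) << 1) |
               ((s >> ((i - 1) % n)) & 1))
        out |= ((rule >> idx) & 1) << i
    return out

n = 16
a = 1 << 7                     # a single 1 in the middle
b = a ^ 1                      # identical but for one flipped bit
damage = []                    # Hamming distance at each timestep
for _ in range(8):
    damage.append(bin(a ^ b).count('1'))
    a, b = ca_step(a, n), ca_step(b, n)
```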
Many options are provided for the presentation of attractor basins and
spacetime patterns. Again, many of these settings can be changed ``on
the fly''.

Cells in spacetime patterns are coloured
according to their value (0,1), or alternatively according
to their neighborhood at the previous time step, the entry
in the lookup table that determined the cell's value. A
key press will toggle between the two. Spacetime patterns
can be filtered to suppress cells that updated according to
the most frequently occurring neighborhoods, thus exposing
``gliders'' and other structures.
The presentation can be set to highlight cells that have
not changed in the previous x generations, where x
can be set to any value. The
emergence of such frozen elements is associated with
``canalizing inputs'', and underlies Kauffman's RBN model
[Kauffman 1993,
Harris 1997].
A 1d spacetime pattern may be presented in successive
vertical sweeps, or may be continuously scrolled. 2d
networks such as the ``game-of-Life'' can be displayed
simply on a 2d grid, and also as a spacetime pattern
(2d+time) in a 3d isometric projection. 3d networks are
presented within a 3d ``cage''. The presentation of
spacetime patterns can be switched ``on the fly'' between
1d, 2d, 2d+time, and 3d, irrespective of their native
dimensions. DDLab automatically unravels or bundles up the
dimensions. There are many other on-the-fly options,
including changing the scale of spacetime patterns,
changing the seed, rule/s, wiring, and the size of 1d
networks.
Concurrently with these standard presentations, spacetime
patterns can be displayed in a separate window according to
the network graph layout. This can be rearranged in many
ways, including various default layouts. For example, a 1d
spacetime pattern can be shown in a circular layout.

Options for attractor basins allow the selection of the
basin of attraction field, a single basin (from a selected
seed), or a subtree (also from a seed). Because a random
seed is likely to be a garden-of-Eden state, to generate
subtrees an option is offered to run the network forward a
given number of steps to a new seed before running
backward. This guarantees a subtree with at least that
number of levels.
Options (and defaults) are provided for the layout of
attractor basins, their size, position, spacing, and type
of node display (as a spot, in decimal, hex or a 1d or 2d
bit pattern, or none). Regular 1d and 2d CA produce
attractor basins where subtrees and basins are equivalent
by rotational symmetry. This allows ``compression'' of
basins (by default) into nonequivalent prototypes, though
compression can be turned off. Attractor basins are
generated for a given system size, or for a range of
sizes. As attractor basins are generated, the reverse
spacetime pattern can be simultaneously displayed.
An attractor basin run can be set to pause to see data on
each transient tree, each basin, or each field. Any
combination of this data, including the complete list of
states in basins and trees, can be saved to a file.
Normally a run will pause before the next ``mutant''
attractor basin, but this pause may be turned off to create
a continuous demo of new attractor basins. A ``screensave''
demo option shows new basins continually growing at random
positions.

At any time, a spacetime pattern or attractor basin run can
be interrupted to pause, save or print the screen image,
change various parameters, or backtrack through options.
In DOS the graphics will start up at a resolution of
640x480 (VGA) but can be started at a higher resolution
with a program parameter, or the resolution can be changed later.
In the Unix/Linux version the DDLab window starts up at 1024x768,
or a size that is automatically set to comfortably fit on the monitor,
and can be resized, moved and iconised in the usual way.
The default background color is black, but can be reset to white
either from within the program or with a program parameter.
The text size and spacing is set automatically according to the screen
resolution, but can be resized.
DDLab allows filing a wide range of filetypes, including
network parameters, data, and the screen image.
For compatibility with DOS, filenames follow the DOS format,
so a file name has
up to eight characters (starting with a letter)
plus a 3 character extension,
for example "myfile25.dat". In DDLab, only the first part of
the filename is selected, without the extension
(or the default filename provided is accepted);
the extension is set automatically to identify the file type.

Network parameters and states can be saved and loaded for the
following: k-mix, wiring schemes, rules, rule schemes,
wiring/rule schemes, and network states.

Data on attractor basins, at various levels of detail can be
automatically saved. A file of ``exhaustive pairs'', made up of
each state and its successor, can be created.
Various data, including mean entropy and entropy variance of
spacetime patterns, can be automatically generated and saved.
This allows a sorted sample of CA rules to be created,
discriminating between order, complexity and chaos.
A large collection of complex
rules, those featuring ``gliders'' or other large scale emergent
structures, can be assembled. Preassembled files of 1d CA
rules sorted by this method
[Wuensche 1999]
are provided with DDLab, for k=5, 6 and 7.

The screen image is saved and loaded using an efficient
homemade compressed format which is only applicable within
DDLab.
In DOS, to save the image in a standard format (and print to any
printer), use a ``stay resident screen grabber''. A number of
specialist screen grabbers are available, and others are part of
``paint'' programs. Alternatively run DDLab as a DOS
application in Microsoft Windows and use their ``paint'' screen
grabber.
To save the image in Unix or Linux use a program such as XView to grab
the DDLab window or part of it, and save it in many standard
formats.

The screen image can be printed at any time directly from
DDLab. In DOS, the following printers have been tested and
seem reliable: Epson MX82 dot-matrix printer, Canon BJC 4000
bubble jet. Printing to other printers is worth a try but is
unpredictable.
In Unix or Linux the screen can be printed from options
within DDLab as a PostScript file to a laser
printer. Alternatively, use XView to grab the image and print.
A wide variety of network ``mutations'', as well as
changes in presentation,
can be made, many on-the-fly while the simulation is running.

When running forward, keypress options allow changes to be made
to the network and presentation on-the-fly. This includes
``mutations'' to wiring, rules, current state, and size. A
number of 1d ``complex'' rules (with glider interactions) can be
set for k=5, 6 and 7.

When running backward, and attractor basins are complete, a key
press will regenerate the attractor basin of a mutant
network. Various mutation options can be preset
including random bitflips
in rules and random rewiring of a given number of wires. Sets
of states can be specified and highlighted in the attractor
basin to see how mutations affect their distribution.
Some of the measures and data on network dynamics available in DDLab
are listed below. In most cases this information can be displayed
graphically.

The following measures on network connectivity, i.e. the wiring, are available:
 Average k (inputs), number of
reciprocal links, and self links.
 Histograms of the frequency distribution of
inputs (i.e. k), outputs, or both
(i.e. all connections) in the network.
 The recursive inputs/outputs to/from a network element,
whether direct or indirect, showing
the ``degrees of separation'' between elements.
 The network graph,
where the wiring is analyzed in two ways: as an adjacency
matrix or table, and, derived from this, as a
graph that can be analyzed in various ways, and
rearranged and unraveled, including dragging vertices and
defined components to new positions with ``elastic band''
edges.

The following measures on local dynamics, i.e. running the system forward
from some initial state (its spacetime patterns
or trajectories), are available:
 A ruletable lookup frequency histogram, which can be toggled
between 2d and 3d to include a time dimension.
 The entropy of the lookup frequency over time.
 The variance of the entropy, and an entropy/density graph,
where complex rules have their own distinctive signatures.
 A plot of mean entropy against the variance of the entropy for
an arbitrarily large sample of CA rules, which allows ordered,
complex and chaotic rules to be classified automatically,
also shown as a 2d
frequency histogram. Ordered, complex and chaotic dynamics
are located in different regions allowing a statistical
measure of their frequency. In addition the rules can be
sorted by entropy variance allowing complex rules to be found
automatically.
 The pattern density over time.
 The activity/stability of network elements.
Frozen islands, the fraction of
``genes'' that have not changed over the last x
generations, the fraction of frozen 0s and 1s, and ``genes''
colored according to the fraction of time they have been ``on''
(i.e. 1) in the same window of x timesteps, falling
into preset ``frequency bins''.
 The damage spread, or
pattern difference, between two networks in 1d or 2d. A
histogram of damage spread can be automatically generated for
identical networks with initial states differing by 1 bit.
 The ``Derrida plot'', and Derrida coefficient, analogous
to the Lyapunov exponent in
continuous dynamical systems, measures how pairs of network
trajectories diverge/converge in terms of their Hamming
distance. This indicates whether a random Boolean network is in
the ordered or chaotic regime.
 A scatter plot of successive iterations in a 2d
phase plane, the ``return map'', showing a fractal structure,
especially for chaotic rules.
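A Derrida-style measurement, as in the list above, can be sketched as follows: perturb random states by h bits, run both copies one step, and record the mean Hamming distance afterwards. This is a toy RBN with invented helper names; DDLab's implementation differs.

```python
# Sketch of a Derrida plot measurement on a toy random Boolean network
# with random k=3 wiring and rules. A slope > 1 near the origin
# suggests chaotic dynamics, < 1 ordered. (Illustrative only.)
import random

rng = random.Random(1)
n, k = 24, 3
wiring = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
rules = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

def step(state):
    return [rules[c][int(''.join(str(state[i]) for i in wiring[c]), 2)]
            for c in range(n)]

def derrida_point(h, samples=50):
    """Mean Hamming distance after one step, from distance h before."""
    total = 0
    for _ in range(samples):
        s1 = [rng.randint(0, 1) for _ in range(n)]
        s2 = list(s1)
        for i in rng.sample(range(n), h):   # flip h distinct bits
            s2[i] ^= 1
        total += sum(x != y for x, y in zip(step(s1), step(s2)))
    return total / samples

curve = [derrida_point(h) for h in range(1, n + 1, 4)]
```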

The following measures on global dynamics, i.e. attractor basins,
are available:
 Data on attractor basins. The number of basins in the basin
of attraction field, their size, attractor period and branching
structure of transient trees. Details of states belonging to
different basins, subtrees, their distance from attractors or
the subtree root, and their indegree.
 A histogram of attractors (or skeletons) showing the frequency
of arriving at different
attractors from random initial states. This
provides statistical data on the basin of attraction field
for large networks.
The number of basins, their relative size, period,
and the average run-in length can be measured statistically.
An analogous method shows the frequency of arriving at different
``skeletons'', partly frozen patterns.
 Garden-of-Eden density plotted against the lambda and
Z parameters, and against network size.
 A histogram of the indegree
frequency of attractor basins or subtrees.
 The statespace matrix, a scatterplot of the left half against
the right half of each state bit string, using colour to identify
different basins, or attractor cycle states.
 The attractor meta-graph, an analysis of the basin of attraction field
tracking where all possible 1-bit flips to attractor states end up,
whether to the same, or to which other, basin.
The information is presented in two ways: as a jump-table, a matrix
showing the jump probabilities between basins, and as a
meta-graph, a graph with weighted vertices and edges giving a graphic
representation of the jump-table. The meta-graph itself can be
analyzed and manipulated in various ways, and
rearranged and unraveled, including dragging vertices and
defined components to new positions with ``elastic band''
edges.
There are three different reverse algorithms for generating the
preimages of a network state.
 An algorithm for 1d CA, or networks with 1d CA wiring but
heterogeneous rules.
 A general algorithm for random Boolean networks, which also
works for the above.
 An exhaustive algorithm that works for any ``random mapping'',
including the two cases above.
The first two reverse algorithms
generate the preimages of a state
directly; the speed of computation decreases with both neighborhood
size k, and network size. The speed of the third exhaustive algorithm
is largely independent of k, but is especially sensitive to network size.
The method used to generate preimages will be chosen automatically, but
can be overridden. For example, a regular 1d CA can be made to use
either of the two other algorithms for benchmark purposes and for a
reality check that all methods agree. The time taken to generate
attractor basins is displayed in DDLab. For the basin of attraction
field a progress bar indicates the proportion of states in statespace
used up so far.

The CA reverse
algorithm applies specifically to networks with 1d CA
wiring (local wiring) and homogeneous k, such as 1d
CA, though the rules may be heterogeneous. This is the most
efficient, thus fastest, algorithm, described in
[Wuensche 1992,
1999]. Furthermore, compression of 1d
CA attractor basins by rotation symmetry speeds up the
process.

Any other network architecture, with nonlocal wiring, will
be handled by a slower general reverse algorithm
described in
[Wuensche 1992,
1994].
A histogram
revealing the inner workings of this algorithm can be
displayed. Regular 2d or 3d CA will also use this general
reverse algorithm though in principle more efficient
algorithms that take advantage of 2d or 3d local wiring
could be devised. However, compression algorithms will come
into play in 2d to take advantage of the many rotation
symmetries on the torus.

A third, brute force, reverse algorithm first sets up a
mapping, a list of ``exhaustive pairs'' of each state in
statespace and its successor (this can be saved). The
preimages of states are generated by reference to this
list. The exhaustive testing method is restricted to small
systems because the size of the mapping increases
exponentially as 2^n, and scanning the list for preimages
is slow compared to the direct reverse algorithms for CA and
RBN. However, the method is not sensitive to increasing
neighborhood size k, and is useful for small networks
with large k. Exhaustive testing is also used for
sequential updating. The basin of attraction field of the 4x4
``game-of-Life'' (k=9), generated by this method,
took about 30 minutes on my 66MHz 486.
The random mapping routine
creates a list of ``exhaustive pairs'', assigning a successor state at
random to each state in state space, possibly with some bias
(rules and wiring previously set are ignored).
The attractor basins are reconstructed by reference to this
random map with the exhaustive testing algorithm.
The space of random maps for a given system size corresponds to the
space of all possible basin of attraction fields and is the superset of
all other deterministic discrete dynamical systems.
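A random map and its basins can be sketched like this: assign every state a random successor, then follow each state until a cycle repeats to identify which basin it falls into. All names are invented for the sketch.

```python
# Sketch: a "random map" -- each of the 2^n states assigned a successor
# at random -- and a count of its basins of attraction.
# (Illustrative only.)
import random

rng = random.Random(42)
n = 8
size = 2 ** n
succ = [rng.randrange(size) for _ in range(size)]   # the random map

def attractor_of(s):
    """Follow the trajectory until a state repeats; the first repeated
    state lies on the attractor cycle. Return the smallest cycle state
    as a canonical label for the basin."""
    seen = set()
    while s not in seen:
        seen.add(s)
        s = succ[s]
    cycle, t = {s}, succ[s]
    while t != s:
        cycle.add(t)
        t = succ[t]
    return min(cycle)

basins = {attractor_of(s) for s in range(size)}
num_basins = len(basins)
```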
By default, network updating is synchronous, in parallel.
DDLab also allows sequential updating, both for
spacetime patterns and attractor basins.
Default orders are forwards, backwards or a random order, but
any specific order can be set out of
the n! possible orders for a network of size n.
The order can be saved/loaded from a file.
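Sequential updating can be sketched on a 3-cell toy network as below: cells are updated one at a time in a given order, each seeing the latest values, so different orders can give different results from the same state. Names are invented for this sketch.

```python
# Sketch: sequential updating -- cells updated one at a time in a
# specified order, in contrast to the parallel default. Any of the
# n! orders may be given. (Illustrative only.)

def step_sequential(state, wiring, rules, order):
    s = list(state)                     # updated in place, in `order`
    for c in order:
        idx = 0
        for i in wiring[c]:
            idx = (idx << 1) | s[i]     # latest values, not a snapshot
        s[c] = rules[c][idx]
    return s

wiring = [[0, 1], [1, 2], [2, 0]]
xor = [0, 1, 1, 0]
rules = [xor] * 3
fwd = step_sequential([1, 0, 0], wiring, rules, [0, 1, 2])
bwd = step_sequential([1, 0, 0], wiring, rules, [2, 1, 0])
```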
Learning and forgetting algorithms allow attaching and detaching sets
of states as predecessors of a given state by automatically mutating
rules or wiring couplings. This allows ``sculpting'' the attractor basin
to approach a desired scheme of hierarchical categorization. Because
any such change, especially in a small network, usually has significant
side effects, the methods are not good at designing categories from
scratch, but might be useful for fine-tuning a network which is already
close to where it's supposed to be.
When an attractor basin is complete, within the learning routine,
a ``target'' state,
together with a number of ``aspiring preimages'' (predecessors),
can be selected. These
states may be just highlighted in successive mutant attractor basins, or
the learning/forgetting algorithms will attempt to attach/detach the
aspiring preimages to/from the target state, and can be set for either
ruletable bitflips or wire moves. In fact the bitflip method cannot
fail. New attractors can be created and subtrees transplanted. The
result of learning/forgetting, including side effects, will be apparent
in the new attractor basins. The algorithms, and their implications, are
described in the references.
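The bit-flip learning idea can be sketched for a network with a rule at each cell: to attach state A as a pre-image of target B, set each cell's ruletable entry for the neighborhood it sees in A to the corresponding value in B. This is a simplification with invented names, not DDLab's routine; with per-cell rules the assignment cannot fail, though it may detach other pre-images as a side effect.

```python
# Sketch of learning by ruletable bit-flips in a network with one
# ruletable per cell. (Illustrative only.)

def neighborhood_index(state, inputs):
    idx = 0
    for i in inputs:
        idx = (idx << 1) | state[i]
    return idx

def step(state, wiring, rules):
    return [rules[c][neighborhood_index(state, wiring[c])]
            for c in range(len(wiring))]

def learn(A, B, wiring, rules):
    """Force A -> B by setting, for each cell, the ruletable entry for
    the neighborhood pattern that cell sees in A to B's value."""
    for c in range(len(wiring)):
        rules[c][neighborhood_index(A, wiring[c])] = B[c]

wiring = [[0, 1], [1, 2], [2, 0]]
rules = [[0, 0, 0, 0] for _ in range(3)]    # initially everything -> 000
A, B = [1, 0, 1], [0, 1, 1]
learn(A, B, wiring, rules)
assert step(A, wiring, rules) == B           # A is now a pre-image of B
```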
More generally, a very preliminary method for reverse engineering a
network, also known as the inverse problem, is included in DDLab, by
reducing the connections in a fully connected network to satisfy an
exhaustive map (for network sizes n<=13).
The inverse problem is finding a
minimal network that will satisfy a full or partial mapping
(i.e. fragments of attractor basins such as trajectories).
Compiled versions of DDLab currently
run on the following platforms:
 DOS/PC: compiled with Watcom C version 11.
 Linux/PC: RedHat version 7.1, compiled with gcc.
 Unix/XWindows/Sun: Sun Solaris, compiled with gcc.
 Irix/SGI: for IRIX 6.5.11,
MipsC 7.3.1.2. Thanks to Oskar Itzinger
(oitzinger@opec.org) for the Irix port.
Compiled versions of DDLab for other platforms may also be available,
for example hpux/X11/HP (Rick Riolo (rlriolo@umich.edu) made a
hpux/X11/HP port in 1996) and
Debian Linux (Valerio Aimale (valerio@biosgroup.com) made a
Debian Linux port in 1999).
See one of the DDLab web sites
for current information.
I am reliably informed that
the DOS version of DDLab runs on a Mac using Virtual PC.

Conway,J.H., (1982) "What is Life?" in "Winning ways for your
mathematical plays", Berlekamp,E, J.H.Conway and R.Guy, Vol.2, chap.25,
Academic Press, New York.
 Gutowitz,H.A., ed., (1991) "Cellular Automata, Theory and Experiment",
MIT press.

Harris,E.S., B.K.Sawhill,
A.Wuensche, and S.Kauffman, (1997) ``Biased Eukaryotic Gene
Regulation Rules Suggest Genome Behaviour is Near Edge of Chaos'',
Santa Fe Institute Working Paper 9705039.
 Hopfield,J.J., (1982) "Neural networks and physical systems with emergent
collective computational abilities", Proceedings of the National Academy of
Sciences 79:2554-2558.

Kauffman,S.A., (1993) "The Origins of Order, Self-Organization and
Selection in Evolution", Oxford University Press.
 Langton,C.G., (1986) "Studying Artificial Life with Cellular Automata",
Physica D, 22, 120-149.
 Langton,C.G., (1990) "Computation at the Edge of Chaos: Phase Transitions and
Emergent Computation", Physica D, 42, 1-37.
 Somogyi,R., and C.Sniegoski, (1996) "Modeling the
Complexity of Genetic Networks: Understanding Multigenetic and
Pleiotropic Regulation", COMPLEXITY, Vol.1/No.6, 45-63.
 Wolfram,S., ed., (1986) "Theory and Application of Cellular Automata",
World Scientific.
 Wuensche,A., and M.J.Lesser, (1992)
"The Global Dynamics of Cellular Automata; An Atlas of Basin of Attraction
Fields of One-Dimensional Cellular Automata",
Santa Fe Institute
Studies in the Sciences of Complexity, Addison-Wesley, Reading, MA, 1992.
 Wuensche,A., (1994)
"The Ghost in the Machine: Basins of Attraction of
Random Boolean Networks",
in Artificial Life III, Santa Fe Institute
Studies in the Sciences of Complexity, Addison-Wesley, Reading, MA, 1994.
(Abstract: Cognitive Science Research Paper 281,
Univ. of Sussex, 1993)
 Wuensche,A., (1994)
"Complexity in One-D Cellular Automata:
Gliders, Basins of Attraction and the Z parameter",
Santa Fe Institute working paper 9404025.
 Wuensche,A., (1996)
"The Emergence of Memory; Categorisation Far From
Equilibrium",
in "Towards a Science of Consciousness:
The First Tucson Discussions and Debates",
eds. S.R.Hameroff, A.W.Kaszniak, A.C.Scott, MIT Press, Cambridge, MA, 1996.
(Abstract: Cognitive Science Research Paper 346,
Univ. of Sussex, 1994)
 Wuensche,A., (1997) "Attractor
Basins of Discrete Networks",
Cognitive Science Research Paper 461, Univ. of Sussex,
D.Phil thesis.
 Wuensche,A., (1998)
"Genomic Regulation Modeled as a Network with
Basins of Attraction", Proceedings of the 1998
Pacific Symposium on Biocomputing, World Scientific, Singapore, 1998.
 Wuensche,A., (1998)
"Classifying Cellular Automata Automatically",
Santa Fe Institute working paper 9802018.
 Wuensche,A., (1998)
"Discrete Dynamical Networks and their Attractor Basins",
Proceedings of
Complex Systems '98,
University of New South Wales, Sydney, Australia.
Also in the online journal
COMPLEXITY INTERNATIONAL,
and as Santa Fe Institute working paper 9811101.

Wuensche,A., (1999)
"Classifying Cellular Automata Automatically:
Finding gliders, filtering, and relating space-time patterns,
attractor basins, and the Z parameter",
COMPLEXITY,
Vol.4/no.3, 47-66, 1999.
 Wuensche,A., (2000) "Basins of Attraction in Cellular Automata;
Order-Complexity-Chaos in Small Universes",
COMPLEXITY, Vol.5/no.6, 19-25, 2000.
Last modified: July 2003