The WB-type network, unstable states and evolution

The syntax for B-type neural networks, as Turing described it (1, 2), has a serious functional limitation. If every internode connection, such as circuit cd (Figure 1), is intersected by a B-type modifier (Figure 2), the resulting network cannot perform all logical operations. Other authors have suggested solving this problem by inventing new structures outside the scope of Turing’s work (2-5). I suggest that the simplest solution to the original functional problem lies not in inventing new structures, but in relaxing the syntax for Turing neural networks slightly, such that some rather than all internode connections are intersected by B-type modifiers – thus mixing A-type and B-type connections in the same network. I have called this mixed network the WB-type. The WB-type network can perform all logical operations without requiring new structures, while remaining trainable in the manner Turing described (1, 2).

Figure 1 Figure 2
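To make the WB-type idea concrete, here is a minimal sketch in Python (all names, such as `WBNetwork`, are illustrative and not from Turing’s paper). It assumes Turing’s units are two-input NAND nodes, models an A-type connection as a direct link, and models a B-type connection abstractly as a link that either passes its signal inverted or emits a constant 1 – the two stable modifier conditions described below. The spontaneous alternation of the unstable states is omitted here and shown in the next sketch.

```python
def nand(a, b):
    return 1 - (a & b)

class Link:
    """A-type: pass the source node's value unchanged.
    B-type: pass NOT(value), or a constant 1 when interrupted."""
    def __init__(self, src, kind="A", enabled=True):
        self.src, self.kind, self.enabled = src, kind, enabled

    def value(self, state):
        v = state[self.src]
        if self.kind == "A":
            return v                            # A-type: direct connection
        return 1 - v if self.enabled else 1     # B-type: invert or interrupt

class WBNetwork:
    def __init__(self, n_nodes, links):
        self.state = [0] * n_nodes
        self.links = links                      # links[i]: the pair feeding node i

    def step(self):
        # every node is a two-input NAND of its (possibly modified) inputs
        self.state = [nand(a.value(self.state), b.value(self.state))
                      for a, b in self.links]

# A three-node toy network mixing A-type and B-type connections.
net = WBNetwork(3, [
    (Link(2, "A"), Link(1, "B", enabled=True)),    # mixed A- and B-type
    (Link(0, "A"), Link(0, "A")),                  # pure A-type inputs
    (Link(1, "B", enabled=False), Link(0, "A")),   # interrupted B-type
])
for _ in range(5):
    net.step()
    print(net.state)
```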

The variables v and w in Figure 2 hold binary values which affect the value emerging at point d in the circuit. The two binary settings of v and w yield four possible states for the B-type connection:

(i) pass the value entering the connection at c with “0” and “1” exchanged (logical NOT);
(ii) output “1” at d regardless of what enters at c;
(iii) spontaneously alternate between states (i) and (ii); and
(iv) spontaneously alternate between states (ii) and (i).

The last two I have called unstable states, because they alternate spontaneously. Teuscher does not include the unstable states in his B-type network experiments, and goes so far as to invent new network structures which do not have them (3, 4). Copeland and Proudfoot mistakenly overlook the existence of the unstable states altogether (5, 6) – see also “Copeland and Proudfoot miss the mark”. Neither author considers the possible usefulness of unstable states in terms of network behaviour. It seems to me that if genetic algorithms are to be used to configure Turing networks (as Turing suggested – see “Unorganized machines and the brain”), then the unstable states – like any other property of the network – are likely to be functionally exploited by evolution. Unlike rational human designers, evolution does not concern itself with elegance or minimality per se – what survives is simply what works, and survival usually involves exploiting every facet of flexibility an “organism” has to offer.
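One simple way to realize these four states – an assumption on my part, consistent with the description above rather than taken from Turing’s circuit diagrams – is to let the output at d be NAND(c, v) while v and w exchange values on every step. A short Python sketch:

```python
def nand(a, b):
    return 1 - (a & b)

def b_type_outputs(c_stream, v, w):
    """Yield the stream at d for a given (v, w) modifier setting."""
    out = []
    for c in c_stream:
        out.append(nand(c, v))   # v == 1 -> NOT c; v == 0 -> constant 1
        v, w = w, v              # the circulating pair swaps each step
    return out

c = [0, 1, 1, 0, 1, 0]
print(b_type_outputs(c, 1, 1))  # state (i):   NOT of the input stream
print(b_type_outputs(c, 0, 0))  # state (ii):  all 1s
print(b_type_outputs(c, 1, 0))  # state (iii): alternates (i) and (ii)
print(b_type_outputs(c, 0, 1))  # state (iv):  alternates (ii) and (i)
```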

Figure 3

The output stream of a B-type connection in an unstable state can be considered a half-bandwidth data channel compared with a B-type connection in state (i). Instead of every bit in such a stream carrying information, only every second bit is informational (Figure 3, in which red indicates informational bits and blue indicates mask bits). The bits in between will all be “1”, forming a kind of mask superimposed over the information flow. The bit stream x in Figure 3 is the output of a B-type in an unstable state – x’ indicates the information content with the mask removed.
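A small sketch of the unmasking step, assuming the informational bits fall on the even positions (which phase carries information depends on which unstable state the B-type is in):

```python
x = [1, 1, 0, 1, 1, 1, 0, 1]   # unstable-state output: info, 1, info, 1, ...
x_prime = x[0::2]              # x' -- the information with the mask removed
mask_ok = all(b == 1 for b in x[1::2])

print(x_prime)   # [1, 0, 1, 0]
print(mask_ok)   # True: the in-between bits are all "1"
```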

Figure 4

In addition, if bit streams from B-types in the two different unstable states are combined with the logical AND operator, two half-bandwidth bit streams can become a single, hybrid, full-bandwidth bit stream. In Figure 4, x is a bit stream from a B-type in one unstable state – e.g. state (iii) – and y is a bit stream from a B-type in the other unstable state – e.g. state (iv). Again, red indicates informational bits and blue indicates mask bits. Because the two streams are in opposite phase, every informational bit in one is paired with a mask bit in the other, and since 1 AND b = b, the mask bits are transparent. If each pair of bits undergoes logical AND, a hybrid bit stream is created (z) which contains the informational content of both x and y.
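A sketch of the pairing, assuming x carries information on the even steps and y on the odd steps:

```python
x = [0, 1, 1, 1, 0, 1, 1, 1]   # info at even indices, mask "1" at odd
y = [1, 1, 1, 0, 1, 0, 1, 1]   # mask "1" at even indices, info at odd

z = [a & b for a, b in zip(x, y)]
print(z)                        # the hybrid full-bandwidth stream
print(z[0::2] == x[0::2])       # True: x's information survives
print(z[1::2] == y[1::2])       # True: y's information survives
```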

A complete exploration of Turing networks (as he described them) should include a consideration of the functional advantages of the unstable states – as these make up two of the four possible B-type modifier conditions. While unstable states may be awkward or inconvenient for human designers to use, this is more a reflection of our conceptual limitations than of the states themselves – genetic algorithms would find no such inconvenience in exploiting their functionality. Since Turing himself suggested the use of genetic algorithms to configure his networks, dismissing the unstable states on first impressions seems doubly unwise.

The savage, all-or-none nature of binary weights has been stated as a limitation of Turing networks (3, 4). But this again may reflect rational, minimal, human-design predilections. While it is more convenient to have a single type of information flowing through a single connection, there is no requirement for this. Any digital system can approximate a continuous one to an arbitrary degree of accuracy. For example, if four binary channels were used instead of one to transmit a particular variable, and the variable read as the number of active channels, it could take five values instead of two – considerably more gradual than all-or-nothing binary, and made finer still by adding channels. There is no reason why Turing networks could not be used to handle floating-point values in this way – it is simply not particularly convenient for human designers.
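A sketch of this four-channel counting scheme (a unary or “thermometer” code; the function names are illustrative):

```python
# Five graded values over four binary channels, read by counting active
# channels rather than as a binary number -- one way to soften the
# all-or-none character of single binary weights.

def encode(level):                    # level in 0..4 -> four channels
    return [1] * level + [0] * (4 - level)

def decode(channels):                 # the count of active channels
    return sum(channels)

for level in range(5):
    ch = encode(level)
    print(level, ch, decode(ch) / 4)  # 0.0, 0.25, 0.5, 0.75, 1.0
```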

Convenient design is not the first thing that comes to mind when considering the structure of the human brain. The gross anatomy of the brain may seem uniform and orderly between individuals, but at the cellular level, neural connectivity is a tangled mass, with no two individuals having the same connectivity pattern. This tangled mass seems much more similar to the tangled structure of a recurrent neural network than the minimal, orderly circuit designs of conventional computer components. And it was the design of messy brain-like networks which concerned Turing in his 1948 paper (1, 2). Given the power of genetic algorithms to deliver inscrutable, yet highly functional components (7, 8), it may be inadvisable to overlook the unstable states in the case of Turing networks simply because they do not seem immediately useful. We have shown above that the unstable states do perform potentially useful functions. In addition, our WB-type networks, evolved using genetic algorithm selection, do contain unstable states, and so it is possible that their functional usefulness has been exploited by evolution.

    References

  1. Turing AM. Intelligent Machinery. In: Ince DC, editor. Collected works of A. M. Turing: Mechanical Intelligence. Elsevier Science Publishers, 1992.
  2. Webster CS. Alan Turing’s unorganized machines and artificial neural networks – his remarkable early work and future possibilities. Evolutionary Intelligence 2012; 5: 35-43.
  3. Teuscher C, Sanchez E. Study, implementation and evolution of the artificial neural networks proposed by Alan M. Turing – a revival of his “schoolboy” ideas. Logic Systems Laboratory, Swiss Federal Institute of Technology, Lausanne, Switzerland, June 2000, Report Revision 1.70.
  4. Teuscher C. Turing’s Connectionism – An Investigation of Neural Network Architectures. London: Springer-Verlag, 2002.
  5. Copeland BJ, Proudfoot D. On Alan Turing’s anticipation of connectionism. Synthese 1996; 108: 361-377.
  6. Copeland BJ, Proudfoot D. Alan Turing’s forgotten ideas in computer science. Scientific American 1999; 280(4): 76-81.
  7. Taubes G. Evolving a conscious machine. Discover 1998; 19(6): 72-79.
  8. Davidson C. Creatures from primordial silicon. New Scientist 1997; 156(2108): 30-34.
