Copeland and Proudfoot miss the mark

The following is a discussion of the errors and inconsistencies in Copeland and Proudfoot’s writing on Alan Turing’s neural networks.

The syntax for B-type neural networks, as Turing described it (1), has a serious functional limitation. If all A-type internode connections, such as circuit cd (Figure 1), are intersected by a B-type modifier (Figure 2), the resulting network cannot perform all logical operations. Click here for more on Turing neural networks. Recent and very interesting work has been done on Turing neural networks by Teuscher, but this work departs significantly from Turing’s original specifications by inventing a number of new structures (2).

I suggest that the simplest solution to the original functional problem lies not in inventing new structures, but in relaxing the syntax for Turing neural networks slightly, such that some rather than all internode connections are intersected by B-type modifiers – thus mixing A-type and B-type connections in the same network (I have called this a WB-type network – click here for more). Such a solution allows the network to perform all logical operations without the invention of new structures, and it retains the network’s important capacity to be trained. We have demonstrated the efficacy of this simple approach with the generation of functional networks derived using genetic algorithm “training” in the manner Turing suggested (1). A minimal sketch of the mixed-connection idea follows the figures below.

Figure 1: an A-type internode connection (circuit cd). Figure 2: a Turing B-type connection modifier (trainable nodes v and w).
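As an illustration of the mixed-connection idea, here is a minimal Python sketch. It assumes the reading of Turing (1948) used throughout this page: every unit is a two-input NAND, an A-type connection passes its signal unchanged, and a B-type connection in a stable condition either inverts its signal (setting 1) or clamps it to 1 (setting 0). The two-gate NOR fragment is a hypothetical example of mine, not one of the GA-derived networks mentioned above.

```python
def nand(a, b):
    return int(not (a and b))

def a_link(s):
    """A-type connection: the signal passes unchanged."""
    return s

def b_link(s, setting):
    """B-type connection in a stable condition: invert (1) or clamp to 1 (0)."""
    return nand(s, setting)

def nor_gate(x, y):
    # A WB-type fragment mixing both connection types. The first stage
    # uses inverting B-type links: n1 = NAND(NOT x, NOT y) = x OR y.
    n1 = nand(b_link(x, 1), b_link(y, 1))
    # The second stage needs the pass-unchanged step NAND(n1, n1) = NOT n1,
    # which no single B-type link can supply, so it uses A-type links.
    return nand(a_link(n1), a_link(n1))

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}: NOR={nor_gate(x, y)}")
```

The inverting B-type links hand the first NAND the negated inputs it needs, while the second stage depends on a pass-unchanged connection – behaviour a B-type modifier cannot provide in any condition, and the motivation for mixing the two connection types.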

Copeland and Proudfoot recognise the functional limitation of Turing’s neural networks in an early article in the journal Synthese (3) – stating:

“Turing claims that it is a ‘general property of B-type machines… that with suitable initial conditions they will do any required job, given sufficient time and provided the number of units is sufficient’ (Turing 1948, 15). This follows from the more specific claim that given ‘a B-type unorganised machine with sufficient units one can find initial conditions which will make it into a universal [Turing] machine with a given storage capacity’ (Turing 1948, 15). […] It is reasonably obvious that not all Boolean functions can be computed by B-type machines as defined and thus that each of the claims quoted in the preceding paragraphs is false.” – emphasis mine, page 367 (3).

Copeland and Proudfoot go on to propose another, particularly redundant solution to this problem which involves the invention of their very own kind of connection modifier – ironically calling it the simplest remedy:

“The simplest remedy seems to be to modify the substitution in terms of which B-type machines are defined: a B-type results if every unit-to-unit connection within an A-type machine is replaced by two of the devices in Figure 2 linked in series.” – page 367 (3).

Copeland and Proudfoot’s Figure 2 is the same as Figure 2 above. Placing two such Turing B-type modifiers in series produces the Copeland and Proudfoot double connection modifier (CP-double-modifier), as in Figure 3.

Figure 3: the CP-double-modifier (two B-type modifiers linked in series).

Set in the way they intend, the CP-double-modifier will either cancel itself out, resulting in the functional equivalent of an A-type connection (Figure 1), or always send “1” regardless of input, resulting in the functional equivalent of a trained B-type connection (Figure 2). It seems much more straightforward to me to use a Turing A-type or B-type connection in the first place (hence mixing them within the same network – see WB-type networks), rather than to invent a structure requiring six more nodes which does the same job – see the short sketch at the end of this section.

Copeland and Proudfoot fudge the solution to this problem further in a more recent article in Scientific American, where they fail to mention the functional limitation of Turing networks altogether and simply pass off their CP-double-modifier arrangement as Turing’s own work (4). A diagram (page 79) shows the CP-double-modifier as a small box with one red and one green training fibre. Rather than talking about “0” and “1” settings on each interfering input or training lead, as Turing did, they discuss only modifiers with “pass unchanged” and “interrupt” modes. It is clear that these modifiers are CP-double-modifiers, despite their superficial appearance, because Turing B-type modifiers (Figure 2) cannot perform a “pass unchanged” operation under any conditions (1).

How the one red and one green fibre of the CP-double-modifier boxes relate to what the boxes must actually contain, in terms of Turing networks, is never explained (4). CP-double-modifiers have four “trainable” nodes (nodes v to y in Figure 3), not two – but which of the four correspond to the one red and one green fibre of Copeland and Proudfoot’s diagram (page 79) is never stated.

It is perhaps somewhat understandable to gloss over certain inconvenient detail when writing within the space constraints of a publication like Scientific American. However, Copeland and Proudfoot continue their obfuscation of the true nature of Turing networks on their website, where no such space limitations exist. Again we are presented with small modifier boxes, each with one red and one green training fibre, and with “pass unchanged” and “interrupt” modes – all of which are Copeland and Proudfoot’s own inventions. Despite this, these structures are again passed off as Turing’s original work. Their webpage states that “Turing showed that even the connection-modifier itself can be built out of nand-neurons.” But Turing showed that his connection modifier (Figure 2) could itself be made from NAND neurones (1), not the CP-double-modifier (Figure 3). A reader can confirm these facts simply by reading the references available on the same webpage. You may also wish to compare an authentic Turing B-type network with Copeland and Proudfoot’s doctored version on their website.

Not only is Copeland and Proudfoot’s Scientific American paper (4) inconsistent with the facts of Turing’s original 1948 paper (1), it is also inconsistent with their own earlier Synthese paper on Turing networks (3).
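As a check on the redundancy point above, here is a minimal Python sketch of the CP-double-modifier, assuming the same link semantics used elsewhere on this page: a single B-type modifier in a stable condition either inverts its through-signal (v = 1) or outputs 1 regardless of input (v = 0). The function names are mine, purely for illustration.

```python
def nand(a, b):
    return int(not (a and b))

def b_link(signal, v):
    """One Turing B-type modifier in a stable condition."""
    return nand(signal, v)   # v = 1: inverts; v = 0: always sends 1

def cp_double(signal, v1, v2):
    """Two B-type modifiers linked in series (Figure 3)."""
    return b_link(b_link(signal, v1), v2)

# The two settings Copeland and Proudfoot intend:
for name, (v1, v2) in [("A-type equivalent (two inversions cancel)", (1, 1)),
                       ("trained B-type equivalent (always 1)", (1, 0))]:
    outs = tuple(cp_double(s, v1, v2) for s in (0, 1))
    print(f"{name}: inputs (0, 1) -> outputs {outs}")
```

Six extra nodes reproduce behaviour that a plain A-type connection or a trained B-type connection already supplies.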

Modifier conditions

One detail which seems to be consistently incorrect in Copeland and Proudfoot’s writing about Turing’s neural networks is the number of conditions which a B-type connection modifier has. A B-type modifier, as Turing described it (1), has two “trainable” nodes (v and w in Figure 2, repeated below), both of which can carry either a “1” or a “0”. Hence the total number of modifier conditions is four (the total number of permutations of two binary digits). In Turing’s words:

“It is easily seen that the connection [cd, Figure 2] can have three conditions. It may (i) pass all signals through with the interchange of 0 and 1, or (ii) it may convert all signals to 1, or again (iii) it may act as in (i) and (ii) in alternate moments. (Alternative (iii) has two sub-cases.)” – page 11 (1).

Figure 2 – repeated

Turing groups two of the four possible B-type modifier conditions together – stating in the final sentence that “alternative (iii) has two sub-cases”. The reason he does this is that both sub-cases of alternative (iii) are unstable conditions. In one, the B-type modifier cycles through conditions (i) and (ii) in alternate moments; in the other, it cycles through conditions (ii) and (i) – the two sub-cases differ only in phase. No change or interference in the state of v or w (Figure 2) is required to perpetuate the alternation – the unstable conditions alternate spontaneously. This is a point which Copeland and Proudfoot appear to miss entirely – they fail to realise that the unstable conditions are spontaneous. In their discussion of B-type modifiers in their earlier Synthese paper they state that interference in modifier settings is required to cause alternation of the modifier conditions, and that there are only two B-type modifier conditions (not four):

“Depending on the polarity of the constant signal at C, the signal at E will either (i) be 1 if the signal at D is 0 and 0 if the signal at D is 1, or (ii) always be 1 no matter what the signal at D. If by means of interference the state of A [node v in Figure 2] is changed from moment to moment, the device will cycle through these two alternatives. (This interference may be supplied either from outside or from within the network).” – page 367 (3).

The first two conditions described by Copeland and Proudfoot above are identical to Turing’s description (penultimate quote). However, the final claim erroneously states that changes in modifier settings are required “from moment to moment” to achieve a cycling of the first two alternatives (i) and (ii). This mistake also explains why Copeland and Proudfoot state that there are only two modifier conditions (3,4) – they see the two unstable states as nothing more than the result of changes in modifier settings from moment to moment. The sketch below demonstrates the spontaneous alternation.

The biographical box of their Scientific American paper (4) states that, as directors of The Turing Project, Copeland and Proudfoot are engaged in the simulation of B-type neural networks. However, it is difficult to see how anyone who has simulated Turing B-type networks could make this fundamental error. The spontaneous alternation of the two sub-cases of unstable modifier conditions is hard to miss when it is blinking across your computer screen like Christmas lights. As far as I am aware, Copeland and Proudfoot are yet to publish working B-type networks of any kind.
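The spontaneity is easy to verify. Below is a minimal Python sketch, assuming the common reading of Turing (1948) adopted on this page: v and w are NAND units wired in a loop so that each acts as NOT of the other, and the through-signal is NANDed with v’s output. No interference is applied at any point.

```python
def nand(a, b):
    return int(not (a and b))

def step_loop(v, w):
    """One synchronous update of the v-w loop: v' = NOT w, w' = NOT v."""
    return nand(w, w), nand(v, v)

def link_output(signal, v):
    """Signal leaving the modifier: NAND of the through-signal and v's state."""
    return nand(signal, v)

# Run each of the four permutations of (v, w) with NO interference at all.
for v0, w0 in [(0, 1), (1, 0), (0, 0), (1, 1)]:
    v, w = v0, w0
    history = []
    for t in range(6):
        out0, out1 = link_output(0, v), link_output(1, v)
        # (1, 0) means the link inverts (condition i); (1, 1) means it is
        # stuck sending 1 (condition ii).
        history.append("invert" if (out0, out1) == (1, 0) else "all-1")
        v, w = step_loop(v, w)
    print(f"(v,w)=({v0},{w0}):", " -> ".join(history))
```

The settings (0,1) and (1,0) hold steady as conditions (ii) and (i) respectively, while (0,0) and (1,1) alternate between the two conditions of their own accord – Turing’s two sub-cases of alternative (iii), with no moment-to-moment interference.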

Evolutionary consequences

Turing recognised the potential for genetic algorithms (GAs) to solve the combinatorial problem of training B-type neural networks to perform useful tasks (1) – calling the process a genetical search before the term GA was coined. Genetic algorithms are not rational processes and often yield solutions which are unexpected and quite different from those designed in a rational manner (5,6). An important part of the power of the GA approach is that solutions are free from human preconceptions about what is possible and what is not. For the same reason it is usually a mistake to second-guess, or otherwise constrain, the “solution space” beforehand when tackling complex problems with GAs. By limiting themselves to only two of the four possible connection modifier settings, Copeland and Proudfoot also make the mistake of limiting possible GA solutions a priori (3,4). This is plainly an anti-evolutionary move – one more in keeping with the minimal, rational design ethos of conventional computer circuits than with Turing’s desire to construct very large, evolved, brain-like networks. Our GA-derived networks do contain connection modifiers in unstable states, so it is entirely possible that the unstable modifier conditions have useful features which evolution has exploited – see WB-type networks for more. A toy version of such a genetical search is sketched below.
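To make the idea concrete, here is a minimal genetical-search sketch in Python. The two-gate architecture, target function (OR), fitness measure, and GA parameters are all hypothetical choices of mine for illustration – they are not the networks or training runs reported above. Note that the genome ranges over all four (v, w) permutations, so the unstable modifier conditions remain available to evolution.

```python
import random

def nand(a, b):
    return int(not (a and b))

def step_loop(v, w):
    """One synchronous update of a modifier's v-w loop: v' = NOT w, w' = NOT v."""
    return nand(w, w), nand(v, v)

def run_net(genome, x, y, steps=6):
    """Fixed wiring: n1 = NAND(link0(x), link1(y)); n2 = NAND(link2(n1), link3(n1)).
    Each of the four connections passes through its own B-type modifier,
    whose initial (v, w) loop state is the gene."""
    loops = [tuple(vw) for vw in genome]
    n1 = n2 = 0
    for _ in range(steps):
        # Each link output is NAND(through-signal, v).
        s = [nand(sig, v) for sig, (v, _) in zip((x, y, n1, n1), loops)]
        n1, n2 = nand(s[0], s[1]), nand(s[2], s[3])
        loops = [step_loop(v, w) for v, w in loops]
    return n2

def fitness(genome, target):
    return sum(run_net(genome, x, y) == target(x, y)
               for x in (0, 1) for y in (0, 1))

def mutate(genome):
    g = list(genome)
    g[random.randrange(len(g))] = (random.randint(0, 1), random.randint(0, 1))
    return g

random.seed(0)
target = lambda x, y: x | y
pop = [[(random.randint(0, 1), random.randint(0, 1)) for _ in range(4)]
       for _ in range(20)]
for gen in range(50):
    pop.sort(key=lambda g: -fitness(g, target))
    if fitness(pop[0], target) == 4:
        break
    # Truncation selection: keep the top half, refill with mutants.
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
pop.sort(key=lambda g: -fitness(g, target))
print("generation", gen, "best settings", pop[0], "fitness", fitness(pop[0], target))
```

Under these toy settings a perfect genome is usually found within a few generations, and nothing in the encoding stops evolution from recruiting the unstable (0,0) and (1,1) loop states when they pay off.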

Further criticism of Copeland and Proudfoot

by Andrew Hodges, Alan Turing’s biographer at the University of Oxford

Click here to read a discussion of what Hodges calls “the most ludicrous proposal ever aired in Scientific American.”

And click here for further, detailed criticism of Copeland and Proudfoot’s work on Turing.

References

  1. Turing AM. Intelligent Machinery. In: Ince DC, editor. Collected works of A. M. Turing: Mechanical Intelligence. Elsevier Science Publishers, 1992.
  2. Teuscher C, Sanchez E. Study, implementation and evolution of the artificial neural networks proposed by Alan M. Turing – a revival of his “schoolboy” ideas. Logic Systems Laboratory, Swiss Federal Institute of Technology, Lausanne, Switzerland, June 2000, Report Revision 1.70.
  3. Copeland BJ, Proudfoot D. On Alan Turing’s anticipation of connectionism. Synthese 1996; 108: 361-377.
  4. Copeland BJ, Proudfoot D. Alan Turing’s forgotten ideas in computer science. Scientific American 1999; 4(280): 76-81.
  5. Taubes G. Evolving a conscious machine. Discover 1998; 19(6): 72-79.
  6. Davidson C. Creatures from primordial silicon. New Scientist 1997; 156(2108): 30-34.
  7. Teuscher C. Turing’s Connectionism – An Investigation of Neural Network Architectures. London: Springer-Verlag, 2002.
