An unnecessary focus, one that does not ‘sex up’ Turing’s imitation game but rather detracts from Alan Turing’s brilliant ideas on building machines that think, is the notion that a machine in a practical Turing test requires a 30% deception rate for success. This is a misinterpretation of one of Turing’s predictions:
“…in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent. chance of making the right identification after five minutes of questioning” (Computing Machinery and Intelligence, 1950, p. 442).
If that statement is interpreted as a 30% interrogator deception rate for machine success, then it was already surpassed at the very first Loebner Prize for Artificial Intelligence in 1991:
“…the highest-ranked computer program was misclassified as a human by five of the ten judges” (Robert Epstein, ‘The Quest for the Thinking Computer’, in R. Epstein, G. Roberts & G. Beber (eds), Parsing the Turing Test, Springer, 2008, p. 3).
Turing’s success rate required a machine to provide “satisfactory and sustained” answers, not an “easy contrivance” (A.M. Turing, in Computing Machinery and Intelligence, 1950, p. 447), to any questions put to it by a jury panel of interrogators, and that:
“a considerable proportion of a jury, who should not be experts about machines, must be taken in by the pretence” (A.M. Turing, in ‘Can Automatic Calculating Machines Be Said to Think?’, p. 495 in B. J. Copeland (ed.), The Essential Turing, OUP, 2004).
To me, a considerable proportion should be more than the rate of chance - this is not ‘moving the goalposts’, because half of the ten judges were deceived in one practical Turing test event (the 1991 contest above). To do Turing’s ideas justice, and to inspire and motivate machine developers, no fewer than 70% of a jury panel of interrogators should feel the conversational skills of their hidden interlocutor are human-like in any robust Turing test experiment.
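The arithmetic behind these competing success criteria can be sketched as follows. This is an illustrative example, not part of the original argument: the judge counts are the 1991 Loebner Prize figures quoted above, and the threshold names are my own labels for the three readings discussed.

```python
def deception_rate(judges_deceived, total_judges):
    """Fraction of interrogators who misclassified the machine as human."""
    return judges_deceived / total_judges

# 1991 Loebner Prize: five of the ten judges misclassified the
# highest-ranked program as a human.
rate_1991 = deception_rate(5, 10)

# Three readings of machine "success" discussed in the text
# (labels are illustrative, not from the original sources):
thresholds = {
    "misread 30% criterion": 0.30,   # the common misinterpretation
    "chance (50%)": 0.50,            # rate expected from pure guessing
    "proposed 70% criterion": 0.70,  # the stricter bar argued for here
}

for name, threshold in thresholds.items():
    verdict = "met" if rate_1991 >= threshold else "not met"
    print(f"{name}: {verdict} (deception rate {rate_1991:.0%})")
```

Run against the 1991 figures, the 30% and chance-level thresholds are met but the 70% criterion is not, which is the point of the argument above: a 30% bar was cleared at the very first contest, so it cannot be a meaningful measure of success.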
The challenge is to secure funding for a full, large-scale Turing test experiment with hundreds of “average interrogators” – that is, people of all ages and all levels of expertise, from novice to expert – taking part in thousands of tests run over many weeks.
Until that happens, practical Turing tests serve many purposes, from measuring machine performance against a human’s in any field to encouraging and inspiring children to take up computing and robotics. They are also useful experiments for raising awareness of, and helping to prevent, cybercrime. The cost of cybercrime to individuals is estimated at over £3bn* - the full amount is probably unknown, because people are not always willing to report to the police that their identity has been stolen or their bank accounts compromised, for fear of appearing foolish at being deceived in cyberspace. There is also the task of preparing more children to distinguish adults from peers in cyberspace, so that they do not succumb to grooming on the Internet^.
Reading University (Visiting Professor Kevin Warwick and Dr. Huma Shah) and Coventry University (Deputy Vice-Chancellor for Research, Professor Kevin Warwick) are collaborating to host another public Turing test experiment, following on from the June 2012 experiment held at Bletchley Park on the 100th anniversary of Alan Turing’s birth.
Details for this year's event:
Practical Turing tests (based on Huma Shah’s PhD thesis, ‘Deception-detection and machine intelligence in practical Turing tests’)
Venue: The Royal Society, 6-9 Carlton House Terrace, London SW1Y 5AG: https://royalsociety.org/
Public day: Saturday 7 June 2014
Timing: 10am - 6.30pm
Follow on Twitter here: @Turing2014
Event email: Turingtestsin2014 [at] gmail [dot] com
Event animated trailer on YouTube is here:
Previous experiment, Turing100in2012 information here: http://www.kevinwarwick.com/turing100.htm
^ Child Exploitation and Online Protection Centre (CEOP): http://ceop.police.uk/
© Huma Shah, 16 April 2014