A recent study by researchers at Aberystwyth University and the University of Cambridge, funded by the Biotechnology and Biological Sciences Research Council, contends that a robot has been successfully programmed to carry out each stage of the scientific method. So far, it has independently generated novel and correct scientific knowledge about the genomics of the baker's yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems.
The robot's name? Adam. This is an insulting, irresponsible, and antagonistic name to choose for an advance in artificial intelligence technology. Beyond directly insulting mankind by associating this machine with the father of humanity, the implied parallel between the human Adam's creator and robot "Adam's" creators suggests that the researchers hold an even more megalomaniacal self-image than their ostentatious name choice alone would indicate. You don't need to be religious or believe the Genesis creation story to find this allusion offensive. I would be equally offended if researchers chose a grandiose name with metaphysical significance in another culture, for example, Allah. Yet Adam is more than a religious figure; he, along with Eve, is a widely known archetypal symbol of mankind's complexity, both beautiful and flawed. To name an advanced robot "Adam" and thus characterize a technology as "humanlike" is a misleading anthropomorphic analogy that both insults the dignity of humans and overstates the extent of our technological progress.

Fears about the consequences of artificial intelligence have been well documented, from philosophy to fiction. John Searle's Chinese Room Argument posits that true artificial intelligence will never be achieved, although robots might eventually reach a level of sophistication at which they appear to act like humans. Films like Blade Runner, The Matrix, and Terminator also build on the notion of artificial intelligence interacting with humans in potentially hazardous ways. Most accounts of artificial intelligence tend to lean in the direction of Luddism and dystopian fantasy. Isaac Asimov's classic 1941 science fiction story Reason is a nice blend of fiction and philosophy regarding artificial intelligence. The story satirizes a posteriori and inductive logic, telling of a Descartes-inspired robot named Cutie (QT1) who reasons, accurately, that it has no grounds to believe that humans made it besides faith in their causal explanation. It comes to the ironic conclusion that "I myself exist, because I think," and deduces that nothing else (humans, outer space, planets) necessarily exists. Ultimately, the robot forms its own religion serving the energy converter that powers it.

Asimov's satire raises a relevant question about the hazards of developing artificial intelligence with true rationality. Given only observation as a fundamental premise and an internal doctrine of deductive reason (and the Three Laws of Robotics, of course), it is inherently more rational to conclude, solipsistically, that the only thing that can be known is one's own consciousness. To program a robot to reason like a human is to program a robot to take leaps of faith and trust induction, a paradox lamented by philosophers since Hume and hotly debated today. Asimov's story underscores how the execution of pure reason may lead to entirely wrong conclusions, and thus that reason itself is flawed. Of course, Adam is a scientist, and not necessarily programmed to reason like a human or like Cutie. But this only strengthens my opposition to its evocative appellation.

Perhaps the fallibility of observation is part of the reason why great thinkers from antiquity to today occasionally held internal reason to a higher standard than observation, while other greats were misled by their own confidence in sense experience. Parmenides famously argued that experience could not be trusted, and advocated for careful logic and reason to dominate the pursuit of truth.
However, some of his philosophical descendants did not share this position. Alcmaeon observed the optic nerve connection between the eye and the brain, and posited that the brain was central to the human soul. By contrast, Aristotle observed that the central hub of the vascular system was the heart, and argued, convincingly, that the heart was the organ that held the soul, a belief that lasted for many centuries thereafter. Both Aristotle and Alcmaeon formed their opinions from the same type of evidence; Alcmaeon just happened to be correct. But neither Aristotle, Alcmaeon, nor Cutie attempted to design tests to disprove their theories, and so wrong theories were ultimately allowed to stand on misconceptions about the validity of observation. Skeptics like Parmenides, on the other hand, were also wrong, because they failed to appreciate the kinds of knowledge that observation could competently achieve.

Adam seems to be of a different sort: it benefits from the aftermath of Boyle's air-pump experiments, logical positivism, and contemporary philosophy of science, and probably embodies the importance of falsifiability. Today, "real" scientists discern between "pure knowledge" and "scientific knowledge," and pursue the latter (unfortunately, much of the lay public still believes scientific inquiry pursues and accomplishes the former, or criticizes it for failing to do so). I have every reason to assume that Adam is probably a decent scientist by today's standards...for a robot.

Yet I am disappointed with these researchers' choice of semantics, as their arrogant name choice overshadows the actual significance of their research and raises questions about their motivations. Adam is an important step in the development of artificial intelligence, a scientific quest which I support. However, it is important to accurately identify the purpose and scope of all research, and AI research is not, and should never be, aimed at making artificial persons. Adam's name indicates the presence of this type of misdirected research aim, and, even if the name is deliberately ironic or eye-catching, it ought to be changed out of respect for this domain of research as well as for humanity itself.