by Basil E. Gala, Ph.D.
In Search of Meaning
Machine perception, closely tied to machine learning, is a term we usually associate with robotic functions in industrial automation, as in machine vision: recognizing parts, sensing defects. We also call machine perception the computer recognition of faces, signatures, and fingerprints. More often we call these operations pattern recognition, as when computer systems classify speech or handwriting. Generally, all these innovations fall under artificial intelligence, or AI, which also includes programming computers for cognitive and linguistic functions: decision making, data analysis, chess playing, and other gaming decisions. In this connection, a newer term and discipline is data mining, which involves extracting relevant or important facts from the large masses of data that scientists and businessmen have collected with automatic devices. Whatever you prefer to call these artifacts, the question is: will we be able to design systems that approach or exceed the human capacity to perceive, classify, and make fitting decisions, doing such tasks unassisted, with the great speed of electrons or photons, and at low cost, thus replacing human labor in many jobs? Yes, we will; and I intend to contribute to this effort, as I did some years back working on my dissertation.
First, should we do it? Should we go ahead and build intelligent machines just because we can? Would not that development imperil our species, even replace it? That may happen, but it’s not going to stop researchers in AI. We might as well say, let’s not design nuclear devices, or do genetic engineering, or explore space, or cross the oceans, or invent fire–all dangerous ventures. It’s not our nature to hold back from exploring, experimenting, or building things for fear of danger. Danger only stimulates our sense of awareness. We just need to proceed with caution and due respect for safety measures. AI will outsource many jobs, eventually most jobs, to machines. Computers are already replacing many workers. The danger does exist that we’ll lose control over our intelligent artifacts; perhaps we have already lost some control to computers. A stealth plane would quickly crash without several on-board computers in control of steering. On May 6, 2010, the Dow Jones Industrial Average dropped a thousand points in a matter of minutes, and Procter and Gamble, a giant, very stable, basic consumer products company, briefly lost nearly half its stock value because of a glitch in automatic computer trading. But the opportunities AI offers to expand our vision and reach in the universe are enormous and irresistible. We’ll go into these opportunities later in this discourse.
Second, are we really capable of designing and building smart machines with computers or other devices? Some people think cognition is a strictly human function, never to be implemented in our artifacts. They argue that mental activities are God-given or natural abilities that we cannot impart to machines, because we are spiritual beings rather than material objects. Such critics often confuse what we can do in building machine intelligence with whether we should build it or not for ethical or other social reasons. If the critics are right about their ethical considerations, then let’s bury our computers, telephones, cars, and other equipment that have caused layoffs of workers, noise, and pollution, all the way back to the steam engine and the cotton gin. If our nation does this, other nations will not–and we’ll be left behind in competing for business and military defense. What was the point of barring support for stem cell research under President G.W. Bush when that research went on elsewhere? If any action is taken to restrict research and development in any field, it must be with international agreements, such as the nuclear non-proliferation treaty.
In the absence of such a treaty on artificial intelligence, are we able to design smart machines that perform human functions? We are. We already have computer programs that recognize human speech, after some training, and record it as text in computer memory, one ASCII character code per eight-bit byte. I own such a program made by Dragon. We also have a variety of optical character recognition (OCR) systems which can read characters in a variety of fonts. IBM makes a good OCR system. Fingerprint and face recognition systems are also available for machine identification. Are we there yet with AI, or even getting close? Not really.
Forty-three years ago, in 1967, when I became interested in pattern recognition as a Caltech graduate student in computer science, I thought we would have AI in a few years. We were working then with what we saw as the powerful IBM 360 mainframe computer, getting access to it with terminals, punched cards, or magnetic tapes. A British statistician, I. J. Good, was forecasting super-intelligent machines by the year 2000. Later, Arthur C. Clarke, physicist and science fiction writer, imagined HAL 9000, the Heuristically programmed ALgorithmic computer, smart but flawed, in his novel 2001: A Space Odyssey, beautifully rendered on film by Stanley Kubrick. Incidentally, we also had a moon colony in Clarke’s 2001, following the Apollo landing in 1969.
Well, we made excellent progress in other technologies: transistors and integrated circuits (following Moore’s law), software design for personal computers and Microsoft, cell phones, the Apple iPhone, the Internet and Google, as well as mapping the human genome, bioengineering products, and Genentech. Designing electronics with human intelligence, however, is going to be much harder than these achievements. We need to come up with a breakthrough in our thinking about the problem, a new kind of mathematics. One new logic was proposed in 1965 by Professor Lotfi Zadeh at Berkeley, which he named fuzzy logic. It differs from ordinary logic, where an element either is or is not a member of a set, by letting an element possess a degree of membership in the set. Fuzzy logic gives rise to a different mathematics, and Japanese researchers have designed pattern recognizers with such a logic, as opposed to our customary digital on-or-off circuits. I’m waiting for smart robots sailing out of Japan to conquer the world, as Toyota and Honda cars have done. The Japanese already have tens of thousands more robots working in factories than any other nation. The Chinese produce factory workers in more traditional ways, as does the U.S. with the help of Latinos.
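Zadeh’s idea fits in a few lines of code. The sketch below is illustrative only, not any standard library’s API; the “tall” set and its thresholds are invented for the example.

```python
# A minimal sketch of fuzzy membership: a degree in [0, 1], not a yes/no.
# The set "tall" and its cutoffs are hypothetical, chosen for illustration.

def tall_crisp(height_cm):
    # Ordinary (crisp) set: you are tall or you are not.
    return 1.0 if height_cm >= 180 else 0.0

def tall_fuzzy(height_cm):
    # Fuzzy set: membership ramps from 0 at 160 cm up to 1 at 190 cm.
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

# Fuzzy logic replaces AND/OR on truth values with min/max on degrees.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

print(tall_crisp(178))  # 0.0 -- just misses the crisp cutoff
print(tall_fuzzy(178))  # 0.6 -- "fairly tall"
```

The point of the ramp is exactly what Zadeh proposed: a person of 178 cm is not simply excluded from the set of tall people but belongs to it to degree 0.6.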
Enough time has elapsed since 1965 for us to test the power of fuzzy logic and design smart machines with this tool. We may need a different breakthrough, on the order of calculus, invented by Isaac Newton and Gottfried Leibniz independently in the late seventeenth century. The ancient Greek mathematicians had worked on the problem of squaring the circle of radius r, i.e., constructing a square whose area equals the circle’s area πr², and had difficulty with the concepts of the infinite and the infinitesimal, which Newton and Leibniz used adroitly in their calculus. With calculus we can derive the value of π to any accuracy we desire after 3.14. For AI we need a breakthrough concept like calculus, or like probability and quantum mechanics, which allow us to deal with random events, previously thought to be inaccessible to reasoning, left to gods and astrologists.
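As a small taste of what calculus bought us, Leibniz’s own alternating series pins π down one term at a time. The short program below is an illustrative sketch, not a practical method; the series converges slowly, but each added term tightens the estimate.

```python
# Leibniz's series, a fruit of the calculus:
#   pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# Summing n terms gives pi to within about 4/(2n+1).
def pi_leibniz(n_terms):
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(pi_leibniz(1_000_000))  # roughly 3.14159
```

With a million terms the estimate already agrees with π through the digits 3.14159, exactly the kind of “accuracy after 3.14” the Greeks could not reach without the infinite and the infinitesimal.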
Ever since John von Neumann and Alan Turing invented the modern stored-program computer at the end of WWII, researchers have been hammering away at the problem of machine perception with a variety of tools: statistics, inference theory, estimation theory, correlation, regression, cluster analysis, information theory, syntactic theory, linear programming, Bayesian decision theory, matrix theory, perceptron theory (discriminant functions), neural nets, and just plain heuristic programming. Neural nets made a buzz for a while—layers of elements trained to receive and recognize patterns, evolving somehow into a perceiving device. Upon analysis, the early ones turned out to be implementations of linear discriminant functions, Rosenblatt’s old perceptron from the fifties.
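Rosenblatt’s perceptron, the linear discriminant just mentioned, is simple enough to sketch in full. The training data here (the logical AND function) is invented for illustration; any linearly separable set would do.

```python
# Rosenblatt's perceptron: a linear discriminant w.x + b trained by
# error correction. Each mistake nudges the weights toward the correct
# side of the decision boundary.
def train_perceptron(samples, labels, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Classify with the current linear discriminant.
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            # Update only when the prediction is wrong (err is -1 or +1).
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]  # logical AND: separable by a single straight line
w, b = train_perceptron(X, y)
preds = [1 if w[0] * a + w[1] * c + b > 0 else 0 for a, c in X]
print(preds)  # [0, 0, 0, 1]
```

The perceptron convergence theorem guarantees this loop finds a separating line when one exists; what it can never do is learn a pattern, such as exclusive-or, that no single line separates—the limitation that stalled the field for years.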
In the fifties, popular books came out with titles such as Giant Brains, Thinking Machines, and Intelligent Computers. A classic science fiction film, The Day the Earth Stood Still, appeared in 1951 with an elegant humanoid alien, played by Michael Rennie, who landed with his saucer spaceship in the company of a huge, intelligent, powerful, but silent robot; the alien chastised humans for their violent ways and warned us we would be destroyed at the hands of the robot, for the sake of a peaceful galaxy, if we didn’t mend our ways. The robot was irrevocably programmed to destroy violent aggressors anywhere. Now, that robot required very astute discriminant functions in its programming.
I have my doubts about giving a robot, even a very intelligent robot, so much power and discretion. I can appreciate, though, the great value of really smart machines for scientific exploration and engineering design. Many of the product designs we have today would not be possible without modern computers, and more sophisticated computers are in turn designed with the help of existing ones. In the same way, once we have a machine that can perceive well, we can use it to develop even better perceptual devices. We end up with a rapid evolution of machine intelligence, leading to a super-intelligent artifact, as I. J. Good predicted.
Our intelligence is limited by the size of our skulls, which was limited by human evolution to the size of the opening through which humans pass to be born. Since neurons, for a good reason I’m sure, don’t reproduce as a rule after birth, ten billion or so had to get packed in the skull within convolutions of the brain. But even convolutions can serve only so far. By comparison, a computer brain has no such limitations; it can grow indefinitely in its evolution. With integrated circuits that means a lot of transistors even in a small chip. A supercomputer is designed with many chips working together to process information.
Once we have programmed or otherwise equipped a supercomputer with perceptual ability, it will visualize patterns, structures of sensory inputs or any other data, in any number of dimensions. We humans can see patterns, figures, and structures in two dimensions quite well, and in three dimensions with some difficulty. But many data environments have more than three dimensions: the economy, for example, depends on hundreds of variables. Economists linearize these variables and use matrices to calculate outcomes, but the economy is not linear. We don’t really know its shape. A very intelligent machine would be able to see the structure of the economy and make much better predictions.
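The linearization economists lean on can be shown in its simplest form: fitting a straight line to data by least squares. The numbers below are invented for the example; a real economy has hundreds of interacting, non-linear variables that no single line captures.

```python
# Ordinary least squares for one variable: fit y = a*x + b by the
# closed-form solution a = cov(x, y) / var(x), b = mean(y) - a * mean(x).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical data: interest rate (%) vs. housing starts (thousands).
rates = [3.0, 3.5, 4.0, 4.5, 5.0]
starts = [1500, 1430, 1380, 1290, 1240]
a, b = fit_line(rates, starts)
print(a, b)  # slope and intercept of the fitted line
```

The fitted line summarizes the data as one slope and one intercept; with hundreds of variables the same idea becomes a matrix equation, and the machine that could see past the linear approximation to the true shape of the data would be the perceiver this essay is after.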
Much the same situation exists in every science, such as medicine. Doctors do their best to diagnose an illness given the symptoms you describe, lab tests, a physical examination, and your health history. The human body and mind, however, are much more complex and non-linear than doctors can perceive. Today’s computers can help with the diagnosis and recommend a treatment, if a doctor wants to use available programs, but our computers are not great perceivers of patterns so far. The doctor prescribes some medicine or surgery, but that may not work because of side effects not taken into account by the model of the disease assumed by medical science.
In every field of science, from physics and chemistry to biology, sociology, and psychology, intelligent computers would enable us to make rapid and fundamental discoveries, greatly accelerating our scientific progress. Food production would move to factories, instead of farms subject to the vagaries of the weather. Space exploration would propel humanity to other planets near and far from Earth. Our lifespans would be extended indefinitely, allowing us to create more, explore farther, and solve many fundamental problems: war, poverty, disease, and death itself.