A very popular and somewhat comical remark is that everything that has already been achieved in the field of AI is, at best, an imitation of intelligence, and that what has eluded mechanization has done so because its mechanization is, in fact, impossible; except perhaps in some far-off future — but probably not even then. Only this distant, unlikely and perhaps undesirable AI would merit being considered actually intelligent. When some breakthrough that was thought impossible occurs against all odds, the speed at which the skeptics move the goalposts is startling. Recall how, suddenly, everybody had always expected that a computer would master Go. Apparently, as of 2016, Go is really nothing special, whereas in 2015 a computer Go champion was a century if not a millennium away. Some have learned nothing from the story of chess. This goes not only for laymen but even for some top experts in the field.
There are some other false dichotomies in and around the topic of AI. “That which is simple for humans is difficult for a machine, and that which is difficult for humans is simple for a machine” used to be a fashionable aphorism. As walking is easy for humans but not for machines, so is the multiplication of large numbers easy for a machine but difficult for a human. Lately this belief has fallen out of fashion with the rise of the artificial neural network. And we ought to ask ourselves where tasks that are hard for both humans and machines fit within this paradigm.
Another popular notion is that while machines are digital, we humans are analogue and therefore superior. Retro fashion? Penrose caused quite a stir with his assertion that only quantum computers can be made intelligent since, after all, people are quantum and not classical computers. This claim still echoes, nested within the sillier claim that consciousness is a quantum phenomenon.
Fine, a house too is a thing of quantum mechanics; a human or a computer is likewise nothing but a cloud of quarks and electrons. So what?
Concerning the much-extolled property of being analogue and quantum, two things have recently been shown. First, an analogue computer made of soap film and bent wire is not always able to find the shortest network connecting several cities: it settles into some locally minimal configuration, which need not be the globally shortest one. Nor do other analogue machines achieve the speed or precision of their digital counterparts. Second, it has become clear that, for reasons quite similar to the analogue case, quantum computers cannot significantly outperform classical, digital computers. It would change nothing even if Google and NASA spent their whole budget on them.
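The soap-film point can be made concrete. A soap film stretched between pins at four “cities” relaxes into a Steiner tree whose junctions meet at 120°, but there are typically several such equilibria, and the film may settle into one that is not the shortest. The rectangle, coordinates, and both tree topologies below are a hypothetical illustration, not taken from the text; the closed-form junction positions follow from the 120° condition.

```python
import math

SQ3 = math.sqrt(3)
w, h = 1.2, 1.0  # four "cities" at the corners of a w-by-h rectangle

def total_length(edges):
    """Sum of Euclidean lengths of a list of (point, point) edges."""
    return sum(math.dist(a, b) for a, b in edges)

# Topology A: the bridge between the two Steiner junctions runs
# parallel to the long side; junctions sit at height h/2, offset
# h/(2*sqrt(3)) from each short side (the 120-degree condition).
s1, s2 = (h / (2 * SQ3), h / 2), (w - h / (2 * SQ3), h / 2)
A = total_length([((0, 0), s1), ((0, h), s1), (s1, s2),
                  (s2, (w, 0)), (s2, (w, h))])

# Topology B: the bridge runs parallel to the short side instead.
t1, t2 = (w / 2, w / (2 * SQ3)), (w / 2, h - w / (2 * SQ3))
B = total_length([((0, 0), t1), ((w, 0), t1), (t1, t2),
                  (t2, (0, h)), (t2, (w, h))])

# Both networks are stable soap-film equilibria (all junctions at
# 120 degrees), but only A is the globally shortest tree.
print(f"topology A: {A:.4f}")  # w + h*sqrt(3) ~ 2.9321
print(f"topology B: {B:.4f}")  # h + w*sqrt(3) ~ 3.0785
```

A film that happens to form topology B has found a local minimum of surface energy and will stay there; nothing in the physics drives it toward the shorter network A, which is exactly why such analogue devices do not reliably solve the optimization problem.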
In this context the long-prevalent, not entirely silent assumption that man is a trans-Turing machine bears mentioning. The idea is that, since Gödel’s incompleteness theorem allegedly does not apply to humans, we are able to tackle arbitrarily complex mathematics, whereas a computer is forever trapped in Turing’s cage.
Perhaps a wiser grouping of smart programs would be into those that mostly wrote themselves and those that were primarily the work of a human. It used to be the case that all AIs (even the fake ones) were entirely man-made. But, as we established previously, the era of the expert system is ending — even the invention of algorithms is beginning to be automated.
Moving on. There exist mathematically and logically founded algorithms and programs for which we, at least in theory, know why they work and what they do. Conversely, there are also experimental constructions for which nobody can quite explain how or why they should work. While all this may sound like charlatanry, consider that the Wright brothers were also charlatans, discovering aerodynamics only as they developed the airplane. So too did the makers of the steam engine discover thermodynamics as they went along. Simply put, sometimes practice outpaces theory, and the unlearned charlatans are only retroactively rehabilitated into pioneers and visionaries. Provided, of course, that they actually invent something, which, truth be told, they usually don’t.
Of all bifurcations of AI, the only important and crucial one is this: the human and the superhuman. Recognizing faces and telling cats from dogs is within anyone’s reach. However, there are centuries-old mathematical, scientific and technological problems that have eluded every genius humanity has produced. When machines begin to solve these, it will undoubtedly be a huge milestone in the evolution of AI.
In fact, this milestone may have already been passed. For example, the heavy lifting in the proof of the four color theorem was done by a computer. There are perhaps twenty instances of such (minor) breakthroughs, where man and machine solved a hitherto impenetrable problem in collaboration. This has been going on for a couple of decades now, though admittedly on a relatively small scale.
But when the day comes that a prominent and difficult problem is solved, and an AI has played an unambiguously crucial role …
Then “never” will become “it was only a matter of time, but this is not the real thing”. Well, we shall see who will be the first to introduce such a machine.