Some years ago, I was involved in developing artificial intelligence (AI) expert systems. I built expert systems to troubleshoot failures in highly engineered systems such as the General Electric T700 turboshaft engine, a commercial high-volume photocopier, a blood chemistry analyzer, and other similarly complex systems that are difficult to diagnose and repair.
Xerox Corp. was looking for an artificial intelligence solution to support its field service operations. The finalists were my company and another diagnostic expert system company that used similar AI technology. Unable to determine which system offered the better solution, Xerox decided to conduct a rigorous and objective evaluation by holding a double-blind face-off between the two expert systems.
The assessment was to be made by comparing the two expert systems' ability to diagnose and prescribe a corrective action for hardware failures in a complex electromechanical device comprising a large number of solenoid actuators, electrical clutches, optical sensors, and limit switches. The operation of this system required highly accurate timing: an intermittently slipping clutch or a slow solenoid could cause the entire system to jam or stop working altogether.
Each of the competing companies was given the system's schematics and theory-of-operation documentation, as well as equal access to a subject matter expert. A developer from the competing company and I were sequestered in one of Xerox's technical centers in the Rochester, NY area for five days, each building an expert system from exactly the same information.
The actual face-off was conducted by inducing hardware and electrical malfunctions in a Xerox machine. Service technicians (who did not know which problem had been induced) used the expert systems to troubleshoot and fix the problems.
At the end of the daylong session, my system had correctly diagnosed and guided the repair of more faults than the competing system. Just as importantly, in the few instances in which the expert system could not reach a conclusive diagnosis, it ranked the suspected subsystems and parts in order of the probability of their being the root cause of the failure. The competing system simply gave up.
As the two competing companies were roughly the same size and their software licensing terms were comparable, the reader might assume that we won the contract. We didn't.
Unbeknownst to us, there was a silent third competitor. A team from Xerox's product engineering department had created an expert system mockup that mimicked the behavior of the mature expert systems but offered a different knowledge acquisition and representation paradigm. It was a concept rather than a working system, but, if successful, it would have been much easier to build than the two competing systems.
So the internal project team got the project. But their system never got off the ground.
As we used to say in those days: the opposite of Artificial Intelligence is Natural Stupidity.
Image: Laughing Fool (possibly by Jacob Cornelisz van Oostsanen, ca. 1500)
This blog article was originally published in April 2015. Updated March 2017.