Few areas of computer science have, over the years, repeatedly generated as much interest, promise, and disappointment as artificial intelligence. The manufacturing industry, the latest target application area of “AI”, now places great hopes on AI for predictive maintenance. Will AI deliver this time, or is disappointment inevitable?
In engineering, the development of AI was arguably driven by the need for automated analysis of image data from air reconnaissance (and later satellite) missions at the height of the Cold War in the 1960s. A novel class of algorithms emerged that propagated errors back through layers of simple interconnected units (back-propagation) to force the convergence of input data towards previously undefined output clusters. For the first time, these algorithms, dubbed “neural networks”, could develop a decision logic of their own based on training input, outside the control of a (human) designer. The results were often spectacular, but occasionally spectacularly wrong: since the learnt concepts could not be inspected, they could not be validated either, leaving the systems “untraceable”; their failures could not be explained.
In the early days, the computational complexity of these algorithms often exceeded the processing power of contemporary computer hardware, at least outside of classified government use. Applying AI to solve real problems proved difficult; virtually no progress was made for more than a decade, a period later referred to as the first “AI Winter”, presumably in analogy to the “Nuclear Winter” and in keeping with the themes of the time. Engineers were forced to wait for Moore’s law (which stipulated that processing power doubles every 1.5 years, a law that held through much of the second half of the 20th century) to catch up with the imagination of 1960s mathematicians…