The Problem with Machine Learning

Machine learning is widely perceived as having gotten its start with chess. When the skill of the program exceeded the skill of the programmer, the logic went, you had created machine learning: the machine now had capabilities the programmer did not. Of course, this is something of a fiction. Massive calculation capability alone does not mean profound learning is occurring, although it might reveal learning.

In the meantime, machine learning has done some really interesting things, such as driving cars, recognizing faces, identifying spam email, and even letting robots vacuum the house.
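
To make one of these concrete: identifying spam is a classic example of a problem that has been given enough structure for a program to learn from. The sketch below is not any production spam filter; the toy messages, the labels, and the choice of scikit-learn's bag-of-words vectorizer and naive Bayes classifier are all assumptions made for illustration.

```python
# Minimal spam-filter sketch. The messages, labels, and model choice are
# invented for illustration; real spam filters are far more elaborate.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now",    # spam
    "Lowest price, act fast",  # spam
    "Meeting moved to 3pm",    # not spam
    "Lunch tomorrow?",         # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Turn free text into a structured word-count matrix the model can use.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

# Fit a simple probabilistic classifier on that structured representation.
model = MultinomialNB()
model.fit(X, labels)

print(model.predict(vectorizer.transform(["Claim your free prize"])))
```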

The core problem for machine learning is structure. Machine learning does well in structured environments, but not so well in unstructured ones. Driving a car is a great example: self-driving systems did poorly until the things they could not identify, such as subtleties in the driving surface or the significance of certain objects, were defined in a way the program could deal with. The underlying problem is structuring a problem so that the program can analyze and resolve it.
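
As a toy illustration of that point, notice how much of the "learning" below happens before the model ever sees data: a person chooses the features, encodes the situations, and supplies the answers. The feature names, values, and the scikit-learn decision tree here are assumptions for the sketch, not a description of any actual driving system.

```python
# Toy sketch: the model only learns within the structure a human provides.
# Features, labels, and the decision-tree choice are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hand-chosen structured features:
# [surface_is_wet, obstacle_ahead, obstacle_is_moving]
X = [
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
]
# Hand-chosen answers: 0 = maintain speed, 1 = slow down
y = [0, 0, 1, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)

# The program can now decide -- but only about situations we already structured.
print(model.predict([[1, 1, 0]]))
```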

Consider the way people learn and the way machines learn. A child starts learning in an unstructured way. It takes a long time for a child to learn how to speak or walk, and each child learns at their own rate based on their environment and unique genetic makeup. Only once they have an unstructured basis for learning do we add a structured learning environment: school. With machines we provide the structured environment first, and then hope they can learn the subtleties of a complex world.
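
There is a rough machine-learning analogue of that ordering: letting a model find its own groupings in unlabeled data first, then adding a small amount of structured, labeled teaching afterward. The sketch below is only an analogy, using synthetic data and scikit-learn's k-means and logistic regression as stand-ins; nothing in it comes from the article itself.

```python
# Analogy only: "unstructured" learning (clustering unlabeled data) followed
# by a small "structured" lesson (a few labeled examples). Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Unlabeled experience: two loose blobs of 2-D points.
unlabeled = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 2)),
    rng.normal(5.0, 1.0, size=(100, 2)),
])

# Phase 1 (no labels): the model forms its own groupings.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(unlabeled)

# Phase 2 ("school"): a handful of labeled examples, with the discovered
# cluster id appended as an extra feature.
points = np.array([[0.1, -0.2], [4.8, 5.1], [-0.5, 0.3], [5.2, 4.7]])
labels = [0, 1, 0, 1]
features = np.hstack([points, kmeans.predict(points).reshape(-1, 1)])

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```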

There is an excellent paper by Pedro Domingos of the University of Washington looking at the growth areas of machine learning. One observation is that when humans make a new discovery, they can create language to describe the new concept. Humans can also compare concepts, seeing the parallels between situations and applying skills learned in one area to another. An example of this kind of learning is the financial industry bringing in physicists to create the mathematical models behind high-speed computer trading.
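
The physics-to-finance example is worth dwelling on: what gets transferred is not a fitted model but the human recognition that the same modeling idea applies in a new domain. The sketch below is a loose illustration with synthetic data; the numbers and the use of a simple linear fit are assumptions for illustration, not drawn from Domingos's paper.

```python
# The same model family (a linear fit) reused in two unrelated domains,
# because a person recognized the analogy. Data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Domain 1 (physics-flavored): distance fallen vs. time squared.
t = rng.uniform(0, 10, size=50)
distance = 4.9 * t**2 + rng.normal(0, 5, size=50)
physics_model = LinearRegression().fit((t**2).reshape(-1, 1), distance)

# Domain 2 (finance-flavored): price change vs. a trading signal.
signal = rng.normal(0, 1, size=50)
price_change = 0.8 * signal + rng.normal(0, 0.2, size=50)
finance_model = LinearRegression().fit(signal.reshape(-1, 1), price_change)

# The model class carried over; recognizing that it could is the human part.
print(physics_model.coef_, finance_model.coef_)
```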

Structured machine learning models are making progress in solving some interesting problems, like those mentioned previously. The approaches mostly add more layers of complexity to the way problems are analyzed and resolved. Indeed, the future of machine learning lies not in the volume of data but in the complexity of the issues to be studied. The world is a complex place, and human understanding of it combines literal and intuitive approaches. The intuitive is the ability to reach across domains of knowledge, extend understanding to new areas, and make qualitative judgments.
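
To make "more layers of complexity" slightly more concrete, the sketch below compares a shallow and a deeper multi-layer network on a small synthetic dataset. The dataset, the layer sizes, and the use of scikit-learn's MLPClassifier are assumptions for illustration only; no claim is made here about which depth wins in general.

```python
# Shallow vs. deeper network on the same toy problem. Dataset and layer sizes
# are invented for illustration; accuracy is measured on the training data
# only, to keep the sketch short.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

shallow = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
deeper = MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=2000,
                       random_state=0)

shallow.fit(X, y)
deeper.fit(X, y)

print("shallow accuracy:", shallow.score(X, y))
print("deeper accuracy: ", deeper.score(X, y))
```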

Because machines are literal by nature, programming has likewise been very literal. Machine learning models are getting far more sophisticated in terms of complexity. Still, the challenge of creating a structured tool (machine learning) to tackle an unstructured world may be a problem that will never be entirely resolved until we restructure the machine.

Perhaps we could create a machine with a base level of “instinct”, enough to allow it to perceive the world around it, and then let it learn. If we were to create a machine that spent a few years observing the world and building its own basis for understanding the universe, what kind of intelligence would be created? Would we be able to control it? Would it provide any useful service to us?