You can find yourself in the throes of some surprisingly contentious discussions depending on when and how you use the terms “machine learning” (ML) and “artificial intelligence” (AI). Rather than weigh in on some of the thornier elements of those debates, this article will provide you with some basic facts about the terminology and the relationship between the two terms.
Of the two, AI is the broader concept. Very generally, it encompasses any capacity of a computer to behave like a human. As that sounds, this covers a tremendous amount of surface area. Humans, of course, can see, reason, remember, read, learn, listen, organize, evaluate, judge, feel, and so much more. AI technically captures all of this.
Some of this human-like activity is already part of our daily lives. For example, AI technology is used in personal assistants such as Siri or Alexa to listen to human speech, evaluate its meaning, and respond. Computers are part of high-tech manufacturing assembly lines, monitoring processes and spotting or predicting problems and opportunities for improvement. Many other dimensions of human intelligence have yet to be developed in computers, however. Computers are not, for example, expected to feel remorse or love… yet.
ML sits under the umbrella of AI. Specifically, it refers to a computer’s ability to learn about something from data, without the need for a human to program that learning explicitly into the computer. This, too, is a broad concept, but it provides slightly more specificity about the type of problem a machine is being used to solve and the manner in which it is solving it.
ML is often a natural extension of the reliance on technology to supplement (or even replace) human intuition or intelligence to perform important tasks. For example, a human may once have manually reviewed hundreds of incoming job applicants’ resumes. That human likely relied on years of domain experience to inform their judgments about the likelihood that a job candidate would be successful, based upon their resume. Later, computers were deployed to automate some of this process, and they were explicitly programmed with the intent to transfer some of that human’s knowledge into computer code: spotting keywords, ranking certain kinds of job history, and so on. Finally, ML distinguishes itself from this prior computer support by learning for itself which characteristics of a resume are most likely to yield a successful employee. It does this without the benefit of years of human domain expertise, or having that expertise programmed in; rather, it relies on a vast amount of prior data on historic job applicants’ resumes and their success (or not) in the job.
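To make the contrast concrete, here is a minimal sketch of that "learning from data" idea: a tiny perceptron that learns its own weights for two hypothetical resume features from labeled past outcomes, rather than having a human hand-code rules. The features, data, and labels are all invented for illustration; real resume screening would involve far richer data and, it should be noted, serious fairness concerns.

```python
# A minimal sketch of learning from data. Instead of a human writing
# rules ("5+ years of experience is good"), the program derives weights
# from historic examples. All features and data below are hypothetical.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn per-feature weights and a bias from labeled examples."""
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            predicted = 1 if score > 0 else 0
            error = y - predicted  # -1, 0, or +1
            # Nudge weights toward correct answers on misclassified examples
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Hypothetical historic data: [years_of_experience, relevant_keyword_count]
past_resumes = [[8, 5], [6, 4], [1, 0], [2, 1], [7, 3], [0, 1]]
outcomes     = [1,      1,      0,      0,      1,      0]  # 1 = successful hire

weights, bias = train_perceptron(past_resumes, outcomes)
print(predict(weights, bias, [9, 4]))  # 1: resembles past successes
print(predict(weights, bias, [1, 1]))  # 0: resembles past non-successes
```

The key point is that nothing in the code says which feature matters or how much: the weights come entirely from the historic examples, which is exactly what separates ML from the explicitly programmed keyword-spotting that came before it.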
Here, a machine has used data to learn about how to perform a task. That’s machine learning, and it’s one subset of artificial intelligence.