
Artificial intelligence vs. machine learning: battle of the minds

  • Posted on January 24, 2017

In the last year, I’ve been wrestling with two competing approaches to leveraging big data: artificial intelligence (AI) and machine learning (ML). After all, there’s nothing exciting about big data unless you have a mechanism to process it. After a few weeks of intensive research, I have realized that everyone’s definition of everything is at odds with everyone else’s, and the ability to construct a meaningful dialogue around these two technologies is stymied by both neo-religious imperatives and philological differences rooted in etymological psychosis. Also, the difference between the two is not just pathologically insipid, but boring as well. Consequently, I will try to break down how I see the world unfolding in the next ten years in language that does not render itself beholden to either vendor frenzy or academic delight. You’re welcome.

But first we must texture the conversation with some background. Let’s start with definitions as they relate to big data:

Artificial Intelligence: AI’s goal is to understand neural function and imitate the brain’s ability to learn from a given set of circumstances. The brain is, in many cases, basically a Rube Goldberg machine: we use a limited set of evolutionary building blocks to reach a complex end state.

Back in the 1980s, AI was a popular field among scientists: figure out how experts do what they do, reduce those tasks to a set of rules, and program computers with those rules, effectively replacing the experts. Researchers wanted to teach computers to diagnose disease, translate languages, even figure out what we wanted but didn’t know ourselves. Fundamentally, AI seeks to construct the ability to reason within a system.

It didn’t work.

Classical AI absorbed hundreds of millions of VC dollars before being declared a failure. The problem was that we simply didn’t have enough cost-efficient compute power to accomplish those goals. But thanks to MapReduce and the cloud, we have more than enough computing power to do AI today.

Machine Learning: ML is actually a group of technologies, spanning computational statistics, algorithm design, and mathematics, that combs through data to identify patterns that can be translated into knowledge. An initial set of instructions is provided to a system, which then generalizes that information and finds/extrapolates patterns in order to apply them to new problems.

ML evolved from AI. Once the challenges of AI became apparent and seemingly insurmountable, theorists looked for a more tailored approach to inductive, computational decision making. There are various permutations of this model. Vendors have labeled their systems “machine intelligence” or “supervised learning.” For the exceptionally nerdy, feel free to investigate Bayesian networks.

Besides being easier to construct, machine learning has explicit benefits: ML starts with a defined problem and a set of rules that describe the appropriate analysis of a given set of data.
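
To make that concrete, here is a minimal sketch of the pattern in plain Python: a defined problem (assign a label to a new data point), a small set of labeled data, and a simple rule (nearest neighbor) that generalizes from known examples to an input it has never seen. The data and labels are hypothetical, purely for illustration.

    # Hypothetical training data: (weight in grams, diameter in cm) -> fruit label.
    training_data = [
        ((150, 7.0), "apple"),
        ((160, 7.5), "apple"),
        ((120, 6.0), "orange"),
        ((115, 5.8), "orange"),
    ]

    def nearest_neighbor(train, query):
        """Return the label of the training example closest to the query point."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, label = min(train, key=lambda example: dist(example[0], query))
        return label

    print(nearest_neighbor(training_data, (155, 7.2)))  # extrapolates to "apple"

The “instructions” here amount to nothing more than the distance rule; the pattern itself is carried entirely by the data, which is what lets the same code apply to new inputs.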

Similarities:

  • Iterative algorithms: Although the mechanisms differ, iterative learning is a key component of both ML and AI. In ML, iterative learning is the process of defining uncertain boundaries from a described set of parameters. In AI, iterative learning takes place through non-linear and typically time-varying sequences that manage input for describing an aspirational trajectory.
  • More data, faster iterations: The system itself constructs and/or optimizes algorithmic models from learning.
  • Applied where explicit algorithms are infeasible (spam filtering, OCR, computer vision). Essentially, when the definition of an object, an act, or anything else cannot be explicitly described mathematically, a more complex set of instructions (usually probabilistic modeling) is required for analysis and understanding. A toy probabilistic sketch follows this list.
  • Both systems rely upon “learning to learn” based on inductive reasoning: Again, the process differs greatly between the two, but the net result is rooted in the ability of a system to learn from experience. That learning can be explicit and guided (typically machine learning), or it can be based on a set of a priori knowledge and extrapolated by the system itself (typically artificial intelligence).
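
The spam-filtering case above deserves the promised sketch. Below is a toy, naive-Bayes-style filter in plain Python; the article names no implementation, so treat this as an illustrative assumption. No explicit rule defines “spam”; instead, word frequencies from a handful of hypothetical labeled messages yield probabilities that a new message is scored against.

    from collections import Counter
    import math

    # Hypothetical labeled messages, for illustration only.
    spam_docs = ["win money now", "cheap money offer", "win cheap prizes"]
    ham_docs = ["meeting agenda for monday", "lunch on friday", "project status update"]

    def word_counts(docs):
        counts = Counter()
        for doc in docs:
            counts.update(doc.split())
        return counts

    spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
    vocab_size = len(set(spam_counts) | set(ham_counts))

    def log_score(message, counts, total, prior=0.5):
        # Log-probability of the message under one class, with add-one smoothing.
        score = math.log(prior)
        for word in message.split():
            score += math.log((counts[word] + 1) / (total + vocab_size))
        return score

    def classify(message):
        spam = log_score(message, spam_counts, spam_total)
        ham = log_score(message, ham_counts, ham_total)
        return "spam" if spam > ham else "ham"

    print(classify("cheap money"))        # scores as "spam"
    print(classify("status for monday"))  # scores as "ham"

The point is the one made in the bullet: nothing in the code describes what spam looks like mathematically; the probabilistic model is what does the recognizing.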

The Case for Artificial Intelligence:

  • The goal of AI is reasoning. Not only must the system identify what needs to be computed, but also how it must be computed, even under uncertainty. A tall order; I know a lot of people who lack this capability.
  • Neural network based: The most common mechanism for implementing AI is the artificial neural network. Basically, interconnected neurons at various levels of abstraction weigh the value of observed or calculated behaviors and tune the algorithm driving the system through non-linear functions of the inputs. A minimal forward-pass sketch follows this list.
  • Learning at different levels of abstraction: AI algorithms apply parameterized transformations to a signal as it propagates from the input layer to the output layer. Each transformation is a processing unit with trainable parameters, such as weights and thresholds. The number of layers reflects the complexity of the network, and the values of those parameters encode the logic behind it.
  • Looking for shifts in variance in order to recognize change: In neural networks, there is a mechanism to recognize the actual shift in variance as well as the magnitude of the change. Ultimately, this is one of the cornerstones of AI; the ability to quantify variance and extrapolate its effect is fundamental to the premise of AI.
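
As a concrete companion to the neural-network bullets above, here is a minimal forward pass in plain Python. The layer sizes, weights, and biases are invented for illustration; in a real network they would be the trainable parameters that learning adjusts.

    import math

    def sigmoid(x):
        # The non-linear function applied to each weighted sum.
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weights, biases):
        """One parameterized transformation: weighted sum per unit, then non-linearity."""
        return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    # Two inputs -> hidden layer of three units -> one output unit.
    hidden_w = [[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]]
    hidden_b = [0.0, 0.1, -0.1]
    output_w = [[0.7, -0.2, 0.4]]
    output_b = [0.05]

    signal = [0.9, 0.1]                         # input layer
    hidden = layer(signal, hidden_w, hidden_b)  # first transformation
    output = layer(hidden, output_w, output_b)  # second transformation
    print(output)                               # a single value between 0 and 1

Each call to layer() is one of the parameterized transformations described above; stacking more of them is what produces learning at different levels of abstraction.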

The Case for Machine Learning:

  • ML focuses on the prediction of unknown properties based upon known properties, which in turn are based upon a probability distribution. This implies two key objectives:
    • The goal is problem solving: identifying and/or solving a problem using a given set of data, with defined inputs and supervised error back-propagation (a minimal training sketch follows this list).
    • ML is not looking for the ability to think; rather, it is looking for the ability to do what a thinking entity can do.
  • There are two classes of functions as they relate to ML: those that can be learned and those that cannot. This is established by modeling the complexity associated with the feed-forward control system, which is responsible for instantiating a control signal from the source to the external environment. This suggests a level of understanding around not just a system disturbance, but the anticipated change in a system based upon mathematical modeling. Feedback systems, conversely, change control signals reactively, as they are rooted in environmental responses to signals. An inability to rationalize behavior with these systems renders a function something that cannot be learned, and therefore outside the capability of a machine learning environment.
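
As promised above, here is a minimal sketch of supervised, error-driven training in plain Python: a single sigmoid unit fitted to a tiny data set by gradient descent, which is the one-layer case of back-propagation. The data set (logical OR), learning rate, and iteration count are hypothetical.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Known properties (inputs) paired with the known label we want to predict.
    data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]

    weights, bias, rate = [0.0, 0.0], 0.0, 0.5
    for _ in range(1000):
        for inputs, target in data:
            prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - prediction           # how far off the output is
            for i, x in enumerate(inputs):        # push the error back into the weights
                weights[i] += rate * error * x
            bias += rate * error

    predictions = [round(sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias), 2)
                   for inputs, _ in data]
    print(predictions)  # the predictions converge toward the known labels

This is the defined-inputs, defined-problem loop the bullet describes: the system never reasons about why OR behaves as it does; it simply does what a thinking entity could do given the same examples.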

One interesting caveat as it relates to ML is the debut of quantum machines in the learning process. Quantum computing theoretically provides an infinite number of feed-forward systems for a given environment. One can deduce that quantum learning is not deterministic in nature; the fundamental compute element of quantum computing (the qubit) creates a logic model that is fuzzy. This may provide the most creative problem-solving process under the known laws of physics. Currently, it is being used for image recognition, but future implementations contemplate training probabilistic models on otherwise intractable calculations, which ultimately requires coarse approximations.

However, in either case:

  • You end up with a result that is optimized. You don’t necessarily understand how you evolved it, but the results speak for themselves. You can only understand the process and the interfaces.
  • Locus of learning shifts from product to process: you don’t tweak the end product; you tweak the process. Essentially, you are searching for the equity of irreducibility; since you cannot describe the elemental state of an entity, you must leverage its effect upon the system through mechanistic means.
  • Both of these models go beyond engineering: we attempt to create more than we can understand.

So, where to invest? Ultimately, it depends upon your business need. ML offers a number of business-friendly tools that make it the go-to solution for most corporate efforts: it can be bound by known technologies; it can solve specific problems; and problems that emerge can be redressed by tweaking the inputs, the data organization, or the value of the outputs. In fact, this is the reason AI fell out of favor back in the 80s: there was no way to abstract the ability of an expert into a set of rules that could be extrapolated to more complex scenarios.

However, I believe that the future is AI. The ability to reason holds an intellectual and emotional capacity that supersedes the business imperative. It is what we as programmers and theorists and scientists have struggled for since the founding of statistical analysis: how do we create a machine that can answer questions about the stuff we don’t know? IBM has invested heavily in the space, as has Microsoft, Tesla, Oracle, Google, every healthcare company on the planet, most of the banks, and every tech-savvy entrepreneur looking to figure out how to make more money than the next guy.

If you’re a business, invest in machine learning.  It will do you good.  But if you’re a visionary, go for artificial intelligence.  You may change the world. And I’ll be there to cheer you on.
