"Machine Learning (ML)" and "Traditional Statistics(TS)" have different philosophies in their approaches. With "Data Science" in the forefront getting lots of attention and interest, I like to dedicate this blog to discuss the differentiation between the two. I often see discussions and arguments between statisticians and data miners/machine learning practitioners on the definition of "data science" and its coverage and the required skill sets. All is needed, is just paying attention to the evolution of these fields.
There is no doubt that when we talk about "Analytics," both data mining/machine learning and traditional statisticians have been a player. However, there is a significant difference in approach, applications, and philosophies of the two camps that is often overlooked.
What is ML?
ML is a branch of Artificial Intelligence (AI). AI focuses on understanding intelligence and how to replicate it in machines (systems or agents). ML aims at the automatic discovery of regularities in data through the use of computer algorithms, and the generalization of those regularities to new but similar data. Its main focus is the study and design of systems that can "learn from data," and its preferred approach is inductive learning (learning by examples). ML is not the same as "data mining" or "predictive analytics," which are practices; rather, it is a core part of both.
ML's roots started in the 1950s, and many startups formed in the late '80s and early '90s with applications such as real-time fraud detection, character recognition, and recommendation systems that became commercially successful (first-generation ML systems). ML is also closely related to "Pattern Recognition" (PR). While ML grew out of computer science, pattern recognition has engineering roots. The two, however, are facets of the same field, where the focus in both is learning from data. Today, the resurgence of ML is the driver of the next big wave of innovation.
ML Application Variety
Data mining and predictive analytics
- Fraud detection, ad placement, credit scoring, recommenders, drug design, stock trading, customer relationship & experience, …
Text processing & analysis
- Web search, spam filtering, sentiment analysis, …
Graph mining
Other:
- Speech recognition, human genome, bioinformatics, optical character recognition (OCR), face recognition, self-driving cars, scene analysis, …
ML Community/Practitioners
• Typically computer science and/or engineering background
• More programming savvy
• Not confined to a single tool
• Open-source friendly
• Rapid prototyping of the ideas/solutions desired
ML vs. Traditional Statistics
Historically, ML techniques and approaches have relied heavily on computing power. TS techniques, on the other hand, were mostly developed at a time when computing power was not an option. As a result, TS relies heavily on small samples and on strong assumptions about data and its distributions.
ML, in general, makes fewer prior assumptions about the problem and is liberal in the approaches and techniques it uses to find a solution, many times using heuristics. The preferred learning method in machine learning and data mining is inductive learning. At its extreme, in inductive learning, data is plentiful or abundant, and often not much prior knowledge exists (or is needed) about the problem and data distributions for learning to succeed. The other end of the learning spectrum is called analytical learning (deductive learning), where data is often scarce, or it is preferred (or customary) to work with small samples of it, and there is good prior knowledge about the problem and data. In the real world, one often operates between these two extremes. Traditional statistics, by contrast, is conservative in its approaches and techniques and often makes tight assumptions about the problem, especially about data distributions.
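To make the contrast concrete, here is a minimal sketch in Python (assuming numpy and scikit-learn, which are my illustrative choices, not tools named in this post). A linear model embodies a tight assumption about functional form; a random forest makes almost none, and the difference shows up when the assumption does not fit the data:

```python
# Contrast: a model with a tight functional-form assumption (linear
# regression) vs. a flexible, assumption-light ML model (random forest),
# both judged empirically on held-out data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)  # nonlinear ground truth

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_train, y_train)        # assumes linearity
forest = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# The flexible learner recovers the nonlinear pattern; the linear model cannot.
print("linear R^2 on test:", linear.score(X_test, y_test))
print("forest R^2 on test:", forest.score(X_test, y_test))
```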
The following table shows some of the differences in approach and philosophy between the two fields:
| Machine Learning (ML) | Traditional Statistics (TS) |
| --- | --- |
| Goal: "learning" from data of all sorts | Goal: analyzing and summarizing data |
| No rigid pre-assumptions about the problem and data distributions in general | Tight assumptions about the problem and data distributions |
| More liberal in its techniques and approaches | Conservative in its techniques and approaches |
| Generalization is pursued empirically through training, validation, and test datasets | Generalization is pursued using statistical tests on the training dataset |
| Not shy of using heuristics in search of a "good solution" | Uses tight initial assumptions about the data and the problem, typically in search of an optimal solution under those assumptions |
| Redundancy in features (variables) is okay and often helpful; preferable to use algorithms designed to handle large numbers of features | Often requires independent features; preferable to use a smaller number of input features |
| Does not promote data reduction prior to learning; promotes a culture of abundance: "the more data, the better" | Promotes data reduction as much as possible before modeling (sampling, fewer inputs, …) |
| Has faced solving more complex problems in learning, reasoning, perception, knowledge representation, … | Mainly focused on traditional data analysis |
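The "generalization" row above is worth making concrete. Below is a minimal sketch of the ML-style workflow (assuming Python with scikit-learn; the dataset and model choices are illustrative): a model is selected on a validation set, and its quality is reported on a test set that played no role in any decision, rather than via statistical tests on the training data:

```python
# Empirical generalization, ML-style: choose a model on a validation set,
# then report performance on a test set never used for any decision.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 60% train / 20% validation / 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Model selection: pick the tree depth by validation accuracy,
# not by a statistical test on the training data.
best_depth, best_acc = None, -1.0
for depth in (2, 4, 8, 16, None):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    acc = model.score(X_val, y_val)
    if acc > best_acc:
        best_depth, best_acc = depth, acc

final = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_train, y_train)
print("chosen depth:", best_depth, "| test accuracy:", final.score(X_test, y_test))
```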
In principle, learning could be achieved by manually writing a program that covers all possible data patterns. This is exhaustive work and is generally impossible to accomplish for real-world problems; in addition, such a program would never be as good or as thorough as a learning algorithm. Learning algorithms learn from examples (as humans do) automatically, and they generalize based on what they learn (inductive learning). Generalization is a key aspect of evaluating the performance of a learner. At the highest level, the most popular learning algorithms can be categorized into supervised and unsupervised types, and each of those into high-level useful categories (also called data mining functions); a short sketch after the lists below illustrates a few of them:
Supervised learning includes:
· Classification: Predicting to which discrete class an entity belongs (binary classification is used the most), e.g., whether a customer will be high-risk.
· Regression: Predicting continuous values of an entity's characteristic, e.g., how much an individual will spend next month on his or her credit card, given all other available information.
· Forecasting: Estimation of macro (aggregated) variables, such as total monthly sales of a particular product.
· Attribute Importance: Identifying the variables (attributes) that are most important in predicting different classification or regression outcomes.
Unsupervised learning includes:
· Clustering: Finding natural groupings in the data.
· Association models: Analyzing "market baskets," e.g., novel combinations of products that are often bought together in shopping carts.
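To ground a few of these functions, here is a minimal sketch (assuming Python with scikit-learn and its bundled iris dataset; the library, dataset, and parameters are my illustrative choices, not part of the original discussion): classification and attribute importance on the supervised side, clustering on the unsupervised side.

```python
# A few data mining functions in one place: classification, attribute
# importance (read off a fitted model), and clustering.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: classification (predict a discrete class).
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Attribute importance: which inputs mattered most for the prediction.
print("feature importances:", clf.feature_importances_)

# Unsupervised: clustering (find natural groupings, no labels used).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])
```

Note that attribute importance is obtained here from a fitted ensemble model; other approaches (e.g., permutation importance) exist and may be preferable depending on the data.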
Statistical Learning Theory
Historically, statisticians have been skeptical of machine learning and resistant to accepting it, largely because of the liberal approach of ML and its lesser emphasis on theoretical proofs. The good news is that "Statistical Learning Theory" has bridged the gap and provided an umbrella theory under which both sides can collaborate and operate. Basic statistical concepts are a cornerstone of many engineering and science fields, very much like math is. But sticking to traditional statistical thinking and practices would have prevented progress; these are two different things, and ML has proved that in practice. For those interested in understanding a bit about Statistical Learning Theory and its relation to ML, see the following lecture by Yaser S. Abu-Mostafa at Caltech.
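For a taste of what the theory offers, here is the classic Vapnik-Chervonenkis generalization bound in the form Abu-Mostafa presents it (a standard result quoted from the learning-theory literature, not derived in this post): for a final hypothesis g chosen from a hypothesis set H with growth function m_H(N), trained on N examples, with probability at least 1 − δ,

```latex
% VC generalization bound: the out-of-sample error of the learned
% hypothesis g is bounded by its in-sample error plus a complexity
% penalty that shrinks as the number of examples N grows.
\[
  E_{\text{out}}(g) \;\le\; E_{\text{in}}(g)
    + \sqrt{\frac{8}{N}\,\ln\frac{4\,m_{\mathcal{H}}(2N)}{\delta}}
\]
```

The bound says that generalization is governed by the amount of data relative to the expressiveness of the model family (the growth function), with no distributional assumptions beyond i.i.d. sampling, which is exactly the common ground between the two camps.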
---------------------------------------------------------------------------
I discuss these topics in detail in my book. Visit the book site for "High-Performance Data Mining and Big Data Analytics: The Story of Insight from Big Data" (http://bigdataminingbook.info).