CCE Finland

What is Machine Learning? A Beginner’s Guide

Updated: Apr 25

Machine learning isn’t new. With the rise of artificial intelligence (AI) in real-world applications (and in stock prices), you might think AI, machine learning, and data science are recent, cutting-edge ideas. They’re definitely cutting-edge, but they’re far from new.

The term “machine learning” was coined by IBM researcher Arthur Samuel in 1959, and the field’s story actually began with a game of checkers, in which Robert Nealey, a checkers master, was beaten by an IBM computer. Over the last 60 years, machine learning has become far more sophisticated and impressive than playing checkers. David Flower, the president and CEO of Volt Active Data, Inc., believes that the power of machine learning and real-time data can “help organizations make better decisions and operate with more efficiency, but they can also help them save money using predictive analytics and unlock new revenue streams.”

But what exactly is machine learning? Why has everyone been talking about it? Let’s find out! In this post, you’ll learn what machine learning is, how it works, and how it’s being integrated into most aspects of everyday life.

What is Machine Learning?

Machine learning is a subset of AI that uses data and algorithms to give machines the ability to “imitate intelligent human behavior.” Algorithms are a set of rules that a computer uses to solve problems. Machine learning algorithms are trained on massive data sets to find patterns and relationships so they can make predictions and decisions. With experience and more training data, they improve over time, making better predictions and decisions.

You might be thinking that this is exactly what a computer already does. You wouldn’t be completely wrong. Computers follow an explicit set of instructions to complete a task. You click on a web browser, and your computer is programmed to go through steps 1, 2, 3, and so on to get it up and running. Machine learning is a bit more nuanced in that your computer has to “make a choice.” In machine learning, a computer receives tons of data and a task. With the data, the computer needs to figure out how to accomplish the task.
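
To make that distinction concrete, here’s a deliberately tiny sketch in Python (the numbers are made up for illustration). Instead of being handed the rule “output = 2 × input,” the program estimates the rule from examples using a one-line least-squares fit:

# Learn a multiplier from example data instead of hard-coding it.
inputs  = [1, 2, 3, 4]
outputs = [2, 4, 6, 8]

# Least-squares estimate of the slope for a line through the origin.
slope = sum(x * y for x, y in zip(inputs, outputs)) / sum(x * x for x in inputs)

print(slope * 5)   # the learned rule predicts 10 for an input it never saw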

Let’s take a look at a real-world example of machine learning — facial recognition. On iOS phones, users set up facial recognition by holding their phones in front of their faces and moving them around as prompted by the screen. As you move your face, the phone collects data: the angle of your jaw, the definition of your cheeks, the space between your eyes, etc. It takes note of the smallest details about your face so it can complete its task — recognizing you each time you try to unlock the screen. Because the machine has learned your face, it can still decide to unlock your phone even if you change your appearance with something as simple as glasses, or even a mask.

Why is Machine Learning Important?

Companies are pushing their programmers and developers to find ways to integrate machine learning into their business models and products. Why? It’s because most people — myself included — would consider it a mistake to miss out on its advantages. By harnessing immense amounts of data and a computer’s ability to process it quickly, machine learning can reduce costs and security risks, improve products and services, save time, and boost accuracy and efficiency. It might seem as if machine learning only fits into the tech industry, but that couldn’t be further from the truth. Machine learning technology can be incorporated into any field, and you’ll find that machine learning is being used by smart technologists in most industries, including finance, healthcare, education, marketing, and cybersecurity.

Machine Learning vs. Deep Learning vs. Neural Networks

Just as machine learning readily comes up in conversations about AI, deep learning comes up in conversations about machine learning. You might also see an appearance from “neural networks.” These terms — artificial intelligence, machine learning, deep learning, and neural networks — are often thrown together as if they share the same meaning, but they don’t.

As we discussed, machine learning is a subset of AI, and deep learning is a subset of machine learning. Deep learning is a technique that uses multi-layered algorithms and neural networks to teach computers through a process designed to imitate the human brain and human decision making. Although the term neural networks is controversial — critics argue that the name allows an artificial process to be too easily confused with the biological neural networks of the brain — an artificial neural network is a machine learning model that makes decisions by attempting to copy the complex way our brains process information. Artificial neural networks work in layers, setting up an intricate system of algorithms that makes it possible for computers to “think like humans” as the algorithms process data, learn, and improve.
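
As an illustration only (the weights below are random, not trained; a real network would adjust them by learning from data), here’s a tiny Python sketch of data flowing through the layers of an artificial neural network:

import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, -0.2, 0.1])   # an input with 3 features
W1 = rng.normal(size=(3, 4))     # layer 1: 3 inputs feeding 4 "neurons"
W2 = rng.normal(size=(4, 1))     # layer 2: 4 neurons feeding 1 output

hidden = np.maximum(0, x @ W1)   # each layer transforms the data (ReLU activation)
output = hidden @ W2             # the final layer produces a decision score
print(output)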

Types of Machine Learning Algorithms

Machine learning uses algorithms to make predictions. Remember — machine learning algorithms are the sets of rules that computers use to learn and make predictions or decisions from a given data set. These algorithms use different methods to train on and analyze data so their predictions are accurate. By introducing new data over time, these algorithms help machines become more accurate, or smarter. You could even say the machines — or computers — gain more (artificial!) intelligence.

There are four types of machine learning algorithms: supervised, unsupervised, semi-supervised, and reinforcement.

Supervised

Supervised learning uses labeled data to train machines by providing them with inputs and desired outputs. The “desired” is important because you want the computer to give accurate and relevant results. The algorithm needs to figure out how to take the information from an input and predict the desired output. In this method, the algorithm analyzes the data, picks up on any patterns, and makes predictions. When the algorithm makes predictions, its accuracy is measured and its errors are corrected. Supervised learning algorithms go through this process over and over again to reduce their errors and improve their predictions.

There are two types of supervised machine learning models used for making predictions — classification and regression (a minimal code sketch follows the list below).

  1. Classification: A machine learning algorithm that analyzes data to sort it into categories. An example of classification is spam detection, where an email is filtered into your inbox or a spam folder. Classification algorithms include decision trees, random forests, k-nearest neighbors, and support vector machines.

  2. Regression: A machine learning algorithm for understanding the relationship between variables. Regression algorithms model the relationship between dependent and independent variables to make projections — for example, forecasting sales for a business. Common regression algorithms are linear regression and logistic regression.
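
Here’s that sketch: a hypothetical supervised workflow in Python using the scikit-learn library and its built-in iris flower data set (so it runs as-is). Train on labeled data, predict, then measure accuracy:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)   # labeled data: inputs X, desired outputs y

# Hold out some labeled examples so the model's accuracy can be measured.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)   # a classification model
predictions = model.predict(X_test)

print(accuracy_score(y_test, predictions))   # how often the predictions were right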

When you call a ride, a company like Uber uses machine learning to make the experience a little easier for you. In a city like New York, if you take a yellow taxi cab, you never know how much you’re paying for a trip until you get there. Let’s hope you don’t get caught in traffic! So that Uber can provide up-front fare estimates, their AI team “leverage(s) techniques ranging from linear to deep learning models” and pools tons of specific data — pickup and dropoff points, time of request, traffic patterns, weather, and historical data — so you know the cost before confirming your ride.

Unsupervised

Unsupervised learning uses machine learning algorithms to analyze unlabeled data. This allows them to discover and identify patterns, similarities, and relationships within the data — without human intervention. Unsupervised learning algorithms then organize and group the data into categories that make sense.

Common types of unsupervised learning are clustering and dimension reduction.

  1. Clustering: A technique where algorithms discover patterns in unlabeled data and group the information based on how it correlates. K-means is a common unsupervised clustering algorithm.

  2. Dimension reduction: A technique where algorithms reduce the number of variables, or dimensions, in a data set to make it more manageable (see the sketch after this list).
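
As an example, one widely used dimension-reduction algorithm (not named above, but a standard choice) is principal component analysis, or PCA. A minimal Python sketch on made-up data:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 10))   # 100 samples, each with 10 variables (dimensions)

pca = PCA(n_components=2)           # squeeze 10 dimensions down to 2
reduced = pca.fit_transform(data)

print(reduced.shape)                # (100, 2) -- far more manageable to analyze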

Imagine an e-commerce company wants to target new customers with Facebook ads. With the large amounts of data Facebook (now Meta) has about its users, it likely uses unsupervised learning algorithms to analyze the information for pattern recognition and information classification. This could look like grouping the customer data by age, location, or spending habits. With this information on hand, advertisers can create targeted ads for different audiences — for example, image ads for one group and video ads for another.

Semi-supervised

Semi-supervised machine learning is a hybrid of supervised and unsupervised learning. These algorithms are trained on a mix of labeled and unlabeled data. This is a common solution when you want to save the time and money it would take to label the large data sets that supervised learning requires. Unlabeled data is easier to come by, and semi-supervised learning still makes it possible to train for supervised tasks that need classification and regression.
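
Here’s a minimal sketch of the idea in Python, assuming scikit-learn’s LabelSpreading model (one of several semi-supervised algorithms). In scikit-learn’s semi-supervised API, a label of -1 marks an unlabeled example:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)

# Pretend we could only afford to label about half the data.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.5] = -1   # -1 means "unlabeled"

model = LabelSpreading().fit(X, y_partial)   # learns from labeled + unlabeled data
print(model.score(X, y))                     # accuracy against the true labels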

Reinforcement

Reinforcement machine learning is a technique that uses trial and error to train algorithms. The algorithm doesn’t use sample data. Instead, it uses a set of parameters — like the rules you might learn when playing a game — to explore options and evaluate results to decide the best solution. In short, reinforcement algorithms learn as they go. The models use positive reinforcement to train for better recommendations. When the model produces a successful action or outcome, that behavior is reinforced. When it doesn’t, the behavior is ignored.
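
As a toy illustration (a simple “multi-armed bandit” in plain Python, not a full reinforcement learning system), here is trial-and-error learning where successful actions are reinforced:

import random

random.seed(0)
true_payout = [0.2, 0.5, 0.8]   # hidden reward odds for 3 possible actions
value = [0.0, 0.0, 0.0]         # the agent's learned estimate of each action
counts = [0, 0, 0]

for step in range(1000):
    if random.random() < 0.1:              # sometimes explore a random action
        action = random.randrange(3)
    else:                                  # otherwise exploit the best one so far
        action = value.index(max(value))
    reward = 1 if random.random() < true_payout[action] else 0
    counts[action] += 1
    # Reinforce: nudge the estimate toward the observed reward.
    value[action] += (reward - value[action]) / counts[action]

print(value)   # the estimates converge toward the true payouts; action 2 wins out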

A popular example of reinforcement learning is when the IBM Watson® system won Jeopardy! in 2011. Through trial-and-error training, Watson was able to figure out when it should try to answer, what category it should select, and how much money to wager.

8 Must-Know Machine Learning Algorithms

Machine learning is very technical, and it gets even more technical when you consider the many ways computers can be trained to make predictions and decisions. As you dive into machine learning, you’ll come across dozens of algorithms, but here are eight of the most common algorithms you should know for machine learning.

Linear Regression

A supervised machine learning algorithm that makes predictions between two variables — an input and a target variable. It takes data points and fits a line through them as best it can. This “regression line” helps analyze relationships between the variables, making it possible to predict an output based on input values. Linear regression algorithms are used in predicting housing prices, forecasting sales, and analyzing trends.
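
A minimal sketch in Python with scikit-learn, using made-up housing numbers (size in square meters, price in thousands):

from sklearn.linear_model import LinearRegression

sizes = [[50], [70], [90], [110], [130]]   # inputs: house size in square meters
prices = [150, 210, 270, 330, 390]         # targets: price in thousands

model = LinearRegression().fit(sizes, prices)   # fit the regression line
print(model.predict([[100]]))                   # predicted price for a 100 m^2 house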

Logistic regression

A supervised machine learning algorithm used in prediction and classification. Logistic regression algorithms are used to sort input data into categories by predicting probabilities. It’s commonly used for binary tasks — classifying data into one of two groups — for example, Is this a picture of a dog? Yes/No? Common uses for logistic regression algorithms are in spam and fraud detection. Is this email spam? Yes/No?
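
A minimal sketch with made-up email features (the two numbers per email, link count and ALL-CAPS word count, are invented for illustration):

from sklearn.linear_model import LogisticRegression

# Each email: [number of links, number of ALL-CAPS words]; label 1 = spam.
emails = [[0, 0], [1, 0], [8, 5], [10, 7], [0, 1], [9, 6]]
labels = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(emails, labels)
print(model.predict([[7, 4]]))         # classify a new email: spam (1) or not (0)?
print(model.predict_proba([[7, 4]]))   # the probabilities behind that yes/no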

Decision tree

A supervised machine learning algorithm used for classification and regression. Decision tree algorithms start with one question — a root node — about the data set. As you ask more questions about the data, it branches into internal nodes, or possible outcomes. These questions are either binary (with two options) or multi-way (with multiple options). The algorithm is done computing when each branch ends in a leaf node, where it makes a final decision on the data. If this sounds reminiscent of a regular decision tree, that’s because it is! Decision trees in machine learning are just like any decision tree you’d sketch for yourself, only more complex! Real-life examples of decision tree algorithms are found in loan application evaluations (think: buy now, pay later platforms like Affirm) and medical diagnostic software.
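
You can even print the learned questions. A minimal sketch with made-up loan data (income in thousands and number of existing debts are invented features):

from sklearn.tree import DecisionTreeClassifier, export_text

applicants = [[20, 3], [80, 0], [45, 1], [30, 4], [90, 2], [60, 0]]   # [income, debts]
approved = [0, 1, 1, 0, 1, 1]                                         # 1 = approved

tree = DecisionTreeClassifier(max_depth=2).fit(applicants, approved)

# Print the root node, branches, and leaf nodes as nested yes/no questions.
print(export_text(tree, feature_names=["income", "debts"]))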

Random forest

A supervised machine learning algorithm for classification and regression tasks. Random forest algorithms are made of multiple decision tree algorithms that have been trained with the bagging method. Bagging is a method where each decision tree is trained independently on a random sample of the data, which improves the forest’s accuracy. The multiple decision trees grow into a “random forest” where their final nodes — or outputs — are averaged for a prediction.

Random forest algorithms are commonly used in banking. By weighing questions and their potential outcomes, banks can use this algorithm to evaluate loan applications — deciding who’s most likely to repay their debts — and to detect fraud.
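
A minimal sketch, reusing scikit-learn’s built-in iris data set so it runs as-is:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# 100 decision trees, each trained on a random sample of the data (bagging);
# the forest combines the trees' individual votes into one prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict(X[:3]))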

Naive Bayes

A supervised machine learning algorithm used for classification. It’s based on Bayes’ theorem, a math formula used to calculate conditional probabilities, or the likelihood of one event happening given another. The Naive Bayes algorithm puts a spin on this to calculate the probability of data being classified as one thing or another. The algorithm is called “naive” for a reason: even when given features that seemingly go together, it considers each one independently.

For example, if given a picture of a fruit, it would consider the object being yellow, oblong, and 2-3 inches in diameter as independent from each other. Then the algorithm combines these features to consider the probability of the object being a lemon. Naive Bayes algorithms are particularly useful on massive data sets and are used for image classification, chatbots, email filtration, and sentiment analysis — determining whether the tone of a message is positive, negative, or neutral.
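
A minimal sketch with made-up fruit measurements (yellowness on a 0-1 scale, length and diameter in centimeters; all numbers invented for illustration):

from sklearn.naive_bayes import GaussianNB

# Each fruit: [yellowness, length in cm, diameter in cm]
fruits = [[0.9, 8, 6], [0.8, 9, 5], [0.9, 18, 4], [0.8, 20, 3]]
labels = ["lemon", "lemon", "banana", "banana"]

# Naive Bayes treats each feature as independent evidence, then combines them.
model = GaussianNB().fit(fruits, labels)
print(model.predict([[0.85, 8.5, 5.5]]))   # short and round, so probably "lemon"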

Support vector machine (SVM)

A supervised machine learning algorithm most commonly used for classification, but it can be applied to regression tasks. Support vector machine (SVM) algorithms are used to find a “hyperplane.” A hyperplane is a decision boundary that separates input data into distinct groups. When new data is introduced to the algorithm, it’s sorted on either side of the hyperplane based on its similarities. The goal is to make the margin between the groups as big as possible to better classify new data.

A popular real-life application of SVM algorithms is handwriting recognition. Consider the letters “u” and “v.” When SVM algorithms are trained on data, they look at the similarities within each written letter. The more data that’s introduced, the more the algorithm can widen the gap between the written characteristics that make a “u” and those that dictate a “v.”
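
A minimal sketch with made-up two-dimensional points in two groups (real handwriting data would have many more dimensions):

from sklearn.svm import SVC

points = [[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]]   # two separable groups
groups = [0, 0, 0, 1, 1, 1]

# A linear SVM finds the hyperplane with the widest possible margin between groups.
model = SVC(kernel="linear").fit(points, groups)
print(model.predict([[3, 2], [7, 6]]))   # each new point lands on one side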

K-nearest neighbors (KNN)

A supervised machine learning algorithm most commonly used in classification tasks. By analyzing data points, the KNN algorithm uses the proximity of the points to group data or make predictions about them. It’s able to do this because the algorithm assumes that similar data points will be nearby. During classification, the algorithm looks at “K,” the number of nearest neighbors to consider.

Imagine looking at labeled points that are marked along a gradient scale of blues and greens. When a new data point is introduced, the algorithm looks at its “K” nearest neighbors. Let’s say “K” is 10. If the new data point is close to three blue points and seven green points, the algorithm would classify the point as green. A higher “K” can produce more accurate results by averaging over more neighbors, but the extra points can also skew the results, so the nearest data points are often given higher weights to keep outliers from negatively affecting the prediction.

You’ll commonly see the K-nearest neighbors algorithm used in services that make recommendation systems based on your user experience. Think Netflix, Amazon, and YouTube.
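
A minimal sketch on scikit-learn’s built-in iris data set, with K = 10 and the distance weighting described above:

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# K = 10 neighbors vote on each prediction; weighting votes by distance
# keeps far-away outliers from skewing the result.
knn = KNeighborsClassifier(n_neighbors=10, weights="distance").fit(X, y)
print(knn.predict(X[:3]))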

K-means

An unsupervised machine learning algorithm used for clustering. K-means is an unsupervised algorithm which — if you remember — means all the data is unlabeled. As the name might suggest, it’s similar to K-nearest neighbors in that it uses the proximity of data to discover patterns and group similar data into the same cluster.

For the K-means algorithm, “K” is the number of clusters, each represented by a centroid — a data point that marks the center (or mean) of its cluster. The goal of the algorithm as it trains is to minimize the distance between each centroid and the data points in its cluster. This increases a machine’s accuracy in creating clusters as more data is introduced.

K-means clustering is especially helpful in retail when companies look to segment their customers based on their buying behaviors. The algorithm is also used to detect fraud and optimize delivery zones.
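
A minimal sketch of customer segmentation with made-up shopping data (orders per year and average order value are invented features):

from sklearn.cluster import KMeans

# Each customer: [orders per year, average order value in dollars]
customers = [[2, 20], [3, 25], [4, 22], [40, 150], [38, 160], [42, 155]]

# Ask for 2 clusters; each cluster's centroid is the mean of its points.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)            # which cluster each customer landed in
print(kmeans.cluster_centers_)   # the centroids (cluster means)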

Machine Learning – In the Real World

You might not realize it, but you’re almost certainly coming across applications that use machine learning every day. Machine learning is built into a huge range of technology you use, including smartphones, social media, and email. The predictions and decision-making that the algorithms make possible have tons of real-world uses. Examples of machine learning in the real world include:

  1. Facial recognition

  2. Speech recognition

  3. Natural language processing

  4. Recommendation engines

  5. Email automation/spam filtering

  6. Social media connections

  7. Fraud detection

  8. Stock market predictions

  9. Predictive text

  10. Predictive analytics

  11. Virtual personal assistants

  12. Traffic predictions

  13. Self-driving cars

  14. Medical diagnoses

The Future of Machine Learning

Because machine learning is how humans program computers to imitate human behavior by making predictions and decisions, you could consider it the secret behind the “human intelligence” in AI. While we can appreciate the role it plays in robot vacuum cleaners or in filtering spam from our primary email inboxes, the potential impact of machine learning is world-changing. It can help doctors make quicker and more accurate medical diagnoses. It could be the first line of defense that prevents a hacker from stealing thousands of dollars of your hard-earned money. It’s a defender against cyberbullying, where it can be used to flag, block, and ban users who share harmful messaging.

If you’re interested in a career in artificial intelligence, it’s safe to say that AI isn’t going anywhere, and machine learning is one of its most popular career paths. But you have to start somewhere. The Skillcrush Break Into Tech program is the first step to learning the in-demand skills — Python, JavaScript, data analysis — of a web developer so you can work towards becoming an AI engineer in machine learning.
