How does artificial intelligence (AI) work?

In this article:

  1. Automation
  2. Machine Learning
  3. Supervised Learning
  4. Unsupervised Learning
  5. Reinforcement Learning
  6. Machine Vision Applications
  7. Self-driving Cars

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often, what they refer to as AI is really just one component of it, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No single programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

AI systems generally work by ingesting large amounts of labelled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.

AI programming focuses on three cognitive skills: learning, reasoning and self-correction. Learning is the aspect of AI programming concerned with acquiring data and creating rules for turning that data into actionable information. These rules, called algorithms, provide computing devices with step-by-step instructions for completing a specific task.

What are examples of AI technology and how is it used today?
AI is incorporated into a variety of different types of technology. Here are some examples:

Automation

When paired with AI technologies, automation tools can expand the volume and types of tasks performed. An example is robotic process automation (RPA), a kind of software that automates repetitive, rules-based data processing tasks traditionally done by humans. When combined with machine learning and emerging AI tools, RPA can automate more significant portions of enterprise jobs, enabling RPA’s tactical bots to pass along intelligence from AI and respond to process changes.

Machine Learning

This is the science of getting a computer to act without being explicitly programmed. Deep learning is a subset of machine learning that, in straightforward terms, can be thought of as the automation of predictive analytics. Three common types of machine learning algorithms are:

Supervised Learning

With this type of machine learning, both the input variables and the output variables are available, so you have a complete dataset. During training, the model is given examples of 'how it is supposed to be', so the machine can learn to predict future outputs based on new inputs. If you already know what a model should predict for a given input, and you have that data available, supervised learning is the most suitable approach for your case.

Here is an example of how supervised learning can be used in practice. Imagine that you are a teacher who would like to predict your students' results on their next test. First you would need a dataset; in this case, their attendance during class and how well they do on their assignments (input data), and their scores on the last exam (output data). From this data, the algorithm can learn to recognize a correlation between behaviour during the course and exam results. Now you can predict future exam results: collect the same input data for new students and give it to the trained model, which will return the expected exam scores.
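The teacher scenario above can be sketched as a minimal supervised-learning example: fitting a straight line from one input (attendance) to the output (exam score) with ordinary least squares. All the numbers and names below are invented for illustration.

```python
# Hypothetical historical data: attendance percentage (input) and
# last-exam score (output) for five past students.
attendance = [60, 70, 80, 90, 100]
scores = [55, 62, 70, 78, 85]

def fit_line(xs, ys):
    """Ordinary least squares for the line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var              # slope: score gained per attendance point
    b = mean_y - a * mean_x    # intercept
    return a, b

a, b = fit_line(attendance, scores)

def predict(x):
    """Expected exam score for a new student with attendance x."""
    return a * x + b

# Predict the score for a new student with 85% attendance.
print(round(predict(85), 1))  # -> 73.8
```

Real projects would use a library such as scikit-learn, but the principle is the same: the model learns the input-to-output mapping from labelled examples, then applies it to new inputs.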

Unsupervised Learning

Unsupervised learning can be considered the opposite of supervised learning. It is used when you have a dataset with only input data and no desired output. In this case, you search for groups or categories in the existing data: relations between observations that define groups. The machine scans the incoming data and structures it into categories or clusters.

Unsupervised learning is therefore mainly used for exploratory analysis. Say you have a large dataset of foods, but you don't yet know which categories to split them into. The algorithm can look for similarities and group the items, e.g. into fruits and vegetables. These groups can then be subdivided further; fruit, for example, into citrus and exotic fruit.
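The food-grouping idea can be sketched with a miniature k-means clustering pass over a single feature. The foods and their sugar values below are invented for illustration; a real project would cluster many features at once with a library such as scikit-learn.

```python
# Hypothetical data: approximate sugar content (g per 100 g) for a few foods.
foods = {"lemon": 2.5, "broccoli": 1.7, "banana": 12.2,
         "mango": 13.7, "spinach": 0.4, "grape": 16.0}

def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: returns the final cluster centroids."""
    centroids = [min(values), max(values)]  # simple initialisation
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

low, high = sorted(kmeans_1d(list(foods.values())))
groups = {name: ("low sugar" if abs(v - low) < abs(v - high) else "high sugar")
          for name, v in foods.items()}
print(groups)
```

Note that the algorithm was never told which foods belong together; the two groups emerge purely from structure in the input data, which is the defining trait of unsupervised learning.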

Reinforcement Learning

Reinforcement learning is a form of machine learning in which a machine learns by trial and error. The model is optimized via feedback on previous actions and experiences. Usually the optimization is based on rewards, meaning that some actions yield more valuable outcomes than others, like a mouse getting cheese for finding its way out of a maze. The model uses this feedback to update itself so that future actions maximize the reward.

As an example of reinforcement learning in practice, consider how a computer was taught to finish a level of Super Mario. The model is told that the longer a level is played, and the more points it collects, the better. The algorithm then starts playing the game. Each time it 'dies', it knows it must improve at that point; each time it gains points, it receives a reward and knows it should try those moves again. Eventually, the algorithm learns how to finish the level by trial and error.
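The trial-and-error loop can be sketched with tabular Q-learning, one common reinforcement learning algorithm, on a toy version of the mouse-and-maze example: an agent in a five-cell corridor must learn to walk right to reach the cheese. All hyperparameters below are illustrative assumptions.

```python
import random

# Toy "maze": states 0..4 in a corridor, cheese (reward) at state 4.
# Actions: 0 = step left, 1 = step right.
random.seed(0)
N_STATES = 5
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: value of each action per state
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration rate

def step(state, action):
    """Environment dynamics: move, and reward 1.0 on reaching the cheese."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:
            action = random.choice([0, 1])          # explore: try something random
        else:
            action = q[state].index(max(q[state]))  # exploit: best known action
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward
        # (immediate reward + discounted best future value).
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

policy = [q[s].index(max(q[s])) for s in range(N_STATES - 1)]
print(policy)  # learned action per state (1 = right)
```

Early episodes wander almost at random; once the reward at the cheese has been found, its value propagates backwards through the Q-table until the agent walks straight to it, which is exactly the feedback-driven self-correction described above.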

Machine Vision Applications

This technology gives a machine the ability to see. Machine vision captures and analyzes visual information using a camera, analogue-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision isn’t bound by biology and can be programmed to see through walls, for example. It is used in various applications, from signature identification to medical image analysis. Computer vision, focused on machine-based image processing, is often conflated with machine vision.
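As a toy illustration of the digital-signal-processing step, the sketch below slides a Sobel filter over a tiny synthetic grayscale "image" to detect a vertical edge, the kind of low-level operation vision pipelines build on. Real systems use optimized libraries such as OpenCV; the pixel values here are invented.

```python
# A 4x6 grayscale image: a dark region (0) meeting a bright region (9).
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

# Sobel kernel that responds strongly to vertical edges.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(img, kernel):
    """Slide a 3x3 kernel over the image (no padding)."""
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            out[y][x] = sum(img[y + i][x + j] * kernel[i][j]
                            for i in range(3) for j in range(3))
    return out

edges = convolve(image, SOBEL_X)
for row in edges:
    print(row)  # strongest responses line up with the dark-to-bright boundary
```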

Self-driving Cars

Autonomous vehicles use a combination of computer vision, image recognition and deep learning to build automated skills at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.
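One small piece of that pipeline can be sketched in isolation: once perception (computer vision) has estimated how far the car sits from the lane centre, even a basic proportional controller can compute a steering correction to stay in the lane. The gain value is an illustrative assumption, not a real vehicle parameter, and production systems are far more sophisticated.

```python
def steering_correction(lane_offset_m, gain=0.5):
    """Steer back toward the lane centre.

    lane_offset_m: metres from the lane centre (positive = drifting right).
    Returns a steering command (negative = steer left).
    """
    return -gain * lane_offset_m

print(steering_correction(0.4))   # drifting right -> negative (steer left)
print(steering_correction(-0.4))  # drifting left  -> positive (steer right)
```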
