What exactly is Artificial Intelligence (AI)?

While it’s difficult, and probably inaccurate, to define Artificial Intelligence (AI) as one single concept, it’s relatively safe to say that AI is a machine or computer behaving, or rather thinking, in a way that resembles a human being. That’s the very broad stroke of it.

If, however, we take a closer look, AI is really a collection of technologies grouped together. For example, there’s “machine learning”, “deep learning”, “neuromorphic computing” or “neural networks”, “natural language processing”, “inference algorithms”, “recommendation engines”, “bots”, “autonomous systems” and “cognitive computing”, to name just a few.

To make the matter a little more complicated, there are also different ways in which some of the algorithms are categorised: there are deterministic approaches, non-deterministic approaches and rules-based approaches. The point is that AI is extremely complicated and cannot be realised using a single system.

How do terms like AI, cognitive computing and machine learning relate to one another?

Most of these terms are, of course, not synonymous. Cognitive computing is very different from machine learning, but both are types of AI, or are combined to form part of what will become an AI entity.

Cognitive computing (primarily an IBM term) is a radical approach to curating massive amounts of data: storing the data in a cognitive stack and then creating connections between all of the ingested data. This allows a user to discover a particular problem, or question, that would otherwise never have been anticipated.

Machine learning is almost the opposite. Machine learning takes a goal function, or outcome, and, using a defined set of “rules”, looks at the same massive amounts of disparate data and tries to get as close as possible to that goal function, if not find the exact target requested. Basically, machine learning will find what you told it to look for.
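
As a very rough illustration, here is a minimal sketch in Python (using scikit-learn; the features, numbers and labels are all invented for the example) of that “find what you told it to look for” idea: the goal is defined up front as a label, and the algorithm iteratively learns from labelled data to approximate it.

# Minimal sketch of goal-directed machine learning (all data invented).
# We tell the algorithm exactly what to look for (the target label) and it
# searches the data for patterns that approximate that goal.
from sklearn.linear_model import LogisticRegression

# Invented feature rows: [distance_km, opening_hours, stock_level]
X = [
    [2.0, 12, 0.9],
    [8.5, 24, 0.4],
    [1.2,  8, 0.7],
    [15.0, 24, 0.95],
]
# The goal we defined in advance: 1 = "matches what we asked for", 0 = does not.
y = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                          # iteratively learn from the labelled data

print(model.predict([[3.0, 10, 0.8]]))   # the model only finds what we told it to find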

Staying with cognitive computing and machine learning as an example, these two systems need a way to work together in order to produce an intelligent result when information is requested. In other words, if you asked someone an everyday question like “Where is the nearest Spar?”, a human answer could be something like “The nearest Spar is 20 kilometres away, but there is a Pick ‘n Pay 5 kilometres down the road”. This kind of response requires what is known as a “training model”. Without this, a machine would not offer the additional information about Pick ‘n Pay, because it was never requested. Training helps the machine to “think” that offering this additional information would be helpful because, even though Pick ‘n Pay is not a Spar, it is more than likely that the person asking about Spar will be able to find what they need at Pick ‘n Pay.
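
As a toy sketch of that Spar example (all store names, distances and the “learned” substitution table below are invented), the snippet shows how the helpful extra answer comes from something the system learned beforehand rather than from the question itself.

# Toy sketch of the Spar / Pick 'n Pay answer (all data invented).
# A plain lookup answers only what was asked; a "trained" substitution table
# lets the system volunteer a closer, similar alternative.
stores = {
    "Spar":        20.0,   # kilometres away
    "Pick 'n Pay": 5.0,
}

# Learned from training data: shops people accepted as substitutes for each other.
learned_substitutes = {"Spar": ["Pick 'n Pay"]}

def answer(query: str) -> str:
    reply = f"The nearest {query} is {stores[query]:g} kilometres away"
    for alt in learned_substitutes.get(query, []):
        if stores[alt] < stores[query]:
            reply += f", but there is a {alt} {stores[alt]:g} kilometres down the road"
    return reply + "."

print(answer("Spar"))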

This is just one of the almost infinite ways in which just these two systems will need to work together in order to behave in a human way.

What is a training model?

Simply put, imagine showing a group of people pictures of mountains and, mixed in with those, pictures of camels. Then throw in more pictures of objects that are geometrically or mathematically similar to mountains, such as ice-cream cones or pyramids. Now, with a machine “watching”, the group of people tell you which of the pictures are mountains, and the machine not only learns which pictures represent mountains but also learns from the way the people behave while selecting them. This approach is called heuristic learning.

When we watch people, create a model based on their behaviour and then do as they did, it is a type of learning. Heuristic learning is just one of the many ways in which machines learn behaviour.
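
A minimal sketch of that labelling exercise, assuming Python and scikit-learn with invented “picture features”, might look like this: the people supply the labels, and the machine learns to classify pictures it has never seen.

# Minimal sketch of the mountain-labelling exercise (invented numbers).
# People label which pictures show mountains; the machine learns from those
# labels and can then classify pictures it has never seen.
from sklearn.neighbors import KNeighborsClassifier

# Invented "picture features": [peak_sharpness, symmetry, snow_fraction]
pictures = [
    [0.90, 0.40, 0.7],   # mountain
    [0.20, 0.30, 0.0],   # camel
    [0.80, 0.90, 0.0],   # pyramid (mountain-shaped, but not a mountain)
    [0.95, 0.50, 0.8],   # mountain
    [0.70, 0.95, 0.1],   # ice-cream cone
]
# What the group of people said while the machine was "watching":
human_labels = ["mountain", "not", "not", "mountain", "not"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(pictures, human_labels)

print(model.predict([[0.85, 0.45, 0.6]]))   # classify a brand-new picture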

Now, considering what a machine has to do just to answer a simple question (and assuming machine learning and cognitive computing were the only two systems that had to coordinate, which they aren’t), you can start to appreciate how remarkable applications such as Apple’s Siri truly are.

How advanced is AI right now?

While most of the concepts and discussions surrounding AI seem to belong to the realm of science fiction, you’d be greatly surprised to learn that you are already using AI throughout your everyday life, whether you are aware of it or not. It’s also worth noting that current technologies are becoming more advanced and, within a very short time, our daily routines will become almost dependent on an ever-present AI.

Here are two examples of AI already having a massive impact on our daily lives:

1. Virtual Personal Assistants

Alongside Siri, there are also Google Now and Cortana (on iOS, Android and Windows Mobile). Users simply ask questions or ask for reminders, and the answers and calendar events are, almost always, accurate and on time.

AI is important in these apps because there is a continuous learning process on the go. The app seemingly starts to show signs of human behaviour: little things like detecting your tone of voice or quickly anticipating your daily needs, so that it gradually becomes an efficient personal assistant.

2. Online Customer Support

Many websites now offer customers the opportunity to chat with a customer support representative while they’re browsing. It’s very surprising, and almost alarming, to learn that not every site actually has a live person on the other end of the line. In many cases, you’re talking to a bot with rudimentary AI. For the most part the responses are automated, but the clever part is that, by way of natural language processing (NLP), the bot is able to “understand” the natural language you type in, as opposed to the very different language computers work in.
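
As an illustration of how rudimentary such a bot can be, here is a toy sketch in plain Python (the intents, keywords and canned replies are all invented): it maps the customer’s typed words to the closest known intent and falls back to a human when nothing matches.

# Toy sketch of a support bot with rudimentary natural language handling
# (intents, keywords and replies are invented for illustration).
import re

canned_replies = {
    "delivery": "Your order usually arrives within 3-5 working days.",
    "refund":   "I can start a refund for you. Could you share your order number?",
    "greeting": "Hi there! How can I help you today?",
}

keywords = {
    "delivery": {"delivery", "shipping", "arrive", "track"},
    "refund":   {"refund", "return", "money"},
    "greeting": {"hi", "hello", "hey"},
}

def respond(message: str) -> str:
    words = set(re.findall(r"[a-z']+", message.lower()))
    # Pick the intent whose keywords overlap most with what the customer typed.
    best = max(keywords, key=lambda intent: len(words & keywords[intent]))
    if not words & keywords[best]:
        return "Let me connect you with a human agent."
    return canned_replies[best]

print(respond("Hello, when will my delivery arrive?"))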

At Omnicor, we are looking at machine learning to optimise our recommendation engine. With vast amounts of assessment results, we can fine-tune the RoleFit-generated recommendations based on patterns and predictions in a candidate’s assessment data, job performance (Quality of Hire) and job profile.
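
Purely as a sketch of that kind of fine-tuning, and not a description of the actual RoleFit system, the snippet below (field names, numbers and the model choice are all hypothetical) predicts Quality of Hire from past assessment data and uses the prediction to re-rank new recommendations.

# Hypothetical sketch: predict Quality of Hire from past assessment results
# and re-rank new candidates accordingly (all data and names invented).
from sklearn.linear_model import LinearRegression

# Invented historical records: [assessment_score, job_profile_fit]
past_candidates = [
    [72, 0.80],
    [65, 0.60],
    [90, 0.90],
    [55, 0.70],
]
quality_of_hire = [3.8, 3.1, 4.6, 2.9]   # invented performance ratings

model = LinearRegression().fit(past_candidates, quality_of_hire)

# Re-rank new candidates by predicted Quality of Hire.
new_candidates = {"Candidate A": [80, 0.85], "Candidate B": [60, 0.90]}
ranked = sorted(new_candidates,
                key=lambda name: model.predict([new_candidates[name]])[0],
                reverse=True)
print(ranked)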

Author
William N. Irwin
Senior Developer
Omnicor

Sources

Definitions of the technologies mentioned in paragraph two:

Machine learning:

Machine learning is an application of artificial intelligence that automates analytical model building by using algorithms that iteratively learn from data without being explicitly programmed where to look.

Deep Learning:

An artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making.

Neuromorphic Computing or Neural Networks:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains.

Natural Language Processing:

Natural language processing (NLP) is a field of computer science, artificial intelligence and computational linguistics concerned with the interactions between computers and human (natural) languages, and, in particular, concerned with programming computers to fruitfully process large natural language corpora.

Inference Algorithms:

Inference algorithms are procedures that derive new conclusions, facts or probabilities from existing knowledge or data, for example drawing conclusions from a knowledge base or computing probabilities in a statistical model.

Recommendation Engines:

A recommender system or a recommendation system (sometimes replacing “system” with a synonym such as platform or engine) is a subclass of information filtering system that seeks to predict the “rating” or “preference” that a user would give to an item.

Bots:

An Internet bot, also known as web robot, WWW robot or simply bot, is a software application that runs automated tasks (scripts) over the Internet.

Autonomous Systems:

In AI, an autonomous system is a system able to perform tasks and make decisions without direct human intervention, typically by sensing its environment and acting on it; self-driving vehicles and industrial robots are common examples.

Cognitive Computing:

Cognitive computing is the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works.