In this article:
- What Is Artificial Intelligence?
- A Brief History of Artificial Intelligence
- Types of Artificial Intelligence
- How Does Artificial Intelligence Work?
- Ways of Implementing AI
- Programming Cognitive Skills: Learning, Reasoning and Self-Correction
- Advantages and Disadvantages of AI
- Applications of Artificial Intelligence
What Is Artificial Intelligence?
Artificial intelligence is a method of making a computer, a computer-controlled robot, or a piece of software think intelligently, the way the human mind does.
It is developed by studying the patterns of the human brain and analysing the cognitive process; the outcome of those studies is intelligent software and systems.
Artificial intelligence (AI) is currently one of the hottest buzzwords in tech. The last few years have seen several innovations and advancements that seemed mere science fiction slowly transform into reality.
Artificial intelligence is regarded as a factor of production that has the potential to introduce new sources of growth and change the way work is carried out across many industries.
PwC (PricewaterhouseCoopers) published a report predicting that AI could contribute $15.7 trillion to the global economy by 2030. China and the United States are in a prime position to benefit the most from the coming AI boom, accounting for nearly 70% of the global impact.
Here we provide an overview of AI, including how it works, its pros and cons and its applications.
A Brief History of Artificial Intelligence
Here’s a brief timeline of the past six decades of how AI evolved from its inception:
1956 – John McCarthy coined the term ‘artificial intelligence’ and held the first AI conference.
1969 – Shakey, the first general-purpose mobile robot, was built; it could act with a purpose rather than just follow a list of instructions.
1997 – IBM’s supercomputer ‘Deep Blue’ was created and defeated reigning world chess champion Garry Kasparov in a match, a massive milestone for the company.
2002 – The first commercially successful robotic vacuum cleaner, the Roomba, was created.
2005 – 2019 – Speech recognition, robotic process automation (RPA), smart homes and other innovations made their debut.
2020 – Baidu released the LinearFold AI algorithm to scientific and medical teams developing a vaccine during the early stages of the SARS-CoV-2 (COVID-19) pandemic.
The algorithm can predict the secondary structure of the virus’s RNA sequence in only 27 seconds, 120 times faster than other methods.
Types of Artificial Intelligence
- Purely Reactive
These machines specialise in one field of work and have no memory or past data to work with. For example, in a chess game, such a machine observes the moves and makes the best possible decision to win (a minimal sketch of this idea follows this list).
- Limited Memory
These machines gather previous data and continue adding it to their memory. They have enough memory or experience to make proper decisions, but their memory is minimal. For example, such a machine could suggest a restaurant based on the location data it has collected.
- Theory of Mind
A kind of AI that is yet to be built. It will understand thoughts and emotions, and interact socially.
- Self-Aware
Self-aware machines are the future generation of these new technologies.
They will be intelligent, sentient and conscious.
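To make the ‘best possible decision’ idea from the purely reactive example concrete, here is a minimal sketch of minimax search, the classic decision rule behind early chess engines, applied to the far simpler game of Nim. The game, numbers and function names here are illustrative choices, not taken from any particular system.

```python
# Minimax on Nim: players alternately take 1-3 stones; whoever takes the
# last stone wins. value() scores a position from the first player's view.
def value(stones, my_turn):
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if my_turn else 1
    outcomes = [value(stones - take, not my_turn)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if my_turn else min(outcomes)

def best_move(stones):
    """Pick the move that leads to the best guaranteed outcome."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: value(stones - take, my_turn=False))

print(best_move(5))  # 1: leaving 4 stones is a lost position for the opponent
```

A reactive chess engine applies the same exhaustive look-ahead to a vastly larger game tree, usually with heuristics to cut the search down.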
How Does Artificial Intelligence Work?
AI systems merge large data sets with intelligent, iterative processing algorithms, learning from the patterns and features in the data they analyse. Each time an AI system performs a round of data processing, it tests and measures its performance, then uses the results to develop additional expertise.
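As a rough illustration of that loop, the sketch below fits a single weight by repeated rounds of processing; each round measures the current error and uses it to improve the next guess. The data and learning rate are invented for the example.

```python
# A minimal "process, measure, improve" loop: fit a single weight w so that
# predictions w * x match the toy rule y = 3x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0  # initial guess
for _ in range(100):  # each pass is one round of data processing
    # Measure performance: mean squared error of the current predictions.
    error = sum((w * x - y) ** 2 for x, y in data) / len(data)
    # Use the measurement to improve: step w against the error gradient.
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * gradient

print(f"w={w:.2f}, error={error:.6f}")  # w converges towards 3.0
```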
Ways of Implementing AI
Let’s explore the main ways in which AI can be implemented:
- Machine Learning
It is machine learning that gives AI the ability to learn, using algorithms to discover patterns and generate insights from the data they are exposed to (see the first sketch after this list).
- Deep Learning
Deep learning, a subcategory of machine learning, gives AI the ability to mimic a human brain’s neural network. It can make sense of patterns, noise and sources of confusion in the data (see the second sketch after this list).
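First, a minimal machine-learning sketch: a decision tree discovering a pattern in a handful of labelled examples. It assumes the scikit-learn library, which the article does not prescribe, and toy data made up for the example.

```python
# Pattern discovery from labelled examples with a decision tree.
from sklearn.tree import DecisionTreeClassifier

# Toy features [hours_studied, hours_slept] -> passed the exam (1) or not (0).
X = [[1, 4], [2, 5], [8, 7], [9, 8], [3, 3], [7, 6]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[6, 7]]))  # the learned pattern suggests [1]
```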
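Second, a deep-learning sketch in the same spirit: a tiny two-layer neural network, written with nothing but NumPy, learning the XOR function. The architecture and hyperparameters are arbitrary choices for the sketch, not a recommended design.

```python
# A tiny two-layer neural network (2 -> 4 -> 1) learning XOR with NumPy.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and zero biases for each layer.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for _ in range(10_000):
    # Forward pass: two layers of weighted sums followed by nonlinearities.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent updates.
    W2 -= lr * hidden.T @ d_output
    b2 -= lr * d_output.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```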
Programming Cognitive Skills: Learning, Reasoning and Self-Correction
Artificial intelligence emphasises three cognitive skills: learning, reasoning and self-correction, skills that the human brain possesses to one degree or another.
In the context of AI, these are defined as:
- Learning
The acquisition of information and the rules needed to use that information.
- Reasoning
Using rules to reach definite or approximate conclusions from the available information (a minimal sketch of this follows the list).
- Self-Correction
The process of continually fine-tuning AI algorithms and ensuring they offer the most accurate results they can.
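To make the reasoning definition concrete, here is a minimal sketch of forward chaining, one simple rule-based inference technique: rules are applied to known facts until no new conclusions appear. The facts and rules are invented for illustration.

```python
# Forward chaining: apply rules to known facts until nothing new follows.
facts = {"it_is_raining"}                  # what the system knows
rules = [                                  # (conditions, conclusion) pairs
    ({"it_is_raining"}, "ground_is_wet"),
    ({"ground_is_wet"}, "roads_are_slippery"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # Fire a rule when all its conditions are already known facts.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['ground_is_wet', 'it_is_raining', 'roads_are_slippery']
```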
Researchers and programmers have extended and elaborated the goals of AI to the following:
- Logical Reasoning
AI programs enable computers to perform sophisticated tasks. On February 10, 1996, IBM’s Deep Blue computer won a game of chess against the reigning world champion, Garry Kasparov.
- Knowledge Representation
Smalltalk is an object-oriented, dynamically typed, reflective programming language that was created to underpin the “new world” of computing exemplified by “human-computer symbiosis.”
- Planning and Navigation
The process of enabling a computer to get from point A to point B. A prime example of this is Google’s self-driving Toyota Prius (a route-finding sketch follows this list).
- Natural Language Processing
Setting up computers that can understand and process human language.
- Perception
Use computers to interact with the world through sight, hearing, touch, and smell.
- Emergent Intelligence
Intelligence that is not explicitly programmed but emerges from the interplay of the system’s other AI capabilities. The vision for this goal is to have machines exhibit emotional intelligence and moral reasoning.
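As promised under planning and navigation, here is a minimal route-finding sketch using breadth-first search over a toy road map; the map and function name are invented, and production systems typically use richer algorithms such as A* over real map data.

```python
# Breadth-first search: find the shortest route from A to B on a small map.
from collections import deque

# Toy road map: each node lists the nodes reachable from it.
roads = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def shortest_route(start, goal):
    """Return the shortest path from start to goal, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route("A", "F"))  # ['A', 'B', 'D', 'F']
```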
Some of the tasks performed by AI-enabled devices include:
- Speech Recognition
- Object Detection
- Solving Problems and Learning From the Given Data
- Planning an Approach for Future Tests
Advantages and Disadvantages of AI
Artificial intelligence has its advantages and disadvantages, like any other concept, innovation or idea.
Here’s a quick rundown of some of the pros and cons.
Pros:
- It reduces human error
- It never sleeps, so it’s available 24x7
- It never gets bored, so it easily handles repetitive tasks
- It’s fast
Cons:
- It’s costly to implement
- It can’t duplicate human creativity
- It will replace some jobs, leading to unemployment
- People can become overly reliant on it
Applications of Artificial Intelligence
Machines and computers affect how we live and work. Top companies are continually rolling out revolutionary changes to how we interact with machine-learning technology.
DeepMind Technologies, a British artificial intelligence company, was acquired by Google in 2014. The company created a Neural Turing Machine that allowed computers to mimic the short-term memory of the human brain.
Google’s driverless cars and Tesla’s Autopilot features introduced AI into the automotive sector. Elon Musk, CEO of Tesla, has suggested on Twitter that Teslas will be able to predict the destination their owners want to go to by learning their patterns of behaviour.
Watson, a question-answering computer system developed by IBM, was designed for use in the medical field. Watson suggests various kinds of treatment for patients based on their medical history and has proven very useful.
Some of the more common commercial business uses of AI are:
- Banking Fraud Detection
From extensive data consisting of fraudulent and non-fraudulent transactions, the AI learns to predict whether a new transaction is fraudulent or not (see the first sketch after this list).
- Online Customer Support
AI now automates much of online customer support and voice messaging systems.
- Cyber Security
Using machine learning algorithms and ample sample data, AI can detect anomalies, then adapt and respond to threats (see the second sketch after this list).
- Virtual Assistants
Siri, Cortana, Alexa and Google Assistant use voice recognition to follow the user’s commands. They collect information, interpret what is being asked, and supply the answer via fetched data. These virtual assistants gradually improve and personalise their responses based on user preferences.
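First, a toy version of the fraud-detection use case: a logistic-regression classifier fitted to a handful of invented transactions. It assumes scikit-learn, and the features (amount, hour of day) are made up for the example.

```python
# Classifying transactions as fraudulent (1) or legitimate (0).
from sklearn.linear_model import LogisticRegression

# Toy features [amount_in_dollars, hour_of_day] with known labels.
X = [[20, 14], [35, 10], [5000, 3], [15, 9], [4200, 2], [60, 18]]
y = [0, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[4500, 4]]))  # a large 4 a.m. transaction -> likely [1]
```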
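Second, a sketch of the cyber-security use case: an isolation forest (again assuming scikit-learn) fitted to invented ‘normal’ traffic readings, flagging a sample that deviates from them.

```python
# Flagging unusual network traffic with an isolation forest.
from sklearn.ensemble import IsolationForest

# Toy features [requests_per_minute, bytes_per_request] for normal traffic.
normal_traffic = [[10, 500], [12, 480], [11, 520], [9, 510], [13, 495]]
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for samples that look normal.
print(detector.predict([[300, 20]]))  # a burst of tiny requests -> [-1]
```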