What is Artificial Intelligence (AI) and How Does it Work

Artificial intelligence (AI) is the science of creating computer systems that simulate forms of behavior associated with human intelligence. These computer systems are supported by complex software that allows them to collect and store information and use it in practice. Thanks to these features, AI is constantly advancing and improving.

History of Artificial Intelligence

We consider artificial intelligence a young science, but humans have long fantasized about inanimate objects with human-like reasoning powers. The concept appears among the ancient Greeks and Egyptians, and again in the Middle Ages.

Scientists took the first concrete steps toward realizing artificial intelligence in the 19th century, when Charles Babbage and Augusta Ada Byron, Countess of Lovelace, designed the Analytical Engine – a programmable machine that laid the groundwork for the modern computer and, with it, for AI.

The 1940s. The idea continued to develop, and each new contributor added something of their own. In 1943, Warren McCulloch and Walter Pitts began a deeper investigation of neural networks with their model of the artificial neuron, and in 1945, Princeton mathematician John von Neumann described the architecture of a computer that could store both data and programs in its memory.

The 1950s. In 1950, Alan Turing’s paper “Computing Machinery and Intelligence” was published. Turing, an English mathematician, logician, and cryptographer, was among the first to ask whether machines could think like humans. He proposed a test, known today as the “Turing Test”, to answer this question: could a computer offer examiners responses that would deceive them into thinking it was a human being? The test remains an essential part of the history and origins of AI to this day.

1956 was a significant year for the development of AI. Dartmouth College hosted a summer conference attended by some of the most eminent names in the field, including Marvin Minsky, Oliver Selfridge, and John McCarthy, who coined the term artificial intelligence. The first AI program, Logic Theorist – created by Allen Newell and Herbert A. Simon – was presented at the conference; it proved mathematical theorems.

The 1950s and 1960s. Governments recognized the potential of AI. Newell and Simon published the General Problem Solver (GPS) algorithm; McCarthy developed Lisp, an AI programming language still in use today. In the mid-1960s, MIT professor Joseph Weizenbaum developed ELIZA, an early natural language processing program.

The 1970s and 1980s saw a lull in AI research and development as governments and industries cut funding. In the 1980s, research into deep learning techniques and the adoption of Edward Feigenbaum’s expert systems revived hope that AI research could continue. Still, funding remained scarce until the mid-1990s.

The 1990s and today. Since 1990, we have witnessed a rapid development of computer science and AI that continues today. 1997 was a particularly memorable year in the history of AI – IBM’s Deep Blue defeated the reigning world chess champion, Garry Kasparov! In 2011, Deep Blue’s successor, IBM’s Watson – a question-answering computer system based on natural language processing – beat two former Jeopardy! champions.

From its infancy, AI was intended to solve problems: thanks to advanced computer learning and massive databases, expert systems would copy the process of human decision-making. Today, powered by machine learning (ML) and natural language processing (NLP), AI is more complex than ever and plays a vital role in everyday life. It is present in all essential fields – education, health care, technology, law, transport, manufacturing, and numerous others.

How does AI work?

AI and its components, such as machine learning, are often used interchangeably as synonyms, although they are not the same. AI systems require special hardware and software for writing and training machine learning algorithms. All of these parts contribute to AI, but none of them is AI on its own.

Generally speaking, AI systems work by collecting data, analyzing it, and looking for correlations within it. From these correlations they build patterns that can be used to predict future outcomes.

AI is based on three cognitive processes:

  1. The learning process collects data and converts it into valuable, usable information in the form of rules and algorithms.
  2. The reasoning process involves choosing a suitable algorithm to solve a particular task.
  3. The self-correction process continually fine-tunes algorithms so that they deliver ever more accurate results.
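The three processes above can be illustrated with a toy sketch. Everything below – the class, the method names, the data – is hypothetical and for illustration only; real AI systems are vastly more complex.

```python
# A toy illustration of learning, reasoning, and self-correction.
# All names here are hypothetical, chosen to mirror the three processes.

class ToyPredictor:
    def __init__(self):
        self.weight = 0.0  # the "knowledge" extracted from data

    def learn(self, observations):
        # Learning: convert raw (input, output) pairs into a usable rule,
        # here simply the average output-to-input ratio.
        ratios = [y / x for x, y in observations if x != 0]
        self.weight = sum(ratios) / len(ratios)

    def reason(self, x):
        # Reasoning: apply the learned rule to a new input.
        return self.weight * x

    def self_correct(self, x, actual, rate=0.1):
        # Self-correction: nudge the rule toward the observed outcome.
        error = actual - self.reason(x)
        self.weight += rate * error / x

model = ToyPredictor()
model.learn([(1, 2.1), (2, 3.9), (3, 6.0)])  # data roughly follows y = 2x
prediction = model.reason(4)                  # predicts about 8
model.self_correct(4, 8.4)                    # refine after real feedback
```

The point of the sketch is the division of labor: `learn` turns data into a rule, `reason` applies the rule, and `self_correct` improves it whenever reality disagrees with a prediction.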

Is cognitive computing part of AI?

Cognitive computing is also often used interchangeably with AI, but the two terms don’t mean the same thing. While AI systems simulate human intelligence and grow more advanced daily, aiming to become superior to humans, cognitive computing refers to products that mimic human thinking. AI systems are pre-programmed and operate without human influence, while cognitive computing relies on the human factor.

In simpler terms, AI relies on previously acquired knowledge and patterns to make an independent decision. At the same time, cognitive computing collects data, giving humans insights to help them make a decision.

What are the pros and cons of AI?

Thanks to machine learning, AI can process enormous amounts of data quickly. If properly programmed, AI is faster and more effective than humans at processing information, makes far fewer mistakes, and can produce predictions considerably more accurate than human ones.

Here are some of the key benefits of AI:

  • AI avoids human errors. Humans make mistakes; machines, if properly programmed, do not. AI makes decisions based on previously collected information and patterns of repeating situations, which yields greater precision and accuracy.
  • AI takes the risk instead of humans. As we mentioned in the introduction, AI simulates human intelligence thanks to the collected knowledge. AI robots can replace humans in risky situations, defuse bombs, fly into space, explore the ocean floor, and the like.
  • AI is productive 24/7, unlike humans. Although the average working day lasts 8 hours, some studies suggest that people are productive for only about 3 hours a day. AI doesn’t get tired, distracted, or bored – and doesn’t require breaks. It works efficiently 24/7 and excels at multitasking.
  • AI is a great online assistant. Chatbots are based on artificial intelligence and can be so complex that you cannot tell whether you correspond with a chatbot or a human operator. One thing is sure – AI does an excellent job as an online assistant and allows your customers to get the answers they want, saving you time and money.
  • AI accelerates decision-making. While people may be guided by subjective attitudes and emotions when making decisions, AI bases its decisions on facts.
  • AI performs repetitive tasks. Tasks such as checking documents for accuracy and sending thank-you notes can be monotonous. AI successfully takes over, allowing your employees to focus on more important tasks that won’t stifle their creativity.
  • AI is part of a daily routine for many. iPhone has Siri, Amazon has Alexa, and Google has Google Assistant. They are some of the most widely used forms of AI for making phone calls, sending emails, browsing the internet, and similar applications.

Unfortunately, artificial intelligence also has some less attractive sides:

  • AI is very expensive. Creating a machine that will mimic human reasoning requires advanced hardware, software, and time. AI devices are constantly updating and require maintenance, which is all very expensive.
  • AI lacks creativity. AI successfully performs the tasks it is programmed for but cannot think outside the box.
  • AI can take human jobs. One of the biggest fears related to AI is unemployment. AI-based robots have already replaced humans in many manufacturing and research jobs, and there are fears that chatbots and digital assistants will take over even more.
  • AI can make people careless. Since it takes over a large part of the responsibilities of employees, it can also lead to negligence – workers can potentially assume that AI will take care of important details for them.
  • AI has no ethics. The morality humans develop from the earliest days of childhood does not exist in AI machines. They only know what they are programmed to learn, recognize, and use.

The types of AI classification

We can divide AI systems based on functionalities and capabilities. When we talk about functionalities, which symbolize the possibility of the machines learning from experience and making human-like decisions, we distinguish four types of AI: 

  1. Reactive machines. These AI systems are the simplest and most basic version, purely reactive, with no memory of past experiences. They are based only on the present moment and react based on what they see – presently available information. Deep Blue, an IBM chess-playing supercomputer, is an example of a reactive machine.
  2. Limited memory. These AI systems retain a memory of past information that can inform future decisions. Their memory is limited, though – they don’t keep a library of records to store indefinitely and learn from. Self-driving cars are an example of such machines.
  3. Theory of mind. The machines in this class are significantly more advanced than the previous two. They are based on the psychological theory of mind – the understanding that other beings have thoughts and emotions that influence their behavior. Such future AI machines would have the social intelligence to understand human emotions and adapt their approach to each person individually.
  4. Self-awareness. Creating AI machines that have self-awareness is the future. These machines are supposed to be an extended version of the “Theory of Mind” – they will not only understand human consciousness but have their own.

Weak vs. strong AI

According to its capabilities, AI can be categorized as weak or strong, and within that framework we distinguish the following three types of AI systems:

Artificial Narrow Intelligence, or Narrow AI, is also called weak AI because of its limitations – it has predefined functions and can perform only narrowly specialized tasks, nothing more. An example of narrow – i.e., weak – AI is Apple’s Siri.

Artificial General Intelligence, or General AI for short, is imagined as a more advanced version that would be able to perform any task a human can; it would therefore qualify as strong AI. These machines should be able to apply acquired knowledge and skills in any situation. General AI does not yet exist, although large sums of money are being invested in its development.

And there is the third, most advanced kind – Artificial Super Intelligence, or Super AI. Super AI machines are supposed to surpass humans and perform tasks better than any human can. Super artificial intelligence should fully understand consciousness and human needs, but also have its own. Super AI goes beyond even strong AI – and, like General AI, this type of machine does not yet exist.

How does AI Technology Make Everyday Life Easier? 

AI technology sounds unfamiliar and futuristic to many, and they are unaware that they use it in their everyday lives. If you’ve ever asked Siri or Alexa for a recommendation, unlocked a mobile phone with facial recognition, sent a predefined SMS message, or corresponded with an e-commerce shop chatbot, you have used artificial intelligence.

Although AI is often considered a concept that awaits us in the future, the future has already arrived. Artificial intelligence is currently present in almost all spheres of life – from mobile phones and driving a car to administrative work – eliminating the need for human activity in some rather tedious and monotonous tasks. It is represented in healthcare, the financial sector, legal institutions, development centers, and many other industries.

The wide application of AI should not surprise us. Everyday life is based on the daily use of the Internet and smartphones, and information is the most valuable currency on the market. Algorithms that simulate human intelligence and decision-making can only improve the quality of life with powerful performance.

AI algorithms use technologies, the most prominent of which (but not the only ones) are machine learning and deep learning, which allow them to process data at an incredible speed, with minimal possibility of errors. That is especially important in the banking and finance sectors. Neural networks enable the development of more advanced technologies, such as facial recognition or natural language processing, allowing voice searches, dictation, and numerous other uses. AI systems also use other prominent technologies: automation, machine vision, and robotics.

Artificial intelligence algorithms are self-improving in real-time, using every query as a chance to improve the speed and accuracy of the answers provided.
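As a rough sketch, this kind of per-query improvement resembles an online-learning update, where each new observation nudges the current estimate a little closer to the truth. The function name and the feedback values below are hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch of per-query self-improvement: each new observation
# moves the current estimate a small step toward the feedback it carries.

def online_update(estimate, observation, rate=0.05):
    # Standard online-learning step: shrink the gap between the current
    # estimate and the latest observation by a small learning rate.
    return estimate + rate * (observation - estimate)

estimate = 0.0
for feedback in [10, 12, 11, 13, 12]:  # a stream of query feedback signals
    estimate = online_update(estimate, feedback)
# after each query, the estimate drifts closer to the observed values
```

With a small learning rate, no single query dominates, but over many queries the system steadily converges toward what the feedback indicates – which is the essence of improving “in real time, with every query”.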


Fields of AI implementation 

Artificial intelligence has found application in almost all areas of life and work, and here are a few examples of applications where its presence has become irreplaceable:

AI in healthcare. AI has numerous applications in healthcare – from faster data processing and analysis, suggestions for treatment plans, prediction of the course of pandemics, health chatbots, etc. One of the most significant benefits of using AI in healthcare is reducing costs and treating patients more efficiently, which can be a matter of life or death.

AI in education. AI systems can help students learn faster and more effectively, especially when paired with quality learning materials. They can also help teachers prepare teaching materials or score and grade tests, leaving more time to work with students.

AI in law. Attorneys and judges can use AI to draft contracts, predict the outcome of legal disputes, and propose sanctions for crimes committed. Judicial systems operate quite slowly, and AI could speed them up, reducing the risk of errors and increasing the productivity of lawyers and judges. AI in law can be a valuable aid, but it still cannot replace human labor, as it lacks ethics and judgment.

AI in finance. For most banks and financial companies, AI can be a valuable help. AI can automate tasks, help detect fraud, or serve as a chatbot that makes banking mobile and web applications easier to use. In this way, employees’ work becomes easier and faster, and the user experience improves, with the bank’s services available 24/7.

What Can We Expect from AI in the Future?

AI systems already have a pretty strong impact on our lives, and that impact only stands to become more significant in the coming days, weeks, and months, let alone years. AI technology is developing faster than most have guessed, becoming more autonomous, productive, and efficient by the day. It’s expected to expand into even more industries, bringing both relief and concern.

Many are worried that AI machines could eliminate their jobs. It’s a justifiable worry: AI can remove risks, take on dangerous tasks, and fill in tedious, repetitive paperwork and generic positions to leave more room for people to be creative. However, no matter how advanced, AI machines lack self-awareness, ethics, emotions, and other subjective faculties, making it unlikely that they will take over workplaces in child care, health, law, art, and other fields.

AI has already changed the world we know, and it will change it a lot in the future – let’s hope for the better.

So… what now?

We are still not close to developing strong and super AI systems that would function independently and have their own consciousness and self-awareness. In the meantime, machines that make decisions based on previous experience and collected information are being developed quickly and successfully.

Neil Sahota
Neil Sahota (萨冠军) is an IBM Master Inventor, United Nations (UN) Artificial Intelligence (AI) Advisor, author of the best-seller Own the AI Revolution, and a sought-after speaker. With 20+ years of business experience, Neil works to inspire clients and business partners to foster innovation and develop next-generation products/solutions powered by AI.