What is artificial intelligence (AI): definition of the concept in simple words.

The best-known way to determine whether a machine has intelligence is the Turing test, proposed in 1950 by the mathematician Alan Turing. During the test, a person converses with a computer and must determine who is answering: a machine or a human. If the machine can convincingly imitate conversation, it is deemed intelligent. Today the Turing test is showing its age: in the summer of 2014 the chatbot Eugene Goostman was claimed to have passed it, and the test itself is constantly criticized. Look At Me has put together eight other ways to tell whether a machine has intelligence.

Lovelace Test 2.0


This test is named after Ada Lovelace, the 19th-century mathematician who is considered the first computer programmer in history. It is designed to detect intelligence in a machine through its capacity for creativity. In the original 2001 version, the machine had to create a work of art that the machine's own designer would mistake for human-made. Since there were no clear criteria for success, the test was too imprecise.

Last year, Professor Mark Riedl of the Georgia Institute of Technology updated the test to make it less subjective. Now the machine must create a work in a given genre and within creative constraints set by a human judge; simply put, it must produce a work of art in a specific style. For example, a judge might ask the machine to paint a Mannerist picture in the vein of Parmigianino or compose a jazz piece in the vein of Miles Davis. Because the machine works within set constraints, the judges can evaluate the result more objectively.

IKEA test


The machine is shown a picture, asked where, say, the cup is in it, and given several answers. All of the options are correct (on the table, on the mat, in front of the chair, to the left of the lamp), but some are more human than others (of those listed, a person would most likely answer "on the table"). It sounds like a simple task, but the ability to describe where an object is relative to other objects is in fact an essential element of the human mind. Many nuances and subjective judgments come into play, from the size of objects to their role in a particular situation: in short, context. Humans handle this intuitively; machines run into problems.

Winograd Schemas


Chatbots that pass the Turing test are good at tricking judges into believing they are human. According to Hector Levesque, professor of computer science at the University of Toronto, such a test only shows how easy it is to deceive a person, especially in short text correspondence. But the Turing test cannot show whether a machine actually has intelligence, or even understands language.

The concept of artificial intelligence (AI) covers more than just the technologies used to create intelligent machines (including computer programs). AI is also a field of scientific thought.

Artificial Intelligence - Definition

Intelligence is the mental faculty of a person that comprises the following abilities:

  • adaptation;
  • learning through the accumulation of experience and knowledge;
  • applying knowledge and skills to manage the environment.

Intelligence combines all of a person's abilities to perceive reality. With it, a person thinks, remembers new information, perceives the environment, and so on.

Artificial intelligence is understood as a branch of information technology concerned with the study and development of systems (machines) endowed with the capabilities of human intelligence: the ability to learn, to reason logically, and so on.

At present, work on artificial intelligence proceeds by creating new programs and algorithms that solve problems the way a person does.

Because the definition of AI evolves as the field develops, it is worth mentioning the AI effect: whenever artificial intelligence makes progress, critics immediately argue that the success does not indicate the presence of real thinking in the machine. For example, once AI learns to perform some task, that task tends to be redefined as not requiring intelligence after all.

Today, the development of artificial intelligence goes in two independent directions:

  • neurocybernetics;
  • logical approach.

The first direction involves studying neural networks and evolutionary computing from the standpoint of biology. The logical approach involves developing systems that imitate high-level intellectual processes: thinking, speech, and so on.

The first work in the field of AI began in the middle of the last century. Alan Turing pioneered research in this direction, although related ideas had been voiced by philosophers and mathematicians for centuries. In particular, as early as the beginning of the 20th century, a mechanical device capable of solving chess problems was demonstrated.

But the field really took shape only by the middle of the last century. Work on AI was preceded by research into human nature, ways of knowing the surrounding world, the possibilities of thought, and other areas. By that time the first computers and algorithms had appeared; that is, the foundation had been laid on which the new research direction was born.

In 1950, Alan Turing published a paper asking what future machines would be capable of and whether they could come to rival humans in intelligent behavior. It was Turing who devised the procedure later named after him: the Turing test.

After the publication of the English scientist's work, new research in AI appeared. According to Turing, only a machine that cannot be distinguished from a person in conversation can be recognized as thinking. Around the same time, Turing proposed the concept of the "child machine": the progressive development of AI by creating machines whose thought processes first form at the level of a child and then gradually improve.

The term "artificial intelligence" itself was born later. In 1956, a group of scientists met at Dartmouth College in the United States to discuss questions related to AI. After that meeting, active development of machines with artificial intelligence capabilities began.

A special role in the creation of new AI technologies was played by military agencies, which actively funded this area of research. Later, work on artificial intelligence began to attract large companies.

Modern life poses more challenging tasks for researchers, so AI now develops under fundamentally different conditions than those of its early years. Globalization, malicious activity in the digital sphere, the growth of the Internet, and other issues all set scientists complex tasks whose solutions lie in the field of AI.

Despite recent successes in the field (for example, the emergence of autonomous vehicles), skeptics still doubt that a true artificial intelligence, rather than merely a very capable program, will ever be created. A number of critics fear that active AI development will soon lead to machines completely replacing people.

Research directions

Philosophers have not yet reached a consensus on the nature and status of human intellect. Accordingly, scientific works on AI offer many different ideas about which tasks artificial intelligence solves, and there is no common understanding of what kind of machine can be considered intelligent.

Today, the development of artificial intelligence technologies goes in two directions:

  1. Descending (semiotic, top-down). It involves developing new systems and knowledge bases that imitate high-level mental processes such as speech, the expression of emotions, and thinking.
  2. Ascending (biological, bottom-up). This approach involves research on neural networks, which model intelligent behavior in terms of biological processes. Neurocomputers are being created on the basis of this direction.

The Turing test determines whether artificial intelligence (a machine) can think the same way a person does. In a general sense, this approach requires AI whose behavior does not differ from human behavior in ordinary situations. The Turing test assumes that a machine is intelligent only if, while communicating with it, one cannot tell whether one is talking to a mechanism or a living person.

Science fiction offers a different way to assess the capabilities of AI: artificial intelligence becomes real when it can feel and create. However, this definition does not hold up in practice. Machines are already being built that respond to changes in the environment (cold, heat, and so on), yet they cannot feel the way a person does.

Symbolic approach

Success in problem solving largely depends on the ability to approach a situation flexibly. Machines, unlike people, interpret the data they receive in a uniform way, so a human still has to take part in solving problems. A machine performs operations according to written algorithms that exclude the use of multiple models of abstraction. Programs can be made more flexible only by increasing the resources devoted to solving the problem.

These disadvantages are typical of the symbolic approach used in AI development. Nevertheless, this direction makes it possible to create new rules during computation, and the problems that arise with the symbolic approach can be solved by logical methods.

Logical approach

This approach involves the creation of models that mimic the process of reasoning. It is based on the principles of logic.

This approach does not rely on rigid algorithms that lead to a predetermined result.

Agent Based Approach

It relies on intelligent agents and assumes the following: intelligence is the computational part of the ability to achieve goals. The machine plays the role of an intelligent agent: it perceives the environment through sensors and acts on it through mechanical parts.

The agent-based approach focuses on the development of algorithms and methods that allow machines to remain operational in various situations.

Hybrid approach

This approach integrates neural and symbolic models to cover the full range of thinking and computing tasks. For example, neural networks can generate the direction in which the machine's operation moves, while statistical learning provides the basis on which problems are solved.

According to experts at Gartner, by the beginning of the 2020s almost all new software products will use artificial intelligence technologies. They also suggest that about 30% of investment in the digital sphere will go to AI.

According to Gartner analysts, artificial intelligence opens up new opportunities for cooperation between people and machines. At the same time, the displacement of humans by AI cannot be stopped and will only accelerate in the future.

Analysts at PwC believe that by 2030 global gross domestic product will grow by about 14% thanks to the rapid adoption of new technologies. Roughly half of that increase will come from greater efficiency of production processes; the other half will be additional profit from embedding AI in products.

The United States will feel the effect of artificial intelligence first, since it has created better conditions for operating AI-driven machines. Later it will be overtaken by China, which will extract the maximum profit by introducing such technologies into products and their production.

Experts at Salesforce claim that AI will increase small-business revenue by about $1.1 trillion by 2021. This will be achieved partly by implementing AI solutions in customer-communication systems and partly by automating production processes.

The introduction of new technologies will also create an additional 800,000 jobs, a figure experts say offsets the vacancies lost to automation. Based on a survey of companies, analysts predict spending on factory automation will rise to about $46 billion by the early 2020s.

Work on AI is also under way in Russia. Over 10 years, the state has financed more than 1,300 projects in this area, with most of the investment going to developments unrelated to commercial activity. This shows that the Russian business community is not yet interested in adopting artificial intelligence technologies.

In total, about 23 billion rubles have been invested in Russia for these purposes. The amount of government subsidies falls short of the AI funding reported by other countries: in the United States, about $200 million is allocated every year.

In Russia, funds for AI development are allocated mainly from the state budget and go to the transport sector, the defense industry, and security-related projects. This indicates a preference for investing in areas that promise a quick return on the invested funds.

The same study showed that Russia has high potential for training specialists who could be involved in developing AI technologies: over the past five years, approximately 200,000 people have been trained in AI-related fields.

AI technologies are developing in the following directions:

  • solving problems that make it possible to bring the capabilities of AI closer to human ones and find ways to integrate them into everyday life;
  • development of a full-fledged mind, through which the tasks facing humanity will be solved.

At the moment, researchers are focused on developing technologies that solve practical problems. So far, scientists have not come close to creating a full-fledged artificial intelligence.

Many companies are developing AI technologies. Yandex has been using them in its search engine for years. Since 2016, the Russian IT company has conducted research on neural networks, which are changing how search engines work: a neural network maps the user's query to a vector that captures the meaning of the request. In other words, the search is conducted not by the words themselves but by the essence of the information the person is requesting.
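The idea of matching a query and documents by vectors can be illustrated with a toy sketch. Note the simplification: real systems like the one described learn dense vectors with a neural network, so that synonyms land close together; here we use plain word-count vectors and cosine similarity purely to show the mechanics, with a made-up vocabulary and documents.

```python
import math

# Toy "embedding": count vocabulary words in the text.
# A real search engine learns these vectors with a neural network.
def embed(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

# Cosine similarity: how closely two vectors point the same way.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["movie", "film", "about", "space", "cooking", "pasta"]
documents = ["film about space", "cooking pasta"]
query = "movie about space"

q = embed(query, vocab)
scores = [(cosine(q, embed(d, vocab)), d) for d in documents]
best = max(scores)[1]
print(best)  # -> film about space
```

Even this crude version ranks the space document first because the query and document vectors overlap in meaning-bearing words; learned embeddings go further and would also connect "movie" with "film".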

In 2016, Yandex launched Zen, a service that analyzes user preferences.

ABBYY recently introduced Compreno, a system that can understand text written in natural language. Other systems based on artificial intelligence technologies have also entered the market relatively recently:

  1. Findo. A system that recognizes human speech and searches for information in various documents and files using complex queries.
  2. Gamalon. This company introduced a system with the ability to self-learn.
  3. Watson. An IBM computer that uses a large number of algorithms to search for information.
  4. ViaVoice. Human speech recognition system.

Large commercial companies are not ignoring advances in artificial intelligence either. Banks are actively adopting such technologies: AI-based systems trade on exchanges, manage assets, and perform other operations.

The defense industry, medicine, and other fields are adopting object-recognition technologies, and computer game developers use AI to create their next products.

Over the past few years, a group of American scientists has been working on the NEIL project, in which researchers ask a computer to recognize what is shown in photographs. The experts expect in this way to create a system capable of learning on its own, without external intervention.

VisionLabs has introduced its LUNA platform, which can recognize faces in real time by picking them out of huge collections of images and videos. The technology is already used by large banks and retail chains. With LUNA, they can compare people's preferences and offer them relevant products and services.

The Russian company NtechLab is working on similar technologies, trying to build a face recognition system based on neural networks. According to the latest data, the Russian system handles these tasks better than a human.

According to Stephen Hawking, the development of artificial intelligence will eventually lead to the demise of humankind. The scientist noted that people will gradually degrade because of AI, and under natural evolution, where a person must constantly struggle to survive, this process will inevitably lead to extinction.

Russia views the introduction of AI positively. Alexei Kudrin once said that using such technologies would reduce the cost of maintaining the state apparatus by about 0.3% of GDP. Dmitry Medvedev predicts that a number of professions will disappear because of AI, but stresses that these technologies will drive the rapid development of other industries.

According to experts from the World Economic Forum, by the beginning of the 2020s, about 7 million people in the world will lose their jobs due to the automation of production. The introduction of AI is highly likely to cause the transformation of the economy and the disappearance of a number of professions related to data processing.

McKinsey experts state that automation will proceed most actively in Russia, China, and India, where up to 50% of workers may lose their jobs to AI in the near future, replaced by computerized systems and robots.

According to McKinsey, artificial intelligence will replace jobs that involve physical labor and information processing: retail, hotel staff, and so on.

By the middle of this century, according to the American firm's experts, the number of jobs worldwide will shrink by about 50%. People will be replaced by machines that carry out the same operations with equal or greater efficiency. The experts do not rule out this forecast being realized even sooner.

Other analysts point to the harm robots can cause. McKinsey experts note, for example, that robots, unlike humans, do not pay taxes; with falling budget revenues, the state will struggle to maintain infrastructure at its current level. For this reason, Bill Gates has proposed a new tax on robotic equipment.

AI technologies increase companies' efficiency by reducing the number of errors and raising the speed of operations to levels a person cannot reach.

What is artificial intelligence? Many have no doubt heard of cars that drive themselves without human assistance and of speech recognition assistants such as Apple's Siri, Amazon's Alexa, Google's Assistant, and Microsoft's Cortana. But these are far from all the possibilities of artificial intelligence (AI).

AI was first "discovered" in the 1950s. Over the years it has had its ups and downs, but today artificial intelligence is seen as a key technology of the future. Thanks to advances in electronics and ever-faster processors, a growing number of applications use AI. Artificial intelligence is an extraordinary software technology that every engineer should become familiar with; this article briefly describes it.

Artificial intelligence defined

AI is a subfield of computer science that makes smarter use of computers and electronic components by mimicking the human brain. Intelligence is the ability to acquire knowledge and experience and apply it to solve problems. AI is especially useful for analyzing and interpreting data sets and extracting genuinely useful information; from information comes insight that can inform decisions or actions.

Areas of study

Artificial intelligence is a broad technology with many possible applications. It is usually divided into sub-branches. Let's take a quick look at each of them:

  • General problem solving: tackling problems that have no specific algorithmic solution, including problems involving uncertainty and ambiguity.
  • Expert systems: software containing a knowledge base of rules, facts, and data drawn from individual experts. The base can be queried to solve problems, diagnose diseases, or provide advice.
  • Natural language processing (NLP): used for text analysis; voice recognition is also part of NLP.
  • Computer vision: the analysis and understanding of visual information (photos, videos, and so on). Machine vision and face recognition are examples, used in autonomous vehicles and on production lines.
  • Robotics: building smarter, more adaptive, and more self-reliant robots.
  • Games: AI is great at playing games. Computers are already programmed to play and win at chess, poker, and Go.
  • Machine learning: procedures that allow a computer to learn from input and make sense of the results. Neural networks form the basis of machine learning.

How artificial intelligence works

Ordinary computers use algorithms to solve problems: a sequence of instructions executed step by step to obtain a result. Traditional forms of artificial intelligence are based on knowledge bases and inference engines that use various mechanisms to work with the knowledge base through a user interface. Useful results have been obtained with the methods listed below:

  • Search: Search algorithms use a database of information organized into graphs or trees. Search is a fundamental AI method.
  • Logic: Deductive and inductive reasoning is used to determine the truth or falsity of statements. This includes both propositional logic and predicate logic.
  • Rules: Rules are a series of "if" statements that can be chained to determine an outcome. Rule-based systems are called expert systems.
  • Probability and statistics: Some problems can be solved by applying standard mathematical probability theory and statistics.
  • Lists: Some types of information can be stored in searchable lists.
  • Other forms of knowledge: Schemas, frames, and scripts are structures that encapsulate various types of knowledge; search methods look up answers to relevant queries in them.
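The "rules" method above can be sketched in a few lines. This is a minimal forward-chaining engine with made-up medical rules for illustration only, not a real expert system: each rule is a set of premises and a conclusion, and rules keep firing until no new facts appear.

```python
# Each rule: (set of premises, conclusion). Facts and rules are invented
# for illustration; real expert systems hold hundreds of expert-written rules.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Forward chaining: fire every rule whose premises hold, repeat."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # derived a new fact
                changed = True
    return facts

result = infer({"has_fever", "has_cough", "short_of_breath"}, rules)
print(sorted(result))
```

Note how the second rule fires only after the first has derived "possible_flu": chaining simple "if" statements is what gives rule-based systems their power.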

Traditional or legacy AI methods such as search, logic, probability, and rules are considered the first wave of artificial intelligence. These methods are still in use and handle knowledge and reasoning well, especially for narrow problem domains. What the first wave of AI lacks is the human capacity for learning and abstraction in decision-making, qualities now available in the second wave of artificial intelligence thanks to neural networks and machine learning.

Neural networks

Today, most AI research and development is based on neural networks, or artificial neural networks (ANNs). These networks are made up of artificial neurons that mimic the neurons in the human brain responsible for thinking and learning. In the brain, each neuron is a node in a complex web that connects it to many other neurons through synapses; an ANN simulates this network.

Each node has several weighted inputs, an output, and a threshold setting. Such nodes are usually implemented in software, although hardware emulation is also possible. A typical network consists of three layers: an input layer, a hidden (processing or training) layer, and an output layer.

Some mechanisms use backpropagation to provide feedback that changes the input weights of some nodes as new information is received.
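A single node of the kind described, weighted inputs plus a threshold, fits in a few lines. This is only a sketch of the concept; the weights below are chosen by hand (real networks learn them), and a step threshold stands in for the smoother activations used in practice.

```python
# One artificial neuron: weighted sum of inputs plus a bias,
# passed through a step (threshold) activation.
def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0   # fires only when the sum clears the threshold

# Hand-picked weights make this two-input neuron act like logical AND:
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```

The neuron outputs 1 only for the input (1, 1): both inputs must contribute their weight for the sum to exceed the threshold, which is exactly the "weighted inputs plus threshold" behavior of the node described above.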

Machine learning and deep learning

Machine learning is a method of teaching a computer to recognize patterns. The computer or device is trained on examples, and then programs compare new input with what has been learned. Training software typically requires huge amounts of data. Machine learning programs are meant to keep learning automatically as they gain more knowledge and experience from new material.

Neural networks are commonly used for machine learning, but other algorithms can be used as well. The software can then modify itself to improve recognition based on new inputs. Some machine learning systems can now recognize patterns on their own, without explicit training, and then modify themselves for further improvement.
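The idea of "learning from examples" can be made concrete with the classic perceptron learning rule: whenever the neuron answers wrong, nudge its weights toward the correct answer. This is an illustrative sketch (here it learns logical OR from four labelled examples), not production machine learning.

```python
# Perceptron training: adjust weights after every mistake.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1        # move weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Four labelled examples of logical OR:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 1, 1, 1]
```

No one programmed the OR rule explicitly: the weights were found from the data, which is the essence of machine learning as opposed to the hand-written algorithms of the previous section.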

Deep learning is an extended case of machine learning. It also uses neural networks, called deep neural networks (DNNs), which include additional hidden layers of computation to further extend their capabilities. Large amounts of training data are required, and DNNs also demand heavy matrix processing; programmers can improve performance by tuning the interconnection weights. Note that DNNs use statistical weights, so results in, say, visual recognition may not be 100% accurate. In addition, debugging such systems is painstaking work.

Machine learning and deep learning are widely used in big data analysis, as well as in computer vision and speech recognition. They can also be applied in other areas such as medicine, law and finance.

Artificial intelligence software

Almost any programming language can be used for AI programming, but some have particular advantages. Languages designed specifically for AI include LISP and Prolog: LISP, one of the oldest high-level languages, is built around lists, while Prolog is based on logic. C++ and Python are popular today. There is also specialized software for developing expert systems.

Several major AI users provide development platforms, including Amazon, Baidu (China), Google, IBM, and Microsoft. These companies offer pre-trained systems as a starting point for some common applications such as voice recognition. Processor vendors such as Nvidia and AMD also offer some support.

AI hardware

Running artificial intelligence software usually requires high speed and a large amount of memory, although some simple applications can run on an 8-bit processor. Many of today's processors are more than adequate, and multiple parallel processors can be an ideal solution for certain applications. In addition, special processors have been developed for some applications.

Graphics processing units (GPUs) are an example of focusing an architecture and instruction set on a given use to optimize performance; examples include Nvidia's dedicated processors for self-driving cars and AMD GPUs. Google has developed its own engines to optimize its search. Intel and KnuPath also offer software support for their advanced processors. In some cases, custom logic in an ASIC or FPGA can implement a particular application.

Activity and current status

Artificial intelligence was once considered exotic software for special needs, and the requirement for high-speed computers with plenty of memory limited its use. Today, thanks to very fast multi-core processors and cheap memory, AI has become far more widespread. The Google search engine we all use daily is based on artificial intelligence.

Today, the emphasis is undoubtedly on neural networks and deep machine learning. While voice recognition and self-driving vehicles continue to take center stage, other key applications such as facial recognition, autonomous navigation, robotics, medical diagnostics, and finance are emerging. Advanced military applications (such as autonomous weapons) are also in development.

The future of AI looks promising. According to Orbis Research, the global AI market is expected to grow through 2022 at a compound annual growth rate of over 35%. The International Data Corporation (IDC) is also positive, saying AI spending is expected to increase to $47 billion in 2020, up from $8 billion in 2016.

Many people naturally ask whether artificial intelligence will replace humans in certain professions, and which ones. The answer is: possibly, and only in some. Most likely, AI-based computers will raise the productivity, efficiency, and decision-making speed of many professions. Some industrial jobs will still be lost as robotics develops, but replacing humans with machines will also create new jobs related to maintaining those machines.

Another common question is whether artificial intelligence can be dangerous for humanity. AI is smart, but not that smart: its main purpose is data analysis, problem solving, and decision-making based on available information and distilled knowledge. People still dominate, especially when it comes to innovation and creativity. The future is hard to predict, but at this stage of development there are no super-smart robots. Not yet.

Many people think that artificial intelligence is a distant future, but we face it daily.

Saudi Arabia, 2017: the world's first robot receives citizenship. This is Sophia, the media's best-known representative of artificial intelligence technologies. She can hold a conversation, reproduces up to 62 believable facial expressions, and makes provocative statements and jokes about Elon Musk and the destruction of humanity.

It might seem that such technologies are still far removed from "mere mortals," yet in fact we interact with artificial intelligence every day. So what is it, where is it found, and how do machines manage to learn?

What, when, where

Asked what artificial intelligence (AI) is, Wikipedia answers that it is a branch of computer science and computational linguistics that formalizes tasks resembling those performed by a person.

In simple terms, artificial intelligence (AI) is a broad branch of computer science that aims to imitate human intelligence by machines. And although this technology has been actively talked about since the early 2000s, it is far from new.

The term "artificial intelligence" was coined by Dartmouth College professor John McCarthy back in 1956, when he led a small team of scientists trying to determine whether machines could learn like children, through trial and error, eventually developing formal thinking.

In fact, the project was based on the intention to figure out how to make machines "use language, abstract forms, solve the problems that people usually solve, and improve." And that was over 60 years ago.

Why the demand for AI has arisen right now

1. Today we are dealing with an unprecedented amount of information: 90% of the world's data was created in the past few years. This statistic was first cited in an IBM study back in 2013, and the trend has held. Indeed, over the past three decades, the amount of data in the world has grown roughly tenfold every two years.

2. Algorithms are becoming more sophisticated, and machines running neural networks can approximate the way the human brain works and form complex associations.

3. Computing power is constantly growing and is able to process a huge amount of data.

Put it all together and you get a host of engineers, CEOs, and venture capitalists who invest in the development of AI and are interested in the technology's progress.

"Artificial Intelligence" and we

AI technologies have been capturing the public's imagination for decades, but many don't realize they use them every day.

For example, the marketing software company HubSpot surveyed 1,400 people from different parts of the world, and it turned out that 63% of them do not realize they use AI daily.

Perhaps this is because when we hear "artificial intelligence", we expect to see a smart robot that talks and thinks like us. And while Sophia and machines like her may seem like a greeting from the future, the technology is still far from self-aware.

Now we are surrounded by many incredibly complex artificial intelligence tools that are designed to facilitate all aspects of modern life. Here are just a few of them:

Voice assistants

Search assistants such as Siri, Alexa, and Cortana are equipped with human voice processing and recognition software, which makes them AI tools. Voice search is already available on 3.9 billion Apple, Android, and Windows devices worldwide, not counting other manufacturers. Thanks to that reach, voice search is one of the most widespread AI-supported technologies.

Video games

Video games have long used AI, whose complexity and effectiveness have grown exponentially over the past few decades. As a result, virtual characters can behave in surprisingly unpredictable ways by analyzing their environment.
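The kind of environment-driven behavior described above can be sketched as a set of prioritized rules. This is a toy illustration only; real games use behavior trees or utility systems, and the inputs (health, enemies, ammo) are hypothetical:

```python
# A toy sketch of game AI: a non-player character picks an action by
# analyzing its surroundings. The parameter names are illustrative only.

def npc_action(health: int, enemies_near: int, has_ammo: bool) -> str:
    """Choose a behavior from a simple set of prioritized rules."""
    if health < 20:
        return "retreat"            # survival overrides everything else
    if enemies_near and not has_ammo:
        return "search_for_ammo"    # can't fight without ammunition
    if enemies_near:
        return "attack"
    return "patrol"                 # default behavior when nothing happens

print(npc_action(health=80, enemies_near=2, has_ammo=True))   # attack
print(npc_action(health=10, enemies_near=2, has_ammo=True))   # retreat
```

Because the character's choice depends on the whole situation rather than a fixed script, even this crude rule set can look adaptive to a player; modern games layer many more such signals on top of each other.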

Autonomous cars

Fully autonomous cars are getting closer to reality. Google has announced an algorithm that can learn to drive a car the way a person does: through experience. The idea is that the car will eventually be able to "look" at the road and make decisions based on what it sees.

Product recommendations

Big retailers like Target and Amazon make millions from their stores' ability to anticipate your needs. For example, the recommendation service on Amazon.com is built on machine learning technologies, which also help plot optimal routes for automated movement in its fulfillment centers.

The same technologies power supply chains and systems for forecasting and resource allocation. Natural language understanding and recognition formed the basis of the Alexa service. Deep learning underpins the company's drone initiative, Prime Air, and the machine vision technology behind the Amazon Go retail stores.

Online customer support

In the service industry, chatbots have revolutionized the customer experience, and consumers find them as convenient as phone or email.

The concept is simple: an AI bot running on a company's website responds to visitor queries such as "What's the price?", "What is your company's phone number?", or "Where is your office?". The visitor gets a direct answer instead of hunting for the information on the site.
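A minimal sketch of that idea is a keyword-matching bot. The questions, answers, and keywords below are hypothetical; real site bots add natural language processing, conversation context, and escalation to a human:

```python
# Minimal keyword-matching FAQ chatbot. All entries are made-up examples.

FAQ = {
    ("price", "cost", "how much"): "The basic plan costs $10 per month.",
    ("phone", "call us", "number"): "You can reach us at +1-555-0100.",
    ("office", "address", "where"): "Our office is at 1 Example Street.",
}

def answer(query: str) -> str:
    """Return the first canned answer whose keywords appear in the query."""
    q = query.lower()
    for keywords, reply in FAQ.items():
        if any(k in q for k in keywords):
            return reply
    return "Sorry, I don't know. A human agent will contact you."

print(answer("What's the price?"))       # The basic plan costs $10 per month.
print(answer("Where is your office?"))   # Our office is at 1 Example Street.
```

Even this crude lookup captures the core value proposition: the visitor gets an immediate answer to a routine question, and only unmatched queries fall through to a human.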


News portals

Artificial intelligence can already write simple stories such as financial reports and sports recaps. One Halloween, researchers at the Massachusetts Institute of Technology even unveiled a neural network that writes horror stories.

The essence of artificial intelligence in a question-and-answer format: the history of its creation, research approaches, whether artificial intelligence is related to IQ, and whether it can be compared with human intelligence. The questions are answered by Stanford University professor John McCarthy.

What is artificial intelligence (AI)?

Artificial intelligence is a field of science and engineering concerned with creating machines and computer programs that have intelligence. It is related to the task of using computers to understand human intelligence, but AI need not confine itself to biologically observable methods.

Yes, but what is intelligence?

Intelligence is the computational part of the ability to achieve goals in the world. People, many animals, and some machines possess intelligence of varying kinds and levels.

Is there a definition of intelligence that does not depend on relating it to human intelligence?

Not yet. We do not yet understand which kinds of computational procedures we want to call intelligent, and we understand only some of the mechanisms of intelligence.

Is intelligence an unambiguous concept, so that the question "Does this machine have intelligence?" can be answered yes or no?

No. AI research has shown how to use only some of the mechanisms of intelligence. When a task requires only well-understood mechanisms, the results can be very impressive. Such programs have "a little" intelligence.

Is artificial intelligence an attempt to mimic human intelligence?

Sometimes, but not always. On the one hand, we learn how to make machines solve problems by observing people, or our own methods, at work. On the other hand, AI researchers use algorithms that are not observed in humans or that require far greater computational resources.

Do computer programs have an IQ?

No. IQ is based on the rate at which intelligence develops in children. It is the ratio of the age at which a child typically achieves a given score to the child's actual age, and the measure is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life. But making computers that score high on IQ tests would have little to do with their usefulness. For example, a child's ability to repeat back a long sequence of digits correlates well with other intellectual abilities, since it shows how much information the child can hold at one time. Yet holding digits in memory is a trivial task for even the most primitive computers.

How to compare human and computer intelligence?

Arthur R. Jensen, a leading researcher in human intelligence, suggests as a "heuristic hypothesis" that ordinary people share the same mechanisms of intelligence and that intellectual differences are due to "quantitative biochemical and physiological conditions": speed of thought, short-term memory, and the ability to form accurate and retrievable long-term memories.

Whether or not Jensen's view of human intelligence is correct, the situation in AI today is the opposite.

Computer programs have plenty of speed and memory, but their abilities correspond only to the intellectual mechanisms that their developers understand well enough to build in. Some abilities that children normally don't develop until adolescence are already in place; others possessed by two-year-olds are still missing. The problem is compounded by the fact that the cognitive sciences still cannot say exactly what human abilities consist of. Quite possibly, the organization of the intellectual mechanisms of AI can usefully differ from that in humans.

Whenever a human can solve a problem faster than a computer, it shows that the developers lack an understanding of the intellectual mechanisms needed to perform the task efficiently.

When did AI research start?

After World War II, a number of people began working independently on intelligent machines. The English mathematician Alan Turing may have been the first; he gave a lecture on the subject in 1947. Turing was also among the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s there were many AI researchers, and most based their work on programming computers.

Is the purpose of AI to put the human mind into a computer?

The human mind has so many features that it is hardly realistic to imitate every one of them.


What is the Turing test?

Alan Turing's 1950 paper "Computing Machinery and Intelligence" discussed the conditions under which a machine could be considered intelligent. He argued that if a machine can successfully pretend to be human to a knowledgeable observer, then it should certainly be considered intelligent. This criterion would satisfy most people, but not all philosophers. The observer interacts with the machine and a human through a text channel, so the machine does not have to imitate human appearance or voice. The task of both the machine and the human is to make the observer believe they are human.

The Turing test is one-sided: a machine that passes it should certainly be considered intelligent, but a machine could still be intelligent without knowing enough about humans to imitate one.

Daniel Dennett's book "Brainchildren" contains an excellent discussion of the Turing test and of the various partial versions of it that have been implemented, i.e. with restrictions on the observer's knowledge of AI and of the subject matter. It turns out that some people are rather easily convinced that a fairly primitive program is intelligent.

Is the goal of AI to reach human levels of intelligence?

Yes. The ultimate goal is to create computer programs that can solve problems and achieve goals as well as humans can. However, researchers working in narrow areas set far less ambitious goals.

How far is artificial intelligence from reaching the human level? When will it happen?

Some believe that human-level intelligence can be achieved by writing large numbers of programs and assembling vast knowledge bases of facts in the languages now used to express knowledge. However, most AI researchers believe that new fundamental ideas are needed, so it is impossible to predict when human-level intelligence will be created.

Is the computer a machine that can become intelligent?

Computers can be programmed to simulate any type of machine.
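One way to make this concrete is a small interpreter for finite automata: the program below is a general simulator, and a particular "machine" is just a transition table fed into it. The example machine, which checks whether a bit string contains an even number of 1s, is an illustrative choice, not something from the source:

```python
# Simulating a machine in software: a generic finite-automaton interpreter.
# The transition table, not the interpreter, defines which machine runs.

def run_fsm(transitions, start, accepting, inputs):
    """Feed `inputs` through a transition table; report whether we accept."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# A concrete machine: does the bit string contain an even number of 1s?
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_fsm(even_ones, "even", {"even"}, "1101"))  # three 1s -> False
print(run_fsm(even_ones, "even", {"even"}, "1100"))  # two 1s   -> True
```

Swapping in a different table simulates a different machine with the same interpreter, which is the essence of the claim: a programmable computer is universal with respect to the machines it can imitate.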

Does the speed of computers allow them to be intelligent?

Some people think that both faster computers and new ideas are required. But computers were fast enough even 30 years ago, if only we had known how to program them.

What about creating a "child machine" that could be improved by reading and learning from experience?

This idea has been proposed repeatedly since the 1940s, and eventually it will be implemented. So far, however, AI programs have not learned much of what a child learns in the course of ordinary life. Existing programs do not understand language well enough to learn much by reading.

Are computability theory and computational complexity the keys to AI?

No. These theories are relevant but do not address the fundamental problems of AI.

In the 1930s, the mathematical logicians Kurt Gödel and Alan Turing established that no algorithm is guaranteed to solve all problems in certain important mathematical domains, for example, questions such as "is this sentence of first-order logic a theorem?" or "does this polynomial equation have integer solutions?". Since humans can solve problems of these kinds, this fact has been put forward as an argument (one Roger Penrose also makes) that computers are inherently incapable of doing what humans do. However, humans cannot guarantee solutions to arbitrary problems in these domains either.

In the 1960s, computer scientists such as Steve Cook and Richard Karp developed the theory of NP-complete problem domains. Problems in these domains are solvable, but their solution apparently requires time that grows exponentially with the size of the problem. The simplest example of an NP-complete domain is the question: which statements of propositional logic are satisfiable? Humans often solve problems in NP-complete domains far faster than the general algorithms guarantee, but they cannot solve them quickly in the general case.
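The satisfiability question can be checked by brute force, which also makes the exponential cost visible: with n variables there are 2^n truth assignments to try. A minimal sketch (the example formulas are made up for illustration):

```python
# Brute-force satisfiability: try every truth assignment. SAT is the
# classic NP-complete problem; this approach takes 2^n steps for n variables.

from itertools import product

def is_satisfiable(formula, variables):
    """formula: a function from {variable: bool} to bool."""
    for values in product([False, True], repeat=len(variables)):
        if formula(dict(zip(variables, values))):
            return True   # found a satisfying assignment
    return False          # all 2^n assignments fail

# (x or y) and (not x or z) and (not y or not z)
f = lambda v: (v["x"] or v["y"]) and (not v["x"] or v["z"]) and (not v["y"] or not v["z"])
print(is_satisfiable(f, ["x", "y", "z"]))  # True, e.g. x=True, y=False, z=True

g = lambda v: v["x"] and not v["x"]        # a contradiction
print(is_satisfiable(g, ["x"]))            # False
```

No known algorithm avoids this exponential worst case in general, which is exactly the point of the paragraph: solvable in principle, but apparently not efficiently.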

What matters for AI is that problem-solving algorithms be as effective as the human mind's. Identifying the subdomains for which good algorithms exist is important, but many of the problems AI tackles do not fall into easily identified subdomains.

The theory of the complexity of general classes of problems is called computational complexity. So far this theory has not interacted with AI as much as one might have hoped. Success in problem solving, by humans and by AI programs alike, seems to depend on properties of problems and of problem-solving methods that neither complexity researchers nor the AI community have been able to pin down.

Also relevant is the theory of algorithmic complexity, developed independently by Solomonoff, Kolmogorov, and Chaitin. It defines the complexity of a symbolic object as the length of the shortest program that can generate it. Proving that a candidate program is the shortest, or close to it, is infeasible, but representing objects by the short programs that generate them can sometimes clarify matters, even if you cannot prove your program is the shortest.
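The idea of a short generating program can be shown in a couple of lines. The example below is an illustration of the concept only, not a measurement of Kolmogorov complexity (which is uncomputable): a highly regular million-character string is fully described by a tiny program, whereas a typical random string of the same length admits no such short description.

```python
# A million-character string produced by a description a dozen characters
# long. Its literal spelling would take a million characters; the program
# that generates it is the "short description" algorithmic complexity
# talks about. (True Kolmogorov complexity is uncomputable; this only
# illustrates the idea of describing an object by a generating program.)

big = "ab" * 500_000   # the entire recipe for a 1,000,000-character object

print(len(big))        # 1000000
print(big[:10])        # ababababab
```

A string of coin flips of the same length would, with overwhelming probability, have no generator much shorter than the string itself, which is what makes regularity and compressibility two sides of the same coin.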