Nosov N.Yu., Sokolov M.D. Trends in the development of artificial intelligence. Areas of application of artificial intelligence in the enterprise

Lecture 15. Artificial intelligence technologies

Plan

    The concept of artificial intelligence.

    Areas of application of AI.

    The concept of expert systems.

1. The concept of artificial intelligence

"Intelligence – the totality of all the cognitive functions of an individual: from sensation and perception to thinking and imagination; in a narrower sense, thinking. Intelligence is the main form of human cognition of reality. There are three varieties in understanding the function of intelligence: 1) the ability to learn; 2) operating with symbols; 3) the ability to actively master the patterns of the surrounding reality" (Rapatsevich E.S. Dictionary-reference book on scientific and technical creativity. Minsk: Etonim, 1995. 384 pp. P. 51-52).

Any intellectual activity is based on knowledge. This knowledge includes the characteristics of the current situation, assessments of the possibility of performing certain actions, the laws and patterns of the world in which the activity takes place, and much more. In the programs created when computers first appeared, the necessary knowledge resided in the minds of the programmers who wrote those programs; the computer mechanically executed the sequence of program commands stored in its memory and required no knowledge of its own for this.

"Artificial intelligence – 1) a conventional designation for cybernetic systems and their logical-mathematical support, designed to solve certain problems that usually require the use of human intellectual abilities; 2) the totality of the functional capabilities of an electronic computer to solve problems that previously required mandatory human participation" (Ibid., p. 54).

The fundamental difference of artificial intelligence systems is that for such systems the programmer does not prepare specific programs for execution. A person only gives the machine the required task; the system must itself construct the program for carrying that task out. This requires knowledge both about the subject area to which the task relates and about how programs are built. All this knowledge is stored in intelligent systems in a special block called the knowledge base.

The knowledge stored in the knowledge base is recorded in a special formalized form. The knowledge base can implement procedures for generalizing and correcting the stored knowledge, as well as procedures that create new knowledge on the basis of what is already there.
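
As a minimal sketch of how such a knowledge base can derive new knowledge from what is already stored, consider a toy forward-chaining rule engine; the facts and rules below are invented for illustration and are not taken from any real system:

```python
# Toy forward-chaining engine: facts plus if-then rules derive new facts.
# The facts and rules are invented for illustration only.

facts = {"penguin(tweety)"}

# Each rule: if all premises are already known facts, add the conclusion.
rules = [
    ({"penguin(tweety)"}, "bird(tweety)"),
    ({"bird(tweety)"}, "has_feathers(tweety)"),
    ({"penguin(tweety)"}, "cannot_fly(tweety)"),
]

changed = True
while changed:              # repeat until no rule adds anything new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['bird(tweety)', 'cannot_fly(tweety)', 'has_feathers(tweety)', 'penguin(tweety)']
```

Each pass applies every rule whose premises are already known; the loop stops as soon as nothing new can be derived.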

Artificial intelligence is one of the newest areas of science. It emerged in the mid-1960s on the basis of computer technology, mathematical logic, programming, psychology, linguistics, neurophysiology and other branches of knowledge, and it is an example of interdisciplinary research combining the professional interests of specialists in different fields. The very name of the new science arose in the late 1960s, and in 1969 the first international joint conference on artificial intelligence was held in Washington (USA).

When computers appeared in the late 1940s and early 1950s, it became clear that engineers and mathematicians had created not just a fast device for computing but something more significant. It turned out that computers could solve various puzzles and logical problems, play chess, and run game programs. Computers began to take part in creative processes: composing melodies, poems and even fairy tales. Programs appeared for translation from one language to another, for pattern recognition, and for proving theorems. This showed that with the help of computers and the corresponding programs it is possible to automate the kinds of human activity that are called intellectual and were considered accessible only to humans. Despite the wide variety of non-computational programs created by the early 1960s, programming in the field of intellectual activity was in a much worse position than the solution of computational problems. The reason is obvious: programming for computational problems rested on a corresponding theory, computational mathematics, on the basis of which many problem-solving methods had been developed and turned into programs. Nothing of the kind existed for non-computational problems: every program here was unique, like a work of art, the experience of creating such programs was not generalized in any way, and the ability to create them was not formalized.

When a programmer created a chess-playing program, he used his own knowledge of the game process. He put this knowledge into the program, and the computer only technically executed it. We can say that the computer did not "distinguish" computational programs from non-computational ones: it found the roots of a quadratic equation and wrote poetry in exactly the same way. The computer's memory contained no knowledge of what it was actually doing.

One could speak of the intelligence of a computer if the computer itself, relying on knowledge of how the game of chess proceeds and how people play it, managed to compose a chess program, or if it synthesized a program for writing simple waltzes and marches.

What can be called intelligence hides not in the procedures by which this or that intellectual activity is performed, but in the understanding of how to create them, of how to learn a new type of intellectual activity. It is the special procedures for learning new types of intellectual activity that distinguish a person from a computer. Consequently, in creating artificial intelligence the main task is to implement by machine means those procedures that are used in human intellectual activity. What are these procedures?

The main goals and objectives of artificial intelligence can now be formulated. The object of study of artificial intelligence is the procedures used when a person solves problems traditionally called intellectual or creative. But while the psychology of thinking studies these procedures in relation to humans, artificial intelligence creates software (and now software-and-hardware) models of such procedures.

The goal of research in the field of artificial intelligence is the creation of an arsenal of procedures sufficient for computers (or other technical systems, such as robots) to find solutions on the basis of problem statements alone – in other words, to become autonomous programmers capable of performing the work of professional application programmers (who create programs for solving problems in a specific subject area). Of course, the formulated goal does not exhaust all the tasks that artificial intelligence sets for itself; it is the immediate goal. Subsequent goals are associated with attempts to penetrate areas of human thinking that lie outside the sphere of rational and verbally expressible thought, for in the search for solutions to many problems, especially those very different from previously solved ones, a large role is played by the sphere of thinking called subconscious, unconscious or intuitive.

The main methods used in artificial intelligence are software models and tools of various kinds, computer experiments, and theoretical models. However, modern computers no longer satisfy artificial intelligence specialists: their design has little in common with how the human brain works, so there is an intensive search for new technical structures better suited to intellectual processes. This includes research on neural-like artificial networks, attempts to build molecular machines, work in the field of holographic systems, and much more.

There are several main problems studied in artificial intelligence.

    Knowledge representation – the development of methods and techniques for formalizing knowledge from various problem areas, entering it into the memory of an intelligent system, and generalizing and classifying accumulated knowledge in the course of solving problems.

    Reasoning modeling – the study and formalization of the various schemes of human reasoning used in solving problems, and the creation of effective programs implementing these schemes on computers.

    Dialogue communication procedures in natural language that ensure contact between an intelligent system and a human specialist in the course of solving problems (a toy sketch of such pattern-based dialogue follows this list).

    Planning of expedient activity – the development of methods for constructing programs of complex activity on the basis of the knowledge about the problem area stored in the intelligent system.

    Training of intelligent systems in the course of their operation, and the creation of tools for accumulating and generalizing the skills such systems acquire.
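
The dialogue problem from the list above was historically attacked with simple pattern matching (the classic ELIZA program is the best-known example). A toy sketch of the idea, with patterns and canned responses invented here:

```python
import re

# Toy ELIZA-style dialogue: each pattern maps to a canned response template.
# The patterns and answers are invented for illustration.
PATTERNS = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I),          "Please tell me more."),
]

def reply(utterance: str) -> str:
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "I see."

print(reply("I need a vacation"))   # Why do you need a vacation?
print(reply("I am tired"))          # How long have you been tired?
```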

In addition to these problems, many others are being studied, constituting the groundwork on which specialists will rely in the next round of development of the theory of artificial intelligence.

Intelligent systems are already being introduced into the practice of human activity. These include the well-known expert systems, which transfer the experience of better-trained specialists to less-trained ones; intelligent information systems (for example, machine translation systems); intelligent robots; and other systems that have every right to be called intelligent. Without such systems, modern scientific and technological progress is no longer possible.

Currently, AI is a powerful branch of computer science, with both fundamental, purely scientific foundations and highly developed technical and applied aspects related to the creation and operation of workable samples of intelligent systems. The emergence of fifth-generation computers depends on the results of this work.

Any task for which no solution algorithm is known in advance can be classified as an AI task (examples are chess playing, medical diagnostics, text summarization, and translation into a foreign language). Characteristic features of AI tasks are the use of information in symbolic form and choice among many options under conditions of uncertainty.

The most promising direction in the development of computer-based learning systems is artificial intelligence technology. Systems that use AI techniques are called intelligent tutoring systems (ITS). An ITS implements adaptive, two-way interaction aimed at the effective transfer of knowledge. The most promising path of ITS development is apparently the creation of self-learning systems that acquire knowledge in dialogue with a person.

2. Areas of application of AI

AI systems are understood as devices or programs that possess such characteristics of intelligent human behavior as understanding and using language, causality of behavior, the ability to solve problems and respond flexibly to situations, to take advantage of favorable circumstances, to find solutions in ambiguous or contradictory situations, to recognize the relative importance of different elements of a situation, and to find similarities between situations despite their differences.

Software systems that implement algorithms for which there is no formal solution model are called heuristic, and they belong to AI. AI problems are those in which it is not the process of solution that is formalized but the process of searching for a solution.
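
A minimal illustration of a heuristic program in this sense: the code below formalizes not the solution itself but the search for one, always expanding the node that a heuristic estimate considers closest to the goal. The graph and the estimates are invented for illustration:

```python
import heapq

# Toy greedy best-first search: expand the node whose heuristic estimate
# of remaining distance to the goal is smallest. Graph and estimates are
# invented for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["G"], "G": []}
h = {"A": 3, "B": 2, "C": 2, "D": 1, "G": 0}   # guessed distance to goal G

def greedy_search(start, goal):
    frontier = [(h[start], start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

print(greedy_search("A", "G"))   # ['A', 'B', 'D', 'G']
```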

Artificial intelligence systems are most widely used to solve the following problems:

    Pattern recognition – technical systems that perceive visual and audio information (encoding and storing it in memory), and the problems of understanding and logical reasoning in the processing of visual and speech information.

    Reasoning modeling – the study of human reasoning in artificial intelligence is only beginning, but without formal models of such reasoning it is very difficult to reproduce in intelligent systems all the features of the reasoning of the specialists whose problems we want to make accessible to artificial systems. In the expert systems created today, not only reliable logical conclusions are implemented but also plausible reasoning and a number of other kinds of non-monotonic reasoning. The first programs for reasoning by analogy and association have appeared.

    Symbolic computation systems.

    Systems with fuzzy logic – fuzzy inference is used very widely, because it reflects the nature of human knowledge about many phenomena of the real world. When planning the behavior of robots and other artificial intelligence systems operating in incompletely described environments, when making decisions in the absence of comprehensive information, in expert systems with partial knowledge of the subject area, and in many other situations, fuzzy inference cannot be avoided.

    Cognitive psychology – one of the areas of modern psychological science, concerned with the search for the internal causes of particular behavior of a living system. As a rule, the object of study is a person's knowledge about himself and the world around him, as well as the cognitive processes that ensure the acquisition, preservation and transformation of this knowledge.

    Natural language understanding – analysis and generation of texts and their internal representation.

    Expert systems are systems that use the knowledge of specialists in specific types of activities.

    Computational linguistics was born at the intersection of computing and linguistics. The new science changed its name several times: at first it was called mathematical linguistics, then structural and computational linguistics, then computer linguistics.

    It has become possible to automate many labor-intensive processes, such as maintaining dictionaries and lexical card files. Machine translation is now a reality.

    Machine intelligence – a set of computer hardware and software by means of which communication between a person and a machine (the interface) approaches in its level the communication between specialists solving a joint problem.

    Behavior planning – one of the areas of research in artificial intelligence. The main task of this direction is the search for procedures that could automatically propose the shortest path to achieving a goal, starting from a given situation. Problems of this type have turned out to be most relevant for robots operating autonomously: when solving a task assigned to it, the robot must draw up a plan for solving it and try to carry the plan out. If, in the course of implementing this plan, the robot discovers insurmountable obstacles, it must build another plan in which these obstacles are avoided (a path-planning sketch follows this list).

    Intelligent robots.

    Games – games characterized by a finite number of situations and clearly defined rules, in which programs already exceed the level of a person of average ability, although the level of the best specialists has not yet been reached.

    Problem solving – the formulation, analysis and representation of specific life situations whose solution requires ingenuity and the ability to generalize. Computer technology is used here in an attempt to implement intellectual processes of searching for solutions, where the final result is unpredictable and is the fruit of logical inferences and conclusions arrived at independently.
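
As promised in the behavior-planning item above, here is a minimal path-planning sketch: breadth-first search finds the shortest obstacle-free route on a toy grid and returns None when no plan exists, at which point a robot would have to replan or report failure. The map is invented for illustration:

```python
from collections import deque

# Toy planner: breadth-first search finds the shortest obstacle-free path
# on a grid. 0 = free cell, 1 = obstacle. The map is invented.
grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan(start, goal):
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None   # no plan exists: the robot must replan or report failure

print(plan((0, 0), (3, 3)))   # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3), (3, 3)]
```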

The newest Russian fighter is equipped with artificial intelligence systems. Much more radical changes have been made to the aircraft's on-board electronic systems. As a result of introducing a multi-channel digital fly-by-wire control system that includes artificial intelligence systems, the Su-37, compared to the Su-35, will receive unprecedented additional capabilities:

  • the ability to launch preemptive strikes against any air enemy (including stealth aircraft);
  • multi-channel operation and algorithmic protection of all information and targeting systems;
  • attacks on ground targets without entering the enemy's air defense zone;
  • low-altitude flight over and around ground obstacles, including in automatic mode;
  • automated group actions against air and ground targets;
  • countermeasures against enemy radio-electronic and optical-electronic systems;
  • automation of all stages of flight and combat employment.

Panasonic has announced the start of shipments of the new PT-AE500E projector with artificial intelligence: a built-in system automatically controls lamp brightness depending on the input video signal, providing a contrast ratio of 1300:1.

The development of information technology has excited the human mind for a good half century. Computers have become part of our daily life. Work in a modern office is unthinkable without the Internet and e-mail, and for many a well-deserved rest begins only when the game console is turned on. Third-generation mobile phones now not only transmit voice but easily replace almost any office equipment. There are even cars with on-board computers that can plot a route and deliver the passenger to the destination.

The first microprocessor, released by Intel on November 15, 1971, contained 2,300 transistors on a chip the size of a fingernail. It performed 60 thousand operations per second – nothing by modern standards, but at the time it was a serious breakthrough. Since then, computing technology has come a long way: it is estimated that over the 30 years of the existence of microprocessors, the minimum size of processor elements has decreased by a factor of 17, while the number of transistors has increased 18 thousand times and the clock frequency 14 thousand times. The current process technology used by Intel allows the production of transistors the size of a molecule and, in the future, transistors just a few atomic layers thick.

The information technology industry is one of the most dynamically developing areas of life. In accordance with Moore's law, in 2020 computers will reach the power of the human brain, since they will be able to perform 20 quadrillion (i.e. 20,000,000 billion) operations per second, and by 2060, some futurologists believe, the computer will be equal in mental power to all of humanity. Meanwhile, back in 1994, a PC based on an Intel Pentium processor with a frequency of 90 MHz, ridiculous by today's standards, beat several of the world's strongest grandmasters in a series of chess games, including the then world champion Garry Kasparov.

Already today there are real possibilities for using intelligent technologies in almost any car. For example, Johnson Controls' BlueConnect headset, an integrated hands-free vehicle module based on the Intel PXA250 and Intel PXA210 processors, allows the driver to perform a wide variety of voice-activated actions using a cell phone and Bluetooth technology.

It is obvious that every year more and more powerful microprocessors will be used in an increasing number of different household devices. Recently, Intel specialists have developed transistors whose operating speed exceeds the speed of the Pentium 4 by almost 1000%. Thus, the corporation's scientists say, it has been proven that there are no fundamental obstacles to the continued development of microprocessors in accordance with Moore's law until the end of the current decade.

Such transistors, measuring only 20 nanometers, will allow Intel to create processors with a billion transistors by 2007, operating at frequencies up to 20 GHz with a supply voltage of about 1 volt. And the company's management is already talking about upcoming processors with clock frequencies of up to 30 GHz. Intel has already created the prerequisites for the production of such microprocessors, company representatives say.

Supporters of artificial intelligence sincerely believe that the purpose of human existence is to create a computer superintelligence.

Artificial intelligence, in the true sense of the term, implies a surrogate mind, competitive with the human type of mind and "living", for example, on a computer substrate. So far it has been possible to create only some semblances, "monkey imitators" of human intelligent activity. Yes, Mars rovers, independently avoiding trivial obstacles, autonomously plow the lifeless deserts of the Red Planet, but to set the direction of research a human team on Earth is still needed. Yes, semiconductor units stuffed with hundreds of millions of transistors have learned, after a fashion, to take text down from dictation, but the most basic slip of the tongue, understandable to a living listener, immediately confuses them. Yes, computers have been taught to translate words from one language to another automatically, but the texts received from such an "artificial translator" without editing by a living expert in the language are still not of very high quality.

The definition of artificial intelligence given by John McCarthy in 1956 at the conference at Dartmouth College is not directly related to the understanding of human intelligence. According to McCarthy, AI researchers are free to use techniques that are not observed in humans if this is needed to solve specific problems.

At the same time, there is a point of view according to which intelligence can only be a biological phenomenon.

As T. A. Gavrilova, chair of the St. Petersburg branch of the Russian Association for Artificial Intelligence, points out, in English the phrase artificial intelligence does not have the slightly fantastic anthropomorphic overtones that it acquired in its rather unfortunate Russian translation. The word intelligence here means "the ability to reason rationally", not "intellect", for which there is the English analogue intellect.

Participants of the Russian Association of Artificial Intelligence give the following definitions of artificial intelligence:

One of the particular definitions of intelligence, common to man and “machine,” can be formulated as follows: “Intelligence is the ability of a system to create programs (primarily heuristic) during self-learning to solve problems of a certain class of complexity and solve these problems.”

Prerequisites for the development of artificial intelligence science

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By this time, many prerequisites for its origin had already been formed: philosophers had long debated the nature of man and the process of understanding the world, neurophysiologists and psychologists had developed a number of theories regarding the work of the human brain and thinking, economists and mathematicians had asked questions about optimal calculations and the presentation of knowledge about the world in a formalized form; finally, the foundation of the mathematical theory of computation, the theory of algorithms, was born, and the first computers were created.

The capabilities of the new machines in terms of computing speed turned out to be greater than human ones, so the question arose in the scientific community: what are the limits of computer capabilities, and will machines reach the level of human development? In 1950, one of the pioneers of computing, the English scientist Alan Turing, wrote the article "Can a Machine Think?", which describes a procedure, later called the Turing test, by which it would be possible to determine the moment when a machine becomes equal to a person in terms of intelligence.

History of the development of artificial intelligence in the USSR and Russia

In the USSR, work in the field of artificial intelligence began in the 1960s. A number of pioneering studies were carried out at Moscow University and at the Academy of Sciences, led by Veniamin Pushkin and D. A. Pospelov. Since the early 1960s, M. L. Tsetlin and his colleagues had been developing questions related to the learning of finite automata.

In 1964, the Leningrad logician Sergei Maslov published "An inverse method for establishing deducibility in classical predicate calculus", the first work to propose a method for automatically searching for proofs of theorems in the predicate calculus.

Until the 1970s, all AI research in the USSR was carried out within the framework of cybernetics. According to D. A. Pospelov, the sciences of "computer science" and "cybernetics" were conflated at that time owing to a number of academic disputes. Only in the late 1970s did people in the USSR begin to speak of the scientific direction "artificial intelligence" as a branch of computer science. At the same time computer science itself was born, subordinating its ancestor, cybernetics. At the end of the 1970s, an explanatory dictionary of artificial intelligence, a three-volume reference book on artificial intelligence and an encyclopedic dictionary of computer science were created; in the latter, the sections "Cybernetics" and "Artificial Intelligence" are included, along with other sections, in computer science. The term "computer science" became widespread in the 1980s, while the term "cybernetics" gradually disappeared from circulation, remaining only in the names of institutions that arose during the era of the "cybernetic boom" of the late 1950s and early 1960s. This view of artificial intelligence, cybernetics and computer science is not shared by everyone, because in the West the boundaries between these sciences are drawn somewhat differently.

Approaches and directions

Approaches to understanding the problem

There is no single answer to the question of what artificial intelligence does. Almost every author who writes a book about AI starts from some definition, considering the achievements of this science in its light.

Two main approaches to the development of AI can be distinguished:

  • top-down (Top-Down AI), semiotic – the creation of expert systems, knowledge bases and logical inference systems that simulate high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;
  • bottom-up (Bottom-Up AI), biological – the study of neural networks and evolutionary computation that model intelligent behavior on the basis of biological elements, and the creation of corresponding computing systems, such as the neurocomputer or the biocomputer.

The latter approach, strictly speaking, does not belong to the science of AI in the sense given by John McCarthy - they are united only by a common final goal.

Agent-based approach

This approach focuses on the methods and algorithms that will help an intelligent agent survive in its environment while performing its task. Here, path-finding and decision-making algorithms are studied much more carefully.

Hybrid approach

The hybrid approach assumes that only a synergistic combination of neural and symbolic models achieves the full range of cognitive and computational capabilities. For example, expert inference rules can be generated by neural networks, while generative rules are obtained using statistical learning. Proponents of this approach believe that hybrid information systems will be significantly stronger than the sum of the different concepts taken separately.

Research models and methods

Symbolic modeling of thought processes

Analyzing the history of AI, we can identify such a broad area as the modeling of reasoning. For many years, the development of this science moved precisely along this path, and it is now one of the most developed areas in modern AI. Modeling reasoning involves the creation of symbolic systems that receive a problem as input and must produce its solution as output. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm or that algorithm is too complex or time-consuming. This area includes theorem proving, decision making and game theory, planning and dispatching, and forecasting.
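
A tiny example of such a symbolic system, in the theorem-proving spirit: a propositional conclusion follows from the premises exactly when no truth assignment makes all the premises true and the conclusion false. The formulas below are invented; real provers use far more refined methods, such as resolution:

```python
from itertools import product

# Toy propositional "theorem prover": check entailment by exhaustively
# enumerating truth assignments. Formulas are Python lambdas over the
# variables; the example encodes modus ponens and is invented.
variables = ("p", "q")
premises = [lambda v: (not v["p"]) or v["q"],   # p -> q
            lambda v: v["p"]]                    # p
conclusion = lambda v: v["q"]                    # q

def entails(premises, conclusion):
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(prem(v) for prem in premises) and not conclusion(v):
            return False   # counterexample found
    return True

print(entails(premises, conclusion))   # True
```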

Working with Natural Languages

An important direction is natural language processing, within which the capabilities of understanding, processing and generating texts in "human" language are analyzed. This direction aims to process natural language so well that the machine could acquire knowledge on its own by reading existing texts available on the Internet. Some direct applications of natural language processing include information retrieval (including deep text mining) and machine translation.

Representation and use of knowledge

The direction of knowledge engineering combines the tasks of obtaining knowledge from simple information, systematizing it and using it. This direction is historically associated with the creation of expert systems – programs that use specialized knowledge bases to obtain reliable conclusions on some problem.

Producing knowledge from data is one of the basic problems of data mining. There are various approaches to solving this problem, including those based on neural network technology, which use procedures for the verbalization of neural networks.

Machine learning

Machine learning concerns the process of independent acquisition of knowledge by an intelligent system in the course of its operation. This direction has been central since the very beginning of AI. In 1956, at the Dartmouth summer conference, Ray Solomonoff wrote a report on a probabilistic unsupervised learning machine, calling it "the inductive inference machine".

Robotics

Machine creativity

The nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, and the problems of computers writing music and literary works (often poetry or fairy tales), and of artistic creation, are posed here. Creating realistic images is widely used in the film and gaming industries.

The study of the problems of technical creativity by artificial intelligence systems stands apart. The theory of inventive problem solving, proposed by G. S. Altshuller in 1946, marked the beginning of such research.

Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By adding noise in place of missing information, or by filtering noise with the knowledge available to the system, abstract knowledge is turned into concrete images that are easily perceived by a person; this is especially useful for intuitive and loosely formalized knowledge, whose verification in formal form requires significant mental effort.

Other areas of research

Finally, there are many applications of artificial intelligence, each of which forms an almost independent field. Examples include programming intelligence in computer games, nonlinear control, intelligent information security systems.

In the future, it is assumed that the development of artificial intelligence will be closely connected with the development of a quantum computer, since some properties of artificial intelligence have similar operating principles to quantum computers.

It can be seen that many areas of research overlap. This is typical of any science. But in artificial intelligence, the relationship between seemingly different areas is especially strong, and this is associated with the philosophical debate about strong and weak AI.

Modern artificial intelligence

Two directions of AI development can be distinguished:

  • solving problems associated with bringing specialized AI systems closer to human capabilities and with their integration, as realized in human nature (see intelligence amplification);
  • creating an artificial intelligence that integrates already existing AI systems into a unified system capable of solving the problems of humanity (see strong and weak artificial intelligence).

But at the moment, the field of artificial intelligence is drawing in many subject areas that have a practical rather than a fundamental relationship to AI. Many approaches have been tried, but no research group has yet come close to the emergence of artificial intelligence. Below are just some of the best-known developments in the field of AI.

Application

Artificial intelligence systems are applied in many areas. Banks use AI systems in insurance (actuarial mathematics), in stock trading and in property management. Pattern recognition methods (including both more complex specialized methods and neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filtering, air defense systems (target identification), and a number of other national security tasks.

Psychology and cognitive science

Cognitive modeling methodology is designed to analyze and make decisions in ill-defined situations. It was proposed by Axelrod.

It is based on modeling experts' subjective ideas about the situation and includes: a methodology for structuring the situation; a model for representing the expert's knowledge in the form of a signed digraph (cognitive map) (F, W), where F is the set of factors of the situation and W is the set of cause-and-effect relationships between them; and methods of situation analysis. Currently, the methodology of cognitive modeling is developing in the direction of improving the apparatus for analyzing and modeling the situation: models for forecasting the development of the situation and methods for solving inverse problems have been proposed.
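
A minimal sketch of such a cognitive map, with invented factors and weights: the map is stored as a signed digraph, and a simple impulse process forecasts how an initial change in one factor spreads along the cause-and-effect arcs:

```python
# Toy cognitive map (signed digraph): W[src][dst] is the influence of
# factor src on factor dst; positive weights reinforce, negative suppress.
# The factors and weights are invented for illustration.
W = {
    "tax_rate":   {"investment": -0.8},
    "investment": {"production": +0.9},
    "production": {"employment": +0.5},
}
factors = ["tax_rate", "investment", "production", "employment"]

state = {f: 0.0 for f in factors}
state["tax_rate"] = 1.0            # initial impulse: the tax rate rises

for step in range(1, 4):           # propagate the impulse along the arcs;
    nxt = dict(state)              # the raised tax keeps exerting pressure
    for src, edges in W.items():
        for dst, weight in edges.items():
            nxt[dst] += weight * state[src]
    state = nxt
    print(step, {f: round(state[f], 2) for f in factors})
```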

Philosophy

The science of “creating artificial intelligence” could not help but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised.

The philosophical problems of creating artificial intelligence can be divided, relatively speaking, into two groups: "before" and "after" the development of AI. The first group answers the question: "What is AI, is it possible to create it, and, if possible, how?" The second group (the ethics of artificial intelligence) asks the question: "What are the consequences of creating AI for humanity?"

The term “strong artificial intelligence” was introduced by John Searle, and the approach is characterized in his words:

Moreover, such a program would not simply be a model of the mind; it would, in the literal sense of the word, itself be a mind, in the same sense in which the human mind is a mind.

At the same time, it is necessary to understand whether a "purely artificial" mind ("metamind") is possible, one that understands and solves real problems while being devoid of the emotions that are characteristic of a person and necessary for his individual survival.

In contrast, proponents of weak AI prefer to view programs only as tools that allow them to solve certain problems that do not require the full range of human cognitive abilities.

Ethics

Traditional faiths rarely address the issues of AI, but some theologians nevertheless pay attention to them. For example, Archpriest Mikhail Zakharov, arguing from the point of view of the Christian worldview, poses the following question: "Man is a rationally free being, created by God in His image and likeness. We are accustomed to attributing all these definitions to the biological species Homo sapiens. But how justified is this?" He answers this question like this:

If we assume that research in the field of artificial intelligence will someday lead to the emergence of an artificial being that is superior in intelligence to humans and has free will, would this mean that this being is a person? ... man is God's creation. Can we call this creature a creation of God? At first glance, it is a human creation. But even during the creation of man, it is hardly worth understanding literally that God sculpted the first man from clay with His own hands. This is probably an allegory indicating the materiality of the human body, created by the will of God. But without the will of God nothing happens in this world. Man, as a co-creator of this world, can, fulfilling the will of God, create new creatures. Such creatures, created by human hands according to God's will, can probably be called creations of God. After all, man creates new species of animals and plants. And we consider plants and animals to be God’s creations. The same can be applied to an artificial being of a non-biological nature.

Science fiction

The topic of AI is considered from different angles in the works of Robert Heinlein: the hypothesis that AI becomes self-aware when its structure grows complex beyond a certain critical level and it interacts with the outside world and with other carriers of intelligence ("The Moon Is a Harsh Mistress", "Time Enough for Love", the characters Mycroft, Dora and Aya in the "Future History" series), the problems of AI development after a hypothetical awakening of self-awareness, and some social and ethical issues ("Friday"). The socio-psychological problems of human interaction with AI are also considered in Philip K. Dick's novel "Do Androids Dream of Electric Sheep?", also known through its film adaptation, Blade Runner.

The works of the science fiction writer and philosopher Stanislaw Lem describe the creation of virtual reality, artificial intelligence, nanorobots and many other problems of the philosophy of artificial intelligence. Especially worth noting is the futurology of "Summa Technologiae". In addition, the adventures of Ijon Tichy repeatedly describe the relationship between living beings and machines: the rebellion of the on-board computer with subsequent unexpected events (the 11th voyage), the adaptation of robots to human society ("The Washing Machine Tragedy" from "Memoirs of Ijon Tichy"), the building of absolute order on a planet by processing its living inhabitants (the 24th voyage), and the inventions of Corcoran and Diagoras and a psychiatric clinic for robots ("Memoirs of Ijon Tichy"). There is also a whole cycle of novels and stories, "The Cyberiad", where almost all the characters are robots, distant descendants of robots that escaped from people (they call people palefaces and consider them mythical creatures).

Movies

Almost since the 1960s, along with the writing of science fiction stories and novellas, films about artificial intelligence have been made. Many stories by authors recognized throughout the world have been filmed and have become classics of the genre, while others have become milestones in its development.

  • Mustafina Nailya Mugattarovna, bachelor's student
  • Bashkir State Agrarian University
  • Sharafutdinov Aidar Gazizyanovich, Candidate of Sciences, Associate Professor
  • Bashkir State Agrarian University


Today, technological progress is developing rapidly. Science does not stand still and every year people come up with more and more advanced technologies. One of the new directions in the development of technological progress is artificial intelligence.

Humanity first heard about artificial intelligence more than 50 years ago. It happened at a conference held in 1956 at Dartmouth College, where John McCarthy gave the term a clear and precise definition: "Artificial intelligence is the science of creating intelligent machines and computer programs. For the purposes of this science, computers are used as a means to understand the characteristics of human intelligence; at the same time, the study of AI should not be limited to the use of biologically plausible methods."

The artificial intelligence of modern computers is at a fairly high level, yet their behavioral abilities still fall short of even the most primitive animals.

The result of research on "artificial intelligence" is the desire to understand the work of the brain, to reveal the secrets of human consciousness, and to address the problem of creating machines with a certain level of human intelligence. The fundamental possibility of modeling intellectual processes follows from the fact that any function of the brain, any mental activity that can be described in a language with strictly unambiguous semantics using a finite number of words, can in principle be transferred to an electronic digital computer.

Currently, some models of artificial intelligence have been developed in various fields, but a computer capable of processing information in any new field has not yet been created.

Among the most important classes of tasks that have been posed to developers of intelligent systems since artificial intelligence was defined as a scientific direction, the following areas should be highlighted:

  • Proof of theorems. The study of theorem proving techniques played an important role in the development of artificial intelligence. Many informal problems, for example, medical diagnostics, are solved using methodological approaches that were used to automate theorem proving. Finding a proof of a mathematical theorem requires not only deduction from hypotheses, but also the creation of intuitive assumptions about which intermediate statements should be proven for the overall proof of the main theorem.
  • Image recognition. The use of artificial intelligence for image recognition has made it possible to create practically working systems for identifying graphic objects by sets of features. Any characteristics of the objects to be recognized can serve as features, but they must be invariant to the orientation, size and shape of the objects. The alphabet of features is formed by the system developer, and the quality of recognition largely depends on how well that alphabet has been designed. Recognition consists of obtaining a feature vector for an individual object selected in the image and then determining which of the standards of the feature alphabet this vector corresponds to (a minimal sketch of this scheme follows this list).
  • Machine translation and human speech understanding. The task of analyzing sentences in human speech using a dictionary is a typical task for artificial intelligence systems. To solve this problem, an intermediary language was created that facilitates the comparison of phrases from different languages. Subsequently, this intermediary language turned into a semantic model for representing the meanings of texts to be translated. The evolution of the semantic model led to the creation of a language for the internal representation of knowledge. As a result, modern systems analyze texts and phrases in four main stages: morphological analysis, syntactic, semantic and pragmatic analysis.
  • Game programs. Several basic ideas of artificial intelligence, such as the enumeration of options and self-learning, underlie most game programs. One of the most interesting problems in the field of game programs using artificial intelligence methods is teaching a computer to play chess; it was posed back in the early days of computing, in the late 1950s. In chess there are levels of skill and degrees of quality of play that provide clear criteria for assessing the intellectual growth of a system. Therefore, computer chess has been actively studied by scientists all over the world, and the results of their achievements are used in other intellectual developments of real practical significance.
  • Machine creativity. One of the areas of application of artificial intelligence includes software systems that can independently create music, poetry, stories, articles, theses and even dissertations. Today there is a whole class of musical programming languages (for example, the Csound language). For various musical tasks special software has been created: sound processing systems, sound synthesis, interactive composition systems, and algorithmic composition programs.
  • Expert systems. Artificial intelligence methods have found application in the creation of automated consulting systems or expert systems. The first expert systems were developed as research tools in the 1960s. They were artificial intelligence systems specifically designed to solve complex problems in a narrow subject area, such as medical diagnosis of diseases. The classic goal of this direction was initially to create a general-purpose artificial intelligence system that would be able to solve any problem without specific knowledge in the subject area. Due to limited computing resources, this problem turned out to be too complex to solve with an acceptable result.
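
As noted in the image-recognition item above, recognition can be reduced to comparing an object's feature vector against stored standards. A minimal nearest-prototype sketch, with invented features and standards:

```python
import math

# Toy nearest-prototype recognition: an object is assigned to the class
# whose standard feature vector is closest. The features (elongation and
# a scaled corner count) and the standards are invented for illustration.
standards = {
    "circle":   (1.0, 0.0),   # (elongation, corners / 10)
    "square":   (1.0, 0.4),
    "triangle": (1.2, 0.3),
}

def recognize(features):
    # math.dist computes the Euclidean distance between two points
    return min(standards, key=lambda name: math.dist(features, standards[name]))

print(recognize((1.05, 0.35)))   # 'square'
```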

We can say that the main goal of developing artificial intelligence is optimization: just imagine how a person, without being exposed to danger, could study other planets or mine precious metals.

Thus, we can conclude that the study and development of artificial intelligence is important for the entire society. After all, with the use of this system, human life can be secured and made easier.


Artificial intelligence (AI) is the science and development of intelligent machines and systems, especially intelligent computer programs, aimed at understanding human intelligence. The methods used, however, are not necessarily biologically plausible. The problem is that it is unknown which computational procedures we want to call intelligent; and since we understand only some of the mechanisms of intelligence, within this science intelligence is taken to mean only the computational part of the ability to achieve goals in the world.

Various types and degrees of intelligence exist in many people, in animals and in some machines, such as intelligent information systems and various models of expert systems with different knowledge bases. At the same time, as we see, this definition of intelligence is not tied to the understanding of human intelligence: these are different things. Moreover, this science does not merely model human intelligence: on the one hand, something can be learned about how to make machines solve problems by observing people, and on the other hand, most work in AI studies the problems that humanity needs to solve in an industrial and technological sense. Therefore, AI researchers are free to use techniques that are not observed in humans if this is necessary to solve specific problems.

It is in this sense that the term was introduced by J. McCarthy in 1956 at the conference at Dartmouth College, and to this day, despite criticism from those who believe that intelligence can only be a biological phenomenon, the term has retained its original meaning in the scientific community, despite obvious contradictions from the point of view of human intelligence.

In philosophy, the question of the nature and status of human intellect has not been resolved. There is also no exact criterion for computers to achieve “intelligence,” although at the dawn of artificial intelligence a number of hypotheses were proposed, for example, the Turing test or the Newell-Simon hypothesis. Therefore, despite the many approaches to both understanding AI problems and creating intelligent information systems, two main approaches to AI development can be distinguished:

· top-down (Top-Down AI), semiotic – the creation of expert systems, knowledge bases and logical inference systems that simulate high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;

· bottom-up (Bottom-Up AI), biological – the study of neural networks and evolutionary computation that model intelligent behavior on the basis of smaller "non-intelligent" elements.

The latter approach, strictly speaking, does not relate to the science of artificial intelligence in the sense given by J. McCarthy; they are united only by a common final goal.

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By this time, many prerequisites for its origin had already been formed: philosophers had long debated the nature of man and the process of understanding the world, neurophysiologists and psychologists had developed a number of theories regarding the work of the human brain and thinking, economists and mathematicians had asked questions about optimal calculations and the presentation of knowledge about the world in a formalized form; finally, the foundation of the mathematical theory of computation, the theory of algorithms, was born, and the first computers were created.

The capabilities of the new machines in terms of computing speed turned out to be greater than human ones, so the scientific community raised the question: what are the limits of computer capabilities, and will machines reach the level of human development? In 1950, one of the pioneers of computing, the English scientist Alan Turing, in the article "Can a Machine Think?", gave answers to similar questions and described a procedure, later called the Turing test, by which it would be possible to determine the moment when a machine becomes equal to a person in terms of intelligence.

The Turing test is an empirical test proposed by Alan Turing in his 1950 article "Computing Machinery and Intelligence" in the philosophy journal Mind. Its purpose is to determine whether artificial thinking close to human thinking is possible. The standard interpretation is as follows: "A person interacts with one computer and one person. On the basis of answers to questions, he must determine whom he is talking to: a person or a computer program. The purpose of the computer program is to mislead the person into making the wrong choice." None of the test participants can see one another.
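
The protocol itself is easy to render in code. In the toy sketch below, both respondents are trivial stand-ins invented for illustration; because they answer identically, the judge can do no better than chance, which is exactly the situation in which a machine passes the test:

```python
import random

# Toy rendering of the Turing test protocol: a judge exchanges text with
# two hidden respondents and must name the machine. Both respondents are
# trivial stand-ins invented for illustration.
def human(question):
    return "I had coffee this morning; it was too bitter."

def machine(question):
    return "I had coffee this morning; it was too bitter."   # pure imitation

def run_round(judge, questions):
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:                  # hide who is who
        respondents = {"A": machine, "B": human}
    transcript = {name: [r(q) for q in questions]
                  for name, r in respondents.items()}
    guess = judge(transcript)                  # the judge names the machine
    return respondents[guess] is machine

questions = ["What did you drink this morning?"]
naive_judge = lambda transcript: random.choice(["A", "B"])
hits = sum(run_round(naive_judge, questions) for _ in range(1000))
print(f"machine identified in {hits / 10:.1f}% of rounds")   # about 50%
```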

There are three approaches to defining artificial intelligence:

1) The logical approach to creating artificial intelligence systems is aimed at building expert systems with logical models of knowledge bases using the language of predicates. The Prolog language and logic programming system was adopted as a training model for artificial intelligence systems in the 1980s. Knowledge bases written in Prolog represent sets of facts and rules of logical inference written in a logical language. The logical model of knowledge bases makes it possible to record not only specific information and data as facts, but also generalized information, by means of rules and procedures of logical inference, including logical rules for defining concepts that express certain knowledge as specific and generalized information. In general, research into the problems of artificial intelligence within this logical approach to the design of knowledge bases and expert systems is aimed at the creation, development and operation of intelligent information systems, including the teaching of students and schoolchildren as well as the training of users and developers of such systems.

2) The agent-based approach has been developing since the early 1990s. According to this approach, intelligence is the computational part (planning) of the ability to achieve the goals set for an intelligent machine. Such a machine is an intelligent agent that perceives the world around it with sensors and can influence objects in the environment with actuators. This approach focuses on the methods and algorithms that help the intelligent agent survive in its environment while performing its task; path-finding and decision-making algorithms are studied much more intensively here (a skeletal agent loop is sketched after this list).

3) The intuitive approach assumes that AI will be able to exhibit behavior that does not differ from human behavior, at least in normal situations. This idea generalizes the approach of the Turing test, which states that a machine will become intelligent when it is able to carry on a conversation with an ordinary person who cannot tell whether he is talking to a machine (the conversation is conducted in writing).
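
A skeletal sense-decide-act loop of the agent-based approach described in item 2 above; the world model and the decision rule are invented for illustration:

```python
# Skeletal intelligent agent: perceive the environment through "sensors",
# decide, then act through "actuators". World and policy are invented.
world = {"position": 0, "goal": 5}

def sense(world):                     # sensor: observe the current state
    return world["position"], world["goal"]

def decide(percept):                  # planning: move toward the goal
    position, goal = percept
    if position < goal:
        return "forward"
    if position > goal:
        return "back"
    return "stop"

def act(world, action):               # actuator: change the environment
    world["position"] += {"forward": 1, "back": -1, "stop": 0}[action]

while True:
    action = decide(sense(world))
    if action == "stop":
        break
    act(world, action)
    print("moved", action, "-> position", world["position"])
```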

Within this definition, the following areas of research in the field of AI can be distinguished:

- Symbolic modeling of thought processes.

Analyzing the history of AI, we can highlight such a broad area as reasoning modeling. For many years, the development of AI as a science has moved precisely along this path, and it is now one of the most developed areas in modern AI. Modeling reasoning involves the creation of symbolic systems that receive a problem as input and must produce its solution as output. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm or that algorithm is too complex or time-consuming. This area includes theorem proving, decision making and game theory, planning and dispatching, and forecasting.

- Working with natural languages.

An important area is natural language processing, which involves analyzing the capabilities of understanding, processing and generating texts in "human" language. In particular, the problem of machine translation of texts from one language to another has not yet been fully solved. In the modern world, the development of information retrieval methods plays an important role. By its nature, the original Turing test is related to this direction.

- Accumulation and use of knowledge.

According to many scientists, an important property of intelligence is the ability to learn. Thus, knowledge engineering comes to the fore, combining the tasks of obtaining knowledge from simple information, its systematization and use. Advances in this area affect almost every other area of ​​AI research. Here, too, two important subareas cannot be overlooked. The first of them - machine learning - concerns the process of independent acquisition of knowledge by an intelligent system in the process of its operation. The second is associated with the creation of expert systems - programs that use specialized knowledge bases to obtain reliable conclusions on any problem.

The field of machine learning includes a large class of pattern recognition problems: for example, recognition of characters, handwritten text and speech, and text analysis. Many problems are successfully solved using biological modeling.

- Biological modeling.

Large and interesting achievements are available in the field of modeling biological systems. Strictly speaking, several independent directions can be included here. Neural networks are used to solve fuzzy and complex problems, such as recognizing geometric shapes or clustering objects. The genetic approach is based on the idea that an algorithm can become more efficient if it borrows better characteristics from other algorithms ("parents"). A relatively new approach, in which the task is to create an autonomous program (an agent) that interacts with the external environment, is called the agent-based approach. Particularly worth mentioning is computer vision, which is also associated with robotics.
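
A minimal neural-network example in this spirit: a single perceptron, the simplest neural model, learning the logical AND of two inputs with the classical error-correction rule. The data and learning rate are invented for illustration:

```python
# A single perceptron trained to learn the logical AND of two inputs.
# The data set and the learning rate are invented for illustration.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
rate = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):                       # error-correction learning rule
    for x, target in samples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in samples])       # [0, 0, 0, 1]
```

Because AND is linearly separable, the weights converge after a few epochs; for problems that are not linearly separable, multi-layer networks are required.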

- Robotics.

In general, robotics and artificial intelligence are often associated with each other. The integration of these two sciences, the creation of intelligent robots, can be considered another area of ​​AI.

- Machine creativity.

The nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, and the problems of computers writing music and literary works (often poetry or fairy tales), and of artistic creation, are posed here. Creating realistic images is widely used in the film and gaming industries. Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By adding noise in place of missing information, or by filtering noise with the knowledge available to the system, abstract knowledge is turned into concrete images that are easily perceived by a person; this is especially useful for intuitive and loosely formalized knowledge, whose verification in formal form requires significant mental effort.

- Other areas of research.

There are many applications of artificial intelligence, each of which forms an almost independent direction. Examples include programming intelligence in computer games, nonlinear control, and intelligent information security systems.

Approaches to creating intelligent systems. The symbolic approach makes it possible to operate with weakly formalized representations and their meanings. Its efficiency and overall effectiveness depend on the ability to single out only the essential information. The breadth of the classes of problems effectively solved by the human mind requires enormous flexibility in methods of abstraction, which no engineering approach provides if the researcher chooses it from the outset by a deliberately flawed criterion: its ability to quickly deliver an effective solution to some problem closest to that researcher, that is, for a single model of abstraction and construction of entities already implemented in the form of rules. This results in significant expenditure of resources on non-core tasks; on most tasks the system falls back from intelligence to brute force, and the very essence of intelligence disappears from the project.

It is especially difficult to do without symbolic logic where the task is to develop new rules, since rule components that are not full-fledged units of knowledge are not logical entities. Most studies stop at the impossibility of even identifying, by means of the symbolic systems chosen at earlier stages, the new difficulties that have arisen, let alone solving them, teaching the computer to solve them, or at least recognizing such situations and getting out of them.

Historically, the symbolic approach was the first in the era of digital machines: it was after the creation of Lisp, the first language of symbolic computation, that its author, John McCarthy, became confident that it was practically possible to begin implementing intelligence by these means: intelligence as such, without any reservations or conventions.

It is widely practiced to create hybrid intelligent systems in which several models are used at once. Expert inference rules can be generated by neural networks, and generative rules are obtained using statistical learning.

Development of the theory of fuzzy sets. The development of the theory of fuzzy sets began with the article "Fuzzy Sets" (1965) by the US professor Lotfi Zadeh, who first introduced the concept of a fuzzy set and proposed the idea and first outline of a theory that made it possible to describe real systems in fuzzy terms. The most important branch of the theory of fuzzy sets is fuzzy logic, used for controlling systems as well as in experiments on forming models of such systems.
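The basic object of the theory is the membership function, which assigns each element a degree of membership between 0 and 1. A minimal sketch in Python (the "warm temperature" set and its breakpoints are invented for the example):

    def triangular(x, a, b, c):
        """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    # "Warm" as a fuzzy set over temperature, fully warm at 22 degrees
    for t in (10, 18, 22, 26, 35):
        print(t, round(triangular(t, 15.0, 22.0, 30.0), 2))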

The 1960s opened a period of rapid development of computers and digital technologies based on binary logic. At the time it was believed that this logic would allow many scientific and technical problems to be solved, and for this reason the appearance of fuzzy logic went almost unnoticed, despite its conceptual revolutionary nature. Nevertheless, the importance of fuzzy logic was recognized by a number of representatives of the scientific community, and it was developed and put to practical use in various industrial applications. After some time, interest in it began to grow among the scientific schools uniting adherents of binary-logic technologies, because quite a few practical problems had been discovered that could not be solved by traditional mathematical models and methods, despite significantly increased available computing speeds. A new methodology was required, and its characteristic features were to be found in fuzzy logic.

Like robotics, fuzzy logic was met with great interest not in its country of origin, the United States, but beyond its borders; as a consequence, the first experience of industrial use of fuzzy logic, the control of boiler installations of power plants, is associated with Europe. All attempts to control a steam boiler with traditional methods, sometimes very intricate ones, ended in failure: so complex did this nonlinear system turn out to be. Only the use of fuzzy logic made it possible to synthesize a controller that satisfied all the requirements. In 1976, fuzzy logic was made the basis of an automatic control system for a rotary kiln in cement production. However, the first practical results of using fuzzy logic obtained in Europe and America did not cause any significant increase in interest in it. Just as with robotics, the country that was the first to begin the widespread implementation of fuzzy logic, realizing its enormous potential, was Japan.

Among the applied fuzzy systems created in Japan, the most famous is the subway train control system developed by Hitachi for Sendai. The project was implemented with the participation of an experienced driver, whose knowledge and experience formed the basis of the control model. The system automatically reduced the speed of the train as it approached the station, ensuring a stop at the required spot. Another advantage was the high comfort of the ride, due to smooth acceleration and deceleration. There were also a number of other advantages compared with traditional control systems.

The rapid development of fuzzy logic in Japan has led to its practical applications not only in industry, but also in the production of consumer goods. An example here is a video camera equipped with a fuzzy image stabilization subsystem, which was used to compensate for image fluctuations caused by the operator’s inexperience. This problem was too complex to be solved by traditional methods, since it was necessary to distinguish random fluctuations in the image from the purposeful movement of objects being photographed (for example, the movement of people).

Another example is the automatic washing machine, which is operated at the touch of a button (Zimmerman 1994). This “integrity” aroused interest and was met with approval. The use of fuzzy logic methods made it possible to optimize the washing process, providing automatic recognition of the type, volume and degree of soiling of clothes, not to mention the fact that reducing the machine control mechanism to one single button made it significantly easier to handle.

Inventions in the field of fuzzy logic have been implemented by Japanese companies in many other devices, including microwave ovens (Sanyo), anti-lock braking systems and automatic transmissions (Nissan), integrated vehicle dynamics control (INVEC), and computer hard disk controllers that reduce access time to information.

In addition to the applications mentioned above, since the early 1990s there has been intensive development of fuzzy methods in a number of applied areas, including areas not related to technology:

Electronic pacemaker control system;

Motor vehicle control system;

Cooling systems;

Air conditioners and ventilation equipment;

Waste incineration equipment;

Glass melting furnace;

Blood pressure monitoring system;

Diagnosis of tumors;

Diagnosis of the current state of the cardiovascular system;

Control system for cranes and bridges;

Image processing;

Fast charger;

Word recognition;

Bioprocessor management;

Electric motor control;

Welding equipment and welding processes;

Traffic control systems;

Biomedical Research;

Water treatment plants.

At the moment, in the creation of artificial intelligence (in the original sense of the word; expert systems and chess programs do not belong here), there is an intensive distillation into knowledge bases of all subject areas that have at least some relation to AI. Practically all approaches have been tested, but not a single research group has come close to the emergence of artificial intelligence.

AI research has joined the general stream of singularity technologies (the species leap, exponential human development), such as computer science, expert systems, nanotechnology, molecular bioelectronics, theoretical biology, quantum theory and nootropics (see, for example, the daily news stream Kurzweil News, MIT).

The results of developments in the field of AI have entered higher and secondary education in Russia in the form of computer science textbooks. These now cover working with and building knowledge bases and expert systems on personal computers using domestic logic programming systems, as well as the study of fundamental questions of mathematics and computer science through examples of working with models of knowledge bases and expert systems in schools and universities.

The following artificial intelligence systems have been developed:

1. Deep Blue defeated the world chess champion. (The match between Kasparov and the supercomputer brought satisfaction neither to computer scientists nor to chess players, and the system was not recognized by Kasparov, although the original compact chess programs are an integral element of chess creativity. The IBM line of supercomputers then appeared in the brute-force projects Blue Gene (molecular modeling) and the modeling of the pyramidal-cell system at the Swiss Blue Brain center. This story is an example of the intricate and secretive relationship between AI, business and national strategic objectives.)

2. Mycin was one of the early expert systems that could diagnose a small set of diseases, often as accurately as doctors.

3. 20q is a project based on AI ideas, based on the classic game “20 Questions”. It became very popular after appearing on the Internet on the website 20q.net.

4. Speech recognition. Systems such as ViaVoice are capable of serving consumers.

5. Robots compete in a simplified form of football in the annual RoboCup tournament.

Banks use artificial intelligence (AI) systems in insurance (actuarial mathematics), in trading on the stock exchange, and in property management. In August 2001, robots beat humans in an impromptu trading competition (BBC News, 2001). Pattern recognition methods (including more complex specialized methods as well as neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters and air defense systems (target identification), and also for a number of other national security tasks.

Computer game developers are forced to use AI of varying degrees of sophistication. Standard tasks of AI in games are finding a path in two-dimensional or three-dimensional space, simulating the behavior of a combat unit, calculating the correct economic strategy, and so on.
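As an illustration of the first of these tasks, below is a minimal Python sketch of grid pathfinding using breadth-first search; the map encoding ('#' for an obstacle) and the four-way movement rule are assumptions of the example, and production games typically use A* with a heuristic instead.

    from collections import deque

    def find_path(grid, start, goal):
        """Breadth-first search on a 2D grid; '#' cells are blocked."""
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                break
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        if goal not in came_from:
            return None
        path, cell = [], goal
        while cell is not None:            # walk the chain back to the start
            path.append(cell)
            cell = came_from[cell]
        return path[::-1]

    grid = ["....",
            ".##.",
            "...."]
    print(find_path(grid, (0, 0), (2, 3)))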

Artificial intelligence is closely related to transhumanism. Together with neurophysiology, epistemology and cognitive psychology, it forms a more general science called cognitive science. Philosophy plays a special role in artificial intelligence: epistemology, the science of knowledge within philosophy, is closely tied to the problems of artificial intelligence. Philosophers working on this topic grapple with questions similar to those faced by AI engineers about how best to represent and use knowledge and information. Producing knowledge from data is one of the basic problems of data mining. There are various approaches to solving this problem, including those based on neural network technology, which use procedures for the verbalization of neural networks.

In computer science, problems of artificial intelligence are considered from the perspective of designing expert systems and knowledge bases. Knowledge bases are understood as a set of data and inference rules that allow logical inference and meaningful processing of information. In general, research into problems of artificial intelligence in computer science is aimed at the creation, development and operation of intelligent information systems, including issues of training users and developers of such systems.

The science of “creating artificial intelligence” could not help but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised. On the one hand, they are inextricably linked with this science, and on the other, they introduce some chaos into it. Philosophical problems of creating artificial intelligence can be divided into two groups, relatively speaking, “before and after the development of AI.” The first group answers the question: “What is AI, is it possible to create it, and, if possible, how to do it?” The second group (ethics of artificial intelligence) asks the question: “What are the consequences of creating AI for humanity?”

Issues of creating artificial intelligence. Two directions in the development of AI are visible: the first consists in solving the problems of bringing specialized AI systems closer to human capabilities and of integrating them, as is realized in human nature; the second consists in creating Artificial Intelligence proper, the integration of already created AI systems into a single system capable of solving the problems of humanity.

Among AI researchers there is still no dominant point of view on the criteria of intelligence, the systematization of the goals and tasks to be solved, or even a strict definition of the science. There are different points of view on what is to be considered intelligence. The analytical approach involves analyzing a person's higher nervous activity down to its lowest, indivisible level (a function of higher nervous activity, an elementary reaction to external irritants (stimuli), the stimulation of the synapses of a functionally connected set of neurons) and the subsequent reproduction of these functions.

Some experts mistake the ability of rational, motivated choice in conditions of lack of information for intelligence. That is, an intellectual program is simply considered to be that program of activity that can choose from a certain set of alternatives, for example, where to go in the case of “you will go left...”, “you will go right...”, “you will go straight...”.

The most heated debate in the philosophy of artificial intelligence is the question of the possibility of thinking created by human hands. The question “Can a machine think?”, which prompted researchers to create the science of simulating the human mind, was posed by Alan Turing in 1950. The two main points of view on this issue are called the hypotheses of strong and weak artificial intelligence.

The term "strong artificial intelligence" was introduced by John Searle, who characterized the approach in these words: "Such a program will not just be a model of the mind; it, in the literal sense of the word, will itself be a mind, in the same sense in which the human mind is a mind." In contrast, proponents of weak AI prefer to view programs only as tools for solving particular problems that do not require the full range of human cognitive abilities.

John Searle's "Chinese Room" thought experiment argues that passing the Turing test is not a criterion for a machine to have a genuine thought process. Thinking is the process of handling information stored in memory: analysis, synthesis and self-programming. A similar position is taken by Roger Penrose, who in his book "The Emperor's New Mind" argues for the impossibility of obtaining the process of thinking on the basis of formal systems.


6. Computing devices and microprocessors.

A microprocessor (MP) is a device that receives, processes and outputs information. Structurally, the MP consists of one or more integrated circuits and performs the actions defined by a program stored in memory (Fig. 6.1).

Figure 6.1 – MP appearance

Early processors were designed as unique components for one-of-a-kind computer systems. Later, computer manufacturers moved from the expensive practice of developing processors to run a single program, or a few highly specialized ones, to the mass production of typical classes of multi-purpose processor devices. The trend toward standardization of computer components arose in the era of rapid development of semiconductor elements, mainframes and minicomputers, and with the advent of integrated circuits it became even more pronounced. The creation of microcircuits made it possible to increase the complexity of CPUs further while reducing their physical size.

The standardization and miniaturization of processors has led to the deep penetration of digital devices based on them into everyday human life. Modern processors can be found not only in high-tech devices such as computers, but also in cars, calculators, mobile phones and even children's toys. Most often they take the form of microcontrollers, in which, in addition to the computing device, additional components (program and data memory, interfaces, input/output ports, timers, etc.) are located on the chip. The computing capabilities of a microcontroller are comparable to those of personal computer processors of ten years ago, and often significantly exceed them.

A microprocessor system (MPS) is a computing, instrumentation or control system in which the main information processing device is the MP. The microprocessor system is built from a set of microprocessor LSIs (Fig. 6.2).

Figure 6.2 – Example of a microprocessor system

The clock generator sets the time interval that serves as the unit of measurement (quantum) of command execution time. The higher the frequency, the faster, other things being equal, the MPS operates. The MP, RAM and ROM are integral parts of the system. The input and output interfaces are devices for interfacing the MPS with input and output blocks. Measuring instruments typically have input devices in the form of a push-button panel and measuring converters (ADCs, sensors, digital information input units). Output devices are usually digital displays, a graphic screen (display), and external devices interfaced with the measuring system. All MPS blocks are interconnected by digital information transmission buses. The MPS uses the backbone communication principle, in which the blocks exchange information over a single data bus. The number of lines in the data bus usually corresponds to the MPS word size (the number of bits in a data word). The address bus is used to indicate the direction of data transfer: it carries the address of the memory cell or I/O block that is currently receiving or transmitting information. The control bus carries the signals that synchronize the entire operation of the MPS.

The construction of the MPS is based on three principles:

Trunking;

Modularity;

Microprogram control.

The principle of trunking determines the nature of the connections between the functional blocks of the MPS: all blocks are connected to a single system bus.

The principle of modularity is that the system is built on the basis of a limited number of types of structurally and functionally complete modules.

The principles of trunking and modularity make it possible to increase the control and computing capabilities of the MP by connecting other modules to the system bus.

The principle of microprogram control is the ability to carry out elementary operations - microcommands (shifts, information transfers, logical operations), with the help of which a technological language is created, i.e. a set of commands that best suits the purpose of the system.

According to their purpose, MPs are divided into universal and specialized.

Universal microprocessors are general-purpose MPs that solve a wide class of computing, processing and control problems. Examples of the use of universal MPs are computers built on the IBM and Macintosh platforms.

Specialized microprocessors are designed to solve problems of only a certain class. Specialized MPs include signal processors, multimedia MPs and transputers.

Signal processors (DSPs) are designed for real-time digital signal processing (for example, signal filtering, convolution, computation of the correlation function, signal limiting and shaping, and forward and inverse Fourier transforms) (Fig. 6.3). Signal processors include the TMS320C80 from Texas Instruments, the ADSP2106x from Analog Devices, and the DSP560xx and DSP9600x from Motorola.

Figure 6.3 – Example of internal DSP structure

Media and multimedia processors are designed for processing audio signals, graphic information and video images, and for solving a number of problems in multimedia computers, game consoles and household appliances. These include the Mediaprocessor from MicroUnity, Trimedia from Philips, Mpact Media Engine from Chromatic Research, NV1 from Nvidia and MediaGX from Cyrix.

Transputers are designed for organizing massively parallel computations and for working in multiprocessor systems. They are characterized by the presence of internal memory and a built-in interprocessor interface, i.e. communication channels to other MP LSIs.

Based on the type of architecture, or the principle of construction, a distinction is made between MPs with von Neumann architecture and MPs with Harvard architecture.

The concept of microprocessor architecture defines its component parts, as well as the connections and interactions between them.

Architecture includes:

The block diagram of the MP;

The software model of the MP (a description of register functions);

Information about memory organization (capacity and memory addressing methods);

A description of the organization of input/output procedures.

The von Neumann architecture (Fig. 6.4, a) was proposed in 1945 by the American mathematician John von Neumann. Its peculiarity is that the program and data are located in shared memory, which is accessed over a single bus for data and commands.

Harvard architecture was first implemented in 1944 in the relay computer at Harvard University (USA). A feature of this architecture is that the data memory and program memory are separated and have separate data buses and command buses (Fig. 6.4, b), which makes it possible to increase the performance of the MP system.

Figure 6.4 – Main types of architecture: a – von Neumann; b – Harvard

Based on the type of instruction set, a distinction is made between CISC (Complex Instruction Set Computing) processors with a full set of instructions (typical representatives are the Intel x86 microprocessor family) and RISC (Reduced Instruction Set Computing) processors with a reduced set of instructions (characterized by fixed-length instructions, a large number of registers, register-to-register operations, and the absence of indirect addressing).

A single-chip microcontroller (MCU) is a chip designed to control electronic devices (Fig. 6.5). A typical microcontroller combines the functions of a processor and of peripheral devices and may contain RAM and ROM. Essentially, it is a single-chip computer capable of performing simple tasks. Using a single chip instead of a whole set significantly reduces the size, power consumption and cost of devices based on microcontrollers.

Figure 6.5 – Examples of microcontroller designs

Microcontrollers are the basis for building embedded systems; they can be found in many modern devices such as telephones, washing machines, etc. Most of the processors produced in the world are microcontrollers.
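As a small illustration of how little code such a device needs, here is a sketch for a board running MicroPython (a Python dialect for microcontrollers); the pin number is an assumption that varies from board to board.

    # Minimal MicroPython sketch: blink an LED attached to a GPIO pin.
    # Pin 2 is an assumption; check your board's pinout.
    from machine import Pin
    import time

    led = Pin(2, Pin.OUT)      # configure the pin as a digital output
    while True:
        led.value(1)           # LED on
        time.sleep(0.5)
        led.value(0)           # LED off
        time.sleep(0.5)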

Popular among developers today are 8-bit microcontrollers compatible with the i8051 from Intel, PIC microcontrollers from Microchip Technology and AVR from Atmel, the 16-bit MSP430 from TI, and ARM-based microcontrollers, whose architecture is developed by ARM, which sells production licenses to other companies.

When designing microcontrollers, there is a balance between size and cost on the one hand, and flexibility and performance on the other. For different applications, the optimal balance of these and other parameters can vary greatly. Therefore, there are a huge number of types of microcontrollers, differing in the architecture of the processor module, the size and type of built-in memory, the set of peripheral devices, the type of case, etc.

A partial list of peripherals that may be present in microcontrollers includes:

Universal digital ports that can be configured for input or output;

Various I/O interfaces such as UART, I²C, SPI, CAN, USB, IEEE 1394, Ethernet;

Analog-to-digital and digital-to-analog converters;

Comparators;

Pulse width modulators;

Timers, built-in clock generator and watchdog timer;

Brushless motor controllers;

Display and keyboard controllers;

Radio frequency receivers and transmitters;

Arrays of built-in flash memory.

Artificial intelligence AI (artificial intelligence) is usually interpreted as the property of automatic systems to take on certain functions of a person’s thinking ability, for example, choosing and making optimal decisions based on previously gained experience and rational analysis of external influences. We are talking, first of all, about systems that are based on the principles of learning, self-organization and evolution with minimal human participation, but involving him as a teacher and partner, a harmonious element of the human-machine system.

Naturally, attempts to create computer-based AI began at the dawn of computer technology. At that time the computer paradigm dominated, whose key theses stated that the Turing machine is a theoretical model of the brain, the computer is an implementation of the universal machine, and any information process can be reproduced on a computer. This paradigm dominated for a long time and brought many interesting results, but it did not achieve its main goal: building AI in the sense of modeling human thinking. Having failed because of an incorrect set of key premises, the computer paradigm of creating AI was logically transformed into neuroinformatics, which develops a non-computer approach to modeling intellectual processes. The human brain, which operates with undivided information, turned out to be much more complex than a Turing machine. Each human thought has its own context, outside of which it is meaningless; knowledge is stored in the form of images, which are characterized by vagueness and blurredness; and the system of images is poorly sensitive to contradictions. The human knowledge storage system is highly reliable thanks to distributed storage of knowledge, and the handling of information is characterized by great depth and high parallelism.

Information processing in any intelligent system is based on a fundamental process: learning. Images have objective properties in the sense that different recognition systems, trained on different observational material, mostly classify the same objects in the same way and independently of one another. It is this objectivity of images that allows people all over the world to understand each other. Learning is usually defined as the process of developing in a system a specific reaction to groups of identical external signals through repeated exposure of the recognizing system to external correction signals. The mechanism generating this correction, which most often has the meaning of reward and punishment, almost completely determines the learning algorithm. Self-learning differs from learning in that here the system receives no additional information about the correctness of its reaction.
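A minimal sketch of such learning in Python: a single formal neuron develops the reaction "logical AND" under an external correction signal (the error), which plays the role of reward and punishment. The constants and names are invented for the example.

    import random

    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [random.uniform(-1, 1) for _ in range(2)]
    threshold, rate = random.uniform(-1, 1), 0.1

    for epoch in range(100):
        for (x1, x2), target in samples:
            out = int(w[0] * x1 + w[1] * x2 - threshold > 0)
            error = target - out        # external "reward/punishment" signal
            w[0] += rate * error * x1   # strengthen or weaken active inputs
            w[1] += rate * error * x2
            threshold -= rate * error

    print(w, threshold)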

Intelligent information systems can use “libraries” of a wide variety of methods and algorithms that implement different approaches to the processes of learning, self-organization and evolution when synthesizing AI systems. Since to date there is neither a general theory of artificial intelligence nor a working example of a fully functional AI model, it is impossible to say which of these approaches is correct and which is incorrect: most likely, they are able to harmoniously complement each other. More information about the problems of artificial intelligence can be found on the websites www.ccas.ru and www.iseu.by/rus/educ/envmon.

Artificial intelligence is implemented using four approaches (we can hardly resist saying the fashionable “paradigm”): logical, evolutionary, simulation and structural. All these four directions are developing in parallel, often intertwining.

The basis of the logical approach is Boolean algebra and its logical operators (primarily the familiar IF operator). Boolean algebra was further developed in the form of the predicate calculus, which extends it with subject symbols, relations between them, and the quantifiers of existence and universality. Practically every AI system built on the logical principle is a theorem-proving machine. The source data are stored in the database in the form of axioms, and the rules of logical inference as relations between them.
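A minimal sketch of this scheme in Python: the facts play the role of axioms, the rules are inference relations between them, and forward chaining derives everything that can be proved. The particular facts and rules are invented for the example.

    facts = {"it_rains"}
    rules = [({"it_rains"}, "road_is_wet"),
             ({"road_is_wet"}, "braking_distance_grows")]

    changed = True
    while changed:                    # repeat until no rule can fire
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)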

Most logical methods are characterized by high labor intensity, since a complete search of options is possible during the search for a proof. Therefore, this approach requires an efficient implementation of the computational process, and good performance is usually guaranteed only for a relatively small database. A practical example of the implementation of logical methods is decision trees, which realize in concentrated form the process of "learning", i.e. the synthesis of a decision rule.

A relatively new direction, fuzzy logic, gives the logical approach greater expressiveness. After the seminal works of L. Zadeh, the term fuzzy became a keyword. Unlike traditional mathematics, which requires precise and unambiguous formulations of patterns at every modeling step, fuzzy logic offers a completely different level of thinking, thanks to which the creative modeling process occurs at a higher level of abstraction, where only a minimal set of patterns is postulated. For example, in fuzzy systems the truth of a logical statement can take, in addition to the usual "yes/no" (1/0), intermediate values: "I don't know" (0.5), "the patient is more likely alive than dead" (0.75), "the patient is more likely dead than alive" (0.25), etc. This approach is closer to the thinking of a person, who rarely answers questions with only "yes" or "no". The theoretical foundations and applied aspects of intelligent assessment and forecasting systems under uncertainty, based on the theory of fuzzy sets, are described in detail in the literature [Averkin et al., 1986; Borisov et al., 1989; Non-traditional models.., 1991; Vasiliev, Ilyasov, 1995].
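In the most common (Zadeh) interpretation, the fuzzy connectives are computed as minimum, maximum and complement; a minimal sketch reusing the truth values from the example above:

    # Fuzzy connectives in the Zadeh style: AND is min, OR is max,
    # NOT is the complement to 1.
    alive = 0.75          # "the patient is more likely alive than dead"
    conscious = 0.5       # "I don't know"

    print(min(alive, conscious))   # fuzzy AND: alive AND conscious -> 0.5
    print(max(alive, conscious))   # fuzzy OR:  alive OR conscious  -> 0.75
    print(1 - alive)               # fuzzy NOT: not alive           -> 0.25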

The term “self-organization” is understood, according to Ivakhnenko, as “the process of spontaneous (spontaneous) increase in order, or organization in a system consisting of many elements, occurring under the influence of the external environment.”

The principles of self-organization have been the subject of research by many outstanding scientists: J. von Neumann, N. Wiener, W.R. Ashby and others. A great contribution to the development of this direction was made by the work of Ukrainian cyberneticists under the leadership of A.G. Ivakhnenko, who developed a whole class of adaptive self-organization models that could be called an "intellectual generalization" of empirical-statistical methods.

The following principles of self-organization of mathematical models can be noted:

  • - the principle of non-final decisions (proposed by D. Gabor: it is necessary to maintain sufficient "freedom of choice" among several best solutions at every step of self-organization);
  • - the principle of external addition (based on K. Gödel's theorem: only external criteria, based on new information, make it possible to synthesize the true model of an object hidden in noisy experimental data);
  • - the principle of mass selection (proposed by A.G. Ivakhnenko: it indicates the most appropriate way of gradually complicating the self-organizing model so that the criterion of its quality passes through its minimum).

For self-organization to occur, there must be an initial structure, a mechanism for its random mutations, and selection criteria by which a mutation is assessed in terms of its usefulness for improving the quality of the system. That is, when building such AI systems the researcher specifies only the initial organization and the list of variables, the quality criteria formalizing the goal of optimization, and the rules by which the model can change (self-organize or evolve). The model itself can be of a great variety of types: a linear or nonlinear regression, a set of logical rules, or any other model.

Self-organizing models serve mainly for predicting the behavior and structure of ecosystems, since by the very logic of their construction the researcher's participation in the process is minimized. A number of specific examples of the use of GMDH algorithms can be given: long-term forecasts of the ecological system of Lake Baikal, modeling of geobotanical descriptions, predator-prey systems, tree growth, forecasting toxicological indicators of pollutants, and assessing the population dynamics of zooplankton communities.

In mathematical cybernetics, two types of iterative processes of system development are distinguished:

  • - adaptation, in which the extremum (the goal of the system’s movement) remains constant;
  • - evolution, in which the movement is accompanied by a change in the position of the extremum.

If self-organization is associated only with adaptive mechanisms for adjusting system reactions (for example, changing the values ​​of weighting coefficients), then the concept of evolution is associated with the ability of an effector (a term introduced by S. Lem) to change its own structure, i.e. the number of elements, direction and intensity of connections, adjusting them optimally in relation to the assigned tasks at each specific point in time. In the process of evolution in a complex and changing environment, the effector is able to acquire fundamentally new qualities and reach the next stage of development. For example, in the process of biological evolution, extremely complex and at the same time surprisingly productive living organisms arose.

Evolutionary modeling is an essentially universal way of constructing forecasts of the macrostates of a system in conditions where a posteriori information is completely absent and the a priori data specify only the prehistory of those states. The general scheme of the evolutionary algorithm is as follows (a minimal sketch follows the list):

  • - the initial organization of the system is specified (in evolutionary modeling, for example, a finite deterministic Mealy automaton may appear in this capacity);
  • - random "mutations" are carried out, i.e. the current automaton is changed at random;
  • - they select for further “development” that organization (that automaton) that is “the best” in the sense of some criterion, for example, the maximum accuracy of predicting the sequence of values ​​of macrostates of the ecosystem.
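Below is a minimal Python sketch of this mutate-and-select loop; for brevity a toy linear predictor of the next value stands in for the Mealy automaton, and the data and step sizes are invented for the example.

    import random

    history = [1.0, 1.5, 2.25, 3.375, 5.0625]    # observed macrostates

    def error(a, b):
        # squared error of predicting each next value from the previous one
        return sum((a * x + b - y) ** 2
                   for x, y in zip(history, history[1:]))

    best = (random.uniform(-2, 2), random.uniform(-2, 2))  # initial organization
    for step in range(2000):
        a, b = best
        mutant = (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
        if error(*mutant) < error(*best):         # selection by prediction accuracy
            best = mutant

    print(best, error(*best))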

The model quality criterion in this case is not much different, for example, from the minimum mean square error on the training sequence of the least squares method (with all the ensuing disadvantages). However, unlike adaptation, in evolutionary programming the structure of the decision device changes little when moving from one mutation to another, i.e. there is no redistribution of probabilities that would fix the mutations that led to success at the previous step. The search for the optimal structure is largely random and unfocused, which delays the search process, but ensures the best adaptation to specific changing conditions.

The structural approach refers to attempts to build AI systems by modeling the structure of the human brain. The last ten years have seen an explosion of interest in the structural methods of self-organization: neural network modeling, which is successfully used in the most varied fields (business, medicine, technology, geology, physics), i.e. wherever problems of forecasting, classification or control have to be solved.

The ability of a neural network to learn was first explored by W. McCulloch and W. Pitts, whose paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" was published in 1943. It presented a model of the neuron and formulated principles for constructing artificial neural networks.

A major impetus to the development of neurocybernetics was given by the American neurophysiologist F. Rosenblatt, who in 1962 proposed his model of a neural network: the perceptron. Initially received with great enthusiasm, the perceptron soon came under intense attack from major scientific authorities. And although a detailed analysis of their arguments shows that the perceptron they challenged was not exactly the one Rosenblatt had proposed, major research on neural networks was curtailed for almost 10 years.

Another important class of neural systems was introduced by the Finnish scientist T. Kohonen. This class bears the beautiful name of "self-organizing mappings that preserve the topology of sensory space". Kohonen's theory actively uses the theory of adaptive systems, developed over many years by Academician of the Russian Academy of Sciences Ya.Z. Tsypkin.

It is now very popular throughout the world to assess the capabilities of learning systems, in particular neural networks, on the basis of the dimension theory created in 1966 by the Soviet mathematicians V.N. Vapnik and A.Ya. Chervonenkis. Another class of neural-like models is represented by networks with backpropagation of errors, in the development of whose modern modifications a leading role was played by Prof. A.N. Gorban and the Krasnoyarsk school of neuroinformatics he heads. Much scientific and popularization work is carried out by the Russian Association of Neuroinformatics under the leadership of its president V.L. Dunin-Barkovsky.

The entire neural network approach is based on the idea of ​​constructing a computing device from a large number of parallel working simple elements - formal neurons. These neurons function independently of each other and are interconnected by unidirectional information transmission channels. The core of neural network concepts is the idea that each individual neuron can be modeled by fairly simple functions, and the entire complexity of the brain, the flexibility of its functioning and other important qualities are determined by the connections between neurons. The ultimate expression of this point of view can be the slogan: “the structure of connections is everything, the properties of elements are nothing.”

Neural networks (NNs) are a very powerful modeling method that makes it possible to reproduce extremely complex nonlinear dependencies. As a rule, a neural network is used when the type of connection between inputs and outputs is unknown (although, of course, the user is required to have a certain amount of heuristic knowledge about how to select and prepare data, choose a suitable network architecture, and interpret the results).

Representative data are fed to the input of the neural network and a learning algorithm is launched, which automatically analyzes the structure of the data and generates a relationship between input and output. Two types of algorithms are used to train a neural network: supervised ("learning with a teacher") and unsupervised ("learning without a teacher").

The simplest network has the structure of a multilayer perceptron with feed-forward signal transmission (see Fig. 3), which is characterized by the most stable behavior. The input layer serves to enter the values of the initial variables; the neurons of the intermediate and output layers are then processed in sequence. Each of the hidden and output neurons is, as a rule, connected to all elements of the previous layer (for most network variants a complete system of connections is preferable). At the network nodes, an active neuron computes its activation value by taking the weighted sum of the outputs of the elements of the previous layer and subtracting the threshold value from it. The activation value is then transformed by the activation function (transfer function), yielding the output of the neuron. After the entire network has run, the output values of the elements of the last layer are taken as the output of the network as a whole.

Fig. 3 – Multilayer perceptron with feed-forward signal transmission
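A minimal Python sketch of this forward pass: each neuron takes the weighted sum of the previous layer's outputs, subtracts its threshold, and passes the result through a sigmoid activation function. The layer sizes and random weights are invented for the example; a real network would obtain its weights by training.

    import math
    import random

    def sigmoid(s):
        return 1.0 / (1.0 + math.exp(-s))

    def layer(inputs, weights, thresholds):
        # one neuron per row of weights: weighted sum minus threshold, then sigmoid
        return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) - t)
                for ws, t in zip(weights, thresholds)]

    x = [0.2, 0.7, 0.1]                                   # input layer values
    w_hidden = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]
    t_hidden = [random.uniform(-1, 1) for _ in range(4)]
    w_out = [[random.uniform(-1, 1) for _ in range(4)]]
    t_out = [random.uniform(-1, 1)]

    hidden = layer(x, w_hidden, t_hidden)                 # hidden layer
    output = layer(hidden, w_out, t_out)                  # output of the whole network
    print(output)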

Along with the multilayer perceptron model, other models of neural networks appeared later, differing in the structure of individual neurons, in the topology of the connections between them, and in the learning algorithms. Among the best-known variants today are NNs with backpropagation of errors, networks based on radial basis functions, generalized regression networks, Hopfield and Hamming NNs, self-organizing Kohonen maps, stochastic neural networks, etc. There are also works on recurrent networks (i.e. networks containing feedback connections leading back from more distant to nearer neurons), which can have very complex behavioral dynamics. Self-organizing (growing or evolving) neural networks are beginning to be used effectively, and in many cases they prove preferable to traditional fully connected neural networks.

Models based on the human brain are characterized by easy parallelization of algorithms and the associated high performance, but also by the limited expressiveness of the results they produce, which does not help in extracting new knowledge about the modeled environment. Therefore, the main purpose of neural network models is forecasting.

An important condition for using NNs, as for any statistical method, is an objectively existing connection between the known input values and the unknown response. This connection may be random and distorted by noise, but it must exist. This is explained, first, by the fact that iterative algorithms for the directed enumeration of combinations of neural network parameters are very effective and very fast only when the source data are of good quality; if this condition is not met, the number of iterations grows rapidly, and the computational complexity becomes comparable to the exponential complexity of algorithms that exhaustively enumerate possible states. Second, the network tends to learn first of all what is easiest to learn, and under strong uncertainty and noisy features these are primarily artifacts and "false correlation" phenomena.

The selection of informative variables in traditional regression and taxonomy is carried out by “weighting” the features using various statistical criteria and step-by-step procedures based, in one form or another, on the analysis of coefficients of partial correlations or covariances. For these purposes, various sequential procedures are used, which do not always lead to a result close enough to the optimal one. An efficient automated approach to selecting meaningful input variables can be implemented using a genetic algorithm.

In this regard, in the general scheme of statistical modeling using AI methods, it is recommended to perform two different procedures in sequence (a minimal sketch follows the list):

  • - using evolutionary methods in the binary space of features, such a minimum combination of variables is sought that ensures an insignificant loss of information in the source data,
  • - the minimized data matrix obtained at the previous stage is fed to the input of the neural network for training.
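A minimal Python sketch of this two-step scheme under simplifying assumptions: the evolutionary search runs over binary feature masks scored by a least-squares fit with a penalty for extra variables, and the synthetic data are invented for the example; in the full scheme, the reduced matrix found at the first step would then be fed to a neural network for training.

    import random
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))                  # 8 candidate features
    y = 2 * X[:, 1] - 3 * X[:, 4] + rng.normal(scale=0.1, size=100)

    def score(mask):
        cols = [i for i, m in enumerate(mask) if m]
        if not cols:
            return float("inf")
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        err = ((X[:, cols] @ coef - y) ** 2).sum()
        return err + 0.5 * len(cols)               # penalty for extra variables

    population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
    for _ in range(40):
        population.sort(key=score)
        parents = population[:8]                   # selection
        children = []
        for _ in range(12):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, 8)
            child = p1[:cut] + p2[cut:]            # crossover
            i = random.randrange(8)
            child[i] ^= random.random() < 0.2      # mutation
            children.append(child)
        population = parents + children
    population.sort(key=score)
    print(population[0])                           # expected to keep features 1 and 4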