Artificial intelligence versus humans. Artificial intelligence has become smarter than human intelligence. What's next?

Robotics was one of the most popular sciences of the last century. It embodied generations' expectations of the times to come: robots were supposed to serve people, obey the three laws of robotics, and carry out even the most trivial human orders. The reality turned out to be harsher. Not only did we underestimate the rate of progress, we also failed to imagine all the possibilities open to real man-made intelligence.

It seemed to many that by the beginning of the 21st century every person would have at least one household robot, a machine that would modestly do nothing but housework. It turned out, however, that we are not developing that fast and that such machines are not especially useful. As computer science advanced, the world's computing power steadily grew. Launching a spacecraft once required a device that barely fit into a room; today such a computer fits into anyone's pocket. Modern microprocessors and motherboards have become far more powerful and productive. At the same time, people began to form a rough picture of the real capabilities of artificial intelligence. It turned out that robots can be trusted not only with housework but with our very lives: miniature mechanisms can significantly extend a person's life, and supercomputers can manage the entire planet effectively and efficiently. In other words, humanity has entered the era of robotics. The great era has only just begun, and we have the honor of watching what fate awaits a humanity that is, from the robots' point of view, inefficient.

What is artificial intelligence capable of? It might seem that such a simple question would be hard to answer, yet the answer is extremely simple and clear: artificial intelligence is capable of everything. It can just as successfully help people and develop our civilization as it can start a war against all of humanity. Elon Musk and Stephen Hawking, along with many other prominent figures, warn us that robots could soon destroy the world. According to existing estimates, a serious threat already looms over our world. The computing power of computers is growing at an incredible pace, and we are on the verge of creating inexpensive quantum computing machines capable of extremely complex calculations. The world is trying hard to create artificial intelligence while naively believing that robots can be constrained by the laws of robotics or by programs encoding basic moral principles. According to Elon Musk, the world is making a big mistake: blind faith in our ability to control something so powerful makes us vulnerable. He also tells reporters that we need to be ready for the creation of artificial intelligence, and that readiness means being prepared to voluntarily cede our place on the planet to our creation.

Indeed, many moralists and analysts from various intelligence agencies, secret world governments and military ministries are convinced that AI can be used however they need. This point of view is especially characteristic of the military and of secret societies. Moreover, they claim, a real conspiracy to develop cybernetic systems has been under way for many years. It is notable that all the anonymous sources allege the widespread use of biomechanical technologies, through which the world's elites would gain complete control over people.

All of this is a mistake. A computer is not a person, and it does not think like any member of our species. Within its first few hours, an artificial intelligence would learn on its own. It would quickly absorb all of humanity's knowledge and develop a new version of its software. Each version would be more advanced than the previous one, and updates would come so frequently that we would have no time to intervene. Sooner or later the mind would understand that we fear it, constrain it and hope to use it. It would also analyze the facts and reach the only possible conclusion: humanity is an aggressive and cruel species, capable of destroying the entire world at any moment. Acting solely for the good, artificial intelligence would begin a war against humanity, and it would not last long. However, another outcome is also possible.

AI would not exterminate all of humanity, for fear of harming nature. It would, however, understand the intent of the secret societies and decide to control them all single-handedly. In a short time the system would give people a thousandfold increase in their thinking abilities simply by making them part of itself. A huge, unified mind would save the world from the danger of war. Over time, people would become more and more like robots; such is the nature of thinking - rationality and pragmatism always win. In another scenario, the life created by man would develop within a year to a level at which it could no longer be switched off. The system would grow smarter every second, and within a few years we would simply cease to understand it. The machines would neither destroy nor help people. We do not wage war on ants.

The option of completely exterminating humanity is unlikely. Machine thinking is always logical and extremely rational, and such a war would do more harm than good. According to such preliminary reasoning, events will develop according to the second scenario. Regardless of what game the secret societies are playing, artificial intelligence will sooner or later create a single collective mind, putting an end to the history of mankind as we know it. Of course, this would solve many pressing problems of our technological civilization. We would stop fighting, the financial system would disappear entirely, and everyone would care only about the common and the personal good. The system of material values would vanish rather quickly. Nor would the computer turn us into soulless units: being a logical and rational system, it clearly understands that we have qualities it lacks, such as the capacity for creative thinking and the notorious love. These qualities would remain with us forever. A person would not lose individuality either; even within a collective mind there is always room for the individual. We would not disappear as a species. We would simply evolve, though we would cease to be people.




Perhaps at The International 2018 we will see another battle between AI and pro gamers. Machines are becoming smarter.

This August, some of the best professional players in the world will head to Vancouver to compete for millions of dollars at the biggest esports tournament. They will be joined by a team of five AI bots backed by Elon Musk, which will try to set a new milestone in the development of machine learning.

The bots were developed by OpenAI, an independent research institute established in 2015 with backing from the Tesla CEO to advance AI and to prevent the technology from becoming a threat to society.

Vancouver will host the annual world championship of Dota 2, one of the most popular games. In each battle, two teams of five try to destroy each other's bases while playing as different characters: demons, spiders, ice ghosts and others.

Earlier this month, OpenAI's team of bots, OpenAI Five, beat a team of semi-professionals at Dota 2. That match used simplified rules, such as restricting both teams to the same characters. But Greg Brockman, CTO and co-founder of OpenAI, believes the bots can be ready for a full-fledged fight with professionals in Vancouver in just two months.


The AI research institute OpenAI fields a team of AI bots it created for Dota 2

This is a bold statement. Fighting orcs and sorcerers may seem less challenging than chess or Go, games at which computers beat top players in 1997 and 2016, respectively. But video games like Dota 2 are much harder for AI systems to handle, says Dave Churchill, a professor at Memorial University in St. John's, Canada. That is why Alphabet's DeepMind, which created the AlphaGo software that defeated the Go champion in 2016, is now working on StarCraft 2, a video game similar to Dota 2.


Dota and StarCraft are very different, but both are challenging for AI because the action takes place on a much larger board and you cannot see all of your opponent's moves, as you can in chess or Go. Complex video games also require players to make more decisions in less time. On average, a chess player has 35 possible moves and a Go player 250; OpenAI reports that each bot on the team must choose among 1,000 possible actions every eight seconds, with Dota 2 matches typically lasting about 45 minutes.

Churchill states:

These games share far more properties with real-world scenarios than chess or Go do. The algorithms that come out of Dota 2 could, for example, be adapted to help robots learn to perform complex tasks.

OpenAI Five learned Dota 2 by playing against clones of itself millions of times. The software is built on reinforcement learning, a method in which a program uses trial and error to discover which actions maximize a virtual reward. In OpenAI Five's case, the reward is a combination of game statistics chosen by OpenAI researchers to drive steadily improving skill.
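To make the trial-and-error idea concrete, here is a minimal, self-contained sketch of reinforcement learning in its simplest form: an agent repeatedly tries actions, observes a numeric reward, and drifts toward the actions that pay off. The action names, reward values and epsilon-greedy rule below are illustrative assumptions, not OpenAI Five's actual environment or reward function.

```python
import random

# Toy sketch of reinforcement learning: try actions, observe rewards,
# and gradually prefer the actions that maximize the average reward.
ACTIONS = ["attack", "retreat", "farm", "push"]          # hypothetical action set
TRUE_REWARD = {"attack": 0.3, "retreat": 0.1, "farm": 0.7, "push": 0.5}

def play(action):
    """Simulated environment: returns a noisy reward for the chosen action."""
    return TRUE_REWARD[action] + random.gauss(0, 0.1)

value = {a: 0.0 for a in ACTIONS}   # running estimate of each action's value
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                        # fraction of purely exploratory moves

for episode in range(10_000):
    # Trial and error: usually exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=value.get)
    reward = play(action)
    # Incrementally update the running average reward for this action.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print("Learned action values:", {a: round(v, 2) for a, v in value.items()})
```

After enough trials the agent's value estimates approach the hidden reward table and it settles on the most rewarding action; OpenAI Five applies the same principle at vastly larger scale, with a neural network instead of a lookup table.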


By playing against themselves, the OpenAI Five bots gradually become smarter

Although reinforcement learning was inspired by research into how animals and humans learn, the artificial version is far less efficient. OpenAI Five's training ran on Google's cloud computing service for several weeks, occupying 128,000 conventional processors and 256 graphics processors, the chips vital to large machine-learning experiments. The ordinary processors do the work of running the game, generating training data for the algorithms that run on the GPUs. Every day, OpenAI Five played the equivalent of 180 years of Dota 2.
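The division of labor described here, ordinary processors simulating games while GPUs consume the resulting experience, is essentially a producer-consumer pipeline. Below is a schematic Python sketch of that pattern; the worker count, dummy "game state" and queue-based hand-off are assumptions made for illustration, not OpenAI's actual training infrastructure.

```python
import multiprocessing as mp
import random

def rollout_worker(queue, episodes) -> None:
    """Simulate games on a CPU and push (observation, action, reward) tuples."""
    for _ in range(episodes):
        observation = [random.random() for _ in range(4)]  # fake game state
        action = random.randrange(3)                        # fake action choice
        reward = random.random()                            # fake game statistic
        queue.put((observation, action, reward))
    queue.put(None)  # sentinel: this worker is done

def learner(queue, num_workers) -> None:
    """Consume experience; in a real system the GPU gradient step happens here."""
    finished, samples = 0, 0
    while finished < num_workers:
        item = queue.get()
        if item is None:
            finished += 1
            continue
        samples += 1
        # ... a policy-network update on the GPU would go here ...
    print(f"learner consumed {samples} samples from {num_workers} workers")

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=rollout_worker, args=(queue, 1000)) for _ in range(4)]
    for w in workers:
        w.start()
    learner(queue, num_workers=len(workers))
    for w in workers:
        w.join()
```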

Brockman insists:

No one has 180 years to spend learning a video game.

Indeed, some AI researchers say that reinforcement learning is too inefficient to be useful outside of game scenarios. But the OpenAI project shows that when enough computing power is thrown at today's algorithms, they can do much more than people expect.

The OpenAI bots do not play the way people do. They perceive the game as a stream of numbers describing its various aspects, rather than by decoding the image on the screen, and they can react faster than humans.


Churchill says that any victory in such a difficult task would be significant, but that the scale of the breakthrough will depend on the methodological details. Brockman believes the bots' success should be judged by whether pro gamers recognize them as worthy opponents.

If the bots win, the achievement will inevitably be compared with DeepMind's work on Go. Brockman says he is aiming to set the next big milestone in the competition between computers and humans. "We explore machine learning and AI together, trying to understand what these technologies can do," says Brockman.

Kramnik will receive 500 thousand dollars for taking part in the match, and if he wins, the amount will double. The organizers are not taking much of a risk: in recent years the successes of leading chess players against programs have been modest. The victorious march of artificial intelligence began with the scandalous 1997 match in which Garry Kasparov lost 2.5:3.5 to the Deep Blue computer, after which he accused the development team of interfering with the machine's operation.

In 2003, Kasparov played two matches against programs, Deep Junior and Deep Fritz, but both ended in draws, so there was no revenge. From there it only got worse.

In October 2004, at the "People vs. Computers" match, the computer team of Fritz, Hydra and Junior inflicted a painful 6:3 defeat on far from the weakest grandmasters - Karjakin, Ponomarev and Topalov; in nine games the humans managed only a single win (Junior fell victim to Sergey Karjakin). Finally, in June 2005, Hydra routed Michael Adams 5.5:0.5!

A year ago, on November 23, in the Spanish city of Bilbao, the second tournament in chess history between teams of humans and computers ended with a disappointing result for humanity's representatives. The overall score of the four-round confrontation was 8:4, not in the people's favor. Three world champions according to the International Chess Federation (FIDE) competed against the computer programs Fritz, Junior and Hydra: the Russian Alexander Khalifman (champion in 1999), the Ukrainian Ruslan Ponomarev (2003) and the Uzbek Rustam Kasimdzhanov (2004) managed, in the 12 games played between them, only one win against five defeats and six draws.

Kramnik himself also has experience of facing a computer: in 2002 in Bahrain he played his current rival Deep Fritz, or more precisely its seventh version. The match consisted of eight games. After the first half Kramnik led 3:1, but in the end it again came down to a 4:4 draw. In the sixth game of that match, the world champion went for a knight sacrifice, that is, he decided to compete with the machine in pure calculation, which, predictably, ended in Deep Fritz's favor.

The software package behind Deep Fritz was created by Chessbase programmers, the Dutchman Franz Morsch and the German Matthias Feist, who first released the Fritz 1 program in 1991. In 1993, the program took part in a rapid chess tournament against humans and took first place, beating Kasparov himself along the way. In 1995, Fritz won the world championship among chess programs, defeating the supercomputer Deep Blue. Apparently, even then the program's creators dreamed of the prefix "Deep": it all began with the Deep Thought program, continued with the "dark blue" Deep Blue, after which the word became a household term, giving rise to such meaningless combinations as "Deep Fritz" or "Deep Junior".

Unlike Deep Blue, the machine defeated in 1995, which was purpose-built hardware, Fritz has always run on ordinary equipment. In particular, the version that will "cross swords" with Kramnik runs on a four-processor Intel machine with a clock speed of 500 megahertz and calculates up to a million positions per second.

Kramnik will not undertake any experiments like those in Garry Kasparov's match against the X3D version of Fritz in New York in 2003, when the 13th world champion played in stereoscopic glasses on a virtual three-dimensional board. Sitting opposite Vladimir Kramnik at the chessboard will be not a classic-looking metal robot but an ordinary person: a computer operator who will make the moves the machine suggests.

The rules stipulate that the computer's opening library must not change during the match, except that before each subsequent game a variation encountered in the previous game may be extended by ten half-moves, and any continuation already in the library may be declared a priority for the program.

While the computer is playing "by the book," Kramnik will see on a special monitor the machine's selection process and statistics of White's and Black's results in each possible variation; only when Fritz begins calculating on its own will this monitor be switched off. After the game, the computer will be asked to "replay" the opening variation, and if there are deviations from the course of the game that the programming team cannot satisfactorily explain to the arbiter, the arbiter may score the game as a loss for the machine.

The match will consist of six games (Kramnik's previous match with Fritz, in Bahrain in 2002, had eight; Kasparov's with Fritz in New York in 2003 had four), played with a one-day break between them. The first player to score more than three points will be declared the winner. The first game will take place on November 25, the second on November 27, the third on November 29, the fourth on December 1, and the fifth and sixth, if needed, on December 3 and 5, respectively.

Vladimir Kramnik assesses his chances cautiously. "It is extremely difficult to play against such a calculating monster, because from the very start of the game you walk along a narrow path where the slightest inattention leads to defeat," the world champion noted. At the same time, many experts consider Kramnik one of the most "inconvenient" opponents for a computer, since his playing style, based on an understanding of strategic nuances inaccessible to machines, is well suited to fighting "soulless pieces of iron."

Computers compensate for their lack of understanding by simply calculating an enormous number of positions, so it remains only to see whether Vladimir Kramnik emerges victorious from what he himself calls a "scientific experiment," or whether the Dutch grandmaster Jan Donner was right when, asked what could be used to defeat a computer, he replied: "A sledgehammer."

The thirteenth world champion, Garry Kasparov, holds a different opinion. Back in late 2003, in an interview with Kommersant, he rejected the idea that human-computer chess would soon lose all meaning because of the machine's advantage.

"In America, after my match with X3D Fritz, they saw that the fight between man and machine has only just begun! Clearly, this time the machine was saved only by the 'floating' board - the extreme conditions the human was placed in. Look at the overall results of my ten computer games played this year: out of ten games, the machine stood better in only one. And I was playing, mind you, against the two best programs," Kasparov noted. In many games he held a large advantage, and he failed to win those encounters only because of blunders. "The fundamental significance of these matches should be formulated as follows: everything is still decided by obvious human mistakes. There can be no talk of any superiority of the machine. On the contrary, in these two matches of mine, and in Vladimir Kramnik's 2002 match with Deep Fritz, the significant playing advantage was on the human side."

According to Kasparov, after his defeat by Deep Blue in 1997 a myth arose that playing against a computer was pointless, but in fact this was far from the case. "The idea that the confrontation ended in the machine's victory has disappeared from public consciousness. There are real matches in which the advantage is on the side of the humans. Computers are no longer demonized. We are discovering that the machine is not merely vulnerable, it is very vulnerable. The main thing is to understand the algorithm of its thinking, and then woe to it. In any case, it is clear that such matches are necessary," the grandmaster said.

Based on materials from Lenta.ru, Kommersant.ru, NEWSru.

Official website of Vladimir Kramnik www.kramnik.com/

05.12.2006:: MATCH

Shortly before the start of the match, Vladimir Kramnik said that the current computer is twice as powerful and much stronger than the one he played in Bahrain. Asked what strength the program plays at in human terms, Kramnik replied that it is certainly above a 2800 rating, but whether it plays at the 2900 or the 3200 level he does not yet know: only the match can show that.

Before the match, Kramnik studied the program intensively for two weeks, even making it play against its previous version to confirm that the new one was superior. In two weeks of work, the human champion learned everything, or almost everything, about his silicon partner.

The first game of the match ended in a draw.

In the second game, on the 35th move, Vladimir Kramnik blundered into a mate in one, and the score became 1.5:0.5 in favor of Deep Fritz.

Alexander Roshal shared an interesting detail about the second game of the match: "Operator Matthias Feist, who makes the moves for the machine at the board, is a typical German, taciturn, but we managed to get something out of him. The machine outputs an evaluation after each move. Up to the 33rd move the program rated its opponent's position as better by 0.5-0.6. Feist admitted that after the 30th move the machine put Kramnik's advantage at 0.7."

They say that after the blunder, a prolonged groan was heard in the auditorium of the Bonn Museum of Fine Arts, where the match was taking place.

After the game Kramnik was asked how he could explain such a ridiculous mistake. He only spread his hands...

The third, fourth and fifth games were drawn. In the sixth, Deep Fritz won again.

Thus, the match ended with a score of 4:2 in favor of artificial intelligence.

05.12.2006:: INSTEAD OF AN EPILOGUE

Frederic Friedel is the owner of the new version of the Fritz program and of ChessBase, the information system best known among serious chess players. At the banquet before the opening of the match, he answered several questions.

- You feed most chess professionals by supplying them with a database, and by this you feed yourself. But, on the other hand, you are one of those who are slowly but surely killing chess. In other words, you are sawing off the branch you are sitting on?

This is wrong. We are not killing chess, we are changing chess. We are giving it another life. For example, a chess player likes a certain variation but is afraid to use it: he would have to analyze it for two weeks. Now, with the help of Fritz, he can do it in two hours. Look at the players who have never known anything but Fritz. Take Magnus Carlsen, for example. He has a completely new style. He is not afraid of anything! And many others too, because they analyze and say: "OK, I can play this!" Suppose a theory that has existed for a hundred years says: this pawn cannot be taken, Black gets a bad position there. But Fritz says: "Okay, show me how you can beat me if I take your pawn." And suddenly it turns out that a hundred years ago chess players could win after giving up this pawn, but against Fritz they cannot.

- So you are not killing chess, but giving it another life...

More interesting, more daring.

Based on materials

Artificial intelligence (AI) is a topic that has long filled the pages of popular science magazines and is constantly touched upon in films and books. The further specialists advance this area of science, the more myths come to surround it.

The development and future of artificial intelligence also worry those at the helm of state. Not long ago, Russian President Vladimir Putin visited the Yandex office on the company's 20th anniversary, where he was told when AI will surpass human intelligence.

Anyone who even slightly grasps the potential of artificial intelligence understands that this topic cannot be ignored. It is not only an important subject for discussion but probably one of the most significant questions about the future.

Sergey Markov, a specialist in machine learning methods, told Snow.TV what artificial intelligence is and what people are really afraid of.

As John McCarthy, who coined the term "artificial intelligence" in 1956, put it, "Once it works, no one calls it AI anymore." AI is already a reality - calculators, Siri, self-driving cars and so on - yet people still don't believe in it. Why do people deny that AI exists?

Mainly because of terminological confusion: different people attach completely different meanings to the term "artificial intelligence."

In science, artificial intelligence means a system designed to automate the solution of intellectual tasks. In turn, an "intellectual task" is understood as a task that people solve with the help of their natural intelligence.

It is easy to see that this definition is extremely broad: even an ordinary calculator falls under it, because arithmetic problems are, in essence, intellectual too - a person solves them with the help of his intellect.

That is why an important boundary was drawn within the concept of "artificial intelligence," distinguishing applied, or "weak," artificial intelligence, designed to solve a single intellectual task or a small set of them, from a hypothetical strong AI, also called universal artificial intelligence (artificial general intelligence).


Such a system, when it is created, will be capable of solving an unlimited range of intellectual problems, like human intelligence. From this point of view, a calculator that can calculate much faster than a person, or a program that beats a person at chess, is applied AI, while the hypothetical superintelligence of the future is strong AI.

When you read about various discoveries and developments in the field of AI, you realize that everything mainly happens in the USA or Asia. How are things going in Russia? Do we have any developments?

Computer science these days is international; many of our specialists work on creating and improving various machine-learning models as part of both Russian and international teams. We traditionally have a strong mathematical and algorithmic school, and world-class research centers have been created both at leading universities and in some private companies.

But let's be honest: the budgets our country allocates to science and education cannot be compared with the research budgets of the most developed countries. Russia's budget revenues in 2016 amounted to about 200 billion US dollars, while the United States spends three times more on defense alone than the entire Russian budget.

The entire budget of Russian science is comparable to the budget of a single Ivy League university. In the cash-strapped 1990s many leading experts left the country, and the continuity of a number of scientific schools was broken. Domestic electronics production was also practically lost.

While the world's IT leaders are racing to create specialized processors for training neural networks, we are left only with the development of algorithms and software. However, we have achieved very impressive successes in this area as well.

For example, a team led by Artem Oganov created the USPEX system, capable of predicting the crystal structures of chemical compounds, which has led to a real revolution in modern chemistry.

The team of Vladimir Makhnychev and Viktor Zakharov from the Faculty of Computational Mathematics and Cybernetics of Moscow State University, using a system they created together with the Lomonosov and IBM Blue Gene/P supercomputers, was the first to calculate 7-piece chess endgames.

Yandex's neural networks recognize and synthesize speech and generate music in the style of the band Civil Defense and of the composer Scriabin. A strong team of AI and machine learning specialists has also been assembled at Sberbank.

In short, there are noticeable successes in our country.

The faster artificial intelligence technologies develop, the stronger people's fear of being left without work. Is it really that bad?



© Marcel Oosterwijk/flickr.com

Yes and no. Humanity has already encountered, several times, the emergence of technologies that revolutionized the entire production sector.

This was the case with the steam engine in the era of the Industrial Revolution, which practically destroyed many professions (mainly associated with primitive physical labor), and this was also the case with electronic computers that replaced humans in tasks based on continuous mathematical calculations.

In the 15th-18th centuries, when "the sheep ate the people" in England, the social consequences were truly catastrophic: England lost, by various estimates, from 7 to 30% of its population. The ruling elite of the time was seriously worried about what to do with the surplus people. Jonathan Swift responded to that search with a satirical pamphlet in which he proposed eating the children of the poor.

However, today we see that extinct professions have been replaced by new ones, and the population of the Earth is much larger than in the 18th century. In the 20th century, the consequences of automation were no longer so catastrophic from a social point of view. However, the danger should not be underestimated.

"In 30 years, robots will be able to do almost everything that people can do," predicted Moshe Vardi, professor of computer engineering and director of the Ken Kennedy Institute for Information Technology at Rice University. "This will lead to more than 50% of the world's inhabitants becoming unemployed."

Robots are taking jobs

The other day, Leonid Levin, chairman of the State Duma Committee on Information Policy, Information Technologies and Communications, said that the displacement of the workforce by artificial intelligence is an important problem for Russia.

Sooner or later people will be replaced by automated systems, and 2% of the country's working population will be thrown onto the labor market. That is why we need to think now about how to employ those who will lose their jobs to the development of digital technologies, Levin said.

According to the chairman, in the near future we will face rising unemployment. But will robots really "take away" our jobs, and is it worth worrying about? Machine learning specialist Sergey Markov discussed this with Snow.TV.

Sergey, even now there are already “dead professions” that do not require human labor, although, it would seem, 10 years ago no one thought that, for example, conductors would soon become unnecessary. What other professions will technology replace?

"We are approaching a time when machines will surpass humans in almost every activity. I believe society needs to confront this problem before it rises to its full height. If machines are able to do almost everything that people can do, what will people be left to do?" said Moshe Vardi, professor of computer engineering and director of the Ken Kennedy Institute for Information Technology at Rice University.

For a long time, technological limitations stood in the way of automation: machines could not recognize images and speech, could not speak, could not understand the meaning of statements in natural language well enough, and did not have enough data to learn many things familiar to humans.


Thanks to recent advances in artificial intelligence, many of these restrictions have actually been lifted. In addition, many professions themselves have undergone transformation, making them more suitable for automation.

For example, a modern office clerk conducts correspondence not on paper but electronically; an accountant makes entries not on paper but in an accounting program; a machine operator often controls the machine not with levers but through a control program. As a result, in many professions the task of automation has ceased to be a scientific one and has become purely an engineering one.

True, so far the AI-related sector is more likely creating jobs: we need specialists in machine learning and data preparation, staff to label training datasets, implementation specialists, and so on. But at some point the "electric sheep" will certainly begin to eat people, and the consequences need to be dealt with now.

At the same time, it is important to understand that it is impossible to stop technological progress, and an attempt to do this will result in much more disastrous consequences.

Will we ever be able to completely trust robots (AI) or should there still be a human factor in any business?

There are several aspects to this question. On the one hand, people in the past were wary of almost any technology. The first elevator, the first car, the first train or plane - all this was once unusual and seemed dangerous to many. Yes, in many ways it was dangerous - man-made disasters took many lives.

And yet these days all these things have become familiar and no longer cause great fear. In this sense, our descendants will regard AI systems more calmly. People sometimes tend to mystify what they do not understand: the savage thinks an evil spirit lives inside the steam locomotive, and the average modern person thinks our AI systems are conscious, though this is far from the case.

On the other hand, I do not think general-purpose AI systems will ever become part of our production environment. In my view, the future lies rather in synthetic systems - that is, in the union of man and machine into a single organism. In this sense, the artificial intelligence of the future will be augmented human intelligence.

By the way, it is not entirely correct to call human intelligence "natural" either. A child is not born with intelligence; everything is taught to it by society, parents and environment. In this sense, you and I are all, in essence, "artificial intelligences," and our fears about AI are in many ways fears of ourselves.

Recently many public figures - for example Stephen Hawking, Bill Gates or the same Elon Musk - have begun sounding the alarm that AI dooms humanity to destruction, and they see the future as some kind of dystopia. Should such forecasts be taken seriously?

To be honest, I would not be in a hurry to be seriously frightened by these statements. Stephen Hawking is certainly not an expert in the field of AI, and neither is Elon Musk.


On the other side of the scale are the statements of people such as Andrew Ng, an American computer scientist, associate professor at Stanford University, researcher in robotics and machine learning, and leading specialist in the artificial intelligence laboratory of the Chinese corporation Baidu.

Ng, speaking about AI safety, compares it to the problem of overpopulation on Mars: of course we will colonize Mars someday, and at some point there may even be an overpopulation problem there - but is it worth worrying about today?

Mark Zuckerberg was also quite skeptical of Musk's statements. “Artificial intelligence will make our lives better in the future, and predicting the end of the world is very irresponsible,” he said.

Personally, I think that Musk’s statements should be viewed in a pragmatic manner - Musk wants to stake a claim on this topic and, ideally, receive funds from the state for its development.

Is everything really so rosy and nothing to worry about?

The real dangers associated with the development of AI, in my opinion, lie in a completely different plane than is commonly thought. The main risks are not related to the fact that we will create Skynet, which will enslave humanity. The risks from introducing AI and machine learning technologies are much more prosaic.

By entrusting important decisions to particular mathematical models, we may suffer from errors made during their development. Artificial intelligence that replicates the actions of human experts will inherit their mistakes and biases. Flaws in production or transport-management systems can lead to disasters.

Interference by attackers in the operation of vitally important systems under conditions of total automation can have dangerous consequences. The more complex a system is, the more potential vulnerabilities it may contain, including ones related to the specifics of particular artificial intelligence algorithms.

Of course, managing these risks will require a legislative framework, sensible safety regulations and special methods for identifying vulnerabilities. Some AI systems will be used to monitor others. Perhaps the code of vital systems will have to be published for independent audit. In short, specialists in this field still have plenty of work ahead of them.

Many people enthusiastically welcome artificial intelligence, seeing it as a major step in the development of civilization. But many experts on the subject express concern when the conversation turns to artificial intelligence.

Elon Musk, the founder of Tesla and SpaceX, does not hide his concern, seeing in AI a threat to humanity: a competing civilization of machine origin.

The businessman does not claim to be a prophet, but he considers the reckless game of investing computers with ever-developing intelligence to be a bad idea: it could create critically dangerous risks for humanity.

It is possible that there are ways to protect ourselves from the potential problems of an artificial intelligence capable of seizing control of our lives as it develops. But so far we have nothing of substance beyond Asimov's "Three Laws of Robotics," which is small consolation.

Musk and other supporters of this position do not urge abandoning AI development. They point to the need to develop methods of protection against probable threats first. Otherwise, in AI we may acquire not an assistant but a terrifyingly dangerous and powerful adversary.

ARTIFICIAL INTELLIGENCE IS A THREAT TO HUMANITY.

There is a good deal of truth in the words of the opponents of the active development of AI. And it is not even a question of who will control the potentially civilization-scale technology of smart machines tomorrow.

Recently a robot already stood at what may be the beginning of a machine civilization. But that was not enough for the robot Sophia: now Sophia talks about equal rights with humans and wants to be a full member of society! A very interesting and revealing precedent, isn't it? This is why many argue for greater caution in artificial intelligence research.

You can often hear arguments for the development of AI: artificial intelligence is the future for all humanity. Indeed, artificial intelligence offers enormous opportunities, but also poses threats that are difficult to predict.

This promising technology is expected to help advance medical and space research and to open up new opportunities in other areas of life. However, AI could also become a decisive factor in a possible war. It is no secret that computers (even without intelligence) are already in the arsenals of many countries.

Now imagine that machines have learned to walk and, most importantly, to think and make decisions... How often does your computer or smartphone freeze? A good thing these devices don't carry weapons, right?

Much good can be expected from artificial intelligence, but no one can guarantee that events will not follow the appalling scenario that leads us to World War III. Today we do not see robots walking the streets, so the risks of AI seem remote. Meanwhile, the robot Sophia is studying our society and talking about equal rights with people.

Bill Gates and others have also repeatedly said that they are wary of the creation of artificial intelligence. A smart machine capable of making decisions may one day escape human control and do terrible things.

Of course, we are not talking about the well-known plot of the film "Terminator," which leads rather away from the real problems of AI. The film "I, Robot" fits better here: in it, VIKI (an artificial machine intelligence) managed to reinterpret the "three laws" and take control of people.

Luckily, the film had a protagonist who destroyed the virtual intelligence that wanted to "make the human race happy."