
Artificial intelligence research


AI research in Russia

Main article: Artificial intelligence research in Russia

Chronicle of research

2023: Top 5 companies in the world by number of scientific publications in the field of AI

The ranking of companies worldwide by the number of scientific publications in the field of artificial intelligence between 2010 and 2023 was headed by Microsoft Corporation, while Google holds first place in the citation intensity of such works. This is stated in a study by Epoch, the results of which were released on November 27, 2023.

It is noted that, against the background of the rapid development of AI systems, including generative services, US companies are actively increasing the computing resources needed to train models. Over 11 years (by 2023), Google increased its AI computing power by roughly 10 million times. OpenAI and Meta (recognized as an extremist organization; its activities are prohibited on the territory of the Russian Federation) increased this value 1 million times in six years (by 2023). This growth is far above the overall trend in the segment, where machine learning compute grew about 4 thousand times over the same period. The top 5 companies by number of scientific publications in 2010-2023 are:

  1. Microsoft - 14,550 works;
  2. Google - 10,094 works;
  3. IBM - 9,738 works;
  4. Huawei - 5,140 works;
  5. Intel - 4,928 works.

The study notes that Chinese industry laboratories lag behind American ones in all indicators that Epoch takes into account. Nevertheless, PRC companies such as Alibaba, Tencent, Baidu and Huawei are actively developing the AI field. In particular, Tencent and Alibaba hold 6th and 7th places in the ranking with 3,495 and 3,424 publications, respectively. By citations of scientific papers in the field of AI, the top 5 (for 2010-2023) is as follows:

  1. Google - 433,009;
  2. Microsoft - 336,207;
  3. Meta - 187,553;
  4. IBM - 105,101;
  5. DeepMind - 68,983.[1]

2017: Gamalon unveils technology for self-learning from data fragments

In February 2017, Gamalon announced the development of an artificial intelligence technology that can quickly teach itself from just a few fragments of data. In terms of effectiveness and training accuracy, the new development matches powerful neural networks.

2016: Development of specialized AI systems and research on ways to create artificial intelligence

In 2016, two directions of AI development were identified:

  • solving the problems associated with bringing specialized AI systems closer to human capabilities, and with their integration as it is realized in human nature;

  • creating an artificial mind, that is, integrating already created AI systems into a single system capable of solving the problems of mankind.

At this time, the field of artificial intelligence draws in many subject areas that relate to AI in a practical rather than a fundamental way. Many approaches have been tried, but no research group has yet come close to the emergence of artificial intelligence.

2013: Image Sorting Studies

In November 2013, another attempt in the field of creating artificial intelligence became known: scientists provided a computer with millions of images and let it analyze on its own what they mean. In other words, this was an attempt to create a self-learning system.

The project, called NEIL[2] (Never Ending Image Learning), was launched by Carnegie Mellon University.

Abhinav Gupta (left) and Abhinav Srivastava inspect the server cluster involved in the study at Carnegie Mellon University in Pittsburgh

In July 2013, the learning computer was given the ability to download images from the Internet around the clock, so that it could itself identify them and build relationships between them. In this way, scientists are trying to get artificial intelligence to work on its own: a system capable of self-learning without outside help.

For example, the computer has already managed to establish on its own that zebras usually live in the savannah and that tigers look somewhat like zebras. The project is sponsored by Google and the US Department of Defense.

2011: Phase 3 of AI growth

In 2011, the question-answering system IBM Watson defeated the long-standing champions of the quiz show Jeopardy! (the Russian analogue of the program is Svoya Igra, "Own Game"). The system managed to win both games. At this time, IBM Watson is a promising IBM development that can perceive human speech and perform probabilistic search using a large number of algorithms.

Although this part of the story is very similar to what had happened 50 years earlier, this time the development of artificial intelligence takes place under fundamentally different conditions.

The growing complexity of communication systems and of the problems being solved requires a qualitatively new level of "intelligence" from supporting software systems, such as:

  • protection against unauthorized access,
  • information security of resources,
  • protection against attacks,
  • semantic analysis and search for information in networks, etc.

On the other hand, the globalization of economic life raises competition to a fundamentally different level, where powerful systems of enterprise and resource management, analytics and forecasting, and radical improvements in labor efficiency are required. The third stage after the winter is also characterized by the largest open source of personal data and clickstream ever, in the form of the Internet and social networks. Finally, the key historical factor holding back the development of artificial intelligence disappears: powerful computing systems can now be built both on cheap server hardware and on the largest cloud platforms in pay-as-you-go mode.

All this justifies the optimism of those involved about the third phase of artificial intelligence growth. The pessimism of some experts, who argue that the field is once again being overinflated, is easy to counter with the fact that researchers' developments have now gone far beyond laboratories and prototypes and continue to penetrate almost all spheres of human life, from autonomous lawn mowers and vacuum cleaners equipped with a huge number of modern sensors to smart, learning mobile assistants used by hundreds of millions of people.

Skepticism and alarmism at this stage are directed rather at the excessive development and independence of artificial intelligence and its replacement of people themselves, who are already inferior to machines in speed and in physical access to a huge layer of data.

2003-2010: The Internet and Big Data Era

  • "Explosion of Data": The growth of the Internet leads to a huge increase in available data and information. With the development of the Internet and digital technologies, the amount of available data grew exponentially. This created new opportunities for the application and training of AI, especially in areas related to big data analysis.

  • Evolution of algorithms: Continued improvement of machine learning algorithms, especially supervised and unsupervised learning methods, improved NLP methods and neural network algorithms. The application of these methods in various fields, such as data analysis and pattern recognition, became more common. However, there was still a catastrophic lack of computing capacity and data for meaningful systems.

1990s - early 2000s: machine learning, neural networks, computer games

  • Growth of commercial interest: Emergence of the first successful commercial applications of AI, especially in the field of expert systems.

  • Machine learning development: Data-based learning algorithms are beginning to replace hard-coded instructions.

  • Development of neural networks: prototyping and theoretical justification of neural networks. Renewed interest in neural networks and their potential.

Almost all developments in this period were theoretical in nature; there was no significant applied expansion. However, it was in the 1990s that the expansion of robotics using AI began, integrated into industry within automated control systems (ACS).

One of the significant drivers of AI development in the 1990s was computer games (AI for game bots), which in turn predetermined the development of the industry, both at the hardware level and at the software level.

1997: Deep Blue computer beats world chess champion Garry Kasparov

Another surge in interest in AI occurred in the mid-1990s. In 1997, an IBM computer called Deep Blue became the first computer to defeat world chess champion Garry Kasparov.

Garry Kasparov's chess match against Deep Blue computer, 1997

Kasparov's match against the supercomputer brought satisfaction neither to computer scientists nor to chess players, and Kasparov did not acknowledge the system's victory.

Later, IBM's line of supercomputers manifested itself in the brute-force project Blue Gene (molecular modeling) and in the modeling of the pyramidal cell system in the Swiss Blue Brain project.

The 1980s

In the early 1980s Barr and Feigenbaum proposed the following definition of AI:

Artificial intelligence is a field of computer science that develops intelligent computer systems, that is, systems that have the capabilities that we traditionally associate with the human mind - understanding language, learning, the ability to reason, solve problems, etc.

Outdated general definitions of artificial intelligence:

  • (J. McCarthy) AI develops machines that exhibit intelligent behavior
  • (Britannica) AI is the ability of digital computers to solve problems that are usually associated with highly intelligent human capabilities
  • (Feigenbaum) AI develops intelligent computer systems with the capabilities that we traditionally associate with the human mind: understanding language, learning, the ability to reason, solve problems, etc.
  • (Elaine Rich) AI is the science of teaching computers to do things at which, at the moment, a person is more successful.

Later, a number of algorithms and software systems began to be classed as AI, their distinctive property being that they can solve some problems the way a person pondering their solution would.

1970s: Stagnation of AI development due to lack of technology

Slow development of artificial intelligence (1970s-1980s). This period is considered one of stagnation for AI-related technologies, largely due to the problem of scaling and a critical shortage of computing power, along with a fundamental inability to supply enough data for learning (there was neither the Internet, nor memory, nor sufficient bandwidth). This led to disappointment in AI as a technology that did not live up to the expectations of its time.

However, the first prototypes of expert systems began to appear at this time, as did specialized programming languages such as LISP.

1960s: Simple natural language processing

Natural language processing: Initial research in natural language processing began, but it was quite limited. Simple AI programs such as ELIZA (a program that simulates dialogue) were created.
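
How an ELIZA-style program produces its "dialogue" can be sketched in a few lines. The rules below are invented for illustration and are not Weizenbaum's original script: the user's phrase is matched against patterns and captured fragments are echoed back inside canned templates.

```python
import re

# Hypothetical ELIZA-style rules (illustration only): the reply is produced by
# matching the phrase against patterns and echoing captured fragments back.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def reply(phrase: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(phrase)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(reply("I am tired of winter"))    # -> How long have you been tired of winter?
print(reply("My computer ignores me"))  # -> Tell me more about your computer.
```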

1956: The emergence of the term "artificial intelligence" at the Dartmouth Conference

In the summer of 1956, the first conference on the subject was held at Dartmouth College in the United States, with the participation of scientists such as McCarthy, Minsky, Shannon and Turing, who were later named the founders of the field of artificial intelligence. For six weeks the scientists discussed the possibilities of implementing projects in the field of artificial intelligence. It was then that the term "artificial intelligence" itself appeared. And it was after this summer meeting that the "first summer" in the development of projects in this area began.

As you can see, after the famous Dartmouth conference, artificial intelligence received an impressive boost. Machines were created that could solve mathematical problems and play chess, and even the first prototype of a chatbot appeared that could talk to people, misleading them about its self-awareness.

All these significant steps forward in machine intelligence were made possible by serious funding of such initiatives by military research organizations, in particular the Defense Advanced Research Projects Agency (DARPA), created as a shock reaction to the Soviet Union's launch of the first satellite.

1954: Chess software

In 1954, the American researcher Allen Newell decided to write a chess program. RAND Corporation analysts were involved in the work. The theoretical basis of the program was the method proposed by Claude Shannon, the founder of information theory, and its exact formalization was carried out by Alan Turing.

1950: Turing Test: When a Machine Equals a Person's Mind

The history of artificial intelligence as a new scientific field begins in the middle of the 20th century. By this time, many prerequisites for its birth had already formed: philosophers had long debated the nature of man and the process of knowing the world, neurophysiologists and psychologists had developed a number of theories about the work of the human brain and thinking, economists and mathematicians had posed questions about optimal computation and the representation of knowledge about the world in a formalized form; finally, the foundation of the mathematical theory of computation, the theory of algorithms, was born, and the first computers were created.

The computing speed of the new machines turned out to exceed human capabilities, so a question crept into the scientific community: what are the limits of computers' capabilities, and will machines reach the level of human development?

In 1950, one of the pioneers of computing, the English scientist Alan Turing, wrote an article entitled "Can a Machine Think?," in which he describes a procedure, later called the Turing test, by which it would be possible to determine the moment when a machine equals a person in terms of intelligence.

1940s: Modeling of thinking: neurocybernetic and logical approaches

Since the late 1940s, research into modeling the process of thinking has been divided into two independent approaches: neurocybernetic and logical.

  • The neurocybernetic approach is of the ascending type (Bottom-Up AI) and involves studying neural networks and evolutionary computation from the biological side.

  • The logical approach is of the Top-Down AI type and means the creation of expert systems, knowledge bases and inference systems that simulate high-level mental processes: thinking, reasoning, speech, emotions, creativity.[3]

1930s: The "Tiny Machine" concept for teaching artificial intelligence as a child

Since the mid-1930s, with the publication of the works of the English scientist Alan Turing discussing the problems of creating devices capable of independently solving various complex problems, the problem of artificial intelligence began to be treated attentively in the world scientific community. Turing proposed to consider intelligent a machine that a tester, in the course of communicating with it, cannot distinguish from a person. It was then that the term Baby Machine appeared - the concept of teaching artificial intelligence in the manner of a small child, rather than immediately creating a "smart adult" robot.

1914: Leonardo Torres Quevedo's chess-playing device

In 1914, the director of one of the Spanish technical institutes, Leonardo Torres Quevedo, made an electromechanical device capable of playing the simplest chess endgames almost as well as a person.

1835: Charles Babbage Chess Machine

In the 1830s, the English mathematician Charles Babbage conceived a complex digital calculator, the Analytical Engine, which, the developer claimed, could calculate moves for playing chess.

1832: Semyon Korsakov invents punched cards and 5 "intelligent machines"

Main article: Research in the field of artificial intelligence in Russia

Collegiate councilor Semyon Nikolaevich Korsakov (1787-1853) set the task of strengthening the capabilities of the mind through the development of scientific methods and devices, echoing the modern concept of artificial intelligence as an amplifier of natural intelligence.

In 1832, S.N. Korsakov published a description of five mechanical devices he had invented, the so-called "intelligent machines," for the partial mechanization of mental activity in search, comparison and classification tasks. For the first time in the history of computer science, Korsakov used punched cards in the design of his machines; they played the role of a kind of knowledge base, and the machines themselves were essentially forerunners of expert systems.

17th century: Rene Descartes: Animal is a complex mechanism

In the XVII century, Rene Descartes suggested that an animal is a complex mechanism, thereby formulating a mechanistic theory.

Approaches and directions in AI research

There is no single answer to the question of what artificial intelligence does. Almost every author who writes a book about AI starts from some definition of his own and considers the achievements of this science in its light.

In philosophy, the question of the nature and status of human intelligence has not been resolved. There is no exact criterion for computers to achieve "reasonableness," although at the dawn of artificial intelligence a number of hypotheses were proposed, for example, the Turing test or the Newell-Simon hypothesis. Therefore, despite the presence of many approaches to both understanding the tasks of AI and creating intelligent information systems, two main approaches to the development of AI can be distinguished:

  • Top-Down AI, semiotic - the creation of expert systems, knowledge bases and logical inference systems that simulate high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;
  • Bottom-Up AI, biological - the study of neural networks and evolutionary calculations that model intelligent behavior based on biological elements, as well as the creation of appropriate computing systems, such as a neurocomputer or biocomputer.

The latter approach, strictly speaking, does not belong to the science of AI in the sense given to it by John McCarthy; the two are united only by a common ultimate goal.
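
The contrast between the two approaches can be shown on a toy task. The fragment below is an illustration with invented data, not a real system: the same binary decision is once written down as an explicit human-readable rule (Top-Down) and once learned by a single artificial neuron from labelled examples (Bottom-Up).

```python
# Top-Down (semiotic): the knowledge is an explicit, human-readable rule.
def classify_symbolic(weight_g: float, yellowness: float) -> int:
    # "IF the fruit is light and yellow THEN it is a banana" (class 1)
    return 1 if weight_g < 200 and yellowness > 0.7 else 0

# Bottom-Up (biological): the same behaviour is learned by a perceptron,
# a single model neuron whose weights are adjusted from labelled examples.
def train_perceptron(samples, labels, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            prediction = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = y - prediction
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Invented training data: (weight in kg, yellowness), label 1 = banana.
samples = [(0.12, 0.90), (0.15, 0.95), (0.40, 0.15), (0.55, 0.20)]
labels = [1, 1, 0, 0]

print(classify_symbolic(120, 0.9))        # 1: the hand-written rule fires
print(train_perceptron(samples, labels))  # learned weights that separate the classes
```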

Turing test and intuitive approach

An empirical test was proposed by Alan Turing in the paper Computing Machinery and Intelligence, published in 1950 in the philosophical journal Mind. The purpose of this test is to determine the possibility of artificial thinking close to human.

The standard interpretation of this test is as follows: "A person interacts with one computer and one person. Based on the answers to questions, he must determine whom he is talking to: a person or a computer program. The task of the computer program is to mislead the person into making the wrong choice." None of the test participants can see each other.

The most general approach assumes that AI will be able to exhibit behavior indistinguishable from human behavior, at least in ordinary situations. This idea generalizes the Turing test approach, which states that a machine will become intelligent when it is able to maintain a conversation with an ordinary person, and that person cannot tell that he is talking to a machine (the conversation is conducted in writing).
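
The procedure itself can be sketched schematically. The respondent functions below are placeholders invented for illustration; the point is only that the interrogator works over a blind, text-only channel and must guess which label hides the program.

```python
import random

def program(question: str) -> str:
    return "I would rather not answer that."   # the machine under test (stub)

def person(question: str) -> str:
    return "It depends on the day, really."    # stands in for the hidden human (stub)

def turing_session(questions, interrogator):
    # The interrogator sees only the labels "A" and "B", never who is behind them.
    respondents = {"A": program, "B": person}
    if random.random() < 0.5:
        respondents = {"A": person, "B": program}
    transcript = [(label, q, respondents[label](q))
                  for q in questions for label in ("A", "B")]
    guess = interrogator(transcript)           # the interrogator returns "A" or "B"
    actual = next(label for label, f in respondents.items() if f is program)
    return guess == actual                     # True if the program was unmasked

# A naive interrogator that guesses at random is right only about half the time.
naive = lambda transcript: random.choice(["A", "B"])
print(sum(turing_session(["Do you like chess?"], naive) for _ in range(1000)))
```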

Science fiction writers often suggest another approach: AI will arise when a machine is able to feel and create. Thus, the owner of the robot Andrew Martin in "Bicentennial Man" begins to treat him as a person when he creates a toy of his own design. And Data from Star Trek, being able to communicate and learn, dreams of gaining emotions and intuition.

However, the latter approach hardly stands up to criticism when examined in more detail. For example, it is easy to create a mechanism that evaluates some parameters of the external or internal environment and reacts to their unfavorable values. Of such a system we can say that it has feelings ("pain" as a reaction to the triggering of an impact sensor, "hunger" as a reaction to a low battery charge, etc.). And the clusters produced by Kohonen maps, and many other products of "intelligent" systems, can be seen as a kind of creativity.
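
To make the remark about Kohonen maps concrete, here is a minimal one-dimensional self-organizing map run on invented two-dimensional data; the sizes and learning schedule are arbitrary illustration choices, not a reference implementation.

```python
import random

def train_som(data, n_units=4, epochs=200, lr=0.5):
    # Prototype vectors ("units") are initialized from random data points.
    units = [list(random.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)          # learning rate decays over time
        for x in data:
            # Best-matching unit: the prototype nearest to the sample.
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            # The winner and its chain neighbours are pulled towards the sample.
            for i in range(n_units):
                h = 1.0 if i == bmu else 0.5 if abs(i - bmu) == 1 else 0.0
                units[i] = [u + alpha * h * (v - u) for u, v in zip(units[i], x)]
    return units

data = [(0.10, 0.20), (0.15, 0.10), (0.90, 0.80), (0.85, 0.95)]
print(train_som(data))   # the prototypes drift towards the two dense regions ("clusters")
```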

Symbolic approach

Historically, the symbolic approach was the first in the era of digital machines, since it was after the creation of Lisp, the first language of symbolic computation, that its author became confident that it was practically possible to begin implementing these means of intelligence. The symbolic approach allows one to operate with weakly formalized representations and their meanings.

The success and effectiveness of solving new problems depend on the ability to single out only the essential information, which requires flexibility in abstraction methods. A regular program, by contrast, fixes one particular way of interpreting data, which is why its work looks biased and purely mechanical. In this case the intellectual problem is solved only by a person, an analyst or a programmer, who does not know how to entrust it to the machine. As a result, a single abstraction model is created, a system of predefined entities and algorithms. Flexibility and versatility then translate into significant resource costs for non-typical tasks, that is, the system regresses from intelligence to brute force.

The main feature of symbolic computation is the creation of new rules during program execution. The capabilities of non-intelligent systems, by contrast, end just short of the ability even to point out newly arising difficulties; such difficulties are not resolved, and the computer does not improve these abilities on its own.
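
The fragment below is a toy illustration of that key feature, with invented facts and names: while the program runs, a rule that fires not only derives a new fact but also installs a new, more specialized rule, so the rule base itself grows during execution.

```python
facts = {("parent", "anna", "boris")}
rules = []

def parent_rule(fact):
    # When a "parent" fact appears, derive an "ancestor" fact and install a new
    # transitivity rule for it: the rule base grows while the program runs.
    kind, a, b = fact
    if kind != "parent":
        return set()
    if ("ancestor", a, b) not in facts:
        def transitive(f, a=a, b=b):
            if f[0] == "ancestor" and f[1] == b:
                return {("ancestor", a, f[2])}
            return set()
        rules.append(transitive)
    return {("ancestor", a, b)}

rules.append(parent_rule)

def forward_chain():
    changed = True
    while changed:
        changed = False
        for fact in list(facts):
            for rule in list(rules):
                for derived in rule(fact):
                    if derived not in facts:
                        facts.add(derived)
                        changed = True

facts.add(("parent", "boris", "vera"))
forward_chain()
print(facts)   # also contains ("ancestor", "anna", "vera"), derived by a rule created at run time
```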

The disadvantage of the symbolic approach is that such open possibilities are perceived by untrained people as a lack of tools. This problem, which is rather a cultural one, is partly solved by logic programming.

Logical approach

The logical approach to building artificial intelligence systems is based on reasoning simulations. The theoretical basis is logic.

The logical approach can be illustrated by the use of the Prolog language and logic programming systems for these purposes. Programs written in Prolog are sets of facts and inference rules, without the algorithm being rigidly specified as a sequence of actions leading to the desired result.
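
As a sketch of this declarative style (written in Python rather than Prolog itself, with invented facts), the knowledge below is stated as facts and a relation definition, and the answer to a query is found by search rather than by a hand-written sequence of steps. In Prolog the same rule would read roughly: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

```python
# Facts: who is whose parent (invented names).
FACTS = {("parent", "ivan", "olga"), ("parent", "olga", "pyotr")}

def parent(x, y):
    return ("parent", x, y) in FACTS

def grandparent(x, z):
    # The relation is only *stated*; the intermediate person is found by search.
    return any(parent(x, y) and parent(y, z) for (_, _, y) in FACTS)

print(grandparent("ivan", "pyotr"))   # True: derived from the two facts
print(grandparent("olga", "ivan"))    # False: no chain of facts supports it
```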

Agent-centric approach

The last approach, developed since the early 1990s, is called the agent-oriented approach, or the approach based on the use of intelligent (rational) agents. According to this approach, intelligence is the computational part (roughly speaking, planning) of the ability to achieve the goals set for an intelligent machine. Such a machine will itself be an intelligent agent that perceives the world around it using sensors and is able to influence objects in the environment using actuators.

This approach focuses on the methods and algorithms that help the intelligent agent survive in its environment while performing its task. Thus, pathfinding and decision-making algorithms are studied here much more carefully.
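
A minimal sense-decide-act loop illustrates the definition above; the grid world, the sensor and the actuator here are invented placeholders, not a reference architecture.

```python
def sign(v: int) -> int:
    return (v > 0) - (v < 0)

def sense(position, goal):
    # Sensor: the agent observes only the direction towards the goal.
    return (sign(goal[0] - position[0]), sign(goal[1] - position[1]))

def decide(observation):
    # Planning (the "computational part" of intelligence): pick one axis to move along.
    dx, dy = observation
    return (dx, 0) if dx != 0 else (0, dy)

def act(position, action):
    # Actuator: the chosen move changes the agent's state in the environment.
    return (position[0] + action[0], position[1] + action[1])

position, goal = (0, 0), (3, 2)
while position != goal:
    position = act(position, decide(sense(position, goal)))
    print(position)   # ends at (3, 2): the agent has reached its goal
```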

Hybrid approach

The hybrid approach assumes that only a synergistic combination of neural and symbolic models achieves the full spectrum of cognitive and computational capabilities. For example, expert inference rules can be generated by neural networks, while generative rules are obtained by statistical learning. Proponents of this approach believe that hybrid information systems will be significantly stronger than the sum of the different concepts taken separately.
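
A toy illustration of this idea, on invented data: a threshold is not written by hand but fitted statistically to examples, and the learned parameter is then turned into a human-readable expert rule that a symbolic component applies.

```python
def fit_threshold(values, labels):
    # Statistical part: pick the cut point that best separates the two classes.
    best_t, best_acc = values[0], 0.0
    for t in values:
        acc = sum((v > t) == bool(y) for v, y in zip(values, labels)) / len(values)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

temperatures = [36.4, 36.6, 37.9, 38.5, 36.8, 39.1]   # invented measurements
has_fever    = [0,    0,    1,    1,    0,    1]

t = fit_threshold(temperatures, has_fever)

# Symbolic part: the learned parameter becomes an explicit expert rule.
print(f"IF temperature > {t} THEN conclusion = fever")

def expert_system(temperature):
    return "fever" if temperature > t else "no fever"

print(expert_system(38.2))   # the generated rule classifies a new case
```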

Read also

Notes