The main articles are:
- Artificial intelligence in military affairs
- US Department of Defense (Pentagon)
- Artificial intelligence
- Artificial intelligence in the United States
2024: First successful AI tests to control combat aircraft
In early May 2024, the Associated Press, citing Pentagon representatives, reported that the United States Air Force had for the first time successfully tested artificial intelligence (AI) to control a combat aircraft: according to the publication, an F-16 independently conducted a training air battle. Read more here.
2023
Pentagon publishes military strategy in the field of AI
In early November 2023, the Pentagon unveiled a strategy on data, analytics and artificial intelligence that aims to improve battlefield decision-making. The document was prepared by the Chief Digital and Artificial Intelligence Office (CDAO).
The strategy was announced by US Deputy Secretary of Defense Kathleen Hicks. It is an updated and revised version of the document released in 2018. According to Hicks, the strategy takes into account the latest advances in areas such as decentralized data management and generative AI. It notes that the Department of Defense's task is to introduce new technologies and AI tools "wherever they can bring the greatest military value."
The document includes sections that highlight the key results the agency plans to achieve with AI. These include removing policy barriers, investing in interoperable and federated infrastructure, improving data management, and increasing the number of AI professionals. AI is expected to help conduct comprehensive analysis of forces on the battlefield, as well as generate data summaries. The Department of Defense also intends to use AI to automate simple tasks. At the same time, the document notes the potential danger of AI when it comes to controlling combat systems.
Overall, instead of identifying specific combat capabilities enabled by AI, the strategy describes an approach to strengthening the organizational environment in which people can continuously apply AI capabilities to gain a lasting decision-making advantage. This should make it possible to form the most effective strategies in a wide variety of military scenarios.
Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy
Setting up a generative AI task force
On August 10, 2023, the US Department of Defense announced the formation of a dedicated working group, Task Force Lima, to explore possible scenarios for using generative artificial intelligence in the national interest.
Generative AI, it was noted, has become a breakthrough technology capable of revolutionizing various sectors, including defense. Using generative AI models, the Pentagon expects to improve the effectiveness of its activities in areas such as combat operations, engagement with partners and suppliers, health care, responsiveness and policy.
The use of artificial intelligence in defense concerns not only the introduction of innovative technologies, but also the strengthening of national security. The Department of Defense sees the potential of generative AI to significantly improve intelligence, operational planning, and administrative and business processes, said U.S. Navy Captain Manuel Xavier Lugo, head of Task Force Lima.
Lima specialists will, among other things, study issues related to the safe implementation of generative AI systems. In addition, the new group will have to find out how such technologies could be used by US adversaries. Lima will partner with various DoD entities, government agencies and the intelligence community to minimize the risks associated with using artificial intelligence.
The creation of Lima is said to be partly dictated by increased tensions between the United States and China in the technology sphere, including AI. On August 9, 2023, US President Joe Biden signed an executive order restricting investment in certain technologies and products in countries of "concern." China, including the special administrative regions of Hong Kong and Macau, is named as such a state.[1]
The US Air Force told how ChatGPT will help the Pentagon
At the end of February 2023, US Air Force Chief Information Officer Lauren Barrett Knausenberger said that the ChatGPT chatbot could improve the work of the US Department of Defense. In particular, according to her, artificial intelligence is able to simplify document management and the synchronization of tasks between all military units. Read more here.
US launches initiative to use AI for military purposes to "change the way war is waged"
On February 16, 2023, the United States launched an initiative to responsibly use artificial intelligence in the military sphere. It is assumed that this will change the way combat operations are carried out. Read more here.
DARPA has created artificial intelligence to control a fighter
On February 13, 2023, the US Department of Defense Advanced Research Projects Agency (DARPA) announced a successful experiment to use artificial intelligence to control an F-16 fighter as part of the Air Combat Evolution program. Read more here.
2022
Pentagon signs strategy to develop responsible artificial intelligence for military purposes
On June 22, 2022, Deputy Secretary of Defense Kathleen Hicks signed the "Responsible Artificial Intelligence Strategy and Implementation Pathway" (RAI S&I Pathway), the next step in implementing the Pentagon's AI principles approved in 2020.
The 47-page document defines the Pentagon's strategic approach to putting the founding principles into practice and, more broadly, provides the basis for how the Department of Defense will deliberately use AI with respect to legality, ethics and accountability.
We desperately need to create a robust ecosystem that not only enhances our military capabilities, but also builds the trust of end users, military personnel, the American public and international partners. This strategy reaffirms the department's commitment to act as a responsible organization that uses AI, Hicks said in a statement.
After consultations with leading AI experts that lasted more than a year, the Pentagon in February 2020 officially adopted a set of principles governing the use of the technology, based on the recommendations received. In May 2021, the Department of Defense reaffirmed its commitment to these principles and published six foundational tenets that define the priority areas for the responsible implementation of AI across the department.
These tenets cover: ensuring proper RAI governance, earning the trust of military personnel, the AI product and acquisition lifecycle, validating technical requirements, building a responsible AI ecosystem, and developing the AI workforce.
According to Hicks, the RAI S&I Pathway is built and organized around these principles and "makes RAI policy easy to implement."
Notably, the strategy also adds new, concise goals to each principle to convey more fully the department's desired result in each priority area. For example, under "ensuring RAI is properly governed," the pathway directs officials to modernize governance structures and processes to allow for ongoing oversight of AI use in the Department of Defense and to create "clear mechanisms" to support users and developers in implementing RAI, as well as provide them with the means to communicate potential problems.
Diana Staheli, designated to oversee RAI, will manage and directly facilitate the Department of Defense's implementation efforts, providing day-to-day expert support to everyone involved in the process.[2]
Training of military AI systems in Ukraine
The Pentagon uses artificial intelligence and machine learning tools to analyze vast amounts of data, obtain useful battlefield intelligence and study Russia's tactics and strategy in Ukraine. This was announced at the end of April 2022 by a senior representative of the US Department of Defense.
"What you don't see is our sophisticated intelligence capabilities over the battlefield, including collecting and archiving signals intelligence data," said Maynard Holliday, director of defense research and engineering for modernization. "We will definitely analyze everything that we saw regarding Russian tactics. And all of it will go into a database that we can train on and then test in military simulations."
The United States does not report how much intelligence from the battlefield is passed to Ukraine. According to FCW, the United States does not operate drones in Ukraine, but commercial satellite operators provide the public with large volumes of photographs and imagery.
The data obtained during this military conflict will help the military better model and anticipate how an advanced adversary, especially Russia and China, will behave in the real world. According to US military leaders, this should begin to happen as early as 2022.
Gregory Allen, director of the AI Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies, noted that over the past few years military AI tools for finding and tracking specific objects in video footage have improved significantly. In addition, the military has begun to use the same tools to work with satellite photographs.
Allen says military AI has come a long way since 2017, when the public learned about Project Maven, the military's object recognition program.
"Artificial intelligence [and] machine learning are becoming an increasingly capable and increasingly common factor in United States intelligence, surveillance and reconnaissance operations," he said. "This turned out to be very useful for tracking what is happening in Ukraine. The US Department of Defense and our allies are taking advantage of what has been created over the last five years."[3]
FCW writes about this with reference to Maynard Holliday, director of defense research and engineering for modernization of the US Department of Defense:
Work is being carried out in conjunction with the Naval Information Warfare Center Pacific in San Diego. As of April 2022, the Battlespace Exploitation of Mixed Reality laboratory at the center is studying how new technologies can be used in the military sphere.
Appointment of Craig Martell as Chief Digital and Artificial Intelligence Officer
On April 25, 2022, the US Department of Defense announced that Craig Martell, former head of machine learning at Lyft, had been appointed the Pentagon's first-ever Chief Digital and Artificial Intelligence Officer. Read more here.
2021: Pentagon creates AI and data chief position
On November 29, 2021, it became known that the US Department of Defense (DoD) had created the position of Chief Digital and Artificial Intelligence Officer (CDAO) to oversee the offices focused on innovation, AI projects and work with data.
Consolidated oversight through the creation of an authorized CDAO could help the Pentagon ensure that the tools needed to make gains in U.S. defense innovation and security are in place.
The new position will oversee the Joint Artificial Intelligence Center (JAIC), the office of the Chief Data Officer (CDO) and the Defense Digital Service (DDS). As a result of the change, these offices will no longer fall under the deputy secretary of defense, the chief information officer and the secretary of defense; once the new position is filled, these sub-agencies will report to a single official.
The consolidation of CDO, JAIC, and DDS units under CDAO represents the potential for organizational growth for the Department of Defense in dealing with data and new technologies. The CDO is responsible for data management and coordination throughout the Department. The JAIC was created to help DoD incorporate and implement the use of artificial intelligence. DDS was created to address data and security issues. All of these disconnected organizations control an important part of data and new technologies - combining them is likely to help DoD optimize the organization's internal processes.
As of December 7, 2021, it had not been reported who would take the post of Chief Digital and Artificial Intelligence Officer at the Pentagon. The position may go to Jim Mitre, head of the analytics company Govini, who worked on the US national defense strategy and returned to the US Department of Defense in December 2021. A spokesman for Mitre confirmed his return to FedScoop but declined to comment further.[4]
2020: Introduction of new ethical principles for using artificial intelligence technologies on the battlefield
At the end of February 2020, the Pentagon adopted new ethical principles for the use of artificial intelligence technologies on the battlefield. Under these recommendations, AI should be "responsible," "equitable," "traceable," "reliable" and "governable."
The new principles require people to "exercise an appropriate level of judgment and care" when deploying and using AI systems, such as systems that scan aerial photos to find targets. It is also noted that decisions made by automated systems must be "traceable" and "governable," meaning "there must be a way to disable or deactivate" them if necessary.
"Equitable" in the Pentagon's formulation means taking "deliberate steps to minimize unintended bias" in AI capabilities. The military must also possess the technology and operational skills needed to ensure transparency of procedures and documentation. In addition, AI should have "explicit, well-defined uses" and be tested continuously during development and operation.
A previous 2012 military directive required that people be able to control automated weapons, but did not address the wider use of AI. The new principles were drawn up by a Pentagon expert center, which, in addition to the military and officials, draws on the opinions of technical experts. They follow recommendations presented in 2019 by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.
While the Pentagon has acknowledged that AI "generates new ethical ambiguities and risks," the new principles do not provide the strict limits that arms control advocates require. Apparently, the new principles deliberately imply a broad interpretation so as not to constrain soldiers with specific restrictions in the future.[5]
See also: Combat robots and drones
2019
Refusal to use AI to control nuclear weapons
The Joint Artificial Intelligence Center, the division of the US Department of Defense responsible for the development and implementation of military artificial intelligence systems, will not equip strategic weapons control centers with such technologies. This was stated in September 2019 by the head of the center, Lieutenant General Jack Shanahan, according to Breaking Defense.[6] In his words, people will always be responsible for ballistic missile launches.[7]
Under the rules in force in the United States, the duty units of silo-based missile systems consist of five crews of two officers each. One such unit is responsible for launching 50 ballistic missiles. If the country's leadership decides to launch a massive nuclear strike on the enemy, each unit receives a launch order containing a launch code. After receiving the order, all five crews must compare the launch code it contains with the code already stored in safes at the control centers.
If the codes match, each officer must perform a certain sequence of actions, including turning keys synchronously (each duty officer has his own key) and pressing the button that starts the launch program. For all 50 ballistic missiles to launch, at least two crews in the unit must perform the launch sequence correctly. On submarines, the captain, the executive officer and the duty crew are responsible for launching missiles; in this case the order contains the code for the safe that holds the missile launch key.
In peacetime, ballistic missiles are targeted at remote points in the ocean. During preparation for launch, the duty crews retarget the missiles at enemy targets. If ballistic missiles are launched from submarines, no more than 15 minutes pass from the moment the order is received until the missiles leave the submarine's launch tubes. If ballistic missiles are launched from land-based silos, the launch must be carried out exactly at the time specified in the order. This is a simplified description of how a nuclear missile strike is launched.
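The procedure described above is essentially a concurrence protocol: the code in the order must match the code stored on site, and at least two of the five crews must complete the full launch sequence. The minimal sketch below models only that logic; all names and values are hypothetical, and it is in no way based on real launch-control software.

```python
# Illustrative toy model of the two-officer, two-of-five-crew concurrence
# procedure described above. Hypothetical names and values only.
from dataclasses import dataclass

REQUIRED_CREWS = 2          # at least two of the five crews must concur
STORED_CODE = "A1B2-C3D4"   # code kept in the control-center safe (made-up value)


@dataclass
class CrewAction:
    code_from_order: str    # launch code received in the order
    keys_turned: bool       # both officers turned their keys synchronously
    button_pressed: bool    # launch-program button pressed


def crew_completed_sequence(action: CrewAction) -> bool:
    """A crew counts only if the order's code matches the stored code
    and the full key-turn / button sequence was performed."""
    return (action.code_from_order == STORED_CODE
            and action.keys_turned
            and action.button_pressed)


def launch_authorized(actions: list[CrewAction]) -> bool:
    """Launch proceeds only if at least REQUIRED_CREWS crews concur."""
    return sum(crew_completed_sequence(a) for a in actions) >= REQUIRED_CREWS


# Example: three of five crews complete the sequence correctly -> authorized.
crews = [
    CrewAction("A1B2-C3D4", True, True),
    CrewAction("A1B2-C3D4", True, True),
    CrewAction("A1B2-C3D4", True, False),   # incomplete sequence, does not count
    CrewAction("WRONG-CODE", True, True),   # code mismatch, does not count
    CrewAction("A1B2-C3D4", True, True),
]
print(launch_authorized(crews))  # True
```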
According to Shanahan, the leadership of the Joint Artificial Intelligence Center advocates tighter integration of artificial-intelligence systems into weapons: such systems can be useful for analyzing intelligence information, searching for, recognizing and prioritizing targets, and other similar tasks, including weapons guidance. At the same time, the decision to use weapons must be made by a person. "This decision should be made solely by a person... control of nuclear weapons," Shanahan said.
Russia currently has an automated combat control system for a retaliatory nuclear strike, "Perimeter." The system was developed in the USSR in the 1970s. Its functional features are classified; there are several versions of how it works. According to one version, Perimeter is able to launch ballistic missiles at the enemy in a fully automatic mode if it records multiple nuclear strikes on Russian territory and loses contact with the country's leadership and military command.
According to another version, the system is only responsible for ensuring the delivery of orders for a retaliatory missile strike during a massive nuclear attack. In this version, Perimeter launches command missiles, which for a time act as a kind of signal repeater. They provide communication with the control points of strategic complexes under conditions of nuclear contamination and electronic jamming. Receivers installed on all ground-based missile systems, strategic missile submarines and bombers are responsible for picking up the signals from these missiles.
Development of an aiming and lethality-enhancement system for tanks and combat vehicles
In early 2019, the US Army command initiated a program to develop a virtual assistant for the crews of tanks and combat vehicles, which is intended to improve their effectiveness in combat conditions. According to Breaking Defense[8], the system, called ATLAS (Advanced Targeting & Lethality Automated System), will be created using machine learning technologies.[9]
In battle, a tank crew may fail to notice some additional targets and, under stress, may not perform its duties quickly. The ATLAS virtual assistant is expected to reduce the load on the crew. In particular, the system is supposed to detect targets that the crew has missed, prioritize the detected targets, and lay the gun on them. The military believes the new system will increase the reaction speed of combat vehicles in battle.
According to the military's requirements, the ATLAS system will process not only data from its own combat vehicle's sensors and devices, but will also receive data from the same equipment on other tanks. This will significantly increase the accuracy of detecting new targets and make it possible to identify camouflaged targets. ATLAS will not be able to decide to open fire on its own: the commander of the combat vehicle will have to give the corresponding command.
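As a rough illustration of the workflow described above — fusing detections shared by several vehicles, prioritizing targets and leaving the fire decision to the commander — here is a minimal sketch. The data fields and the scoring rule are assumptions for illustration, not details of the actual ATLAS system.

```python
# Illustrative sketch, not the actual ATLAS software: fuse target detections
# from a vehicle's own sensors with detections shared by other vehicles,
# rank them, and propose aim points while leaving the fire decision to a human.
from dataclasses import dataclass


@dataclass
class Detection:
    x: float            # target position (metres, local grid)
    y: float
    threat: float       # 0..1 estimated threat level (assumed scoring scale)
    source: str         # "own" or the id of another vehicle


def fuse(detections: list[Detection], radius: float = 25.0) -> list[Detection]:
    """Merge detections closer than `radius` metres, keeping the higher threat score."""
    fused: list[Detection] = []
    for d in detections:
        for i, f in enumerate(fused):
            if (d.x - f.x) ** 2 + (d.y - f.y) ** 2 <= radius ** 2:
                if d.threat > f.threat:
                    fused[i] = d
                break
        else:
            fused.append(d)
    return fused


def propose_engagement(detections: list[Detection]) -> None:
    """Print a ranked target list; opening fire still requires the commander's command."""
    for t in sorted(fuse(detections), key=lambda d: d.threat, reverse=True):
        print(f"Lay gun on ({t.x:.0f}, {t.y:.0f}) threat={t.threat:.2f} via {t.source}")


propose_engagement([
    Detection(120, 340, 0.9, "own"),
    Detection(125, 338, 0.7, "tank-2"),   # same target seen by another vehicle
    Detection(560, 80, 0.4, "tank-3"),
])
```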
According to the preliminary plan, after testing robots and optionally crewed vehicles in 2020, the military will draw up a list of recommendations for their revision. Advanced prototypes will be tested in 2022, and final tests of the finalist vehicles will be held in 2024. Based on the results of the three testing stages, US Army specialists intend to determine the tasks that could be entrusted to robotic combat vehicles.
2018
Investing $2 billion in artificial intelligence for weapons
In early September 2018, the US Department of Defense announced a $2 billion investment in the development of artificial intelligence technologies designed for use in weapons.
The Pentagon says that military commanders want computers to explain their intentions and the reasons for particular decisions to them. Weapons systems will be improved systematically over five years, and the main goal of the new investment is considered to be rivalry with China and Russia in defensive and offensive capabilities.
Most of the Pentagon's AI projects are being developed by its Defense Advanced Research Projects Agency (DARPA). The agency's staff work on improving and implementing artificial intelligence systems in various weapons, and DARPA is working to enable machines to communicate and think the way people do.
In addition, the American authorities support the development of AI systems that allow robots to quickly identify videos, images and audio recordings with tracked and potentially dangerous content on the network.
The investment in artificial intelligence is modest by the standards of other Pentagon spending, where, for example, the cost of buying and maintaining new F-35 combat aircraft is expected to exceed a trillion dollars. However, this is among the largest state AI programs, and its cost is comparable to the amounts the United States spent on the Manhattan Project, which produced nuclear weapons in the 1940s, notes The Verge.
While artificial intelligence has helped American weapons recognize targets better and combat drones fly more efficiently, as of early September 2018 the Pentagon had not approved computer systems that make strike decisions on their own.[10]
Development of AI programs for protection against a nuclear missile strike
The US military is increasing spending on secret projects whose purpose is to use artificial intelligence to anticipate a nuclear missile strike and to identify the location of mobile launchers. The Pentagon's AI work in this area, which had not previously been publicly reported, was described by Reuters in early June 2018, citing knowledgeable sources.
According to American officials, whose names were not disclosed, several classified programs are under way in the United States at once, within which it is planned to create AI systems to increase US protection against a potential nuclear missile strike.
If the research is successful, the resulting computer systems will be able to think independently, collecting vast amounts of data, including satellite images, faster and more accurately than people, and finding in them signs of an impending missile launch.
Having received such information in advance, the American government would be able to take diplomatic steps to resolve the situation, and if an attack were imminent, the military would have more time to try to destroy the missiles before they are launched or to intercept them.
"We must do everything in our power to find the missile before launch and make it as hard as possible to get it off the ground," said one of the agency's sources.
From several US officials, as well as from budget documents, journalists learned that the administration of President Donald Trump proposed more than tripling funding for such work, allocating $83 million to just one of the programs for missile defense using artificial intelligence. The increase in allocations reflects the growing importance to the United States of research into AI systems for nuclear missile defense, amid Russia's growing military power and the alleged continuing nuclear threat from North Korea.
"With artificial intelligence and machine learning, you can find a needle in a haystack," said former US Deputy Secretary of Defense Bob Work, who worked closely on military robots and AI technologies. He did not mention specific projects.
One of the sources told Reuters that the emphasis in the Pentagon's pilot AI programs is on protection from Pyongyang. There is growing alarm in Washington over North Korea's ongoing development of mobile-launched missiles that can be sheltered in tunnels, forests or caves.
Although military AI projects are kept secret, the Pentagon does not hide its interest in artificial intelligence technologies. In particular, the so-called Project Maven, which involves using AI to analyze drone footage and recognize people in images, received widespread media coverage. Google, which was involved in the project, was forced in June 2018 to announce the end of its cooperation with the US Department of Defense because of dissatisfaction among its employees.
Details about the Pentagon's work to build AI systems to detect potential missile threats and mobile launchers are scant, but Reuters was able to learn that the first prototypes are already being tested by the military.
According to the agency, both military specialists and private researchers are involved in the project, which is being carried out in the Washington area. The scientists adapt for their purposes technological advances developed by commercial firms funded by In-Q-Tel, the venture capital fund associated with the CIA.[11]
Creation of an artificial intelligence center in the US Department of Defense
In April 2018, the US Department of Defense announced the creation of an artificial intelligence center, within which it plans to combine all state AI projects carried out in the country.
According to the Defense News newspaper, the idea of creating an AI center was first voiced by Secretary of Defense James Mattis. At a hearing before the House Armed Services Committee of the US Congress, he said the Pentagon was considering forming a "joint office in which all the efforts of the Department of Defense could be concentrated, since various efforts are currently underway in the field of artificial intelligence."
"We want to unite all these efforts," he said.
As Under Secretary of Defense for Research and Engineering Michael Griffin clarified, as of April 2018 the Pentagon was deciding who would lead the center, where it would be located, what projects it would handle, and, most importantly, how such a center would fit into the overall artificial intelligence strategy of the department and the nation as a whole. It is assumed that the center will establish close cooperation with American universities.
Griffin calls artificial intelligence one of the key technologies, along with hypersonics, and emphasizes that it is the main focus of his work as under secretary. According to Griffin, as of April 2018 the Pentagon was running almost 600 projects that involve some form of artificial intelligence.
Billionaire Elon Musk believes that artificial intelligence created by authoritarian governments can become an immortal dictator from which no one can escape.
The United States began considering an artificial intelligence center as Russia and China started actively investing in this area.[12]
Development of an AI system for face recognition in the dark and through walls
In April 2018, it became known about the creation by the American army of a system that recognizes faces in the dark and even through walls. The development uses artificial intelligence technology.
The US Army Research Laboratory (ARL) has published an article in the arXiv repository describing the operation of an algorithm that allows you to recognize faces in images obtained using a thermal imager.
According to ARL scientist Benjamin Riggan, when using thermal imaging cameras to capture a face image in the dark, the main problem is that the resulting image must be compared with a library consisting of photographs that are taken at normal visibility.
ARL solved this problem by creating a machine learning algorithm for recognizing faces in thermal maps. Using a convolutional neural network, the developers ensured that the program finds common facial features in images taken by a conventional camera and by a thermal imager, which captures the distribution of thermal electromagnetic radiation.
In the thermal image of a face, global features (for example, the contour of the face) and local ones (nose, mouth and eyes) are extracted, then matched against the features from the sample on which the neural network was trained and mapped into visible-spectrum features, that is, into a synthesized visible photo of the face. Using the trained model, according to the scientists, facial recognition accuracy of about 85% was achieved.
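To make the matching step concrete, here is a minimal sketch of comparing a thermal "probe" image against a gallery of visible-light photographs in a shared feature space. The embedding function below is only a stand-in for a trained cross-spectral CNN, and all names and dimensions are assumptions for illustration, not the ARL implementation.

```python
# Minimal sketch of cross-spectral face matching: embed a thermal probe and a
# visible-light gallery into one feature space, then rank by cosine similarity.
import numpy as np

FEATURE_DIM = 128  # assumed embedding size


def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a trained cross-spectral CNN that maps a face image
    (thermal or visible) to a unit-length feature vector."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)  # deterministic stub
    v = rng.normal(size=FEATURE_DIM)
    return v / np.linalg.norm(v)


def identify(thermal_probe: np.ndarray, gallery: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the gallery identity whose embedding is most similar to the probe."""
    probe_vec = embed(thermal_probe)
    scores = {name: float(probe_vec @ embed(img)) for name, img in gallery.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]


# Toy usage with random arrays standing in for images; a real system would load
# thermal frames and a library of visible-light photographs.
gallery = {f"person_{i}": np.random.rand(112, 112) for i in range(3)}
probe = np.random.rand(112, 112)
print(identify(probe, gallery))
```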
As of April 2018, the algorithm can recognize faces from a small database, but in the future the system is expected to recognize faces in real time directly in combat zones. In addition, the development is to be integrated with a thermal imager capable of seeing through walls, which is also being developed in the United States.
The American army expects the new technology to help find and identify, in combat zones, gang leaders and others being hunted by the authorities.[13]
2016
CIA, Amazon and Nvidia develop AI system to recognize objects in satellite images
In August 2016, it was announced that Amazon, Nvidia, DigitalGlobe and CIA special unit CosmiQ Works had begun developing artificial intelligence that could recognize objects in satellite images (more: SpaceNet).
AI defeated an experienced pilot in a virtual duel
In July 2016, it became known that ALPHA, an artificial intelligence for controlling fighter aircraft, had won a landslide victory over a former US Air Force ace pilot in a virtual air battle, the Russian news agency TASS reported, citing the Japanese newspaper Sankei Shimbun.
The ALPHA artificial intelligence is a joint development of the University of Cincinnati, industry and the United States Air Force. The program was created specifically to surpass professional fighter pilots in a virtual duel.
One of its opponents was an experienced instructor pilot, retired US Air Force Colonel Gene Lee, a graduate of the fighter combat training school.[14]
During a virtual battle held in a large entertainment center, the pilot could not land a single successful shot, since the ALPHA intelligence acted faster and more accurately each time. ALPHA defeated Lee even when it was "transplanted" into a slower and less maneuverable aircraft, quickly switching from attack to defense.
After the fight, the pilot admitted that of all the artificial intelligence systems he had seen, ALPHA demonstrated the fastest reactions and the greatest power and reliability.
In one of the virtual battles, two pilots in two fighters fought against ALPHA. The artificial intelligence won while simultaneously flying four aircraft. A computer costing only $35 was used to run ALPHA.
Notes
- ↑ DOD Announces Establishment of Generative AI Task Force
- ↑ Pentagon unveils long-awaited plan for implementing ‘responsible AI’
- ↑ AI is already learning from Russia’s war in Ukraine, DOD says
- ↑ DOD looks at introducing a new AI and data leader
- ↑ Pentagon adopts new ethical principles for using AI in war
- ↑ No AI For Nuclear Command & Control: JAIC's Shanahan
- ↑ Americans refused to introduce AI into strategic weapon control systems
- ↑ ATLAS: Killer Robot? No. Virtual Crewman? Yes.
- ↑ American tankers will acquire a virtual assistant
- ↑ The Pentagon plans to spend $2 billion to put more artificial intelligence into its weaponry
- ↑ Deep in the Pentagon, a secret AI program to find hidden nuclear missiles
- ↑ Pentagon developing artificial intelligence center
- ↑ The US Army is developing AI that can recognize faces in the dark and through walls
- ↑ Artificial Intelligence (http://www.strf.ru/material.aspx?CatalogId=222&d_no=118987)