2024/03/22 11:00:00

Artificial intelligence in the United States


Robots in the United States

Main article: Robots in the United States

Artificial intelligence at the Pentagon

Main article: Artificial intelligence at the Pentagon

2024

U.S. Department of Homeland Security Releases First AI Application Roadmap

On March 18, 2024, the US Department of Homeland Security (DHS) unveiled its first roadmap for the responsible and safe application of artificial intelligence technology. The effort focuses on deploying AI solutions that can bring significant benefits to American society and improve national security, while also protecting people's privacy, civil rights and civil liberties.

The document highlights three key AI programs. First, Homeland Security Investigations (HSI) uses AI and large language models to improve investigative processes: making document search and retrieval in case files more efficient and producing summaries of the material. The project aims to strengthen the fight against child sexual exploitation, identify drug-related crimes and more.

US Department of Homeland Security (DHS) unveils first roadmap for responsible and safe use of artificial intelligence technologies

In turn, the Federal Emergency Management Agency (FEMA) will use generative AI to help state, local and territorial authorities plan for various contingencies. This will aid in coordinating disaster-recovery efforts and minimizing risks. The third project is run by U.S. Citizenship and Immigration Services (USCIS), which will begin using AI to improve employee training and streamline operations.

Advances in AI will revolutionize many areas that Americans rely on, the paper says. On the other hand, AI can create new risks and threats. To protect cyberspace and critical infrastructure from possible negative impacts, DHS will manage the secure and responsible development and use of AI.[1]

Patents for inventions created by artificial intelligence begin to be issued in the United States

On February 13, 2024, the United States Patent and Trademark Office (USPTO) announced that patents can be obtained for inventions created using artificial intelligence. The main condition is that a person make a sufficient contribution to the work. In other words, AI itself cannot be named as an inventor, so a patent cannot be registered in its name. Read more here.

US shuts down China's access to AI training in American cloud systems

At the end of January 2024, it became known that the US Department of Commerce is introducing new restrictive measures for Chinese companies regarding cloud technologies. The agency will close access to AI training in American cloud systems for customers from the PRC. Read more here.

2023

US Democratic Party enlists the "world's first AI campaigner" in an election campaign

On December 12, 2023, it became known that Shamaine Daniels, a Democratic Party candidate for the US House of Representatives, is using "the world's first campaigner based on artificial intelligence" in her election campaign. The smart assistant, named "Ashley," is built on generative AI algorithms similar to those behind OpenAI's ChatGPT chatbot.

"Ashley" differs from ordinary robocallers in that none of its lines or answers are prerecorded: they are generated on the fly, taking the context of the conversation into account. The AI assistant can hold an unlimited number of one-on-one conversations simultaneously. In just a couple of days, "Ashley" reached thousands of voters on Daniels's behalf. The AI analyzes each conversation in real time, adapting to the specific person. Unlike human volunteers, Ashley works around the clock, can communicate in more than 20 languages and remembers all the information it receives perfectly.

One of the candidates for the US House of Representatives uses an AI campaigner

It is noted that "Ashley" is one of the first examples of how generative AI opens a new era of political campaigns. Candidates are starting to use neural networks to interact with voters in new ways. Civox, which created Ashley, says the technology will scale in the future: the system will be able to make tens and hundreds of thousands of calls a day.

On the one hand, experts say, AI in politics makes it possible to reach a huge number of people by conducting personalized conversations. On the other hand, there are risks of misinformation spreading on a large scale. Concerns have been raised that generative AI, coupled with deepfakes, could compromise election integrity.[2]

NSA launched a unit to monitor the safety of AI when implemented in the public sector

On September 28, 2023, the director of the US National Security Agency (NSA), Army General Paul Nakasone, announced the creation of a new structure to oversee the development and integration of artificial intelligence capabilities in the public sector. The unit was called the AI Security Center. Read more here.

The US presidential administration secured 8 pledges from AI companies. What did the developers agree to?

On July 21, 2023, the administration of US President Biden announced that artificial intelligence technology companies had made voluntary commitments to the White House to implement measures aimed at ensuring security and reducing risks associated with the use of neural networks and large language models.

The agreement involves Amazon, Anthropic, Google, Inflection, Meta (recognized as an extremist organization; its activities are prohibited in the Russian Federation), Microsoft and OpenAI. A total of eight commitments are listed, which are said to reflect the three principles at the heart of future AI systems and applications - safety, security and trust:

  • Companies undertake to conduct internal and external security testing of their AI systems prior to their release to the commercial market;
  • Companies commit to sharing information on AI risk management with industry experts, governments, civil society and academia;
  • Developers will invest in cybersecurity and insider threat protection tools;
  • Industry participants undertake to facilitate the detection of vulnerabilities in their AI products by third parties and report identified security issues;
  • Companies must develop mechanisms that let users recognize AI-generated content - for example, a system of special watermarks;
  • AI product developers commit to publicly communicate the capabilities, limitations, and potential areas of misuse of their systems;
  • Companies commit to prioritizing research into the societal risks that AI tools and large language models may pose;
  • Industry participants promise to develop and deploy advanced AI systems to address the most serious public challenges - from cancer prevention to climate change mitigation.

AI companies make voluntary commitments to White House

The initiative is part of the Biden administration's broader program to ensure the secure and responsible development of AI, and to protect Americans from cyber threats and discrimination.[3]

How military AI helps companies crack down on US unions: Shocking truth about employee surveillance technology

At the end of June 2023, it became known that surveillance tools developed by defense contractors are now being sold to employers to detect labor organizing. Regulators want to take action to protect workers' privacy.

According to Wired, surveillance methods familiar from authoritarian dictatorships are now being repurposed against American workers. Since 2013, several dozen companies have emerged offering employers subscriptions to services such as "open source intelligence," "reputation management" and "insider threat assessment" - tools often originally developed by defense contractors for intelligence purposes. With the advent of deep learning and new data sources, they have become significantly more sophisticated. With them, your boss can use advanced data analytics to identify labor organizing, internal leaks and critics of the company.

Your boss can use advanced data analytics to identify labor organization, internal leaks and company critics

The Wired report said big companies like Amazon are already monitoring union organizing. But the expansion and normalization of tools for spying on workers has gone largely unnoticed, despite their origins. Military-grade AI was designed to fight national enemies, nominally under the control of elected democratic governments, with safeguards preventing its use against citizens.

FiveCast, for example, started out as an anti-terrorist startup selling its services to the military, but it has since brought its tools to corporations and law enforcement agencies, which can use them to collect and analyze all kinds of publicly available data. FiveCast doesn't just count keywords: it boasts that its "commercial security" and other offerings can map people's networks, read text inside images, and even detect objects, logos, emotions and concepts inside multimedia content. Its "supply chain risk management" tool is designed to predict future disruptions, such as strikes, for corporations.

Thus, network analysis tools designed to identify terrorist cells can be used to identify key labor organizers so that employers can illegally fire them before a union forms. Routine use of these tools in recruiting may encourage employers to avoid hiring such organizers in the first place. And quantitative risk-assessment strategies designed to warn the nation of impending attacks can now inform investment decisions, such as whether to divest from areas and suppliers deemed to have a high potential for labor organizing.

According to Wired, network analysis methods assign risk by association, meaning a user can be flagged simply for following a particular page or account. These systems can also be deceived by fake content, which generative AI makes easy to create at scale. Some vendors offer sophisticated machine-learning techniques, such as deep learning, to identify such content, and the capabilities of these systems are growing rapidly. Vendors are touting that they will incorporate next-generation AI technologies into their surveillance tools by 2030. The new features promise to make it easier to mine diverse data sources for leads, but the ultimate goal appears to be a routine, semi-automated union-surveillance system.

Corporations providing these services are thriving amid obscurity and a lack of regulation. Protections against workplace surveillance are threadbare. Industry apologists say their software, sold to help employers "understand the union landscape," is not anti-union. Instead, they brand themselves as sellers of "corporate awareness monitoring" and loudly proclaim that "every American is protected by federal, state and local laws to work in a safe environment." The manufacturer, apparently, is not to blame if the buyer uses the software to infringe on the legally protected right to organize or protest.

Companies using such tools should be required to disclose their use publicly so that existing US laws can be enforced. And new rules are urgently needed. In 2022, the National Labor Relations Board announced that it would seek to ban intrusive and improper surveillance of labor activity - an important step. In addition, workers and unions should testify at legislative hearings on future regulation of AI and workplace surveillance. The government needs specific rules defining which uses of AI, data sources and methods are permissible, and under what conditions they can be applied.

These technologies are already being sold and deployed around the world and are being used for cross-border surveillance. Ideally, an active regulator would become a global leader in responsible AI and work to establish international standards for workplace technologies. Without this work, multinational companies with global supply chains can easily ignore or circumvent country-specific protections.[4]

How artificial intelligence and ChatGPT are changing American politics

At the end of May 2023, there were reports that advanced generative artificial intelligence technologies such as ChatGPT could have a significant impact on the presidential campaign in the United States.

The next US presidential election will be held on November 5, 2024. According to Reuters, highly realistic deepfakes featuring candidates officially in the race are already appearing on the Internet. Experts say such falsified materials can significantly influence the political sphere and voters' decisions: modern AI tools such as ChatGPT can very accurately imitate the voices and facial expressions of politicians and other public figures, so there is a high probability that deepfakes will mislead the electorate.

Advanced technologies of generative artificial intelligence can have a significant impact on the presidential campaign
"I actually really like Ron DeSantis [the Republican presidential candidate]. He is exactly the person this country needs," "Hillary Clinton" unexpectedly admits in an online advertising video.

Such fakes, appearing amid the huge volume of material published on the Internet, blur the line between fact and fiction. Fakes of this kind have been published before, but 2022-2023 saw an explosive increase in the number of deepfakes. This is driven by advanced AI services that cut the cost and simplify the process of creating convincing audio and video clips, while specialized hardware designed for AI algorithms and large language models speeds up content generation many times over.

"It will be very difficult for voters to distinguish the real from the fake. And you can imagine how Trump or Biden supporters could use this technology to put an opponent in a bad light," said Darrell West, a senior fellow at the Brookings Institution Center for Technology Innovation.[5]

The US presidential administration has published a research and development strategy in the field of artificial intelligence. 9 Main Provisions

In May 2023, the US presidential administration published the National Artificial Intelligence R&D Strategic Plan. The document replaced the plan presented in 2019 and includes the following main provisions:

  • long-term investments in fundamental, responsible AI research;
  • development of effective methods of interaction "AI-system - human";
  • identification of threats of ethical, legal and social nature in the application of AI and countering them;
  • ensuring the safety and security of AI systems;
  • development of publicly available datasets and environments for AI training and testing;
  • measurement and evaluation of AI systems through standards and criteria;
  • a better understanding of the needs of employees involved in national AI research and development;
  • expanding public-private partnerships to accelerate progress in AI;
  • developing a principles-based coordinated approach to international cooperation in AI research.

US Presidential Administration Publishes National AI R&D Strategy

As reported on the website of the US presidential administration, the National AI R&D Strategic Plan will contribute to the research, development and implementation of "responsible artificial intelligence that protects the rights and safety of people and brings benefits to the American people."

"AI is one of the most powerful technologies of our time, with a wide range of applications. President Biden has made clear that in order to take advantage of the opportunities that AI presents, we must first manage its risks. To this end, the presidential administration has taken significant action to promote innovation in artificial intelligence that puts people, communities and the public good at the center, and to manage risks to individuals and to our society, security and economy," says whitehouse.gov.

NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT STRATEGIC PLAN 2023 UPDATE

US President allocated $140 million to create 7 R&D laboratories for the development of artificial intelligence

On May 4, 2023, the administration of US President Biden announced a new initiative to promote "responsible innovations" in the field of artificial intelligence aimed at "protecting the rights and ensuring the safety of Americans."

It is noted that AI is one of the most influential modern technologies. However, its use carries certain risks, so companies must take responsibility for ensuring the safety of their products. Discussions on these issues are being held with four leading American IT companies working in artificial intelligence: Alphabet, Anthropic, Microsoft and OpenAI.

Meanwhile, the US National Science Foundation announced the allocation of $140 million to establish seven new research and development laboratories for work in AI, which will eventually bring the nationwide total of such institutions to 25. The participating organizations are expected to help create breakthrough technologies and innovative solutions in critical areas including climate, agriculture, energy, health and education.

"These institutes catalyze the joint efforts of higher education institutions, federal agencies, industry and other organizations to deliver revolutionary advances in artificial intelligence. In addition to promoting responsible innovation, such labs strengthen America's AI R&D infrastructure," the White House said in a statement.

It is also reported that Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, Stability AI and other companies have agreed to a public assessment of their AI models. Testing such systems independently of government structures and/or their developers will help ensure effective measures are taken to fix the problems identified.[6]

2022

China overtakes the United States in the number of scientists in the field of AI

At the end of 2022, researchers of Chinese origin accounted for approximately 38% of the leading artificial intelligence researchers working at American institutions, compared with 37% for researchers of US origin. In 2019, the figures were 27% and 31%, respectively. These numbers come from a MacroPolo study whose results were published in early March 2024. Read more here.

China outpaces US in AI patents

China is increasingly ahead of the United States in the number of AI patents, which indicates the Asian country's determination to shape and influence technology that could have wide-ranging consequences for the world's richest economies.

2021: US to allocate $1.5 billion for the development of AI technologies

On July 15, 2021, it became known that the United States will allocate $1.5 billion for the development of AI technologies.

The funding program will include not only major U.S. defense contractors, but also universities and small businesses.

The "first wave" of artificial intelligence (AI) development came in the 1960s; five decades later, the "fourth wave" arrived, with machines capable of understanding and reasoning in context.

As of July 2021, China and the United States are competing for leadership in AI.

"Chinese authorities have made clear that China intends to become a world leader in AI by 2030. Beijing is already talking about using AI in missions ranging from data collection to cyber attacks and autonomous weapons. In AI, as in many other fields, we understand that China is setting the pace. We intend to compete in this race and win - but in the right way," said US Secretary of Defense Lloyd Austin.


In May 2021, the Secretary of Defense signed a strategic document on implementing the Joint All-Domain Command and Control (JADC2) concept. The idea is to combine surveillance and reconnaissance data from all branches of the military into a single network. To develop an optimal response plan, the Pentagon intends to use AI and other technologies, reducing the time needed to move from analyzing the environment to taking action.

In terms of force and technology, the US was vastly superior to the extremist groups it fought in the Middle East and was able to mobilize its forces and act at its own pace. However, Beijing is catching up with Washington in the land, sea, air, space and cyber spheres and, according to some observers, even surpasses the Americans in the western Pacific region.

With the more or less equal forces of both countries, the speed at which they can gather and analyze intelligence and put their strategies into practice will be crucial.

The Pentagon expects to work with more than 30 countries to implement JADC2. In the event of a military conflict, using data provided by friendly countries will increase the accuracy of intelligence.[7]

2020

US positions AI as a key strategy and multiplies its AI budget

Only 4% of U.S. CEOs plan to implement company-wide AI

The PwC study "2020 AI Predictions" assesses American companies' investments in AI technology and the consequences of wider development and integration of the technology into the economy. This became known on February 20, 2020.

PwC's third annual AI Predictions study found that in 2020, only 4% of U.S. company executives plan to implement AI company-wide, down from 20% a year earlier.

In the next ten years, profits from the application of AI technology on a global scale are estimated at almost 16 trillion US dollars, with North America and China expected to receive the largest profits.

Overall, fewer American companies plan to expand their adoption of AI in 2020: almost one in five organizations (18%) is already using the technology in several areas, and 42% of companies are exploring possible uses of AI.

Reportedly, 90% of executives surveyed in the United States said that the benefits of AI technology outweigh its risks. Almost half of executives expect that using AI will give them an advantage in their markets or fields of activity. The main areas of investment in AI include managing risks and threats related to fraud and cybersecurity (38%) and automating routine tasks (35%).

Executives tend to underestimate what is probably the most important data problem: labeling data for use by AI systems. Only a third of US executives list data labeling among their companies' priorities for 2020.

"As for the Russian market, it is actively moving toward the use of innovative technologies. AI is one of the end-to-end technologies included in the national digitalization program, whose strategy was adopted in 2019. An alliance for the development of artificial intelligence has also been created by the largest market players, for whom this technology is increasingly relevant," said Oleg Danilchenko, Director and Head of the Center for Applied Data Analysis at PwC in Russia.

As the presence of AI (often invisible) grows in everyday business processes and in the solutions offered by vendors, an effective risk-management system for AI becomes essential.

Most respondents said their companies have enterprise-wide AI governance functions, with 50% saying they assign accountability for the technology to those who build and operate the systems, and 49% focusing on explaining the technology to those it affects.

Among the AI priorities for 2020 is rethinking how companies upskill their employees, giving non-technical staff the opportunity to master the relevant skills.

50% of managers say employees need to be able to apply the knowledge they gain immediately to improve efficiency.[8]

2019: US allocates $1 billion for artificial intelligence

In mid-September 2019, the US government allocated about $1 billion for artificial intelligence research, but the announcement drew a mixed reaction from industry leaders. Intel and Nvidia consider the amount laughably small, arguing that maintaining a competitive advantage in AI requires far more substantial support.

It is known that the US federal government has for the first time tallied specific agencies' funding requests for AI development; the Defense Department's spending in this area is classified. The 2020 AI research plan outlines key government programs and strategic priorities, including coordinating long-term federal research investments, promoting safe and effective methods of human-AI interaction, and evaluating AI technologies against new criteria and standards.

The US government has allocated about $1 billion for research in the field of artificial intelligence

At the same time, the government said that the reduction in government spending on AI motivates research organizations to sensibly assess investment opportunities, conduct strategic planning and cooperate with industrial complexes. Trump administration officials also said their numbers are more transparent than Chinese government investment data on AI.

But Intel and Nvidia believe that US investment in AI development is too small. They also criticized federal officials for neglecting laws governing data protection and user privacy. Representatives of large companies believe that the introduction of nationwide data protection laws will strengthen user confidence and directly stimulate private investment in AI, since these technologies are largely dependent on the use of data.[9]

Notes