
GPT-3 (neural network)

Product
Base system (platform): Artificial intelligence (AI)
Developer: OpenAI
System premiere date: October 2020
Industries: Advertising, PR and Marketing


Main article: Neural networks

2023: Integration of GPT-3.5 with the Telfin.Office PBX

On April 27, 2023, Telfin announced that it had integrated the capabilities of the GPT-3.5 language model into its Telfin.Office virtual PBX. Read more here.

2021

Availability from Azure OpenAI Service

On November 2, 2021, Microsoft announced Azure OpenAI Service, a service that will provide the company's cloud customers with access to GPT-3 language models. Read more here.

A version of GPT-3 that generates summaries of books

At the end of September 2021, OpenAI introduced an artificial intelligence (AI) model that can summarize books of any length. An advanced version of GPT-3 developed in the research laboratory, the model works by first summarizing small sections of a book and then summarizing those summaries into a higher-level summary, following a paradigm OpenAI calls recursive task decomposition.

Summarizing book-length documents can be useful to companies, especially in industries with heavy documentation, such as software development. According to a SearchYourCloud study, workers need up to eight searches to find the document they want, and McKinsey analysts report that employees spend 1.8 hours a day searching for and collecting work-related information.

An AI system has been released that generates summaries of books

OpenAI believes this is an effective recipe that can be applied to help people with many other tasks: a scalable solution to the alignment problem must work on tasks that are difficult or time-consuming for humans to evaluate themselves.

The company's new model builds on previous research, which found that training a model with reinforcement learning from human feedback helps align model summaries with human preferences on short posts and articles. Reinforcement learning here means actively training the system to perform a task, such as summarizing text.

To create the model, OpenAI combined reinforcement learning with recursive task decomposition, which procedurally breaks a complex problem (summarizing a long piece of text) into simpler, independent sub-problems (summarizing several shorter pieces). This decomposition allows people to evaluate the model's output quickly, using only small amounts of text. It also allows the model to summarize books of any length, from tens of pages to hundreds or even thousands.
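The recursive scheme described above can be sketched in a few lines. This is a minimal illustration, not OpenAI's implementation: `summarize_chunk` is a hypothetical stand-in for a language-model call (here it simply truncates so the sketch stays self-contained), and the chunk sizes are arbitrary.

```python
def summarize_chunk(text: str, max_len: int) -> str:
    # Placeholder for a language-model summarization call;
    # truncation keeps the sketch self-contained and runnable.
    return text[:max_len]

def split_into_chunks(text: str, chunk_size: int) -> list[str]:
    # Break the text into fixed-size pieces small enough for one model pass.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def recursive_summarize(text: str, chunk_size: int = 1000, summary_len: int = 200) -> str:
    # Base case: the text already fits in a single pass.
    if len(text) <= chunk_size:
        return summarize_chunk(text, summary_len)
    # Decompose: summarize each chunk independently ...
    partial = [summarize_chunk(c, summary_len)
               for c in split_into_chunks(text, chunk_size)]
    # ... then recurse on the concatenated partial summaries,
    # producing ever higher-level summaries until one pass suffices.
    return recursive_summarize(" ".join(partial), chunk_size, summary_len)
```

Because each chunk is summarized independently, a human rater only ever needs to judge a short summary against a short input, which is what makes the approach scalable; the same independence assumption is also the source of the weakness with late-payoff details noted below.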

OpenAI trained the model on a subset of books from the GPT-3 training corpus, mostly fiction averaging more than 100,000 words. To evaluate the model, the lab's researchers took the 40 most popular books published in 2020 and had two people read each book, write their own summaries, and then rate both the model's summaries and each other's.

"This work is part of our ongoing research into aligning advanced AI systems, which is key to our mission. Our progress on book summarization is the first large-scale empirical work on scaling alignment techniques," OpenAI researchers Jeffrey Wu, Ryan Lowe and Jan Leike wrote in a blog post.

Although the model successfully generated book-level summaries containing most of the important information, it also sometimes produced inaccurate statements due to a lack of context, OpenAI acknowledges in its paper. Task decomposition assumes that the individual parts of a task can be completed independently of one another, and this assumption may not hold when summarizing a book. For example, it can be hard to catch cases where details early in a book only become important later, as in mystery novels.[1]

Integration into Microsoft Power Apps

On May 25, 2021, Microsoft introduced its first GPT-3-based product: the company is integrating OpenAI's GPT-3 natural language model into its Power Apps low-code development platform. Read more here.

Russian project SoMin.ai granted access

SoMin.ai, a project of Alexander Farseev, professor at the ITMO University machine learning laboratory, became an official partner of OpenAI, co-founded by Elon Musk. The university announced this on February 8, 2021. Farseev's team gained access to the GPT-3 neural network, which can create content that is almost indistinguishable from human writing. Read more here.

2020: Companies begin launching automatic writing applications based on the GPT-3 neural network

In mid-October 2020, companies began launching automated writing applications based on a new text generation technology known as GPT-3. The neural network, introduced in June, can process huge amounts of text and generate new text from supplied samples, so it quickly gained popularity among entrepreneurs, who use GPT-3 to compose emails and marketing copy.

Many users note that such services reduce the tedium of writing ad copy or letters: the goal is to skip the monotonous drafting of boilerplate text and go straight to editing and refining ideas.

Companies have begun launching applications built on OpenAI's GPT-3 neural network that write letters and ads on their own

VWO, a marketing content performance testing tool, pitted GPT-3 copy against human-written copy. In six tests, the AI-generated copy drew more views and clicks twice, the human-written copy once, and the remaining three were ties. Testing continues, but marketers are likely to favor automatically generated material, since it is easier to test in practice.

For email, GPT-3-based apps have converged on a common design: they ask the user to enter a few bullet points, which are then turned into smooth, multi-paragraph text. The applications send these "prompt" fragments to OpenAI's cloud servers, and GPT-3 returns new text generated from statistically learned patterns.
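The first half of that pipeline, turning a user's bullet points into a single prompt fragment for the model, is straightforward to illustrate. This is a hypothetical sketch, not any vendor's actual code: the function name, the prompt wording, and the `tone` parameter are all illustrative assumptions.

```python
def build_email_prompt(bullets: list[str], tone: str = "friendly") -> str:
    # Turn the user's brief bullet points into one prompt string;
    # a model like GPT-3 is then expected to expand it into
    # smooth, multi-paragraph email text.
    points = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Write a {tone}, well-structured email that covers these points:\n"
        f"{points}\n"
        "\n"
        "Email:\n"
    )

# Example usage: the resulting string is what the app would send
# to the model's completion endpoint.
prompt = build_email_prompt(["meeting moved to 1 pm", "please bring the Q3 report"])
```

The app's remaining work is mostly plumbing: ship the prompt to the API, then present the returned completion to the user for editing.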

However, because GPT-3 is trained exclusively on text from the Internet, it can often produce meaningless output. In addition, the neural network tends to add not only filler words but entire invented sentences: for example, GPT-3 is quite capable of adding a non-existent doctor's visit to a prompt like "Meeting at one o'clock in the afternoon".[2]

Notes