
Llama (AI model)

Product
Developer: Meta Platforms
Release date: July 2024
Industry: Information Technology

2024: Product Release

On July 23, 2024, Meta (recognized as an extremist organization; its activities are prohibited in the Russian Federation) announced the release of Llama 3.1, the world's largest open artificial intelligence model. It has 405 billion parameters and, according to Meta, surpasses GPT-4o and Anthropic's Claude 3.5 Sonnet on some benchmarks.

Llama 3.1, according to Meta, is considerably more capable than the previously released Llama 3 models. Training Llama 3.1 involved 16,000 Nvidia H100 graphics accelerators. As of the date of the announcement, Meta has not disclosed the development cost of Llama 3.1, but market participants estimate that the Nvidia chips alone would have cost hundreds of millions of dollars.


In addition to the 405-billion-parameter version (405B), the Llama 3.1 family includes models with 8 billion (8B) and 70 billion (70B) parameters. All of them have a context window of up to 128 thousand tokens and support English, German, French, Italian, Portuguese, Hindi, Spanish and Thai.

The Llama 3.1 8B model is suited to environments with limited computing resources and can handle tasks such as text summarization, classification and translation between languages. The 70B version is suited to content creation, conversational AI, language understanding and enterprise applications; according to Meta, it performs well at summarization, text classification, analysis, language modeling, code generation and similar tasks. The most powerful version, Llama 3.1 405B, targets the most complex problems, including mathematical calculations, long-form text generation and multilingual translation, and can also be used in advanced enterprise-level services.[1]
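The practical difference between the three model sizes is largely one of hardware requirements. As a rough back-of-envelope sketch (assuming 2 bytes per parameter for 16-bit weights and ignoring activation and cache memory, which add more on top), the weight footprint of each Llama 3.1 variant can be estimated as:

```python
def model_memory_gb(n_params_billion, bytes_per_param=2):
    """Approximate memory needed just to hold the weights, in GB (1 GB = 1e9 bytes).

    bytes_per_param=2 assumes 16-bit weights; use 1 for 8-bit quantization,
    4 for full fp32. This is an illustrative estimate, not an official figure.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# Parameter counts are taken from the article; the memory figures follow.
for name, size in [("Llama 3.1 8B", 8), ("Llama 3.1 70B", 70), ("Llama 3.1 405B", 405)]:
    print(f"{name}: ~{model_memory_gb(size):.0f} GB of weights at 16-bit precision")
```

By this estimate the 8B model (~16 GB) fits on a single consumer GPU, the 70B model (~140 GB) needs a multi-GPU server, and the 405B model (~810 GB) requires a cluster of accelerators, which is consistent with the positioning of the three variants described above.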

Notes