Meta has reportedly confirmed that it will release a new open source large language model (LLM) within the next month.

TechCrunch reported that at an event in London on Tuesday (March 10), Nick Clegg, Meta’s president of global affairs, said: “Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3.

“There will be a number of different models with different capabilities and different versatilities [released] during the course of this year, starting really very soon,” he added.

The Information previously reported that a smaller version of Llama 3 could be released as early as next week, with the full open-source model still due in July. That full version is expected to compete with Claude 3 and GPT-4.

Meta’s chief product officer, Chris Cox, added that the plan is to use Llama 3 to power multiple products across Meta.

The news comes as Meta attempts to catch up in the highly competitive field of generative artificial intelligence. Llama 3’s predecessor, Llama 2, was released in July 2023 and was criticized for its limited capabilities.

“Our goal is to make Meta AI, powered by Llama, the most useful assistant in the world,” said Joelle Pineau, vice president of AI research. “However, there is still a lot of work to be done to get there.”

Meanwhile, Yann LeCun, chief AI scientist at Meta, believes the answer lies not in generative AI but in the Joint Embedding Predictive Architecture (JEPA). According to Meta, unlike generative methods that try to fill in every missing pixel, JEPA can flexibly discard unpredictable information, thereby improving training and sample efficiency.

What is Llama 3?

Llama 3 is a family of large language models ranging from very small versions (intended to compete with models like Claude Haiku or Gemini Nano) to larger versions fully equipped for reasoning and complex responses, similar to GPT-4 or Claude Opus.

Details about Llama 3 are still scarce. However, it is expected to follow in the footsteps of previous versions by being open source, and it is also expected to be multimodal, capable of understanding both visual and textual input.

Llama 3 is thought to come in various versions and sizes, with the smallest having 7 billion parameters and the largest having about 140 billion parameters. Even the largest, however, would still be far smaller than the more than a trillion parameters GPT-4 is rumored to have.

Featured Image: Canva

