Meta launches new language model with human-like precision, disrupting the Big Tech AI race.

Meta Platforms Inc. announced on Friday that it is releasing a new large language model, the core software of an artificial intelligence system, to researchers, heating up an AI arms race as Big Tech companies rush to integrate the technology into their products and impress investors.

The public competition to dominate the AI technology arena began late last year with the release of Microsoft-backed OpenAI’s ChatGPT, prompting tech giants from Alphabet Inc. to China’s Baidu Ltd. to tout their own products.

Meta’s LLaMA, short for Large Language Model Meta AI, will be available under a non-commercial license to researchers and organizations affiliated with government, civil society, and academia, the company said in a blog post.

Large language models mine enormous quantities of text in order to summarize information and generate content. They can, for example, answer questions with sentences that read as though they were written by humans.

The model is trained on text from 20 languages, with a concentration on those using the Latin and Cyrillic alphabets, Meta said, and uses “much less” computing power than earlier offerings.

Gil Luria, senior software analyst at D.A. Davidson, said Meta’s statement “seems to be a step in proving their generative AI capabilities so they may apply them to their products in the future.”

He added that although Meta has less experience with it, “generative AI is a novel application of AI that is unquestionably vital for the future of their organization.”

AI has emerged as a bright spot for investment in a tech sector whose slowing growth has led to widespread layoffs and a pullback on experimental bets.

According to Meta, LLaMA can outperform rival models that use far more parameters, the variables the system takes into account.

More specifically, it claimed that a version of LLaMA with 13 billion parameters can outperform GPT-3, a predecessor of the model on which ChatGPT is based.

It described its 65-billion-parameter LLaMA model as “competitive” with Google’s Chinchilla70B and PaLM-540B, which are even larger than the model Google used to demonstrate its Bard chat-powered search.

A Meta spokeswoman said performance improved thanks to a larger amount of “cleaner” data and “architectural modifications” to the model.

Meta released a large language model, OPT-175B, in May of last year; it was likewise targeted at researchers and served as the basis for a new version of the company’s chatbot, BlenderBot.

Later, it unveiled a model called Galactica that could write academic papers and solve math problems, but it swiftly took the demo down after the model produced authoritative-sounding but false answers.
