Could Llama 3 Outshine ChatGPT and Gemini?


The Gist

  • More AI, the merrier? Meta introduces an updated Llama large language model for AI development.
  • On the hunt for OpenAI and Google. For the first time, Llama is being added to Meta AI, giving users a prompt-and-response experience similar to ChatGPT and Gemini.
  • Meta’s roadmap. The combination of Llama 3 and Meta AI brings exciting new enhancements to Instagram, WhatsApp, Facebook Messenger and Facebook, as well as new challenges.

Last year, Facebook parent company Meta entered the AI development race by launching Llama 2. This week, Meta launched an updated Llama — Llama 3 — and added its latest large language model to its budding AI assistant, Meta AI.

Meta’s release of Llama 3 and its integration into Meta AI represent a significant leap in AI-driven functionality across its social media platforms, notably Instagram, WhatsApp, Facebook Messenger and Facebook. This update enhances user engagement by enabling more interactive and personalized content creation, such as image generation and AI assistance directly within the platforms’ interfaces.

For marketers, this development opens new avenues for innovative content strategies that leverage AI to create engaging visuals and responsive communication tools.

Meta didn’t waste the opportunity to establish its place among the AI giants with its latest advancements, taking a veiled shot at OpenAI’s ChatGPT along the way: “Thanks to our latest advances with Meta Llama 3,” Meta officials wrote in an April 18 blog post, “we believe Meta AI is now the most intelligent AI assistant you can use for free – and it’s available in more countries across our apps to help you plan dinner based on what’s in your fridge, study for your test and so much more.”

Let’s look at what is new in Llama 3 and how image creation appears in Meta AI.

Introducing Llama 3: Meta’s New Variations and Advanced Capabilities

Meta launched Llama 3 in two open-source variations, the Llama 3 8B and 70B models. The models feature improved pre-training and post-training; Meta claims these improvements lead to more accurate responses and a broader base of knowledge to query within the large language model.

Among the developer features being touted is enhanced integration with Torchtune, a PyTorch extension library designed to let programmers fine-tune LLM parameters. PyTorch is a popular machine learning library used by Python and R developers (as I noted in the Llama 2 post, Meta originally developed PyTorch), so the additional integration will entice a large number of developers to invest in learning and using Llama 3.
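
Torchtune itself is driven by ready-made recipes and YAML configs rather than hand-written training loops, but to give a rough sense of what parameter-efficient fine-tuning of Llama 3 looks like in a PyTorch-based workflow, here is a minimal sketch that uses the Hugging Face transformers and peft libraries as a stand-in. The model ID, target modules and hyperparameters are illustrative assumptions, not values taken from Meta’s or Torchtune’s documentation.

```python
# Illustrative LoRA fine-tuning setup for Llama 3 8B (a stand-in for a
# Torchtune recipe). Requires: transformers, peft, accelerate, and access
# to the gated meta-llama repo on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"  # assumes Meta's license has been accepted
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Attach low-rank adapters to the attention projections so that only a small
# fraction of the 8B parameters are updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                                # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # typically well under 1% trainable

# From here the wrapped model can be handed to any standard PyTorch or
# Hugging Face Trainer training loop over a fine-tuning dataset.
```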

Further enticing developers is wider availability across development environments. The Llama 3 models are available on a wide variety of LLM development platforms, such as AWS, Databricks and Hugging Face, and support is also offered on hardware platforms from AMD, Dell, Intel, NVIDIA and Qualcomm.
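
For readers who want to experiment with the open weights directly, a minimal sketch of loading the Llama 3 8B Instruct checkpoint from Hugging Face and running a single chat turn might look like the following. The repo is gated, so this assumes Meta’s license has been accepted on the Hub and that enough GPU memory is available for an 8B model.

```python
# Minimal sketch: one chat completion with the Llama 3 8B Instruct weights
# hosted on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Llama 3 ships with a chat template, so prompts are built from role/content
# messages rather than raw strings.
messages = [
    {"role": "user", "content": "Plan a dinner based on chicken, rice and spinach."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same checkpoints can also be served through the managed offerings on AWS, Databricks and the other platforms listed above, so a local GPU is not strictly required.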

Over the next few months, Meta will introduce additional model performance features, such as the capacity to process longer context windows, along with additional model sizes. The model sizes are notable: in my post on Llama 2, I noted that smaller models are one of the active research areas among AI developers, since strong performance at a smaller size would allow AI to be better designed for devices and lightweight applications. The performance achievements will be explored in an upcoming research paper on the development of Llama 3.

Related Article: How Meta’s Llama 2 Shifts Marketing’s Relationship With AI

Meta AI: Elevating User Experience with an On-Demand AI Tool

The big news in the Llama 3 launch is its addition to Meta AI, an AI assistant that can be accessed online via a free dedicated website or through the search text boxes of the Meta social media platforms.

At the Meta AI website, users see an interface very similar to that of other generative AI assistants like Perplexity and ChatGPT. Users interested in image creation use the second button on the left to navigate to the image creation page, where they can prompt Meta AI to create an image. Image suggestions appear while the prompt is being typed, which differs significantly from DALL-E in ChatGPT or Perplexity, where users have to wait for suggested images to appear.


