AWS Makes Meta’s Llama 3.2 LLMs Available to Customers

DATE POSTED: September 25, 2024

Meta’s Llama 3.2 multilingual large language models (LLMs) are now generally available in Amazon Bedrock and Amazon SageMaker, and via Amazon Elastic Compute Cloud (Amazon EC2) using AWS Trainium and AWS Inferentia.

This availability will offer AWS customers more options for building, deploying and scaling generative artificial intelligence (AI) applications, Amazon said in a Wednesday (Sept. 25) update.
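For customers who want to experiment, a minimal sketch of calling one of the new Llama 3.2 models through the Amazon Bedrock runtime with boto3 might look like the following. The model identifier, region and prompt are illustrative assumptions; the exact model ID and regional availability should be confirmed in the Bedrock console.

```python
# Minimal sketch: calling a Llama 3.2 model via the Amazon Bedrock Converse API.
# The model ID and region below are illustrative; verify the exact identifier
# and availability in your own AWS account before use.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="meta.llama3-2-11b-instruct-v1:0",  # assumed ID; check the Bedrock console
    messages=[
        {"role": "user", "content": [{"text": "Summarize the key features of Llama 3.2."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

# The Converse API returns the assistant reply as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```

A similar request could be routed to any of the Llama 3.2 variants by swapping the model ID, which is one reason the Converse API is a convenient starting point for comparing them.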

“The Llama 3.2 collection builds on the success of previous Llama models to offer new, updated and highly differentiated models, including small and medium-sized vision LLMs that support image reasoning and lightweight, text-only models optimized for on-device use cases,” the update said. “The new models are designed to be more accessible and efficient, with a focus on responsible innovation and safety.”

The collection includes Llama 3.2 11B Vision and Llama 3.2 90B Vision, which are Meta’s first multimodal vision models; Llama 3.2 1B and Llama 3.2 3B, which are optimized for edge and mobile devices; and Llama Guard 3 11B Vision, which is fine-tuned for content safety classification, according to the update.

“According to Meta, Llama 3.2 models have been evaluated on over 150 benchmark datasets, demonstrating competitive performance with leading foundation models,” the update said. “Similar to Llama 3.1, all of the models support a 128K context length and multilingual dialogue use cases across eight languages, spanning English, German, French, Italian, Portuguese, Hindi, Spanish and Thai.”

Meta announced Llama 3.2, a major advance in its open-source AI model series, on Wednesday at its annual Connect conference.

The vision models can analyze images, understand charts and graphics, and perform visual grounding tasks, PYMNTS reported Wednesday. The lightweight models, optimized for on-device use, support multilingual text generation and tool calling, enabling developers to build personalized applications that, according to Meta, prioritize user privacy.
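As a rough illustration of the image-reasoning use case, the same Converse API accepts image content blocks alongside text. The sketch below assumes a hypothetical local chart image and an illustrative 90B Vision model ID; both are assumptions, not confirmed identifiers.

```python
# Sketch of image reasoning with a Llama 3.2 vision model via the Bedrock Converse API.
# The model ID and file path are assumptions for illustration only.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical chart image; any PNG or JPEG bytes can be passed as an image block.
with open("quarterly_revenue_chart.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="meta.llama3-2-90b-instruct-v1:0",  # assumed ID; verify in your account
    messages=[
        {
            "role": "user",
            "content": [
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
                {"text": "What trend does this chart show?"},
            ],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```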

“It’s only been a year and a half since we first announced Llama, and we’ve made incredible progress in such a short amount of time,” Meta said in a Wednesday blog post. “This year, Llama has achieved 10x growth and become the standard for responsible innovation. Llama also continues to lead on openness, modifiability and cost efficiency, and it’s competitive with closed models — even leading in some areas.”
