
SambaNova’s new Samba-CoE v0.2 AI beats Databricks DBRX



AI chip maker SambaNova Systems has announced a major achievement with its Samba-CoE v0.2 large language model (LLM).

The model runs at an impressive 330 tokens per second, outperforming several well-known competing models, including Databricks’ DBRX, which was just released yesterday, MistralAI’s Mixtral-8x7B, and Grok-1 from Elon Musk’s xAI.

What is particularly striking about this achievement is the model’s efficiency: it reaches these speeds without compromising accuracy, and it requires only 8 sockets to operate.

In fact, in our tests of the LLM, it responded to our inputs with blinding speed, producing a 425-word answer about the Milky Way at a rate of 330.42 tokens per second.


A question about quantum computing received a similarly robust and fast response, delivered at a whopping 332.56 tokens per second.

This makes it a far more efficient alternative to configurations that may require as many as 576 sockets while running at lower bit rates.
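For context, headline figures like these are computed by dividing the number of tokens generated by the wall-clock time of the response. The sketch below shows that arithmetic; the `fake_generate` stand-in and every name in it are hypothetical illustrations, not SambaNova’s actual API:

```python
import time


def measure_tokens_per_second(generate, prompt: str) -> float:
    """Time one generation call and return decode throughput.

    `generate` is any callable that takes a prompt and returns the
    list of generated tokens (a stand-in for a real model client).
    """
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed


def fake_generate(prompt: str) -> list:
    """Toy generator so the sketch runs end to end (hypothetical)."""
    time.sleep(0.5)              # simulate decoding latency
    return prompt.split() * 40   # simulate a stream of output tokens


if __name__ == "__main__":
    tps = measure_tokens_per_second(fake_generate, "Tell me about the Milky Way")
    print(f"{tps:.2f} tokens per second")
```

In practice, production benchmarks often separate time-to-first-token from steady-state decode speed, but the simple ratio above is how headline tokens-per-second numbers are generally reported.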

Efficiency improvement

SambaNova’s emphasis on using a smaller number of sockets while maintaining high bit rates demonstrates a significant improvement in computational efficiency and model performance.

The company is also working with LeptonAI on the upcoming release of Samba-CoE v0.3, signaling continued progress and innovation.

In addition, SambaNova Systems emphasizes that these advances are built on open-source models from Samba-1 and Sambaverse, using a unique ensembling and model-merging approach.

This strategy not only underpins the current version, but also provides a scalable and innovative path for future development.
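SambaNova has not published the internals of that pipeline, but the general idea behind a composition of experts can be sketched as a lightweight router that forwards each prompt to the most suitable specialist model. The sketch below is purely illustrative; the expert names and the keyword router are hypothetical stand-ins for fine-tuned models and a learned router:

```python
from typing import Callable, Dict

# Hypothetical specialists; in a real composition of experts these
# would be full fine-tuned LLMs, not one-line functions.
EXPERTS: Dict[str, Callable[[str], str]] = {
    "code":    lambda p: f"[code expert] {p}",
    "math":    lambda p: f"[math expert] {p}",
    "general": lambda p: f"[general expert] {p}",
}


def route(prompt: str) -> str:
    """Pick an expert for the prompt.

    Real systems typically use a small learned classifier; keyword
    matching here just keeps the sketch self-contained.
    """
    lowered = prompt.lower()
    if any(k in lowered for k in ("def ", "function", "compile")):
        return "code"
    if any(k in lowered for k in ("equation", "integral", "prove")):
        return "math"
    return "general"


def compose(prompt: str) -> str:
    """Dispatch each prompt to exactly one expert, so only that
    model needs to be active at answer time. That selectivity is
    the intuition behind CoE efficiency claims."""
    return EXPERTS[route(prompt)](prompt)


print(compose("Solve the equation x**2 = 2"))
```

Because only the selected expert runs for a given prompt, an ensemble of many small models can answer with roughly the latency and hardware footprint of one, which is consistent with the small socket counts SambaNova reports.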

Comparisons with other models such as GoogleAI’s Gemma-7B, MistralAI’s Mixtral-8x7B, Meta’s Llama 2 70B, Alibaba Group’s Qwen-72B, TII’s Falcon-180B, and BigScience’s BLOOM-176B demonstrate Samba-CoE v0.2’s competitive advantage in this field.

This announcement is likely to generate considerable interest in the AI and machine learning communities, sparking discussions about efficiency, performance, and the future of AI model development.

SambaNova’s background

SambaNova Systems was founded in Palo Alto, California, in 2017 by three co-founders: Kunle Olukotun, Rodrigo Liang, and Christopher Ré.

Initially focused on creating custom AI hardware chips, SambaNova quickly expanded its ambitions to a broader product portfolio: machine learning services; the SambaNova Suite, a comprehensive enterprise platform for AI training, development, and deployment launched in early 2023; and, earlier this year, Samba-1, a 1-trillion-parameter AI model composed of 50 smaller models in a “Composition of Experts.”

The evolution from hardware-focused startup to full-service AI innovator reflects the founders’ commitment to enabling scalable, accessible AI technology.

As SambaNova carves out a niche in the highly competitive AI industry, it has positioned itself as a strong competitor to established giants like Nvidia, raising $676 million in Series D funding in 2021 at a valuation of more than $5 billion.

Today, in addition to giants like Nvidia, the company competes with other dedicated AI chip startups like Groq.


