
Symbolica hopes to stop the AI arms race by betting on symbolic models

In February of this year, Demis Hassabis, CEO of Google’s DeepMind artificial intelligence research lab, warned that pouring more and more computing power into the types of AI algorithms widely used today may lead to diminishing returns. Reaching the “next level” of artificial intelligence, Hassabis said, will require breakthroughs in basic research that yield viable alternatives to today’s entrenched approaches.

Former Tesla engineer George Morgan agrees. So he founded a startup, Symbolica AI, to do just that.

“Traditional deep learning and generative language models require unimaginable scale, time and effort to produce useful results,” Morgan told TechCrunch. “By building [novel] models, Symbolica can achieve higher accuracy with lower data requirements, less training time and lower cost, and provide provably correct structured output.”

Morgan dropped out of the University of Rochester to join Tesla, working on the team developing Autopilot, Tesla’s suite of advanced driver-assistance features.

Morgan said that while working at Tesla, he came to realize that current approaches to artificial intelligence, most of which revolve around scaling up compute, were unsustainable in the long term.

“The current approach has only one dial to turn: scale up and hope for emergent behavior,” Morgan said. “However, scaling requires more compute, more memory, more training dollars and more data. But ultimately, [this] won’t make you perform significantly better.”

Morgan isn’t the only one to reach this conclusion.

In a memo this year, two executives at semiconductor maker TSMC said that if artificial intelligence trends continue at their current pace, the industry will need a 1-trillion-transistor chip, one containing 10 times as many transistors as the average chip today, within a decade.

It’s unclear whether this is technically possible.

Separately, a report co-authored by Stanford University and the independent artificial intelligence research organization Epoch AI found that the cost of training cutting-edge artificial intelligence models has risen dramatically over the past year. The report’s authors estimate that OpenAI and Google spent approximately $78 million and $191 million, respectively, to train GPT-4 and Gemini Ultra.

With costs still set to climb — see OpenAI and Microsoft’s reported $100 billion AI data center plans — Morgan began working on what he calls “structured” AI models. These structured models encode the underlying structure of the data (hence the name) rather than trying to derive approximate insights from huge data sets the way traditional models do, allowing them to achieve what Morgan describes as better performance with less overall computation.

“Domain-specific structured reasoning capabilities can be generated in smaller models,” he said, “combining deep mathematical toolkits with breakthroughs in deep learning.”
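Symbolica hasn’t published the details of its approach, but a toy example can illustrate what “provably correct structured output” means in principle: if a generator can only expand the rules of a formal grammar, everything it emits is syntactically valid by construction, with no post-hoc filtering required. The sketch below, in Python with a made-up arithmetic grammar, is an illustration of that general idea rather than Symbolica’s technology.

```python
import random

# A toy context-free grammar for arithmetic expressions. Any string
# produced by expanding "expr" is syntactically valid by construction,
# so no output ever needs post-hoc validation.
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["factor", "*", "term"], ["factor"]],
    "factor": [["(", "expr", ")"], ["number"]],
    "number": [["1"], ["2"], ["3"]],
}

def generate(symbol="expr", depth=0, max_depth=4):
    """Expand a grammar symbol into tokens, bounding recursion depth."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal token
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = rules[-1:]  # past the depth limit, take the least recursive rule
    tokens = []
    for sym in random.choice(rules):
        tokens.extend(generate(sym, depth + 1, max_depth))
    return tokens

expr = "".join(generate())
print(expr)        # e.g. "(1+2)*3"
print(eval(expr))  # always parses: validity is guaranteed by the grammar
```

The guarantee here is structural: validity comes from the grammar itself rather than from checking outputs after the fact.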

Structured models, better known as symbolic artificial intelligence, are not a new concept. They date back decades and are rooted in the idea that artificial intelligence can be built on symbols that represent knowledge using a set of rules.

Symbolic AI solves tasks by defining a set of symbolic manipulation rules that are specialized for a specific job, such as editing lines of text in word processing software. This is in contrast to neural networks, which try to solve tasks through statistical approximation and learning from examples.
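To make the contrast concrete, here is a minimal rule-based symbolic system in Python: a classic textbook example (symbolic differentiation), not anything Symbolica has published. Every answer follows from a fixed, inspectable rule rather than from statistics over examples.

```python
# Expressions are nested tuples, e.g. ("*", "x", "x") for x*x.
# Each branch below is one hand-written rewrite rule; nothing here
# is learned from data.

def diff(expr, var="x"):
    if expr == var:                           # d/dx x = 1
        return 1
    if isinstance(expr, (int, float, str)):   # constants and other variables
        return 0
    op, a, b = expr
    if op == "+":                             # sum rule
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                             # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"no rule for operator {op!r}")

# d/dx (x*x + 3) -> (1*x + x*1) + 0: unsimplified, but every node
# in the answer came from exactly one named rule.
print(diff(("+", ("*", "x", "x"), 3)))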

Neural networks are the cornerstone of powerful artificial intelligence systems such as OpenAI’s DALL-E 3 and GPT-4. But, Morgan claims, they are not the be-all and end-all; he believes symbolic AI may actually be better suited to efficiently encoding knowledge of the world, reasoning through complex scenarios and “explaining” how it arrives at its answers.

“Our model is more reliable, more transparent and more accountable,” Morgan said. “Structured reasoning capabilities have huge commercial applications, especially code generation: reasoning over large codebases and generating useful code, where existing products fall short.”

Symbolica’s product, built by a 16-person team, is a toolkit for creating symbolic AI models along with models pre-trained for specific tasks, including generating code and proving mathematical theorems. The exact business model is still in flux, but Morgan said Symbolica may offer consulting services and support to companies that want to use its technology to build custom models.

Today marks Symbolica’s launch out of stealth, so the company has no customers — at least none willing to talk publicly. However, Morgan did reveal that Symbolica raised $33 million earlier this year in a round led by Khosla Ventures. Other investors include Abstract Ventures, Buckley Ventures, Day One Ventures and General Catalyst.

$33 million is no small sum; Symbolica’s backers are clearly confident in the startup’s science and roadmap. Vinod Khosla, founder of Khosla Ventures, told me via email that he believes Symbolica is “solving some of the most important challenges facing the AI industry today.”

“To achieve large-scale commercial AI adoption and regulatory compliance, we need models with structured outputs that can achieve greater accuracy with fewer resources,” said Khosla. “George has assembled one of the best teams in the industry to do this.”

But others are less convinced that symbolic AI is the right path forward.

Os Keyes, a Ph.D. candidate at the University of Washington focusing on law and data ethics, noted that symbolic AI models rely on highly structured data, which makes them “extremely fragile” and dependent on context and specificity. In other words, symbolic AI requires well-defined knowledge to function, and defining that knowledge can be highly labor-intensive.

“It would still be interesting if it combined the advantages of deep learning and symbolic methods,” Keyes said, referring to DeepMind’s recently released AlphaGeometry, which combines neural networks with symbolic AI-inspired algorithms to solve challenging geometry problems. “But time will tell.”
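AlphaGeometry’s actual architecture is far more sophisticated, but the hybrid pattern Keyes is pointing to, in which a learned model proposes and a symbolic engine verifies, can be sketched in a few lines. Everything below (the random “proposer,” the toy polynomial) is purely illustrative and not drawn from DeepMind’s system.

```python
import random

def neural_propose():
    """Stand-in for a learned model: blindly guesses an integer."""
    return random.randint(-10, 10)

def symbolic_verify(x):
    """Exact symbolic check: is x a root of x^3 - 6x^2 + 11x - 6?"""
    return x**3 - 6 * x**2 + 11 * x - 6 == 0

def solve(max_tries=1000):
    """Propose-and-verify loop: only candidates that pass the exact
    check are ever returned, so any answer is guaranteed correct."""
    for _ in range(max_tries):
        candidate = neural_propose()
        if symbolic_verify(candidate):
            return candidate
    return None  # the proposer never found a verifiable answer

print(solve())  # 1, 2, or 3: the polynomial's actual roots
```

The division of labor is the point: the proposer can be sloppy and statistical, because the symbolic verifier filters out everything that isn’t provably right.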

Morgan counters that current training methods will soon be unable to meet the needs of companies looking to leverage AI for their purposes, making any promising alternative worth investing in. And, he claims, Symbolica is well positioned to capitalize: its latest funding gives it “a few years” of runway, and its models are relatively small (and therefore cheap) to train and run.

“For example, tasks such as automating software development at scale will require models with formal reasoning capabilities and cheaper operating costs to parse large codebases and generate and iterate on useful code,” he said. “The public perception of AI models is still that ‘scale is all you need.’ To make progress in the field, symbolic thinking is absolutely necessary: structured, interpretable output with formal reasoning capabilities is needed to satisfy the demand.”

Points of differentiation aside, there’s nothing stopping large AI labs like DeepMind from building their own symbolic or hybrid models, and Symbolica is entering an extremely crowded and well-capitalized AI space. But Morgan still anticipates growth and expects San Francisco-based Symbolica to double its headcount by 2025.
