
5 Artificial Intelligence Trends to Watch in 2024

The AI trend appears to be following a hype and adoption trajectory similar to previous enterprise technology trends like cloud computing and machine learning, although it differs in significant ways, including:

  • AI requires massive amounts of computing to digest and recreate unstructured data.
  • Artificial intelligence is changing the way some organizations think about organizational structures and careers.
  • Artificial intelligence content that can be mistaken for photos or original artwork is shaking up the art world, and some worry it could be used to influence elections.

Here are our predictions for five trends in artificial intelligence (specifically, generative AI) to watch in 2024.

AI adoption increasingly looks like integration with existing applications

Many generative AI use cases for businesses and enterprises integrate with existing applications rather than creating entirely new ones. The most striking example is the proliferation of copilots, or generative AI assistants. Microsoft offers Copilot alongside its 365 suite of products, businesses such as SoftServe and many others provide copilots for industrial work and maintenance, and Google offers a variety of copilots for everything from video creation to security.

But all of these copilots are designed to sift through existing content or to create content that reads like what a human would have written for the job.

SEE: Is Google Gemini or ChatGPT better for work? (TechRepublic)

Even IBM has called for a reality check on the popular technology, noting that tools like Google’s 2018 Smart Compose are technically “generative” but aren’t credited with changing the way we work. The key difference between Smart Compose and contemporary generative AI is that some of today’s AI models are multimodal, meaning they can create and interpret pictures, videos, and diagrams.

“I think by 2024 we’re going to see a lot of innovation around (multimodality),” Arun Chandrasekaran, distinguished vice president and analyst at Gartner, said in a conversation with TechRepublic.

At NVIDIA GTC 2024, many of the startups at the show were running chatbots on Mistral AI’s large language models, as the open models can be used to create custom-trained AI with access to company data. Using proprietary training data, such an AI can answer questions about specific products, industrial processes, or customer service without feeding proprietary company information into a trained model that might expose that data to the public internet. There are many other open models for text and images, including Meta’s Llama 2, Stability AI’s model suite (including Stable LM and Stable Diffusion), and the Falcon series from Abu Dhabi’s Technology Innovation Institute.

“There is a lot of interest in bringing corporate data into the LLM as a way to build models and add context,” said Chandrasekaran.

Customizing open models can be accomplished in a variety of ways, including prompt engineering, retrieval-augmented generation, and fine-tuning.
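To make the distinction between these customization approaches concrete, here is a minimal sketch. All names (the ACME support assistant, the knowledge-base layout, the keyword-matching retrieval) are invented for illustration; prompt engineering and retrieval-augmented generation change what the model sees at inference time, while fine-tuning (not shown, since it requires a training loop) changes the model’s weights.

```python
# Toy sketch of two inference-time customization approaches. A real system
# would send the returned prompt string to an LLM; here we only build it.

def prompt_engineering(question: str) -> str:
    """Steer a general-purpose model by prepending instructions to the prompt."""
    return (
        "You are a support assistant for ACME industrial pumps. "
        "Answer only from official documentation.\n\n"
        f"Question: {question}"
    )

def retrieval_augmented(question: str, knowledge_base: dict[str, str]) -> str:
    """Look up relevant company documents and inject them as context."""
    # Toy retrieval: keep documents whose title shares a word with the question.
    relevant = [
        text for title, text in knowledge_base.items()
        if any(word in question.lower() for word in title.lower().split())
    ]
    context = "\n".join(relevant) or "No matching documents."
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The practical appeal described in the article follows from this shape: the proprietary documents live outside the model and are injected per-request, so they never become part of the trained weights.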

AI agents

Another way in which AI may become more integrated with existing applications by 2024 is through AI agents, which Chandrasekaran calls a “bifurcation” of AI progress.

AI agents can automate the tasks that other AI bots perform, meaning users don’t have to prompt each model individually; instead, they can give the agent a natural-language instruction, and the agent assembles and chains together the different commands needed to carry it out.

Sachin Katti, senior vice president and general manager of Intel’s Network and Edge Group, also mentioned AI agents, saying at a pre-briefing ahead of the Intel Vision conference (April 9 to 11) that AI agents could delegate work to one another and eventually perform the tasks of an entire department.
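The agent pattern described above can be illustrated with a toy example (not any vendor’s API; the bots, tool names, and keyword-based planner are all invented). One natural-language instruction is broken into a plan, and each step is routed to a specialized bot:

```python
# Toy agent: plan which specialized "bots" an instruction needs, then
# chain them so the output of one step becomes the input of the next.

def summarize_bot(text: str) -> str:
    """Stand-in for a summarization model: truncate the text."""
    return text[:40] + "..."

def translate_bot(text: str) -> str:
    """Stand-in for a translation model: tag the text."""
    return f"[translated] {text}"

TOOLS = {"summarize": summarize_bot, "translate": translate_bot}

def agent(instruction: str, document: str) -> str:
    """Build a plan from the instruction, then execute its steps in order."""
    # A real agent would ask an LLM to produce this plan; here we keyword-match.
    plan = [name for name in ("summarize", "translate") if name in instruction.lower()]
    result = document
    for step in plan:
        result = TOOLS[step](result)
    return result
```

For example, `agent("Summarize this report and translate it", report_text)` runs the summarizer first and the translator second, which is the “combine the different commands” behavior the article attributes to agents.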

Retrieval-augmented generation powers leading enterprise AI

Retrieval-augmented generation (RAG) allows LLMs to check their answers against external sources before providing them. For example, an AI can check its answers against a technical manual and give users footnotes that link directly to the manual. RAG is designed to increase accuracy and reduce hallucinations.

RAG gives organizations a way to improve the accuracy of their AI models without skyrocketing costs. RAG can produce more accurate results than the other common methods of adding enterprise data to an LLM, prompt engineering and fine-tuning. It was a hot topic in early 2024 and will likely continue to be so later this year.
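The footnote behavior described above can be sketched as follows. The manual contents, section numbers, and stopword-filtering retrieval are invented for illustration; a production RAG system would use embedding-based search and an LLM to phrase the answer, but the cite-your-sources shape is the same.

```python
# Toy RAG answerer: retrieve matching manual passages, then cite each one
# with a numbered footnote pointing back to its section.

MANUAL = {
    "section 3.2": "Replace the filter every 500 operating hours.",
    "section 5.1": "Torque the housing bolts to 25 Nm.",
}

def answer_with_footnotes(question: str) -> str:
    """Answer from retrieved passages only, with [n] footnotes to sources."""
    words = [w for w in question.lower().split() if len(w) > 3]  # skip short stopwords
    hits = [
        (ref, text) for ref, text in MANUAL.items()
        if any(w in text.lower() for w in words)
    ]
    if not hits:
        return "Not found in the manual."
    body = " ".join(f"{text} [{i}]" for i, (_, text) in enumerate(hits, 1))
    notes = "\n".join(f"[{i}] Manual, {ref}" for i, (ref, _) in enumerate(hits, 1))
    return f"{body}\n{notes}"
```

Because the answer is assembled only from retrieved passages, a reader can follow each footnote back to the manual, which is how RAG aims to curb hallucinations.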

Organizations quietly express concerns about sustainability

Artificial intelligence is used to create climate and weather models that predict catastrophic events. At the same time, generative AI requires a lot of energy and resources compared to traditional computing.

What does this mean for AI trends? Optimistically, awareness of energy-intensive processes will encourage companies to build more efficient hardware to run these processes or right-size their use. On a less optimistic note, generative AI workloads may continue to consume large amounts of power and water. Either way, generative AI could become a contributing factor to the national conversation about energy use and grid resiliency. AI regulation currently focuses on use cases, but in the future, its energy use may also be subject to specific regulation.

Tech giants are addressing sustainability in their own ways: Google buys solar and wind energy in certain regions, and NVIDIA claims to save energy in data centers by using fewer server racks with more powerful GPUs while still running AI workloads.

Energy use of AI data centers and chips

The 100,000 AI servers NVIDIA expects to ship to customers this year will consume an estimated 5.7 to 8.9 terawatt-hours of electricity annually, a small fraction of the electricity used by today’s data centers, according to a paper published in October 2023 by PhD candidate Alex de Vries. But the paper predicts that by 2027, NVIDIA alone could add 1.5 million AI servers to the power grid, and those servers would use 85.4 to 134.0 TWh per year, a far more serious impact.

Another study found that using Stable Diffusion XL to create 1,000 images produced the same amount of carbon dioxide as driving an average gasoline-powered car 4.1 miles.

“We found that, even when controlling for the number of model parameters, general-purpose generative architectures are orders of magnitude more expensive than task-specific systems for a variety of tasks,” wrote Hugging Face researchers Alexandra Sasha Luccioni and Yacine Jernite and Carnegie Mellon University’s Emma Strubell.

Microsoft AI researcher Kate Crawford pointed out in Nature that training GPT-4 used about 6% of the local district’s water.

The changing role of artificial intelligence experts

Prompt engineering was one of the hottest skills in tech in 2023, with people vying for six-figure salaries to coax ChatGPT and similar products into generating useful responses. The hype has waned somewhat, and, as mentioned above, many businesses that use generative AI heavily have customized their own models. In the future, prompt engineering may become part of a software engineer’s daily tasks rather than a specialization: just part of how software engineers carry out their day-to-day responsibilities.

Using artificial intelligence for software engineering

“The use of artificial intelligence in software engineering is one of the fastest-growing use cases we see today,” said Chandrasekaran. “I believe prompt engineering will become an important skill across organizations, because anyone who interacts with AI systems (which will be many of us in the future) will have to know how to guide and bootstrap these models. But of course, people in the software engineering field will need to really understand prompt engineering at scale and some of its advanced techniques.”

As for how AI roles are distributed, that will largely depend on the individual organization. It remains to be seen whether most people who do prompt engineering will use prompt engineer as their job title.

Administrative positions related to artificial intelligence

A January 2024 survey of data and technology executives conducted by the MIT Sloan Management Review found that organizations sometimes cut chief artificial intelligence officer positions. There is “some confusion” about the responsibilities of hyper-specialized leaders such as AI or data officers, and 2024 may see a normalization of “overarching technology leaders” who create value from data and report to the CEO, regardless of where that data originates.

SEE: The responsibilities of an AI leader and why organizations should have one. (TechRepublic)

Chief data and analytics officers and chief artificial intelligence officers, on the other hand, are “not as common” but are increasing in number, Chandrasekaran said. It’s difficult to predict whether the two will remain roles separate from the CIO or CTO, but that may depend on what core capabilities the organization is looking for and whether the CIO finds themselves balancing too many other responsibilities at once.

“We do see these roles (AI officer and data and analytics officer) popping up more and more in our conversations with clients,” Chandrasekaran said.

On March 28, 2024, the U.S. Office of Management and Budget issued guidance on the use of artificial intelligence within federal agencies, which includes a requirement for all such agencies to designate a chief artificial intelligence officer.

Both AI art and the glazing of AI art are becoming more common

As art software and stock photo platforms embrace the gold rush of easily generated images, artists and regulators are looking for ways to identify AI-generated content to avoid misinformation and theft.

AI art is becoming increasingly common

Adobe Stock now offers tools to create and tag AI art in its catalog of stock images. On March 18, 2024, Shutterstock and NVIDIA announced the launch of an early access version of a 3D image generation tool.

OpenAI recently promoted filmmakers using its photorealistic Sora AI. The demos were criticized by artist advocates, including Fairly Trained CEO Ed Newton-Rex (a former Stability AI employee), who wrote: “Calling them ‘artists’: when you solicit positive reviews of your generative AI model from a handful of creators, while training on people’s work without permission/payment.”

By 2024, two possible responses to AI artwork may further develop: watermarking and glazing.

Watermark AI Art

The leading standard for watermarking comes from the Coalition for Content Provenance and Authenticity; OpenAI (Figure A) and Meta both tag images generated by their AI models. However, watermarks that appear visually or in the metadata are easily removed, and some say watermarking doesn’t go far enough to prevent misinformation, especially around the 2024 U.S. election.

Figure A

Metadata on images generated by DALL-E reveals the image’s provenance.

SEE: Last year, the U.S. federal government and leading AI companies agreed to a series of voluntary commitments that include watermarking. (TechRepublic)

Poisoning original art against artificial intelligence

Artists who want to prevent AI models from being trained on original art posted online can use Glaze or Nightshade, two data poisoning tools made by researchers at the University of Chicago. Data poisoning tweaks the artwork just enough to make it unreadable by AI models. With AI image generation and the protection of artists’ original works remaining a focus in 2024, more tools like these are likely to emerge.

Is artificial intelligence overhyped?

Artificial intelligence was so popular in 2023 that it is almost inevitably over-hyped in 2024, but that doesn’t mean it lacks practical applications. In late 2023, Gartner announced that generative AI had reached the “peak of inflated expectations,” the proverbial height of hype before an emerging technology becomes practical and normalized. The peak is followed by the “trough of disillusionment,” then the “slope of enlightenment,” and ultimately the “plateau of productivity.” Arguably, generative AI’s position at or near the peak means it is overhyped. However, many other technologies have gone through the hype cycle before, and many eventually reached the plateau of productivity after the initial boom.

