Adobe said it is building an artificial intelligence model to generate video. But it didn't reveal when exactly the model will be released, or much beyond the fact that it exists.
Adobe's model, part of the company's growing Firefly family of generative AI products, comes in part as a response to OpenAI's Sora, Google's Imagen 2 and models from a growing number of startups in the nascent field of generative AI video. Adobe said it will make its way into Premiere Pro, the company's flagship video editing suite, later this year.
Like many of today's generative AI video tools, Adobe's model creates footage from scratch (from either a prompt or reference images), and it will power three new features in Premiere Pro: object addition, object removal and generative extend.
They are pretty self-explanatory.
Object addition lets users select a segment of a video clip (say, the upper third or lower left corner) and then enter a prompt to insert an object in that segment. During a briefing with TechCrunch, an Adobe spokesperson showed a still of a real-world briefcase filled with diamonds generated by Adobe's model.
Object removal removes objects from the clip, such as a boom microphone or coffee cup in the background of a shot.
As for generative extend, it adds a few frames to the beginning or end of a clip (unfortunately, Adobe won't reveal how many). It isn't meant to generate entire scenes, but rather to add buffer frames to sync with a soundtrack or hold a shot for an extra beat, for instance to add emotional weight.
To address the deepfake concerns that inevitably accompany generative AI tools like these, Adobe said it will bring Content Credentials (metadata used to identify AI-generated media) to Premiere. Content Credentials, a media provenance standard Adobe backs through its Content Authenticity Initiative, already exist in Photoshop and are baked into Adobe's image-generating Firefly models. In Premiere, they will indicate not only which content is AI-generated, but also which AI model was used to generate it.
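For readers unfamiliar with the standard, here's a minimal sketch of the kind of provenance metadata a Content Credentials (C2PA-style) manifest carries. The field names follow the public C2PA and IPTC conventions, but the model name is a placeholder and Premiere's actual output may well differ:

```python
import json

# Illustrative sketch of a Content Credentials (C2PA-style) manifest.
# Field names follow public C2PA/IPTC conventions; the softwareAgent
# value is a placeholder, since Adobe hasn't named its video model.
manifest = {
    "claim_generator": "Adobe Premiere Pro",  # app that produced the claim
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        # Marks the content as machine-generated media.
                        "action": "c2pa.created",
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                        # Which AI model generated the content -- the detail
                        # Adobe says Premiere's credentials will include.
                        "softwareAgent": "Adobe Firefly Video Model (placeholder)",
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```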
I asked Adobe what data (images, videos and so on) it used to train the model. The company wouldn't say, nor would it say how (or whether) contributors to the dataset are being compensated.
Last week, Bloomberg reported, citing people familiar with the matter, that Adobe is paying photographers and artists on its stock media platform, Adobe Stock, up to $120 to submit short video clips to train its video generation model. Compensation reportedly ranges from about $2.62 per minute of video to around $7.25 per minute depending on the submission, with higher-quality footage commanding higher rates.
This would be a departure from Adobe's current arrangement with Adobe Stock artists and photographers, whose work Adobe uses to train its image generation models. The company pays those contributors annual bonuses, rather than one-time fees, based on how much content they have in Stock and how that content is used, though the bonuses are governed by an opaque formula and aren't guaranteed every year.
The Bloomberg report, if accurate, describes an approach that contrasts sharply with that of generative video AI rivals like OpenAI, which is said to have scraped publicly available web data, including videos from YouTube, to train its models. YouTube CEO Neal Mohan recently said that using YouTube videos to train OpenAI's text-to-video generator would violate the platform's terms of service, underscoring the legal tenuousness of OpenAI's and other companies' fair use arguments.
Companies including OpenAI have been sued over allegations that they violated intellectual property law by training AI on copyrighted content without crediting or paying the owners. Adobe seems intent on avoiding this outcome. Like its sometime generative AI rivals Shutterstock and Getty Images (which also have arrangements in place to license model training data), it is positioning itself, through its intellectual property indemnification policy, as a verifiably "safe" option for enterprise customers.
On the subject of money, Adobe hasn't revealed how much it will cost customers to use the upcoming video generation features in Premiere; presumably, pricing is still being hammered out. But the company did reveal that the payment scheme will follow the generative credits system established with its earlier Firefly models.
For customers with paid Adobe Creative Cloud subscriptions, generative credits refresh monthly, with allotments ranging from 25 to 1,000 per month depending on the plan. Generally, more complex workloads (such as higher-resolution generated images or multiple image generations) consume more credits.
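As a rough illustration of how a credit-budget scheme like this behaves, here's a minimal sketch. The 25 and 1,000 allotments are the range Adobe cites above; the plan names and per-job costs are hypothetical placeholders, since Adobe hasn't priced video generation:

```python
# Minimal sketch of a monthly generative-credits budget. The 25/1,000
# allotments match the range Adobe cites for its plans; the plan names
# and per-job costs below are hypothetical, since video pricing is
# unannounced.
PLAN_ALLOTMENTS = {"entry_plan": 25, "top_plan": 1000}

HYPOTHETICAL_COSTS = {
    "image_standard": 1,   # a basic image generation
    "image_high_res": 4,   # more complex jobs consume more credits
    "video_extend": 10,    # placeholder: video pricing is unannounced
}

def credits_remaining(plan: str, jobs: list[str]) -> int:
    """Credits left this month after running the given generation jobs."""
    spent = sum(HYPOTHETICAL_COSTS[job] for job in jobs)
    return PLAN_ALLOTMENTS[plan] - spent

# e.g. a top-tier plan after one high-res image and one video extension:
print(credits_remaining("top_plan", ["image_high_res", "video_extend"]))  # 986
```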
The big question on my mind is: will Adobe's AI video features be worth whatever they end up costing?
To date, the Firefly image generation models have been widely derided as underwhelming and flawed compared with Midjourney, OpenAI's DALL-E 3 and other competing tools. The lack of a release timeline for the video model doesn't instill confidence that it will avoid the same fate. Neither does the fact that Adobe refused to show me live demos of object addition, object removal and generative extend, insisting instead on a prerecorded sizzle reel.
Perhaps hedging its bets, Adobe also said it's in talks with third-party vendors about integrating their video generation models into Premiere to power tools like generative extend.
OpenAI is one of these vendors.
Adobe said it's working with OpenAI to explore ways to bring Sora into Premiere workflows. (An OpenAI partnership makes sense given the AI startup's recent overtures to Hollywood; OpenAI CTO Mira Murati will reportedly attend this year's Cannes Film Festival.) Other early partners include Pika, a startup building AI tools to generate and edit videos, and Runway, one of the first vendors to market with a generative video model.
An Adobe spokesperson said the company is open to working with other companies in the future.
Now, to be clear, these integrations are currently more of a thought experiment than a working product. Adobe repeatedly emphasized to me that they are in the “early preview” and “research” stages and not a product that customers can expect to use anytime soon.
That, I'd say, speaks to the overall tenor of Adobe's generative video press briefing.
Adobe is clearly trying to signal with these announcements that it's thinking about generative video, if only in a preliminary sense. It would be foolish not to: being caught flat-footed in the generative AI race would mean risking the loss of a valuable potential new revenue stream, assuming the economics eventually work out in Adobe's favor. (AI models are, after all, costly to train, run and serve.)
But frankly, the concepts it's showing off aren't very compelling. With Sora now in the wild, and more innovations surely coming, the company still has a lot to prove.