With the launch of Firefly Video in beta, Adobe is taking another step in integrating generative AI across its products. JDN met with Ely Greenfield, CTO of Adobe Digital Media, to discuss the company’s plans.
JDN. Adobe recently added around a hundred new features to Creative Cloud and presented new integrations of its Firefly model. How will AI change the way creatives work?
Ely Greenfield. Since launching our first Adobe Firefly model for image generation a year and a half ago, we have always sought to integrate AI directly into creative workflows. Over the last six months, we have added features such as the ability to generate images in specific styles via reference images or even custom models, for example by letting users train the AI on multiple images to obtain a very specific style model. Our goal is to provide full control so our customers can use AI safely, with aesthetic and professional results. Our Firefly family of generative AI models was designed for use in a commercial setting.
Adobe has just extended its family of generative AI models to video, with Firefly Video currently available in beta. How long did development take, and what types of data were used to train this model?
Our teams have been working on different AI models for video for over a year, but we waited for the technology to be mature enough to launch it in beta. Firefly Video, in development since this summer, has also benefited from the nine months of experience accumulated with our image model. Regarding data, we never use our customers’ data. Our models are trained only on licensed content, data we own the rights to, or public domain resources.
What will the pricing be for Firefly Video?
We haven’t announced it yet. We launched it in beta so users can already try it out. It will likely be priced higher than our image generation model, but we will share details soon.
What can we expect from upcoming versions of Firefly for image and video? Do you anticipate major progress in the short term?
We continue to invest in improving the quality of Firefly Video’s results. In terms of future improvements, expect longer clips, higher resolutions, and greater fidelity to instructions. For example, we introduced the “image to video” function, which lets users generate videos with an improved level of control starting from a simple image. When it comes to movement and actions in videos, we are going to work hard to improve control over details such as camera angles and transitions. On the image side, Firefly Image 3 now generates images four times faster. In total, Firefly has already generated more than 13 billion images. Looking ahead, we are also working to offer more consistency between elements and characters, to allow series of images to be created in varied settings.
What are the new AI features dedicated to audio? Will it soon be possible to generate videos with sounds or dialogues?
We recently integrated Generative Extend into Premiere Pro, which lets you extend an audio or video clip by a few seconds using AI. During our Sneaks session at Adobe MAX, we demonstrated technologies that can generate synchronized audio elements, such as footsteps or crash sounds, to accompany video scenes. We are getting closer to an AI that can generate audio and video separately and then combine the tracks. Although we are making progress, solutions that are fully usable and controllable by our users will take more time.
Adobe recently introduced several features that make 3D design easier with AI. Will this expertise become more accessible thanks to Firefly?
Yes. We recently launched the beta version of Project Neo, a web application for easily creating and modifying 3D designs, making 3D design more accessible. The Firefly integration should simplify this process even further, for example by letting users generate a 3D render from a 2D reference image with simple instructions. We are working on technologies that we will present soon and that should greatly facilitate 3D design with AI. We also introduced a new beta version of Substance 3D Viewer, which lets you view and edit 3D objects while working in Photoshop, making transitions between 2D and 3D easier.
Do you think that Firefly, currently integrated into Adobe solutions and available via a web interface, could ultimately become a standalone product?
Our AI strategy rests on three pillars. The first is to offer controllable technology adapted to the needs of creatives and professionals, to guarantee precise and commercially usable results. The second is to integrate AI natively into all of our products, so that it fits naturally into creators’ existing workflows. Finally, we also develop standalone AI-driven tools to open up new creative approaches. Project Concept, for example, is a standalone tool designed for ideation and conceptualization, illustrating how AI can be placed at the center of the creative process. This new creative and collaborative platform lets creative professionals remix images in real time and design concepts live on a single canvas.
Adobe has announced that it is open to integrating third-party AI models into its tools. When do you plan to integrate these models, and how do you envision their coexistence alongside Firefly?
We don’t have a date to announce yet, but we are indeed talking with several players. Our priority is to meet the needs of our customers, without imposing our models on them if effective alternatives already exist. We are therefore open to the idea of offering other models within our tools. For example, third-party models for workflows like “text to image” or “text to video” could be interesting. However, some more technical integrations, such as generative fill in Photoshop, require time and adjustments.
Adobe is behind Content Credentials, an open source technology for identifying the origin of content. Do you think it could become a standard in the era of generative AI, just as PDF did?
I believe Content Credentials is well on its way to becoming a standard, and we are working towards its adoption by ISO. Launched in 2019 under the aegis of the Content Authenticity Initiative (CAI), this open source technology was designed to guarantee the authenticity of content. Today, it brings together more than 3,700 partners, including social networks, media outlets, hardware manufacturers like Canon, and creatives. Content Credentials secures the origin and integrity of content via cryptographic signatures. The technology can be used to identify AI-generated content, but also, and above all, to authenticate works created by humans. For example, a Canon camera can now cryptographically sign a photo to certify its authorship and attest that the image has not been modified.
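To make the mechanism concrete, here is a minimal Python sketch of the underlying idea: signing an image’s digest so a verifier can later check authorship and detect tampering. This illustrates only the cryptographic principle, not the actual Content Credentials (C2PA) format, which embeds a signed manifest in the asset’s metadata; the keypair and image bytes below are placeholders.

```python
# Minimal sketch of provenance signing, assuming an Ed25519 keypair.
# Illustrates the concept only; the real Content Credentials (C2PA)
# standard embeds a signed manifest in the asset's metadata.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The camera (or editing tool) would hold a private key; generated here for demo.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"...raw image data..."  # stand-in for a real file's contents

# Sign the SHA-256 digest of the image: this binds the signature to the
# exact bytes, so any later modification invalidates it.
digest = hashlib.sha256(image_bytes).digest()
signature = private_key.sign(digest)

# A verifier recomputes the digest and checks the signature against the
# publisher's public key.
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("Image is authentic and unmodified.")
except InvalidSignature:
    print("Image was altered or the signature is invalid.")
```

Changing even a single byte of the image invalidates the signature, which is what lets a viewer attest that a photo has not been modified since it was signed.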
Does Adobe Express, Canva’s direct competitor, have a chance of becoming your most popular solution, now that design and graphic creation seem to have grown more popular in recent years?
We often talk about creatives as if they were a single group, even though this community encompasses many different disciplines. Adobe Express is designed to meet specific needs and is therefore a valuable tool for certain use cases. Although Express is set to become one of our flagship products, its goal is not to replace advanced solutions like Photoshop or Illustrator for expert users. That said, professional designers who are experts in Illustrator or Photoshop can use Express to import their content and quickly create simple animations. Express aims to be accessible and efficient, which makes it a tool appreciated by many professional customers, particularly for daily mobile use.
Ely Greenfield is the CTO of Adobe’s Digital Media division. He works closely with the Creative Cloud and Document Cloud teams to integrate innovative technologies and drive innovation across the company. Most recently, he oversaw the development of Adobe Firefly, the company’s first generative AI model. He also leads Adobe Research, the division focused on AI innovation and creative tools. Before Adobe, he was CTO of AppDynamics. Ely Greenfield holds a degree in computer science from Brown University.