{"id":540573,"date":"2025-10-31T20:39:13","date_gmt":"2025-10-31T20:39:13","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/540573\/"},"modified":"2025-10-31T20:39:13","modified_gmt":"2025-10-31T20:39:13","slug":"adobes-experimental-ai-tool-can-edit-entire-videos-using-one-frame","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/540573\/","title":{"rendered":"Adobe\u2019s experimental AI tool can edit entire videos using one frame"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Adobe demonstrated some of the experimental AI tools it\u2019s working on at its Max conference that provide new ways to intuitively edit photos, videos, and audio. These experiments, called \u201csneaks,\u201d include tools that instantly apply any changes you make to one frame across an entire video, easily manipulate light in images, and correct mispronunciations in audio recordings.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Project Frame Forward is one of the more visually impressive sneaks, allowing video editors to add or remove anything from footage without using masks \u2014 a time-consuming process for selecting objects or people. Instead, Adobe\u2019s demonstration shows Frame Forward identifying, selecting, and removing a woman in the first frame of a video, and then replacing her with a natural-looking background, similar to Photoshop tools like Content-Aware Fill or Remove Background. This removal is automatically applied across the entire video in a few clicks.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Users can also insert objects into the video frame by drawing where they want to place them and describing what to add with AI prompts. 
These changes will similarly be applied across the whole video. The demonstration shows that these inserted objects can also be contextually aware, with a generated puddle reflecting the movement of a cat that was already in the video.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Another tool is Project Light Touch, which uses generative AI to reshape light sources in photos. It can change the direction of lighting, make rooms look as if they were illuminated by lamps that weren\u2019t switched on in the original image, and let users control the diffusion of light and shadow. It can also insert dynamic lighting that can be dragged across the editing canvas, bending light around and behind people and objects in real time, such as illuminating a pumpkin from within and turning the surrounding environment from day to night. The color of these manipulated light sources can also be adjusted, letting you tweak warmth or create vibrant RGB-like effects.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Project Clean Take is a new editing tool that can change how speech is enunciated using AI prompts, removing the need to re-record video or audio clips. Users can change the delivery or emotion behind someone\u2019s voice \u2014 making them sound happier or more inquisitive, for example \u2014 or replace words entirely while preserving the identifying characteristics of the original speaker\u2019s voice. 
It can also automatically separate background noises into individual sources so that users can selectively adjust or mute specific sounds, helping to preserve the overall audio while improving voice clarity.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">These are just a handful of sneaks that were showcased at Adobe\u2019s Max event. Other notable mentions include <a href=\"https:\/\/youtu.be\/Xg4n60hYfhA\" target=\"_blank\" rel=\"noopener\">Project Surface Swap<\/a>, which lets you instantly change the material or texture of objects and surfaces, <a href=\"https:\/\/youtu.be\/BLxFn_BFB5c\" target=\"_blank\" rel=\"noopener\">Project Turn Style<\/a> for editing objects in images by rotating them as if they were 3D objects, and <a href=\"https:\/\/youtu.be\/z3lHAahgpRk\" target=\"_blank\" rel=\"noopener\">Project New Depths<\/a>, which lets you edit a photograph as if it were a 3D space, identifying when inserted objects should be partially obscured by the surrounding environment. You can read more about each sneak preview in detail <a href=\"https:\/\/blog.adobe.com\/en\/publish\/2025\/10\/30\/adobe-max-2025-sneaks-where-ai-creativity-play-collide\" target=\"_blank\" rel=\"noopener\">over on Adobe\u2019s blog<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Sneaks aren\u2019t publicly available to use, and they\u2019re not guaranteed to become official features in Adobe\u2019s Creative Cloud software or Firefly apps. 
Many features, like Photoshop\u2019s <a href=\"https:\/\/www.theverge.com\/2024\/10\/14\/24268813\/photoshop-distraction-removal-ai-tool-launch\" target=\"_blank\" rel=\"noopener\">Distraction Removal<\/a> and <a href=\"https:\/\/www.theverge.com\/news\/715073\/adobe-photoshop-ai-harmonize-composite-editing-feature\" target=\"_blank\" rel=\"noopener\">Harmonize tools<\/a>, started out as sneaks, however, so there\u2019s a good chance that some version of these experimental capabilities will be available to creatives in the future.<\/p>\n","protected":false},"excerpt":{"rendered":"Adobe demonstrated some of the experimental AI tools it\u2019s working on at its Max conference that provide new&hellip;\n","protected":false},"author":2,"featured_media":540574,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[1191,323,1192,12,326,53,16,15],"class_list":{"0":"post-540573","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology","8":"tag-adobe","9":"tag-ai","10":"tag-creators","11":"tag-news","12":"tag-tech","13":"tag-technology","14":"tag-uk","15":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115470713900888175","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/540573","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=540573"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/540573\/revisions"}],"wp:featuredmedia":[{"embeddable":true,
"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/540574"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=540573"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=540573"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=540573"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}