Exploring Adobe Generative Fill

I feel like the wave of AI tools hitting us is just the advance warning. As companies like Adobe find financial success, they will continue to invest in and refine their tools to attract more people *cough, money* to their products, and/or lock them into unreasonably high subscriptions to use their software - or get left behind in the 'efficiency revolution'.
In any case, regardless of where you stand on the ethics of this new era, I think it's only a matter of time before it's part of a graphic artist's day-to-day process. So I think it's prudent to get in front of it now, if only to learn its boundaries.
Below I've compiled the results of some experiments that take existing images, maintain the poses, faces, and expressions, but alter almost everything else. Have a look.

The examples are quite detailed, so I encourage you to zoom in - though on a cell phone you'll have to touch outside of the example images to scroll or zoom.
To The Moon
Above: Here's an example of what 4-5 hours of effort can yield. This was pretty ambitious, and very tricky in some areas. The level of detail and accuracy is a bit hit or miss, but for what it is - a creative AI artwork - I'm happy with the result.
What I learned on this one was how much the image's current state influences each generative request. For example, if I generate a 'robot cat' and it gives me a shiny plastic result, that will affect the look of a 'robot dog' in a subsequent request.
*Disclaimer: this is not for Lost in Space Season 5. There is no Lost in Space Season 5. In fact, there is no Lost in Space Season 4. If there was, well, maybe they could reboot it with this family?
Medieval Fantasy
Above: Notice the removed modern elements (houses, cars, etc.) as well as the canvas extended left, right, and up with new content. About 2 hours for this level of detail. The biggest revelation here for me was just how many times I have to try, and try again, before it gives me an acceptable result.
Time Warp
Above: I wanted to see how the same family could be used but for an alternate theme. This was a more detailed piece, which took about 5-6 hours - in part because I added some non-AI polish like the neon, the champagne bubbles, etc.
Below: Take a look zoomed in. This detail is insane. Isn't this insane? I get excited.
Parental Advisory
Above: Not enough 'street cred'? No problem. I'll build your online reputation one fake rap album cover at a time. After this many exercises I'm realizing that, in its current state, this technology is always going to leave some unavoidable, telltale AI fingerprint - meaning nothing so far is hero-quality, for me. But overall it's pretty excellent for background and gak items.
Shark Week
Above: The majority of these experiments were generated using generative fill, but my process still relies heavily on editing and adjusting the pieces together into a cohesive final image - like adding the underwater colour space to this scene. Selection and masking expertise is another essential skill for getting it to do what you want.
Make it ... better
Above: Switching gears here, I've re-worked an image that just needed a few minimal edits to bring the best composition forward. These kinds of 'tweaks' can be exceptionally fast - sometimes only 5 or 10 minutes, depending on the image.
Above: Similar to the previous image, this is another cleanup test. The original had visible lines from a scanner, blurring, and quite a bit of noise. I removed the scan lines, reduced the noise, sharpened things, and added contrast. I then used a few AI prompts which altered some elements but also added clarity to them: the foot in the air, the blanket, the sofa. And then finished it off with the flowers.
Above: The image above is a good example of how the creativity of this technology can quickly turn a photo into a story. It's brilliant for adding little touches of character to photos, and less so for complete overhauls of images.

In summary
Generative fill has huge potential, but for a select range of uses. If it's BG and gak, workflows can be incredibly fast and convincing. If it's hero or insert work, you'll still have to put in a lot of time, and even then it's a gamble whether you'll get what you wanted.
Regardless, it's still so new that it will inevitably improve, and maybe in that time they'll have made progress on the huge legal and ethical hurdles that still stand in the way.
Tips / Tricks / Considerations
- Common everyday items get better results. 'Raccoon' will look realistic; 'Alien' probably will not. Photoshop's AI is built on a massive collection of photographs and historical images, so abstract and unreal subjects (like an alien) will usually be duds.
- Other AI software like Stable Diffusion, DALL-E, and Midjourney each have powerful and unique capabilities. Many people are generating incredible and fantastic "new" images from text prompts alone, which may seem like magic, but in reality it's a lot of work and requires investing quite a few hours in learning the prompt language. It's really not far off from programming, TBH.
- AI tools will create anomalies - things that 'aren't quite right'. A lot of them are fixable through overpainting or 'traditional' pre-AI techniques, but it's quite an endeavor to fully hide the AI origins of a heavily worked image.