
Image editing

Exploring AI image editing features and considering how much is too much

Like many photographers, I’m still coming to grips with how artificial intelligence will impact my work. I have plenty of concerns about text generators and text-to-image tools and how they’ve gone about building their reference libraries. But there are other ways we’re encountering AI, even in the tools we’ve come to rely on. Photoshop has had AI features for some time; even noise reduction can now be achieved, or enhanced, by AI.

One of the more recent and visible implementations of AI has come in the form of generative fill. I tend to stay away from any tools that alter the image in a less-than-truthful way, as I mostly work in editorial circles where I feel the images need to convey a sense of the actual place. Warts and all. I may try to keep a light pole out of frame when I’m shooting, but for editorial stock I wouldn’t Photoshop one out.

That said, I do like to play with some of the new tools just to know what’s possible. It begins to make you think about where the line should be drawn. After all, if you remove sensor dust spots on an image using the clone tool, is it that much different to use the AI remove tool for the same task?

Generative Fill for expanding/completing a pano stitch.

Today I tried using the generative fill tool to patch some holes in a stitched panorama. Honestly, I can’t say for certain that artificial intelligence isn’t already built into Lightroom and Photoshop’s pano-stitching tools; I’m pretty sure it is if you check the “expand edges” checkbox. Normally I wouldn’t do this for professional work, but it was an interesting experiment to see what’s currently possible.

I started with two images, one horizontal and one vertical, of the Plaza Mayor in Trujillo, Spain:

I shot both of these on the 17 end of a 17-35mm lens. Looking at them now, several months after I shot them, I like seeing the full reflection of the church in the vertical but prefer seeing more of the surrounding scene as you can in the horizontal. Had I taken my 11mm Irix with me that day, I probably could have achieved both in one frame, but that’s hindsight now. The next best solution seemed to be to create a pano from these two files.

The vertical and horizontal stitched together well, but once you cropped out the missing white corners, you weren’t left with much more image than either of the original files contained. Here’s where I decided to try generative fill. You can also apply this inside the pano controls, but I wanted to do it manually after the stitch so I could try some different options.

The result was pretty good. There were questionable spots (putting it mildly) where Photoshop had to build the other half of a car that was cropped in the original images, but I decided to crop all of that “created” image out anyway. In this final image, there’s little left of the generative fill other than the additional sky in the top corners and reflections and stone in the lower corners. I was impressed that Photoshop replicated the real cloud on the left in the added reflection. That looks pretty believable.

Here’s a comparison slide that might make it easier to see what was added by the generative fill. The original stitched pano is on the left, and my final crop of the generative fill version is on the right. Move the slider back and forth and you can compare the two, see what was added, and what I cropped off of the original pano:

I also used the AI “remove tool” to get rid of a partial car on the left of the frame. I wouldn’t do any of this on an image for an editorial assignment, but it was an interesting experiment. Maybe there’s a place for these kinds of hybrid images in stock, as long as they are labelled as altered photography.

In this case, however, not much was added that wasn’t in the original scene. It’s not like the AI made up an entire building to add to the plaza. It’s food for thought, but — for now — I leave it in my experimental mode and don’t plan on licensing images like this. We’ll see what the future holds.

Your thoughts? If AI is just adding sky, or water, is that an acceptable alteration? I think the answer to that is going to lie in the purpose/use of the image and in the clarity provided in the image’s labeling.


Michael C. Snell

Michael C. Snell is a travel photographer based in Lawrence, Kansas. After working as a designer and art director in the advertising and marketing industry for over 12 years, Michael left to pursue a freelance career in photography and design. Since then, he has had images published in a variety of publications around the world and his stock photography is available through Robert Harding World Imagery and at Alamy.com.

Michael is a member — and former Board member — of the Society of American Travel Writers (SATW). He is a past Chair of SATW’s Freelance Council and is currently the Chair of the SATW Photographers’ Sub-Council.