

Adobe adds "Super Resolution" Enlargement Tool to Photoshop


One question I get asked frequently by photographers of all varieties is:

"How can I increase image resolution without losing too much image quality?"

Enlargements are something we deal with every day in the world of digital imaging and output. In order to make digital photos larger, images must be "upsampled" using a software algorithm to add more pixels.

Because most digital photos are made up of a grid or "bitmap" of pixels, enlarging these images frequently results in either blurry or jagged results as the software is forced to either increase the size of the pixel (which produces jagged results) or "guess" what additional pixels should look like (which produces blurry results).
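The trade-off between "jagged" and "blurry" is easy to see in miniature. Here's a toy Python sketch (not Photoshop's actual code, just an illustration) of the two basic strategies applied to a single row of grayscale pixel values:

```python
# Toy illustration of the two basic upsampling strategies on a 1-D row
# of grayscale pixel values. This is NOT Photoshop's actual code -- just
# a sketch of why one approach looks jagged and the other looks blurry.

def nearest_neighbor_2x(row):
    """Double the length by repeating each pixel (hard, jagged edges)."""
    out = []
    for p in row:
        out.extend([p, p])
    return out

def linear_2x(row):
    """Double the length by averaging neighbors (soft, blurred edges)."""
    out = []
    for i, p in enumerate(row):
        out.append(p)
        nxt = row[i + 1] if i + 1 < len(row) else p
        out.append((p + nxt) / 2)  # the "guessed" in-between pixel
    return out

edge = [0, 0, 255, 255]  # a hard black-to-white edge
print(nearest_neighbor_2x(edge))  # [0, 0, 0, 0, 255, 255, 255, 255] -- edge stays hard but blocky
print(linear_2x(edge))            # [0, 0, 0, 127.5, 255, 255, 255, 255] -- edge softens
```

Nearest neighbor keeps the edge razor sharp but makes every pixel a visible block; the linear version invents a plausible in-between value and smears the edge. Every classic resampling algorithm is some variation on this compromise.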

Software makers have created a variety of solutions over the years in an effort to solve the inherent problems of upsampling.

Photoshop alone offers quite a number of different enlarging algorithms you can use when resizing an image. They have names like "Bicubic", "Bilinear", "Nearest Neighbor", and "Preserve Details" and each one is designed to work with different types of images and for different enlargement (or reduction) tasks.

"Bicubic" works well for gradients. "Nearest Neighbor" works well for hard edges. "Preserve Details" works best for enlargements. But they all have their limitations and the larger you try to go with an image, the more you will begin to see flaws like jagged edges and blurred details.

We need a better way to make bigger images!

The first great leap forward in enlarging software came with the release of a plugin called Genuine Fractals Print Pro from Altamira Group all the way back in 1999. This program used a branch of mathematics called fractal geometry to create an entirely new enlargement algorithm capable of scaling images up or down with minimal loss of quality. The results it produced were far superior to Photoshop's native options like Bicubic and Bilinear, and it was a real game-changer when it was released to the public.

Genuine Fractals was later purchased by onOne Software, and this excellent upsampling algorithm still lives on today as part of ON1 Resize and ON1 Photo RAW.

Alien Skin (now rebranded as Exposure Software) has their own upsampling plugin called Blow Up and Topaz has one called Gigapixel AI. Both do a fairly decent job of making enlargements with minimal quality loss.

The upsampling tools available to us reached a plateau over the last few years. There hasn't been much in the way of advancement until last week when Adobe released an update for Photoshop and Camera Raw that included a new feature they are calling Super Resolution.

You can read all the details in this blog post from the ACR team explaining what it is and how they developed it.

The short version is that they have used machine learning to develop an entirely new algorithm for image enlargement and it has pushed Photoshop right up to the front of the pack when it comes to the quality of the enlarged images. 

As you can see from the example below, it does a noticeably better job than the existing Bicubic/Bilinear/Nearest Neighbor routines when it comes to enlargement. The results are smoother with far fewer artifacts and interference patterns.

[Comparison image: Bicubic Upsampling vs. Super Resolution Upsampling]

How the feature was developed:

"The idea is to train a computer using a large set of example photos. Specifically, we used millions of pairs of low-resolution and high-resolution image patches so that the computer can figure out how to upsize low-resolution images.

With enough examples covering all kinds of subject matter, the model eventually learns to up sample real photos in a naturally detailed manner.

Teaching a computer to perform a task may sound complicated, but in some ways it’s similar to teaching a child — provide some structure and enough examples, and before long they’re doing it on their own. In the case of Super Resolution, the basic structure is called a “deep convolutional neural network,” a fancy way of saying that what happens to a pixel depends on the pixels immediately around it. In other words, to understand how to up sample a given pixel, the computer needs some context, which it gets by analyzing the surrounding pixels. It’s much like how, as humans, seeing how a word is used in a sentence helps us to understand the meaning of that word."
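The core idea in that quote (what happens to a pixel depends on the pixels immediately around it) is the convolution operation at the heart of any convolutional network. Here's a minimal, hypothetical sketch of that operation on one row of pixels. To be clear, this is not Adobe's model: a real super-resolution network stacks many such layers with millions of learned weights, while this uses a single hand-picked kernel purely to illustrate the "context" concept:

```python
# Minimal sketch of the convolution idea behind a "deep convolutional
# neural network": each output value is a weighted sum of a pixel and
# its immediate neighbors. A real super-resolution model stacks many
# such layers with millions of *learned* weights; the smoothing kernel
# below is hand-picked, shown only to illustrate the concept.

def convolve_1d(row, kernel):
    """Apply a 3-tap kernel; edges are padded by repeating the end pixels."""
    half = len(kernel) // 2
    padded = [row[0]] * half + row + [row[-1]] * half
    out = []
    for i in range(len(row)):
        window = padded[i:i + len(kernel)]
        out.append(sum(w * p for w, p in zip(kernel, window)))
    return out

row = [0, 0, 255, 255, 0]       # a short strip of grayscale pixels
smooth = [0.25, 0.5, 0.25]      # one simple blur kernel; training would learn these weights
print(convolve_1d(row, smooth))  # [0.0, 63.75, 191.25, 191.25, 63.75]
```

Notice that every output pixel blends information from its neighbors; that neighborhood is the "context" Adobe's engineers are describing, and training simply finds the kernel weights (across many stacked layers) that turn low-resolution context into convincing high-resolution detail.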

The programmers at Adobe concentrated on training the system using "challenging" images with lots of texture and fine detail. These types of images typically fare very poorly when enlarged, resulting in heavy artifacts, so they were excellent choices for training the algorithm.

The results of Adobe's research and development are nothing short of amazing.

The new "Super Resolution" enhancement represents the next big leap in upsampling quality and it comes free with your Creative Cloud subscription. The feature is currently available as part of the recently released Camera Raw 13.2 and Photoshop 22.3 updates and it will be added to Lightroom Classic and Lightroom Cloudy in an upcoming release expected within the next few weeks.

I've had a chance to try it out on a variety of images and the results are truly impressive. I was able to upsample the images with no apparent loss of quality allowing me to print them at considerably larger sizes. Combine this with the sharpening and de-noising tools available from Topaz Labs and you've got a recipe for making high quality enlargements that don't suffer from any of the typical detail loss or artifacts seen with older upsampling routines.

Sounds great! How do I use it?

Super Resolution works with both Raw files (CR2, NEF, ARW, DNG, etc.) and linear files (JPEG, TIFF, PSD), but it typically does best when you give it a high-quality Raw file. The cleaner your source image, the better it will look when enlarged. If you are starting with a JPEG that has strong artifacts, those are likely to be exaggerated as part of the enlargement process. If your source image has a fair amount of noise, I suggest running it through something like Topaz DeNoise AI first so you don't amplify the noise when you enlarge the image.

Because the Super Resolution feature is part of the Camera Raw architecture, you'll need to open your file using Camera Raw in Photoshop regardless of whether it is a Raw or linear file to begin with.

Raw files usually open in Camera Raw automatically. TIFF, JPEG, PSD, and other linear files require an extra step to get them open in Camera Raw.

If you are coming from Adobe Bridge with a linear file, you can right-click on it and choose "Open in Camera Raw" from the pop-up menu. This will bring the file directly into the Camera Raw interface in Photoshop.

Open in Camera Raw

If you aren't using Bridge and want to open the file directly into Camera Raw within Photoshop, it takes one extra step. From the Open dialog in Photoshop, choose any linear file you wish to open and change the Format to "Camera Raw". This tells the program that you wish to open the file in Camera Raw even though it isn't actually a Raw file to begin with.

Note that you cannot open a linear file normally within Photoshop and then apply the Camera Raw filter. It must be opened up directly into Camera Raw or the Enhance feature will be unavailable.

Once you have your Raw or Linear file opened in Camera Raw, right-click on the image and choose Enhance... from the pop-up menu that appears. From there you can choose Super Resolution. (The Enhance Details option will be automatically selected when you choose Super Resolution)

Click the Enhance button and wait for the program to process your image. The task is very processor intensive. Depending on the speed of your GPU and storage, this can take a few seconds or even a minute or two to complete.

When it finishes, the program will have created an entirely separate (much larger) DNG file, which will be visible in the filmstrip at the bottom of the Camera Raw window. You don't need to do anything to "save" this file; it is automatically saved to disk in the same location as the original as part of the enhancement process.

Because the file is being doubled in size both horizontally and vertically, the result is actually four times larger in overall area and megapixels.

In the case of my camera, applying the Super Resolution enhancement to a 20-megapixel original file (5472 x 3648) results in an 80-megapixel enlargement (10944 x 7296). And because the output file is a DNG, you can continue to edit it using the tools available in Camera Raw. The output produced by this new algorithm is remarkably free of artifacts and retains all the details of the original without getting blurry or jagged in the process. It really does end up looking like a larger version of the exact same picture. Honestly, the results of my first few tests were so clean that I wasn't sure it had even done the enlargement until I checked the pixel dimensions to confirm it really was four times larger overall.
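The 2x-per-side / 4x-overall relationship is simple arithmetic, and it's easy to verify. The dimensions below are from my 20-megapixel camera; substitute your own to predict your output size:

```python
# Verify the 2x-linear / 4x-area relationship of Super Resolution output.
# The source dimensions below are from my 20-megapixel camera -- swap in
# your own camera's pixel dimensions to predict your enhanced file size.

src_w, src_h = 5472, 3648            # original pixel dimensions
out_w, out_h = src_w * 2, src_h * 2  # Super Resolution doubles each side

src_mp = src_w * src_h / 1_000_000   # megapixels before
out_mp = out_w * out_h / 1_000_000   # megapixels after

print(f"Original: {src_w} x {src_h} = {src_mp:.1f} MP")   # 20.0 MP
print(f"Enhanced: {out_w} x {out_h} = {out_mp:.1f} MP")   # 79.8 MP
print(f"Area ratio: {out_mp / src_mp:.0f}x")              # 4x
```

Doubling each side always quadruples the pixel count, which is why a "2x" enlargement sounds modest but produces a dramatically bigger file.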

One last tip:

Because this is an enlargement algorithm and not a sharpening one, you may also want to apply some extra sharpening after the upsampling to enhance some of the details in the enlarged image. The sharpening routines currently available in Photoshop are decent enough but a dedicated program like Topaz Sharpen AI will do even better on a job like this.


Adobe has done a superb job of developing their new Super Resolution tool and integrating it into Camera Raw and Photoshop. I look forward to it being part of Lightroom Classic as soon as possible. This is something that has broad application across the entire field of photography.

I can't wait to see what the programming wizards at Adobe come up with next using this kind of machine learning to train their algorithms. If I had to guess, based on the blog post linked above, I suspect they will use this same training technique to develop better sharpening and noise removal algorithms of their own. I'd love to see them come up with something built-in that can give Topaz DeNoise AI and Sharpen AI a run for their money.

Easy Sky & Reflection Replacement (Free Video)

Click here to get your free video.

In this tutorial, I'm going to show you an easy shortcut for Sky Replacement with Reflections using Luminar 4.

And best of all, there’s NO HAND-MASKING REQUIRED!

One of the coolest features in Skylum’s Luminar 4 program is the ability to do something they call “AI Sky Replacement” which simplifies the process of swapping out a boring or poorly exposed sky for a more interesting or better exposed one.

Using an artificial intelligence algorithm, Luminar 4 can completely and believably replace the sky in your image without you ever having to use selections, masking, or layers. It’s all fairly automatic.

And that’s the genius of this feature. It finds and masks off the sky areas automatically, so you don’t have to spend hours in Photoshop with a brush and a magnifier, painstakingly selecting the sky by hand and praying you didn’t accidentally color outside the lines, leaving halos at the edges of your selections and a poorly blended replacement sky. As Skylum says in their advertising: “Results in seconds without manual editing!”

I’ll admit I was skeptical when the feature was first introduced. But I have to say after using it for some time now, I am more than a little impressed with how well it works on the majority of images that call for sky replacement.

The AI routine does a fantastic job of automatically masking the sky portions and allowing you to fill them with one of the built-in replacement skies that come with the program or even one of your own sky photos.

The key to successful sky replacement is making sure that it looks realistic, natural, and true to the original scene in terms of exposure, color, and detail. Differences in exposure and white balance between the original image and the new sky will make the composite image look fake to your viewers even if they can’t explain exactly why.

Fortunately, Luminar comes with some clever adjustments you can make in the Sky Replacement tool that give you the ability to modify the position, exposure, color temperature, and blur amount of the replaced sky so it blends in naturally with the existing image. There’s even a slider to “re-light” the original scene so it’s more harmonious with the replacement sky. Using these kinds of adjustments, I am able to more successfully match my new sky with my original scene in almost every case.

Which brings me to the reason for this little video…

One of my photography students recently asked me if Luminar can do sky replacement on a photo that has reflections in it (like on the surface of a lake or other body of water) and make it so that the reflections match the replaced sky and still look natural.

And I thought to myself “You know…that’s a darn good question!”

Because if you are going to convincingly replace a sky, you need to do it everywhere the sky appears in your image – including reflections.

A quick search on YouTube netted me a dozen or more tutorial videos from various authors and educators explaining how to accomplish this sort of thing in Luminar.

Every one of these tutorials suggested a nearly identical technique for solving the problem of matching the reflections to the replaced sky.

Unfortunately, every one of their solutions had a significant drawback to it which I will explain momentarily.

First, they all tell you to use Luminar to replace the sky as you normally would, but they also caution you that for their reflection technique to work, you can’t use one of the built-in skies. You have to use one of your own skies because you’ll need access to that sky file again for making the reflection later on. And it’s not easy to get at the ones that are built into the program.

Second, after using Luminar’s AI Sky Replacement, they have you create a new image layer on top of your base layer and to load it up with the same sky you had used before. This new image layer then gets flipped over vertically and positioned to be used as the reflection in your image.

Finally (and here’s where they lose me completely), in the last step of every one of these tutorials, they instruct you to use the layer masking brush on the reflection image layer to hand-mask the reflected sky into the lower half of the scene. Yep, you heard me right. They all want you to hand-paint the mask for the reflection.

And in every one of these videos you can watch the person doing the demo struggle to paint that mask evenly. They have to work hard to avoid leaving halos at the edges of their hand masked reflections. They are always painting and erasing and painting and erasing to get it just right. They frequently fast-forward over that tedious part of the process in their tutorials and skip to the reveal at the end where they show off their replacement sky with matching reflection.

Now the first thing I thought when I watched these videos is that the entire purpose of using a program like Luminar to replace a sky is to take advantage of that amazing AI Sky Replacement tool to do all of that difficult masking work for you. Why on earth would you want to manually mask-in by hand all that stuff for the reflection? You may as well do that in Photoshop if you are going to do it by hand.

So it seemed to me that all of these solutions were:
   a) way more work than most folks probably want to do, and 
   b) a complete waste of Luminar’s great AI Sky Replacement tool.

After scratching my head over this little problem for a few minutes and wondering why it had to be so hard, I came up with what I thought might be a relatively quick and simple solution. In fact it was so simple, I doubted myself at first. I wondered if there was something I was overlooking.

Surely, there’s some way we can make the AI Sky Replacement algorithm automatically find and mask the reflected area as well? I mean…There’s got to be a better way, right?!

And sure enough, after a couple of experiments, I managed to figure out a simple, repeatable method to do exactly that without any sort of hand-masking. And best of all, you can use either the built-in skies that come with Luminar or your own skies.

And when you see how the trick is done, you won’t believe how easy it is to do. It’s almost embarrassing how simple the solution to the problem actually is.

UPDATE 2020-10-15:

Skylum has just announced they will be adding a reflection feature to the new version of AI Sky Replacement coming in the next version of Luminar!

UPDATE 2020-10-20:

The latest release of Adobe's flagship program Photoshop includes a Sky Replacement feature driven by their "Sensei" AI system. This is, no doubt, a response to the similar feature Skylum includes with their editor Luminar. Here's my initial reaction to their offering.

I'm going to cut right to the chase and tell you that in my initial tests, Luminar is noticeably better than Photoshop.

It does a better job masking and retaining fine details. And it offers more useful tweaks for adjusting the replacement sky and how it interacts with your existing image.

Right out of the box, with no adjustments, Luminar produces more natural results. In many cases no adjustments are necessary at all.

With Photoshop, I was only able to get decent results by adjusting the parameters a fair bit after choosing a replacement sky.

Skylum has a head start on this tool and has updated and improved its implementation over the last 18 months. Adobe is new to the game, so we will have to watch and see if they can improve their algorithm. Remember that "Select and Mask" wasn't so great at first and now it's phenomenal. I will try to be patient as they work on their version of Sky Replacement.

For now, I'm sticking with Luminar for that particular task.

Click here to get your free tutorial video.

Review of Luminar AI


Ever since it was released on Monday, I've been getting emails asking me for my opinion of the new Luminar AI program from Skylum.

I will add my own thoughts in a bit, but I suggest you start with a pair of external reviews, this one from Digital Camera World and this one from Digital Photo Mentor.

Both of these reviews echo my own experiences so far so I think they make a good jumping off point for the rest of my comments.

I've had some time to play with Luminar AI and I think it's great as a plugin for achieving certain special effects, but... that's all it's ever going to be for me – a filter or plugin for doing something like sky replacement or detail enhancement.

It's definitely not a substitute for Photoshop or Lightroom. More importantly, you should understand that Luminar AI is NOT an upgrade from Luminar 4, although it has many features from that program that you will recognize.

Luminar AI is actually something different entirely.

Skylum appears to be moving away from pro photographers and toward the broader Instagram/influencer/content-creator market, who just want an easy way to add sizzle to a photo without having to learn a lot about editing.

From the DCW review: "Skylum has taken a bold step. It is ending development of Luminar 4 and taking its core technologies in a new direction. That may disappoint a lot of Luminar fans hoping for an ever more sophisticated and advanced Lightroom or Photoshop alternative. It doesn’t look like that’s going to happen now."

...and then a bit later...

"Luminar AI does exactly what it sets out to do, though, by allowing novice photo editors to inject some magic into their images without the need for a lot of know how to time consuming manual editing. It is very easy to create ‘idealized’ reality with Luminar AI, which we suspect will be popular with content creators but perhaps controversial too."

Did you catch that last part about "idealized reality"? It was almost mentioned in passing, but I think they are referring to the fact that the built-in AI has a tendency to make all your images look the same if you rely on it too much. The algorithms can do some nice things to your photos, but there is no substitute for the creative eye.

Now that you've had an overview, let's dig a little deeper into the various features and functions of Luminar AI

The program has a series of automatic "templates" you can apply to process an image using "Artificial Intelligence" and these templates include a mixture of various processing parameters that are automatically configured using Luminar's various AI components for composition, accent, color, contrast, and more. The program will suggest various templates based on analyzing the image you give it. (This is something of a hit or miss process.)

I'm going to be honest here and say that I find these templates to be nearly useless for anything practical. There are only a few of them so far, and the results they produce are rarely pleasing and often quite garish and overprocessed to my eye.

Instead of relying on one of the templates to do everything automatically, I suggest users go into each of the individual parameters and make more thoughtful adjustments there. The AI features will still be available, but for realistic results you'll need a lighter touch than the automatic routines have exhibited thus far.

From the AI enhanced functions, I would say Accent AI, Structure AI, and Sky Enhancer AI are all useful adjustments when applied tastefully. The Body AI, Face AI, and Iris AI functions have more limited use but can be handy in certain retouch scenarios.

From the non-AI based adjustments, I would say that the Details, Landscape, Super Contrast, and Color Harmony are the most useful and unique functions. I particularly liked the "Golden Hour" and "Foliage Enhancer" sliders within the Landscape adjustment. Most of the other tweaks can be done just as easily in Lightroom or Photoshop or whatever your host program might be.

Then of course, you have special functions like Sky Replacement (now called Sky AI), Augmented Sky AI (which allows you to add suns, moons, and other celestial objects to your scene), and Atmosphere AI which allows you to add haze, mist, or fog to your image. Other effects include Glow, Film Grain, Mystical, Matte, Mood, Dramatic, and more. Each imparts a specific look and you'll want to try them out yourself to see what they really do to your images.

They really seem to be positioning this program as a "gee whiz" special effects "enhancement" app for users who are looking for an easy fix. Experienced photo editors will likely find themselves using only a smaller subset of these effects as needed. This is why I view Luminar AI as being more like a filter or plug-in I can use with Photoshop or Lightroom to add a special effect. It's never going to be my daily editing program for the bulk of my tasks if they keep going in the direction they are.

Some other things to consider:

1) Luminar AI cannot import catalogs from previous versions. I don't really use them, so this isn't a deal-breaker for me.

2) The Sky Replacement feature still doesn't do reflections as teased in their promotional videos. Maybe this feature will be added at a later date? Until then, you can keep using the technique I described in my article on Easy Sky Replacement with Reflections.

3) Luminar AI has no support for layers as of this release. This is disappointing, as I thought the layers function worked very well in Luminar 4 for stacking adjustments on top of each other. Given the new direction Skylum appears to be headed, I wonder if they will ever add layer support.

The novice editors and instagram content creators will love the simplified interface and ability to make gee-whiz special effects, but for the rest of us I think this is going to be a niche product.

I will use Luminar AI as a plug-in or special effects filter, but if this trend continues, I don't see it ever becoming a daily-use photo editor for myself and the majority of the photographers I work with.
