Our Upscaling Workflow
Upscaling is a huge topic if you use AI image generators and want to produce high-quality work. Whether you use an online service or run Stable Diffusion on your local machine, you most likely create your initial images at a relatively low resolution (between 512px and 1024px) before you upscale.
The usual upscaling algorithms, like ESRGAN and co., will give you high-resolution images relatively quickly and with relatively modest resources, but they also tend to lose detail and make the image feel much flatter. That might work well for some art styles. For finely detailed subjects, however, especially fabrics like linen, these algorithms usually lead to less-than-desirable results.
This is where image-to-image upscaling workflows come into play. While there are online services for this use case (e.g. Magnific AI), numerous ComfyUI workflows also run on your local machine and produce equally good, if not better, results. The following is not necessarily the best workflow for all cases, but it is the one that currently works best for us (with highly detailed photorealistic images, particularly those featuring fabrics).
SUPIR ComfyUI Workflow

Practically every image and video we post has gone through the SUPIR workflow. SUPIR combines an SDXL model with its own "denoise encoder" VAE. This allows you to use the same SDXL model for upscaling that you used to generate the image in the first place. SUPIR's denoising encoder and ControlNet implementation provide more controlled upscaling than a regular image-to-image workflow with low denoising strength. With that background out of the way, let's get to the settings we use in our workflow. We cannot explain every parameter in full detail, but this should give you a grasp of what might be a helpful starting point.
SUPIR Single Image Upscale
Load Checkpoint Node
For our SDXL model, we use the same model that we used to create the input image. In our case, that is usually the Zavy Chroma XL V6.0 model. We did some tests with the Juggernaut Hyper model, which is optimised to work with fewer steps, but the results were not really satisfying. However, if you only have limited computational resources, you may find it helpful to use a Lightning or Hyper model.
SUPIR Model Loader Node
There are two SUPIR models that the creator of the custom node, Jukka Seppänen, made available for users. According to Seppänen, the models can be characterised as follows:
- SUPIR-v0Q: Default training settings from the paper. High generalisation and high image quality in most cases.
- SUPIR-v0F: Trained with light degradation settings. The Stage1 encoder of SUPIR-v0F retains more detail when facing light degradations.
We have yet to find significant differences in our tests, so either should perform decently well.
We also keep the remaining settings at their default values (summarised in the small sketch after this list), i.e.:
- fp8_unet = false
- diffusion_dtype = auto
- high_vram = false.
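If it helps to see these settings in one place, here they are as a plain Python reference dictionary. The keys simply mirror the node's fields as described above, and the model file name is only an example; this is a summary sheet, not anything ComfyUI itself consumes.

```python
# Our SUPIR Model Loader settings as a plain reference dictionary.
# The keys mirror the node's fields described above; the model file name is
# just an example and may differ depending on the download you use.
supir_model_loader_settings = {
    "supir_model": "SUPIR-v0Q.safetensors",  # or SUPIR-v0F; we saw no significant difference
    "fp8_unet": False,
    "diffusion_dtype": "auto",
    "high_vram": False,
}

for key, value in supir_model_loader_settings.items():
    print(f"{key}: {value}")
```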
Load Image Node
Here, we select our input image.
Image Resize Node
For the Image Resize node, we usually calculate the target resolution externally (a small sketch of this calculation follows after the list below). Make sure the target width and height match the input image's aspect ratio; otherwise, your output image will be stretched. Often, we upscale from either 512x512 or 1024x1024 to 2048x2048. Alternatively, you can enter the actual image dimensions in width and height and instead use the "multiple_of" value (although this didn't work in some of our tests). You can also turn on "keep_proportion" to avoid stretching the output image as described earlier. We keep the remaining settings at their default values, i.e.:
- interpolation = lanczos
- condition = always.
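Here is a minimal sketch of how we calculate the target resolution externally: it scales the input dimensions by a given factor, keeping the aspect ratio, and optionally rounds each side down to a multiple (roughly what the "multiple_of" option is meant to do). The function name and the rounding behaviour are our own assumptions, not the node's internals.

```python
def target_resolution(width, height, scale=2.0, multiple_of=1):
    """Scale (width, height) by `scale`, keeping the aspect ratio,
    and round each side down to the nearest multiple of `multiple_of`."""
    new_w = int(width * scale)
    new_h = int(height * scale)
    if multiple_of > 1:
        new_w -= new_w % multiple_of
        new_h -= new_h % multiple_of
    return new_w, new_h

# Examples matching the resolutions we typically use:
print(target_resolution(1024, 1024, scale=2.0))                # (2048, 2048)
print(target_resolution(512, 768, scale=2.0, multiple_of=64))  # (1024, 1536)
```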
We usually ignore the "SUPIR First Stage (Denoiser)" node as well as the "SUPIR Encode" node; the default settings have worked for all of our use cases so far.
SUPIR Conditioner Node
This brings us to the "SUPIR Conditioner" node, which takes a positive and a negative prompt. There are two ways you may want to approach this. Generally, shorter, more general prompts seem to produce slightly better results. The node's default workflow comes with the positive prompt "high quality, detailed, photograph..." and the negative prompt "bad quality, blurry, messy." You can simply add some extra information to that template, such as "photograph of a fully veiled person". If we want to put particular emphasis on something in the high-resolution output, we can also add it here, e.g. "veiled in soft linen, detailed linen". This will not completely change your output, but it can provide some additional detail. Sometimes, if you don't specify these details in the positive prompt, they may get lost. We had an instance where the person in the input image had freckles, and the freckles were basically erased by the upscaling, as SUPIR probably treated them as noise rather than as a feature. By putting "person with freckles" in the positive prompt, the freckles were retained after the upscaling.
The same holds for the negative prompt. If we create portraits with faces, we like to keep the depicted people androgynous. We therefore put the term "woman" in the positive prompt and "woman with makeup" in the negative prompt, as most models tend to create women with makeup by default (a longer post about this will come soon). Here as well, if we don't use the negative prompt "woman with makeup", SUPIR tends to just add makeup to the person during the upscaling process.
Of course, if you're upscaling an AI image, you can also just reuse the exact positive and negative prompts you used to create that image. But it can be bothersome to look up the prompt, negative prompt, and seed (relevant later), and it doesn't consistently seem to produce better results either.
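To make this concrete, here is roughly how we think of assembling the conditioner prompts: start from the node's default template and append the subject plus any details we want preserved. The strings below are example values taken from the scenarios described above, not anything the node requires.

```python
# Building the conditioner prompts from the default template plus
# subject-specific additions (example values from the cases above).
positive_base = "high quality, detailed, photograph"
negative_base = "bad quality, blurry, messy"

subject = "photograph of a fully veiled person"
emphasis = "veiled in soft linen, detailed linen"  # details we don't want the upscaler to smooth away

positive_prompt = ", ".join([positive_base, subject, emphasis])
negative_prompt = ", ".join([negative_base, "woman with makeup"])

print(positive_prompt)
# high quality, detailed, photograph, photograph of a fully veiled person, veiled in soft linen, detailed linen
print(negative_prompt)
# bad quality, blurry, messy, woman with makeup
```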
SUPIR Sampler Node
In the "SUPIR Sampler" node are some crucial parameters that we may adapt to change our output image. We may start with the seed. We initially used the seed of the original generation here, but it does not seem to correlate to the quality of the output image. This may be due to the fact that the seed may be the same, but the noise from which the upscaled image is generated is in a different resolution. So, practically, you may put any seed here and change it if you're not fully happy with the result.
We kept "control_after_generate" set to "fixed" in our tests, which produced good results.
The "steps" parameter is the next most important value that you may want to adapt to your use case. Generally speaking, a higher number of steps will produce higher quality images, although there is something like a bell curve, meaning that infinitely more steps won't produce better images but may also break them after a certain tipping point. For still images, we tend to use 60 to 80 steps, which takes around 5 to 7 minutes to render on an Nvidia Geforce RTX 4070TI. However, if you're on a lower-end device or don't have the time to upscale each image for 5+ minutes, you may go down to values of around 20 to 30 steps. In fact, for our video upscaling, we upscale each image with practically the same settings but 20 steps to speed up the process. You may also consider using a Lightning-type model, which can produce high-quality images in just around 4 steps. However, in our tests, the results with the Juggernaut Hyper model were less than satisfying. But with a bit of dialling in the settings, this might as well be a feasible alternative for a faster upscaling workflow.
Next, we have the "cfg_scale_start" and "cfg_scale_end" parameters. We played around with these a little as well, but the best results came with the defaults of 2 for the start value and 1.5 for the end value.
We will skip over the parameters "EDM_s_churn", "s_noise", and "DPMPP_eta", as we couldn't tell you exactly what they do and haven't tested them extensively. The default values have always worked fine for us.
In contrast, "control_scale_start" and "control_scale_end" are quite interesting. While, here again, the default values of 1 for the start and 0.9 for the end produced the most reliable outputs in our tests, you may raise both values by a factor of about 1.1, for example, to keep the consistency a bit higher. On the other hand, if you have an image that still has some minor flaws, e.g. in a person's face, lowering the control scale by a factor of about 0.9 can act almost like an ADetailer and "fix" the face.
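For reference, here is what scaling the control values by those factors works out to, starting from the defaults of 1.0 and 0.9; just simple arithmetic, shown as a tiny snippet.

```python
# Adjusting the control scale from the defaults of 1.0 (start) and 0.9 (end):
# multiply both by ~1.1 for tighter consistency, or by ~0.9 to loosen the
# control a little and let SUPIR "fix" minor flaws.
defaults = (1.0, 0.9)

tighter = tuple(round(v * 1.1, 2) for v in defaults)
looser = tuple(round(v * 0.9, 2) for v in defaults)

print("defaults:", defaults)  # (1.0, 0.9)
print("tighter: ", tighter)   # (1.1, 0.99)
print("looser:  ", looser)    # (0.9, 0.81)
```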
With "restore_cfg", we didn't play too much either and kept it at 1 throughout testing.
The parameter "keep _model_loaded" can come in handy when batch-loading images for upscaling, as it sped up the upscaling per image by about 10 per cent in our tests.
The sampler is quite important and can become critical when you run into error messages. By default, we use the "RestoreDPMPP2MSampler". However, when we upscaled to roughly 4K on an Nvidia GeForce RTX 4070 Ti, this caused the error "Allocation on device", seemingly due to a lack of available VRAM (in this case, 12GB); depending on your GPU, you may run into the issue at different resolutions. As soon as we get that particular error, we switch the sampler to the "TiledRestoreDPMPP2MSampler", which appears to require less VRAM thanks to the tiling. The tiling is barely ever visible (unless other values are changed drastically), so this should work in most cases and produce almost identical outputs.
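As a very rough rule of thumb derived from that single data point (non-tiled sampling failing at roughly 4K on 12GB of VRAM), you could pre-emptively switch to the tiled sampler once the target resolution gets large relative to your VRAM. The threshold below is our own guess, not a documented limit; if in doubt, just switch samplers when you actually hit the allocation error.

```python
# Very rough heuristic for picking the sampler, based only on our observation
# that ~4K output overflowed 12 GB of VRAM with the non-tiled sampler.
# The megapixels-per-GB threshold is a guess, not a documented limit.
def pick_sampler(target_width, target_height, vram_gb):
    megapixels = target_width * target_height / 1_000_000
    if megapixels / vram_gb > 0.6:  # ~8 MP (4K) on 12 GB tripped the error for us
        return "TiledRestoreDPMPP2MSampler"
    return "RestoreDPMPP2MSampler"

print(pick_sampler(2048, 2048, vram_gb=12))  # RestoreDPMPP2MSampler
print(pick_sampler(3840, 2160, vram_gb=12))  # TiledRestoreDPMPP2MSampler
```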
Lastly, we haven't played around much with the "sampler_tile_size" and "sampler_tile_stride", as 1024 and 512, respectively, always worked just fine.
SUPIR Decode Node
This brings us to the "SUPIR Decode Node" where we can check whether to use the tiled VAE and set the "decoder_tile_size". We always keep "use_tiled_vae" checked and keep the "decoder_tile_size" to 512 for our use cases.
Color Match Node
In the Color Match node, we always keep the method set to "mkl", but you may test what works best for you.
From there, only the preview, save, and comparison nodes remain; they are not strictly necessary and can be adapted to your liking.
SUPIR Batch Image Upscale
We already mentioned most of the values that change when switching between single and batch image upscaling. Generally, batch upscaling only works reliably if the same positive and negative prompts sensibly apply to all input images. It therefore worked very neatly with the output image sequences from our Stable Video Diffusion workflow (article coming soon). But if you have a folder with a bunch of very different images, it makes more sense to go through them individually and adjust the positive and negative prompts accordingly.
The workflow starts by switching the "Load Image" node to the "Load Image Batch" node, where we specify the input folder. The "index" specifies where the batch loading starts. Thus, "0" starts from the first file, but if you only want to upscale part of the input images in a run, you can check what the last "index" you upscaled was (this is shown in the console log) and pick up the upscaling from there in the next run.
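If it helps, here is our mental model of that index sketched in Python: the files in the folder are sorted, and a run picks up at the given position. This is only an illustration of the idea, not the node's actual implementation, and the folder path is a made-up example.

```python
from pathlib import Path

# Our mental model of the "index" in the Load Image Batch node: sort the
# folder's files and start the run at that position. Illustration only,
# not the node's actual implementation; the folder path is a made-up example.
def batch_slice(folder, start_index):
    files = sorted(Path(folder).glob("*.png"))
    return files[start_index:]

# First run: start_index = 0. If the console log shows the last processed
# index was, say, 95, the next run can resume with start_index = 96.
for image_path in batch_slice("frames/svd_output", start_index=0):
    print("would upscale:", image_path.name)
```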
Also, make sure that you connect the "Load Image Batch" node to both the "Set_InputImage" and the "Image Resize" nodes.
Then, adapt the output resolution to your liking and add the positive and negative prompts.
This brings us to the "SUPIR Sampler" node, where we most likely want to lower the steps to values around 20 steps for regular SDXL models. Now, we can also enable "keep_model_loaded". The sampler we can then also (re-)set to the "RestoreDPMPP2MSampler".
Lastly, we usually don't connect the comparison node in this workflow; we only send the image to the "Preview Image" node and save it via the "Save Image" node to its own folder to keep the image sequence organised.
With these settings, it takes about 30-35 seconds per image to upscale on an Nvidia GeForce RTX 4070 Ti.
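At that rate, you can estimate how long a whole image sequence will take. The snippet below is our own arithmetic, assuming roughly 32 seconds per frame; the frame counts are arbitrary examples.

```python
# Estimating total batch time at roughly 32 s per frame (our 4070 Ti numbers).
seconds_per_frame = 32
for frame_count in (25, 96, 250):
    total_min = frame_count * seconds_per_frame / 60
    print(f"{frame_count:>3} frames ≈ {total_min:.0f} min")
```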
Download
You can download our workflow via the button below. Simply drag and drop the JSON file into your ComfyUI.
Conclusion
We personally love the SUPIR upscaling workflow. Of the approaches we've tried, it produces the best high-resolution images for our use case and exposes enough parameters to fine-tune the output images. While it's not the fastest way to upscale images, and while it requires some decent hardware, it's very much worth the effort from our perspective. We hope you found this helpful. If you have any further questions, make sure to reach out to us on Instagram or join our Patreon. If you want to stay updated, you can also subscribe to our newsletter. No worries, we won't spam.