ComfyUI image upscaling: notes and tips collected from Reddit threads.

Depending on the noise and denoise strength, a tiled upscale can end up treating each square as an individual image, so instead of one girl in the picture you get ten tiny girls stitched into one giant upscaled image.

If your starting image is 1024x1024, the image gets resized so that the inpainted area becomes the same size as the starting image, i.e. 1024x1024. There are also "face detailer" workflows for faces specifically. Another trick is to downscale a high-resolution image to do a whole-image inpaint, and then upscale only the inpainted part back to the original high resolution. There is also a mask-by-text workflow that identifies specific things in the image by prompt and inpaints them.

So I've used the simple tiles custom nodes to break the image up and process each tile one at a time; there is a batch/list switch you can toggle to do it all as a batch if you have the VRAM. After two days of testing, I found Ultimate SD Upscale to be detrimental here.

Give an upscaler model an image of a person with super smooth skin and it will output a higher-resolution picture of smooth skin, but give that image to a KSampler (using a low denoise value) and it can now generate new details, like skin texture. Without that pass, images are too blurry and lack detail; it's like upscaling any regular image with traditional methods.

Here are details on the workflow I created. It is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption. This is done after the refined image is upscaled and encoded into a latent, which then goes through a second KSampler at 0.5 denoise (needed for latent upscaling, I don't know why), followed by a 2x upscale using a lineart ControlNet.

I have a much lighter setup, without detailers, but it gives a better result; if you compare your resulting image on comfyworkflows.com, my result is about the same.

I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image.

To install upscale models, click on Manager in the ComfyUI window, then Install Models in the ComfyUI Manager menu, search for "upscale", and click Install for the models you want.

This next queue will then create a new batch of four images, but also upscale the selected images cached in the previous prompt, or cancel if you don't like any of them. Thanks!

LOL, yeah, I push the denoising on Ultimate Upscale too, quite often, just saying "I'll fix it in Photoshop".

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale like you mentioned.

With no finishing (i.e. inpainting, hires fix, upscale, face detailer, etc.) and no ControlNet.

Features of one shared workflow: upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved); multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but more can easily be added); detail and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly and it works like a charm. Then, when I have a nice result, I do the composition (Image 2).

You could add a latent upscale in the middle of the process and then an image downscale in pixel space at the end (use an upscale node with 0.X values) if you want to benefit from the higher-resolution processing.
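The "hand the result back to a KSampler at a low denoise" idea above is the core of most of these workflows. As a rough illustration of the same two-stage idea outside ComfyUI (this is not any poster's actual node graph), here is a minimal sketch using the Hugging Face diffusers library; the model id, file names, prompt, and strength value are placeholders rather than settings taken from the thread.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an SD1.5-class img2img pipeline (placeholder model id).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "upscaled_4x.png" stands in for the output of your upscaler model (ESRGAN, UltraSharp, ...).
init = Image.open("upscaled_4x.png").convert("RGB")

# strength plays the role of the KSampler denoise: low values keep the composition
# but let the sampler invent fine texture (skin pores, fabric weave, etc.).
result = pipe(
    prompt="photo of a person, detailed skin texture",
    image=init,
    strength=0.3,
    guidance_scale=7.0,
).images[0]
result.save("refined.png")
```

In practice you would tile the image or keep the intermediate size modest to avoid running out of VRAM; the sketch is only meant to show where the low denoise value enters the process.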
Ultimate SD Upscale 2x and Ultimate SD Upscale 3x: it uses ControlNet tile with Ultimate SD Upscale. Still working on the whole thing, but I've got the idea down. Does anyone have any suggestions? Would it be better to do an iterative upscale instead?

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale, because the upscale model can only output a 4x image when you want 2x, so you downscale afterwards. But I probably wouldn't upscale by 4x at all if fidelity is important.

I have a custom image resizer that ensures the input image matches the output dimensions. So generate a batch, and then right-click the one you want to send on to upscale. Like many XL users out there, I'm also new to ComfyUI and very much a beginner in this regard.

Click on the image below and drag and drop the full-size image onto the ComfyUI canvas; it will replicate the image's workflow and seed.

For upscaling with img2img, you first upscale/crop the source image (optionally using a dedicated scaling model like UltraSharp), convert it to latent, and then run the KSampler on it. And I'm sometimes too busy scrutinizing the city, landscape, object, vehicle or creature in which I'm trying to encourage insane detail to see what hallucinations it has manifested in the sky. Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060): I've so far achieved this with Ultimate SD Upscale and the 4x-Ultramix_restore upscale model.

From the upscale node documentation: IMAGE is the input image to be upscaled; this parameter is central to the node's operation, serving as the primary data on which the resizing is applied, and the quality and dimensions of the output are directly influenced by the original image's properties. upscale_method (COMBO[STRING]) specifies the method used for upscaling.

I tried installing the ComfyUI-Image-Selector plugin, which claims that I can simply mute or disconnect the Save Image node, etc., and then re-enable it once I make my selections.

I am now just setting up ComfyUI and have issues (already, LOL) with opening the ComfyUI Manager from CivitAI; basically it doesn't open after downloading (v. 22, the latest one available).

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do 4x and that's often too big to process), then send it back through VAE encode and sample it again.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. (I think; I haven't used A1111 in a while.)

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale.
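A concrete way to read the "4x upscaler for a 2x upscale" advice: run the 4x model, then resample the output down to the 2x size you actually wanted. The sketch below only shows the downscale half, using Pillow, with placeholder file names and sizes.

```python
from PIL import Image

# Original generation was 1024x1024; the 4x model produced a 4096x4096 file (placeholder names).
img4x = Image.open("out_4x.png")
target = (1024 * 2, 1024 * 2)   # the 2x result actually wanted
img4x.resize(target, Image.Resampling.LANCZOS).save("out_2x.png")
```

Inside ComfyUI the same thing is usually done with an "upscale by" node set to a fractional value, as several of the comments below note.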
Overall: image upscale is less detailed, but more faithful to the image you upscale; latent upscale looks much more detailed, but gets rid of the detail of the original image. Do the same comparison with images that are much more detailed, with characters and patterns. Thanks for all your comments.

One settings recipe: 0.6 denoise and either CNet (tile) strength 0.9 with an end_percent cutoff (euler, sgm_uniform), or a lower CNet strength (euler, sgm_uniform); both are of similar speed.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it and pass into the node whatever image I like.

Along with the normal image preview, the other methods are: latent upscaled 2x; hires fix 2x (two-pass img2img); image upscaled 4x using the nearest-exact upscale method.

I don't get where the problem is. I have checked the ComfyUI examples and used one of their hires fixes, but when I upscale the latent image I get a glitchy image (only the non-masked part of the original I2I image) after the second pass; if I upscale the image out of the latent space and then back into latent for the second pass, the result is OK. After that I send it through a face detailer and an Ultimate SD Upscale.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

The area you inpaint gets rendered in the same resolution as your starting image.

I gave up on latent upscale. Here is a workflow that I use currently with Ultimate SD Upscale. The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. This is not the case.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description of what goes in the area defined by the coordinates starting from x:0px y:320px to x:768px y:(...). Oh, because in SD I noticed the aspect ratio of the latent image will influence the result of the output: if you wanted a tall, standing person but had the aspect ratio of a standard desktop (1920x1080, or 1.7777), the person often comes out kneeling. So I tested with aspect ratios < 1 (more vertical) and it definitely changed the output.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together. The workflow has different upscale flows that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. It uses Face Detailer to enhance faces if required.

ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size. For example, if you start with a 512x512 latent empty image and apply a 4x model, apply "upscale by" 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024). You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.
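For reference, here is roughly what that "upscale with a model, then scale by 0.5" chain looks like in ComfyUI's API (JSON) prompt format, written out as a Python dict. The node class names and input keys below are the stock ones as far as I recall, and the model filename is just an example; treat this as a template to check against a workflow exported from your own install rather than as a drop-in file.

```python
# Minimal sketch of the chain: load image -> 4x upscale model -> scale by 0.5 -> save.
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "example.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},   # any 4x ESRGAN-style model you have
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "ImageScaleBy",   # the fractional "upscale by" step: 4x * 0.5 = 2x
          "inputs": {"image": ["3", 0], "upscale_method": "bicubic", "scale_by": 0.5}},
    "5": {"class_type": "SaveImage",
          "inputs": {"images": ["4", 0], "filename_prefix": "upscaled"}},
}
```

Each ["node_id", output_index] pair is how the API format wires one node's output into another node's input.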
My problem is that my generation produces a 1-pixel line at the right/bottom of the image which is weird/white. Upscaled by the UltraSharp 4x upscaler.

You end up with images anyway after KSampling, so you can use those upscale nodes. Hires fix with an add-detail LoRA.

The first thing you need to do is stop the generation midway or later: if you have 40 steps, instruct the sampler to stop at 29, then upscale the unfinished photo (either as a latent or as an image; I found that it's better to upscale it as an image and re-encode it as a new latent), feed it to a new sampler, and instruct it to continue the generation. Working on larger latents, the challenge is to keep the model somehow still generating an image that is relatively coherent with the original low-resolution image.

I liked the ability in MJ to choose an image from the batch and upscale just that image. A homogeneous image like that doesn't tell the whole story though ^^.

Look at this workflow: the Upscaler function of my AP Workflow 8.0 for ComfyUI, which is free, uses the CCSR node and can upscale 8x and 10x without even the need for any noise injection (assuming you don't want "creative upscaling").

A follow-up composition using IPAdapter with a simple color mask and three input images (two characters and a background); note how the girl in blue has her arm around the warrior girl, a bit of detail that the AI put in.

Edit: also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

It works beautifully to select images from a batch, but only if I have everything enabled when I first run the workflow. But more useful is that you can now right-click an image in the `Preview for Image Chooser` and select `Progress this image`, which is the same as selecting its number and pressing go.

I was running some tests last night with SD1.5 and I was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Ideally, I'd love to leverage the prompt loaded from the image metadata (optional), but more crucially, I'm seeking guidance on how to efficiently batch load images from a folder for subsequent upscaling. This way I can upscale my images while I am away from my system. So I basically want to select multiple images from my drive so that the upscaler scales all the images I have selected, using the same sampler settings and whatnot. Yet when I try to upscale more than 500-1000 images in a single batch from 1024x576 to 1920x1080, it blows up.
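For the batch question above, one approach (not from the thread, just a hedged sketch) is to drive a running ComfyUI instance through its HTTP API and queue the same upscale workflow once per file. This reuses the `workflow` dict from the API-format sketch earlier, assumes the default local endpoint at 127.0.0.1:8188, and assumes the images have already been copied into ComfyUI's input folder, since the stock LoadImage node reads file names from there.

```python
import json
import os
import urllib.request

COMFY_INPUT_DIR = "/path/to/ComfyUI/input"   # placeholder path; adjust to your install

for name in sorted(os.listdir(COMFY_INPUT_DIR)):
    if not name.lower().endswith((".png", ".jpg", ".jpeg", ".webp")):
        continue
    workflow["1"]["inputs"]["image"] = name    # point the LoadImage node at this file
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(name, "->", resp.read().decode("utf-8"))   # queue confirmation per image
```

Queuing one prompt per image should also sidestep the "a 500-1000 image batch blows up" problem, since the images are processed one at a time instead of being held as a single batch.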
The final node is where ComfyUI takes those images and turns them into a video.

You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass.

Hello. For more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image. This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc. I switched to ComfyUI not too long ago, but am falling more and more in love.

The image on the left (directly after generation) is blurry and has lost some tiny details; the image on the right (after the mask-compose node) retains the sharpness, but you can clearly see the bad composition line, with a sharp transition.

ComfyUI: Ultimate Upscaler - upscale any image from Stable Diffusion, MidJourney, or a photo! (YouTube). This is useful to redraw parts that get messed up when you use Ultimate SD Upscale with a high denoise. The workflow isn't attached to this image; you'll have to download it from the G-drive link. This makes the image larger but also makes the inpainting more detailed.

Again, I would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice.

That's where CN tile comes in, allowing you to push your i2i denoise levels WAY up without losing the input image composition. You could try to push your denoise at the start of an iterative upscale to, say, 0.4, but use a ControlNet relevant to your image so you don't lose too much of your original image, and combine that with the iterative upscaler and concat a secondary positive prompt telling the model to add detail or improve detail.

This works best with Stable Cascade images, might still work with SDXL or SD1.5, but appears to work poorly with external (e.g. natural or MJ) images. Love it! Thanks, ComfyUI.

For a plain model upscale the chain is just: "LoadImage / Load Image", "Upscale Model Loader / Load Upscale Model", "ImageUpscaleWithModel / Upscale Image (using Model)", then "Image Save / Image Save" or "SaveImage / Save Image". That will upscale with no latent invention/injection of creative bits, but it still intelligently adds pixels per the ESRGAN upscaler models.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail; instead, I use Tiled KSampler with a 0.x noise setting. At the moment I generate my images with a detail LoRA at 512 or 768 to avoid weird generations, then latent upscale them by 2 with nearest and run them with a 0.x denoise.

2x upscale using Ultimate SD Upscale and Tile ControlNet. (Optional) Upscale to 3x by default and use ControlNet to stick to the base image; speed provided by Automatic CFG. And since Ultimate Upscale only renders a section of the image at a time, the prompt and the image don't necessarily go along well together at higher denoise levels.

As my test bed, I'll be downloading the thumbnail from, say, my Facebook profile picture, which is fairly small (206x206); I'm then upscaling it in Photopea to 512x512 just to give me a base image that matches the SD1.5 models, since their training was done at a low resolution.
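The Photopea step in that test bed (stretching a roughly 206x206 thumbnail to 512x512 so it matches the SD1.5 training resolution before the img2img pass) can also be done in a couple of lines with Pillow; the file names here are placeholders.

```python
from PIL import Image

thumb = Image.open("profile_thumb.jpg").convert("RGB")      # ~206x206 source image
base = thumb.resize((512, 512), Image.Resampling.LANCZOS)   # match the SD1.5 training size
base.save("base_512.png")                                    # use this as the img2img input
```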
In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler. One simple recipe: generate the initial image at 512x768; upscale x1.5 to x2 (no need for a model, it can be a cheap latent upscale); sample again at denoise 0.5, you don't need that many steps; from there you can use a 4x upscale model and run a sample again at low denoise if you want higher resolution.

I generate an image that I like, then mute the first KSampler, unmute Ultimate SD Upscale, and upscale from that. This is the fastest way to test images against an image I have a higher-res sample of for testing.

Grab the image from your file folder and drag it onto the ComfyUI window. For example, I can load an image, select a model (4x-UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

The key observation here is that by using the EfficientNet encoder from Hugging Face, you can immediately obtain what your image should look like after stage C if you were to create it with stage (...).

Pause/preview images to proceed forward in the workflow. Save images with metadata.

Personally, in my opinion, your setup is heavily overloaded with stages that are incomprehensible to me.

You have two different ways you can perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model. You can download the workflows over on the Prompting Pixels website. Latent quality is better, but the final image deviates significantly from the initial generation.

Hi, I am upscaling a long sequence (batch x batch count) of images, one by one, from 640x360 to 4K. The resolution is okay, but if possible I would like to get something better. Once I've amassed a collection of noteworthy images, my plan is to compile them into a folder and execute a 2x upscale in a batch.

Also, I did edit the ComfyUI-Custom-Scripts custom node's Python file string_function.py, in order to allow the "Preview Image" node to (...).

There's only so much you can do with SD1.5 models (it seems pointless to go larger).
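One footnote to the 512x768 to x1.5 recipe and the latent-upscale hires fix path above, which is my own assumption rather than something stated in these posts: SD latents are 8x smaller than the pixel image, so it is common to keep upscale targets at multiples of 8. A tiny helper like the one below picks the nearest such size.

```python
def latent_friendly(width: int, height: int, factor: float = 1.5, step: int = 8):
    """Scale (width, height) by `factor` and snap both sides to the nearest multiple of `step`."""
    snap = lambda v: max(step, int(round(v * factor / step)) * step)
    return snap(width), snap(height)

# For the 512x768 starting point in the recipe above: a 1.5x latent upscale target of 768x1152.
print(latent_friendly(512, 768))
```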