ComfyUI Reddit
-
It compares what you passed back last time with what you passed back this time, using Python equality. A vast.ai instance with the ComfyUI docker image, but since I use the container on demand, I usually stop it when I'm done. For how-to, refer to the YouTube link provided below. This tool also lets you export your workflows in a “launcher.json” file format. If you use a 'latent upscale' you need to make your denoise on the following ksampler anywhere from 0.5, depending on how much you are upscaling. ComfyUI Style Model, Comprehensive Step-by-Step Guide From Installation. If you follow the video to YouTube, there's a link to a redo of this with sound working. That's great to hear! I'm glad you're enjoying the ComfyUI interface. Find tips, tricks and refiners to enhance your image quality. This pack includes a node called "power prompt". There's an SD1.5 and SDXL version. A1111 is more like a sturdy but highly adaptable workbench for image generation. Working on it. It is time consuming to transfer/download all the models, loras and plugins all over again every time I want to use it. by mythical_artist_. Easiest for the models is to download with this script. I recently discovered ComfyBox, a UI frontend for ComfyUI. Omost + ComfyUI - Pros and Cons. Worked wonders with plain euler on initial gen and dpmpp2m on second pass for me. When in Live mode, whatever you do on the canvas is translated to a virtual custom workflow and executed. Hopefully, some of the most important extensions such as Adetailer will be ported to ComfyUI. And install the extra packages needed. Eh, if you build the right workflow, it will pop out 2k and 8k images without the need for a lot of RAM. Lastly, if you Ctrl+C and Ctrl+Shift+V, the pasted node(s) will also copy the inputs. Enjoy a comfortable and intuitive painting app. ComfyScript v0.3. ComfyUI runs in a browser - try running it full screen or hide those bars from the browser settings menu. 
But - if any inputs have changed, it isn't called, because any changed inputs mean it will be re-run. Local instance of ComfyUI + vast.ai GPU(s). Next, install RGThree's custom node pack, from the manager. Please share your tips, tricks, and workflows for using this… Welcome to the unofficial ComfyUI subreddit. You can copy-paste across ComfyUI sessions or also Ctrl+drag select and right click "save as template". ComfyShop phase 1 is to establish the basic painting features for ComfyUI. Clicking and dragging to move around a large field of settings might make sense for large workflows or complicated setups, but the downside is, obviously, a loss of simple cohesion. I just don't think I have all the plugins. Just the video I needed as I'm learning ComfyUI and node based software. Really nice visuals - thanks for the mention. Then switch to this model in the checkpoint node. I've rented a vast.ai GPU instance. Tried the llite custom nodes with lllite models and was impressed. If ==, it means no change. I'll try it in ComfyUI later, once I set up the refiner workflow, which I've yet to do. ComfyUI installed and running! Such a beautiful interface! Every time I get an application working and starting properly, I feel like a master hacker! HAHAHA! I'll now continue reading the links you folks sent earlier, including the YouTube link too. New ComfyUI Node - Better Image Dimensions. Looking forward to seeing your workflow. You can gradually build a collection of these images that you can separately load into ComfyUI in order to perform specific basic tasks. Looks like a nice tutorial, thanks! Since your tutorial leads with the 'Comflowy' interface option, can you give me a little compare-and-contrast about how Comflowy will fit in with ComfyBox, StableSwarmUI, and SDFX? New here, about three weeks in, love Comfy. You should be able to drag it anywhere. 
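The caching rule described in these two snippets (compare this run's inputs against last run's with plain Python equality; re-run the node only when they differ) can be sketched in a few lines. This is an illustration of the rule only, with made-up names (`NodeCache`, `run`), not ComfyUI's actual executor:

```python
# Illustrative sketch of the caching rule described above, not ComfyUI's
# real internals: a node function re-runs only when its inputs compare
# unequal (Python ==) to the inputs from the previous run.

class NodeCache:
    def __init__(self):
        self._last_inputs = None
        self._last_output = None
        self._primed = False

    def run(self, node_fn, inputs):
        # If ==, it means no change: reuse the cached output.
        if self._primed and inputs == self._last_inputs:
            return self._last_output
        # Any changed input means the node will be re-run.
        self._last_output = node_fn(**inputs)
        self._last_inputs = dict(inputs)
        self._primed = True
        return self._last_output
```

Calling `run` twice with equal inputs executes the node function once; changing any input triggers a fresh execution.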
I still have to use a desktop to alter Comfy workflows, but once they are set I just use this extension and webui works pretty decently on mobile devices. Install ComfyUI Manager. Hi geekies, Dalia here! Along with u/AuraRevolt we worked on a fun test tonight. Will be interesting seeing LDSR ported to ComfyUI, OR any other powerful upscaler. Take a look at Flowise. Hi Reddit! I just shipped some new custom nodes that let you easily use the new MagicAnimate model inside ComfyUI!… Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. It uses LangChain under the hood, which is a framework for connecting and triggering modules based on LLMs. Results and speed will vary depending on the sampler used. This is HalloNode for ComfyUI. ComfyUI: an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything, and it now supports ControlNets. Best ComfyUI Workflows, Ideas, and Nodes/Settings. ComfyUI Portrait Master. Most of them already are, if you are using the DEV branch, by the way. ComfyUI does support some models in diffusers format (advanced->loaders->UNETLoader), but how it works is that it converts them to stability (ldm or sgm) format internally. I do a lot of plain generations. Since you already have experience with A1111, go for it, it's really fun. Image Realistic Composite & Refine ComfyUI Workflow. Hi, I'm looking for input and suggestions on how I can improve my output image results using tips and tricks as well as various workflow setups. I've been slowly migrating to ComfyUI. Then I may discover that ComfyUI on Windows works only with Nvidia cards and AMD needs… I will be playing with this over the weekend. Please keep posted images SFW. 
I’m asking because I’ve read someone got banned from using local tunnel. If it is, turn it off. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not. However, given the right nodes you should be able to generate the same kind of videos, and ComfyUI does in my opinion work exceptionally well for video generation due to the flexibility of working node-based. Maybe ComfyUI just needs quick settings, or previous settings saved like the all-in-one prompt extension does, that way people don't have to type it all again. Looks good, but would love to have more examples on different use cases for a noob like me. It allows you to put Loras and Embeddings… Automatic1111 for multiple workflows and extensions. Developed a flow that turns simple sketches into game icons. In the process of building the site (comfyworkflows.com), we learned that many people found it hard to locally install & run the workflows that were on it, due to hardware requirements, not having the right custom nodes, model checkpoints, etc. It comes with Comfy and (optional) controlnets and models. Please share your tips, tricks, and workflows for using this software to create your AI art. ComfyUI basics tutorial. The code is simple. If you have another Stable Diffusion UI you might be able to reuse the dependencies. I have a wide range of tutorials with both basic and advanced workflows. Ah, that might be it. ComfyUI got attention recently because the developer works for StabilityAI and was able to be the first to get SDXL running. There are tutorials covering upscaling… ComfyUI + SDXL + SVD. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. IPAdapter for all. 
It seems to facilitate easy sketching and discussion of directions. There is an extension for Auto's webui that lets it launch and interact with Comfy workflows; this is how I solved it. After you get the logic of diffusion you start creating your own workflows. My comments would be chock-full of embarrassing grammar errors and typos if I didn't edit them. I go to the ComfyUI GitHub and read the specification and installation instructions. It's like an experimentation superlab. Applied a technique to enhance details in the final stage. I've published on GitHub my custom ComfyUI node for semi-automatic generation of portrait prompts. Sad that I had to go from ComfyUI. Everything was going pretty well, and I was learning a lot about how this technology works. Don't get me wrong, ComfyUI is an amazing tool to learn, but I had to say goodbye just because of this. You could try to set your denoise at the start of an iterative upscale at say .4. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner… You will likely never use "Deforum" in ComfyUI, as it is unlikely to fit well into ComfyUI. Nice one! I had to code some simple custom nodes, which you can find here. Good for depth, open pose so far so good. The power prompt node replaces your positive and negative prompts in a Comfy workflow. I always run in a window. Look Ma! No lines in my ComfyUI workflow! Well, almost! Sorry for the clickbait title. I've seen that there is a way to use a remote… 
So in this workflow each of them will run on your input image and you… For those intimidated by ComfyUI, I made a complete guide starting from the beginning. ComfyUI - great for complex workflows. You can use these images as "tools" in a toolbox. There's an SD1.5 and SDXL version. In this workflow we try and explore one concept of making T-shirt mockups with some cool input images, using the IP-Adapter to convert them into final images. 1 - LDSR upscaler or more powerful upscaler: saw a different grade of details upscaling in Automatic1111 vs. ComfyUI. Thanks for taking the time to help us newbies along! ComfyShop has been introduced to the ComfyI2I family. I use ComfyUI for crazy experiments like pre-generating for controlnet (I don't like how Comfy handles controlnet though) and then using a multitude of conditions in the process. UPDATE 2: I suggest if you meant s/it, you edit your comment, even though it will leave me looking completely confused. For example, I want to install ComfyUI. A 2.0 upscale usually requires 0.5+ denoise. Delightful to watch. And above all, BE NICE. If you want to build a custom node yourself you probably need to get familiar with Litegraph. MOCKUP generator using SDXL turbo and IP-adaptor plus workflow. And the new interface is also an improvement, as it's cleaner and tighter. But the problem I have with ComfyUI is unfortunately not with how long it takes to figure out; I just find it clunky. Unfortunately, I can't see the numbers for your final sampling step at the moment. :) But after a little bit of work, I was able to clean up lines quite a bit. To experiment with it I re-created a workflow with it, similar to my… Press go. 
Thanks, that is exactly the intent. I tried using as many native nodes, classes, and functions provided by ComfyUI as possible, but unfortunately I couldn't find a way to use the KSampler & Load Checkpoint nodes directly without rewriting the core model scripts. After struggling for two days, I realized the benefits of that weren't much, so I decided to focus on improving the functionality and efficiency. I really recommend it. Took my 35-step generations down to 10-15 steps. There is an option to dump those workflows to disk to load them directly into ComfyUI. Uses less VRAM than A1111. Newcomers should familiarize themselves with easier-to-understand workflows, as it can be somewhat complex to understand a workflow with so many nodes in detail, despite the attempt at a clear structure. Fooocus / Fooocus-MRE / RuinedFooocus - quick image generation, and a simple and easy to use GUI (based on the Comfy backend). Linked is the article I wrote on Civitai if you want more details, as well as the repo if you want to give it a try. I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes with area composition ones. This is because latent upscalers are kinda destructive in how they upscale images. Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended. ComfyUI on Paperspace. Step three: Feed your source into the compositional and your style into the style. Better Image Dimensions is my first developed node for ComfyUI, starting simple to get a feel for how they work and solving a problem I had right away with ComfyUI. To open ComfyShop, simply right click on any image node that outputs an image and mask and you will see the ComfyShop option, much in the same way you would see MaskEditor. I expect it will be faster. After installing from the manager you can activate your ComfyUI environment. 
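Since ComfyUI workflows are plain JSON files, the "dump those workflows to disk to load them directly" idea mentioned above is just a round-trip through the filesystem. A minimal sketch (the graph structure shown is invented for illustration, not a real ComfyUI workflow):

```python
import json
from pathlib import Path

# Minimal sketch of dumping a workflow to disk and loading it back.
# ComfyUI workflows are plain JSON; the dict below is a made-up example,
# not a real ComfyUI graph.

def save_workflow(workflow: dict, path: str) -> None:
    Path(path).write_text(json.dumps(workflow, indent=2))

def load_workflow(path: str) -> dict:
    return json.loads(Path(path).read_text())
```

A saved file like this can then be kept in a personal library and re-loaded later instead of rebuilding the graph by hand.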
These were inspired by Searge's approach of packaging parameters together. The idea was, given a pre-generated image, generate a new one based on the source image and apply the face of our beloved model, Dalia (me). If not, go into settings and see if the option to “remember” (or lock) the manager menu is on. In researching InPainting using SDXL 1.0… ZeonSeven. POD-MOCKUP generator using SDXL turbo and IP-adaptor plus #comfyUI. Thank you for the well-made explanation and links. Prompt: Add a Load Image node to upload the picture you want to modify. About a month ago, we built a site for people to upload & share ComfyUI workflows with each other: comfyworkflows.com. It can result in things like extra fingers and lines being doubled. Try a denoise of 0.3 to 0.5, say .4, but use a control net relevant to your image so you don't lose too much of your original image, and combine that with the iterative upscaler and concat a secondary positive telling the model to add detail or improve detail. ComfyUI Fundamentals Tutorial - Masking and Inpainting. ComfyScript v0.3: Using ComfyUI as a function library. Learn how to use ComfyUI, a powerful GUI for Stable Diffusion, with this full guide. Same as SwinIR, which details the image a lot. My UI with renamed panels. First day with ComfyUI and I am getting some pretty nice… The latest ComfyUI update introduces the "Align Your Steps" feature, based on a groundbreaking NVIDIA paper that takes Stable Diffusion generations to the next level. Launch ComfyUI by running python main.py. Note: remember to add your models, VAE, LoRAs etc. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; looks like the metadata is not complete. The video was edited for faster playback speed. ComfyUI Academy - a series of courses designed to help you master ComfyUI and build your own workflows. 
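The upscaling advice scattered through these comments (latent upscales want roughly 0.3-0.5+ denoise, tapering as passes accumulate) can be written down as a tiny schedule helper. This is plain arithmetic for illustration, with invented names and numbers, not a ComfyUI node or API:

```python
# Illustrative helper for an iterative upscale: split the total scale into
# equal per-pass factors and taper the denoise per pass, echoing the advice
# above (start around 0.4-0.5, don't go below ~0.3 for a latent upscale).
# Pure arithmetic only; nothing here talks to ComfyUI.

def upscale_schedule(total_scale: float, passes: int,
                     start_denoise: float = 0.5, floor: float = 0.3):
    per_pass = total_scale ** (1.0 / passes)  # equal scale factor per pass
    schedule = []
    for i in range(passes):
        denoise = max(floor, start_denoise - 0.1 * i)
        schedule.append((round(per_pass, 3), round(denoise, 3)))
    return schedule
```

For a 4x total upscale in two passes this yields two 2x passes at denoise 0.5 then 0.4, which matches the "0.3 to 0.5 depending on how much you are upscaling" rule of thumb.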
In the repository you'll find the installation instructions and a description of all the available settings. And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111-WebUI. Now you can manage custom nodes within the app. Adding new models to the core software, even if they are in diffusers format, isn't difficult; I just prefer waiting a bit to see if it's a model people are actually using. Thanks mate, very impressive tutorial! Keep going! :) First thing I always check when I want to install something is the GitHub page of the program. Hi, I'm looking to run ComfyUI on Paperspace, and which way do you recommend running it: Cloudflared, Local tunnel or iFrame? Lacks the extensions and other functionalities, but is amazing if all you need to do is generate images. Not too hard tbh. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face. Also, if you hold Shift when moving a node it will snap to the grid. Step one: Hook up IPAdapter x2. Hopefully this one will be useful to you :D, finally figured out the key to getting this to work correctly. A lot of people are just discovering this technology, and want to show off what they created. The “launcher.json” file format lets anyone using the ComfyUI Launcher import your workflow w/ 100% reproducibility. Sometimes it's easier to load a workflow from 5-10 minutes ago than spend 15-30 seconds to reconnect and readjust settings. pip install --no-cache-dir -U openmim; mim install mmengine; mim install "mmcv=2.1"; mim install "mmdet=3.0"; mim install "mmpose=1.0". [ 🔥 ComfyUI - HalloNode 🔥 ]. Utilized ComfyUI's Painting Custom node. The prompt is controlled through selectors and sliders. ComfyUI lives in its own directory. 
Input images can be any AI art generated or your own… Here is ComfyUI's workflow: Checkpoint: First, download the inpainting model Dreamshaper 8-inpainting and place it in the models/checkpoints folder inside ComfyUI. Great job! I do something very similar and find creating composites to be the most powerful way to gain control and bring your vision to life. Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8. u/wolowhatever we set 5 as the default, but it really depends on the image and image style tbh - I tend to find that most images work well around a Freedom of 3. Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want. To help the model guess our expectations we added a depth-zoe to our conditioning. Install the ComfyUI dependencies. Add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation. Better to use pythongosssss' ComfyUI-Custom-Scripts package, which allows you to conveniently store workflows to be loaded via a dropdown menu. What this means in practice is that people coming from Auto1111 to ComfyUI with their negative prompts including something like "(worst quality, low quality, normal quality:2)" are absolutely blowing out their images, because the AI actually can't really understand emphasis that high when it is strictly applied. What's the song? Your work looks magic OP. ComfyUI x4 upscalers. In researching InPainting using SDXL 1.0 in ComfyUI I've come across three different methods that seem to be commonly used: Base Model with Latent Noise Mask, Base Model using InPaint VAE Encode, and using the UNET "diffusion_pytorch" InPaint-specific model from Hugging Face. Automatic1111 is still popular and does a lot of things ComfyUI can't. 
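The `(text:weight)` emphasis syntax quoted above is easy to inspect mechanically, which makes it clear why the whole quality-tag group gets the 2x weight. A simplified sketch of the parsing (real prompt parsers also handle nesting, `[...]` de-emphasis, and escaping, which this ignores):

```python
import re

# Simplified sketch of A1111-style "(text:weight)" emphasis parsing, enough
# to see that "(worst quality, low quality, normal quality:2)" applies a
# single 2x weight to the whole parenthesized group. Real parsers also
# handle nesting, square brackets, and escapes.

_EMPHASIS = re.compile(r"\(([^()]+):([0-9.]+)\)")

def parse_emphasis(prompt: str):
    """Return (text, weight) pairs; unweighted spans get weight 1.0."""
    parts, pos = [], 0
    for m in _EMPHASIS.finditer(prompt):
        plain = prompt[pos:m.start()].strip()
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip()
    if tail:
        parts.append((tail, 1.0))
    return parts
```

Running it on the quoted negative prompt yields one span with weight 2.0, i.e. the entire group is pushed far beyond the moderate 1.1-1.3 weights these models handle gracefully.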
The workshop that Olivio Sarikas shared on the openart website really helped me to understand how to use it. This feature delivers significant quality improvements in half the number of steps, making your image generation process faster and more efficient than ever before. There are always readme and instructions. Rivet on GitHub (ironclad/rivet) is an IDE for creating complex AI agents and prompt chaining. So you can install it and run it, and every other program on your hard disk will stay exactly the same. GitHub Repo. It's the equivalent but for LLMs instead. Step two: Set one to compositional and one to style weight. I teach you how to build workflows rather than just use them; I ramble a bit, and damn if my tutorials aren't a little long-winded, but I go into a fair amount of detail, so maybe you like that kind of thing. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so unsure how it works for scaling with tile. Belittling their efforts will get you banned. Thanks. No quality loss that I could see after hundreds of tests. Prevents your workflows from suddenly breaking when updating a workflow's custom nodes, ComfyUI, etc. You can now use half or less of the steps you were using before and get the same results.