
ComfyUI workflow PNGs (Reddit Q&A)


If you mean workflows: they are embedded into the PNG files you generate. Simply drag a PNG from your output folder onto the ComfyUI canvas to restore the workflow. Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that goes through the ComfyUI API doesn't, though). This works on all images generated by ComfyUI, unless the image was converted to a different format like JPG or WebP.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Second, if you're using ComfyUI, the SDXL invisible watermark is not applied. Again I took the difference between the images and increased the contrast; here you can see random noise concentrated around the edges of the objects in the image. Comparisons and discussions across different platforms are encouraged.

I use a Google Colab VM to run ComfyUI. It is not much of an inconvenience when I'm at my main PC, but when I'm working from a work PC or a tablet it is a hassle to recover my previous workflow, so every time I reconnect I have to load a pre-saved workflow to continue where I left off.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except the ComfyUI Manager and then test a vanilla default workflow. If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find the problem resolved itself.

Actually, there is a better way to access your computer and ComfyUI. To access your computer you can use Windows Remote Desktop and forward the TCP port using https://remote.it, and the same way you could port forward the ComfyUI port; you can use remote.it to forward up to five ports on the free plan.

ComfyUI is a completely different conceptual approach to generative art. It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes, taking your inputs and spitting an image out in some shape or form.

Flux Schnell is a distilled 4-step model. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. You can then load or drag the example image into ComfyUI to get the workflow.

I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a workflow-embedded .PNG into ComfyUI.

Searge SDXL Update v2.1 for ComfyUI | now with LoRA, HiresFix, and better image quality | workflows for txt2img, img2img, and inpainting with SDXL 1.0 | all workflows use base + refiner. More to come.

This topic aims to answer what I believe would be the first questions an A1111 user might have about Comfy. My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. But let me know if you need help replicating some of the concepts in my process.

I spent around 15 hours playing around with Fooocus and ComfyUI, and I can't even say that I've started to understand the basics. However, I may be starting to grasp the interface. Loading a PNG to see its workflow is a lifesaver to start understanding the workflow GUI, but it's not nearly enough.

This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it. The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to Off-Screen all nodes that I don't actually change parameters on. I would like to use that in tandem with an existing workflow I have that uses QR Code Monster to animate traversal of the portal. I tried to find either of those two examples, but I have so many images I couldn't find them. Instead, I created a simplified 2048x2048 workflow.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body. The test image was a crystal in a glass jar.
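Since the whole graph travels inside the PNG, you can also get at it from a script. Below is a minimal sketch, assuming Pillow is installed and the file is an unmodified ComfyUI output; the filename is hypothetical. ComfyUI stores the editor graph as a JSON string in a PNG text chunk named "workflow" (and the API-format graph in "prompt"), which Pillow exposes through the image's info dictionary.

    # Minimal sketch: read the workflow that ComfyUI embeds in its PNG outputs.
    # Assumes Pillow (pip install Pillow); "ComfyUI_00001_.png" is a hypothetical filename.
    import json
    from PIL import Image

    info = Image.open("ComfyUI_00001_.png").info
    workflow = info.get("workflow")  # full editor graph as a JSON string, if present
    if workflow is None:
        print("No workflow chunk; the file was probably re-encoded (jpg/webp) or stripped.")
    else:
        graph = json.loads(workflow)
        print(f"{len(graph.get('nodes', []))} nodes in the embedded workflow")

If this prints nothing useful, the metadata is gone, which is exactly the Reddit-stripping problem discussed further down.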
I generated images from ComfyUI. I can load the default workflow and just render that jar again… but it still saves the wrong workflow. If you really want the JSON, you can save it after loading the PNG into ComfyUI. You can save the workflow as a JSON file with the queue control panel's "Save" workflow button. The workflow JSON info is saved with the .png. Just the workflow is saved, including the wildcard prompt, but not what the random prompt generated; there is no version of the generated prompt.

I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into the empty ComfyUI. Just as an experiment, drag and drop one of the PNG files you have outputted into ComfyUI and see what happens: it'll create the workflow for you. Dragging a generated PNG onto the webpage or loading one will give you the full workflow, including the seeds that were used to create it. Save one of the images and drag and drop it onto the ComfyUI interface. If you see a few red boxes, be sure to read the Questions section on the page. I'll do you one better, and send you a PNG you can directly load into Comfy.

A quick question for people with more experience with ComfyUI than me: if I drag and drop the image, it is supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load.

When I save my final PNG image out of ComfyUI, it automatically includes my ComfyUI data/prompts, etc., so that any image made from it, when dragged back into Comfy, sets ComfyUI back up with all the prompts and data just like the moment I originally created the original image. I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata. Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. If you need help just let me know. Not a specialist, just a knowledgeable beginner.

I compared the 0.9 and 1.0 VAEs in ComfyUI. I'm trying to do the same as hires fix, with a model and weight below 0.8, going with SD 1.5 from 512x512 to 2048x2048. EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES. Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. (vid2vid made with a ComfyUI AnimateDiff workflow.)

A transparent PNG in the original size, with only the newly inpainted part, will be generated. Layer copy & paste this PNG on top of the original in your go-to image editing software, then save the new image. Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-) A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask) and use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).
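If you'd rather script the layer copy & paste step above than open an image editor, something along these lines works. This is a sketch assuming Pillow and two same-size files; the filenames are hypothetical, not part of anyone's posted workflow.

    # Sketch: composite the transparent inpainted patch over the original,
    # the scripted equivalent of the layer copy & paste step described above.
    from PIL import Image

    base = Image.open("original.png").convert("RGBA")          # hypothetical filenames
    patch = Image.open("inpainted_patch.png").convert("RGBA")  # transparent except the inpainted part
    base.alpha_composite(patch)  # in-place paste that honours the patch's alpha channel
    base.save("combined.png")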
To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something. The complete workflow you have used to create an image is also saved in the file's metadata. Some starting points:

- How to upscale your images with ComfyUI
- Merge 2 images together with this ComfyUI workflow
- ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff
- ControlNet workflow: a great starting point for using ControlNet

I'm not sure which specifics you are asking about, but I use ComfyUI for the GUI and a custom workflow combining ControlNet inputs and multiple HiresFix steps. Not sure if my approach is correct or sound, but if you go to my other post - the one on just getting started - and download the PNG and throw it into ComfyUI, you'll see the node setup I sort of cobbled together.

Thanks, I already have that, but I ran into the same issue I had earlier where the Load Image node is missing the Upload button. I fixed it earlier by doing Update All in Manager and then running the ComfyUI and Python dependencies batch files, but that hasn't worked this time, so I'm only going to be able to do prompts from text until I've figured it out.

My only current issue is as follows. The problem I'm having is that Reddit strips this information out of the PNG files when I try to upload them. The image itself was supposed to be the workflow PNG, but I heard Reddit is stripping the metadata from it. Anyone ever deal with this? This missing metadata can include important workflow information, particularly when using Stable Diffusion or ComfyUI. The solution: to tackle this issue, with ChatGPT's help, I developed a Python-based solution that injects the metadata back into the Photoshop-processed PNG.
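The poster's actual script isn't shown, but the injection idea itself is simple: if you still have the workflow JSON (saved with the "Save" button, or read from a surviving copy of the original PNG), write it back into a "workflow" text chunk, which is what ComfyUI looks for on drag & drop. A minimal sketch, assuming Pillow and hypothetical filenames:

    # Sketch: write a saved workflow JSON back into a PNG's "workflow" text chunk
    # so ComfyUI can restore it on drag & drop. Filenames are hypothetical.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    with open("workflow.json", "r", encoding="utf-8") as f:  # saved via the "Save" button
        workflow_json = f.read()

    img = Image.open("stripped_or_edited.png")
    meta = PngInfo()
    meta.add_text("workflow", workflow_json)
    img.save("with_workflow.png", pnginfo=meta)

Note that re-saving this way re-encodes the pixels losslessly but drops any other metadata the file carried, so keep the original around.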
First of all, sorry if this has been covered before; I did search and nothing came back. The PNG files produced by ComfyUI contain all the workflow info. You can simply open that image in ComfyUI, or simply drag and drop it onto your workflow canvas. This should import the complete workflow you have used, even including unused nodes. The metadata from PNG files saved from ComfyUI should transfer over to other ComfyUI environments. This makes it potentially very convenient to share workflows with others.

So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise?

Just started with ComfyUI and really love the drag-and-drop workflow feature. Here I just use: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, built by Tesla, Tesla factory in the background. I'm not using breathtaking, professional, award winning, etc., because that's already handled by "sai-enhance". You can use () to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8).

I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene, in a way that lets me use single images in ControlNet the same way that repo does (by frame-labeled filename etc.).

My workflow lets you choose an image (or several) from the batch and upscale them. (Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID" full tutorial & GUI for Windows, RunPod & Kaggle, and web app.) I'm revising the workflow below to include a non-latent option.

SDXL 1.0 ComfyUI Tutorial - readme file updated with SDXL 1.0 download links and new workflow PNG files; the new updated free-tier Google Colab now auto-downloads SDXL 1.0 and refiner and installs ComfyUI.

Hi all! Was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc.
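There are custom nodes for this, but the same fields can also be read outside ComfyUI. A sketch, again assuming Pillow and an unmodified ComfyUI PNG (hypothetical filename): it walks the API-format "prompt" chunk, where each entry is a node with a class_type and its inputs, and picks out the KSampler settings.

    # Sketch: read seed/steps/sampler from the API-format "prompt" chunk
    # that ComfyUI embeds alongside "workflow". Filename is hypothetical.
    import json
    from PIL import Image

    info = Image.open("ComfyUI_00001_.png").info
    graph = json.loads(info["prompt"])  # {node_id: {"class_type": ..., "inputs": {...}}}
    for node_id, node in graph.items():
        if node.get("class_type") == "KSampler":
            ins = node["inputs"]
            print(f"node {node_id}: seed={ins.get('seed')} steps={ins.get('steps')} "
                  f"sampler={ins.get('sampler_name')} cfg={ins.get('cfg')}")

Text prompts live in CLIPTextEncode nodes in the same graph, so the same loop with a different class_type check recovers them.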
I noticed that ComfyUI is only able to load workflows saved with the "Save" button, and not with the "Save API Format" button. Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Getting an issue where whatever I generate, a bogus workflow I used a few days ago is saved… and when I try to load the PNG, it brings up the wrong workflow and fails to render anything if I hit queue.

I dump the metadata for a PNG I really like: magick identify -verbose .\ComfyUI_01556_.png. Simply load / drag the PNG into ComfyUI and it will load the workflow.

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node. I have also experienced that ComfyUI has lost individual cable connections for no comprehensible reason, or nodes have not worked until they were replaced by the same node with the same wiring. However, this can usually be cleared up by reloading the workflow or by asking questions.

I've mostly played around with photorealistic stuff and can make some pretty faces, but whenever I try to put a pretty face on a body in a pose or a situation, I… I put together a workflow doing something similar, but taking a background and removing the subject, inpainting the area so I get no subject. Then I take another picture with a subject (like in your problem), remove the background and make it IPAdapter-compatible (square), then prompt and IPAdapter it into a new image with the background. Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, re-rendering with a second model, etc. If necessary, updates of the workflow will be made available on GitHub.

It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process for creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.
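Related, though the opposite direction from the converter above: a workflow saved with "Save API Format" can't be dragged into the editor, but it is exactly what ComfyUI's HTTP API consumes. A standard-library sketch, assuming a ComfyUI server listening locally on its default port 8188; the input filename is hypothetical.

    # Sketch: queue an API-format workflow on a running ComfyUI server.
    # Assumes ComfyUI is listening locally on its default port 8188.
    import json
    import urllib.request

    with open("workflow_api.json", "r", encoding="utf-8") as f:  # saved via "Save API Format"
        prompt_graph = json.load(f)

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": prompt_graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # response includes a prompt_id for the queued job

As noted near the top, images generated through the API do not get the editor workflow embedded in the PNG, so keep the JSON file if you want to reproduce the run.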

