Fear, anger, and joy: these are the three typical emotional stages of working with a ComfyUI workflow. At RunComfy we support tens of thousands of ComfyUI users, which makes us the handyman of your ComfyUI environment: “To cure sometimes, to relieve often, to comfort always.” Fear and joy are beyond the scope of this how-to, but we can provide a framework to alleviate the anger.
- Say the following to yourself:
- It is not my fault.
- Anything broken can be fixed; it is just a matter of time and money.
- Node complexities
- Missing Nodes
- Thanks to ComfyUI Manager, you usually just need to install the missing nodes from there.
- Pay special attention to channels; certain nodes only exist on certain channels.
- Search the ComfyUI Registry if the install doesn’t work, and follow the instructions there.
- Sometimes the name in the error message only exists in code; try searching for it there as well.
- Conflicting versions
- You will eventually encounter two nodes that ask for the same library but at conflicting versions.
- Node A asks for huggingface-cli > 0.26; Node B asks for huggingface-cli < 0.26.
- This is not your fault, nor a bug in ComfyUI, the nodes, or RunComfy; it is an inherent byproduct of an open extension system.
- You can try the following:
- File a bug in the node’s GitHub repo and ask the node author to specify the library version explicitly, as a good Python samaritan should.
- But that doesn’t really fix the version conflict, since most authors simply pin the latest version, which still conflicts with other nodes. Instead, find the requirements.txt in node A. It is normally easier to downgrade a package, because the newer version may contain incompatible changes: change huggingface-cli > 0.26 in the requirements.txt to a lower version. If the requirements.txt doesn’t list the offending package huggingface-cli, then some package X in that file depends on huggingface-cli, and you need to pin that package X to a lower version instead; in our case it was the transformers package we had to downgrade.
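To spot such clashes before they bite, here is a minimal sketch (pure standard library; the package names and version specs below are just the hypothetical huggingface-cli example from above) that compares two requirements.txt files and lists packages the two nodes pin differently. Differing specs are candidates for a conflict, not proof of one:

```python
import re

def parse_requirements(text):
    """Parse requirements.txt-style lines into {package: version_spec} pairs."""
    reqs = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        m = re.match(r"([A-Za-z0-9_.\-]+)\s*(.*)", line)
        if m:
            reqs[m.group(1).lower()] = m.group(2).strip()
    return reqs

def find_shared_pins(reqs_a, reqs_b):
    """Packages both nodes require but pin differently -- inspect these
    by hand; '>0.26' vs '<0.26' is the unsatisfiable case."""
    return {
        pkg: (reqs_a[pkg], reqs_b[pkg])
        for pkg in reqs_a.keys() & reqs_b.keys()
        if reqs_a[pkg] != reqs_b[pkg]
    }

# Hypothetical requirements.txt contents for Node A and Node B
node_a = "huggingface-cli>0.26\ntransformers==4.40.0"
node_b = "huggingface-cli<0.26\nnumpy"
print(find_shared_pins(parse_requirements(node_a), parse_requirements(node_b)))
# {'huggingface-cli': ('>0.26', '<0.26')}
```

On a real install you would read each custom node’s requirements.txt from disk instead of the inline strings used here.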
- Names
- It is rare for two people to share the same taste, drive, and naming strategy when it comes to node names, but sometimes you will find a “GetNode”, a “SetInt”, or some totally random node name that is not working correctly. Do check whether you are referencing the wrong node with the same name; it happens.
- Obsolete
- Node authors will write nodes that do exactly the same thing, or run the same model under different node names. Make sure you try both nodes with similar names and pick the one you are comfortable with. Check the following signals:
- Last commit date: the more recent, the better.
- Responsiveness to issues and commits in the GitHub repo.
- Reputation: when you see a node from Kijai, Matteo, or anyone on the Comfy team, prefer theirs.
- Safety
- Last year there were a couple of safety incidents in which malicious code sneaked into popular nodes. Comfy Org already does a lot of safety gatekeeping for the node registry.
- Make sure you install only from trusted sources; in practice, this means installing from the Manager only.
- Resource limitations
- VRAM
- Those safetensors files are models. They get loaded into GPU VRAM and occupy most of the space there.
- Say you have a 24GB safetensors file; it will occupy 24GB of VRAM. If you want to be safe and avoid an OOM (out of memory) crash, launch a server with 48GB of GPU VRAM or more.
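The rule of thumb above can be sketched in a few lines. The 2x headroom factor is our safety assumption, not a measured figure; actual usage also depends on resolution, batch size, and the other models the workflow loads:

```python
def recommended_vram_gb(model_files_gb, headroom=2.0):
    """Rough VRAM budget: sum the safetensors file sizes the workflow
    loads, then apply a safety factor for activations, VAE, etc.
    This is a rule of thumb, not an exact measurement."""
    return sum(model_files_gb) * headroom

# A single 24 GB checkpoint suggests a 48 GB GPU, per the rule above.
print(recommended_vram_gb([24.0]))  # 48.0
```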
- There are all kinds of terms here; read on if you want to know more about AI models.
- Model: Think of it as a huge “Robot”.
- Parameter: Think of parameters as tiny knobs inside the “Robot”; the more knobs it has, the smarter it is. B means billion, so you will see 1B, 2B, 7B, 13B, 22B, 70B, etc. when people talk about a model’s parameter count.
- Precision: You will find FP32, FP16, INT8, INT4, etc. in model names. Think of it as how “finely” each knob can be adjusted. FP32 spends 32 bits on each knob, so it controls the Robot’s movement with extremely fine adjustments, but 32 bits take up a lot of room at every joint. INT4 spends only 4 bits per knob, so the movement is very coarse (for image/video generation models, this means losing the fine details in the images). The benefit is a much smaller model file: build (quantize) the same “Robot” with INT4 and the file is 1/8 the size of the FP32 version.
- Model size = # of parameters × precision (bits per parameter). Will a 70B INT4 model be smarter than a 30B FP32 model?
- Quantization: Simulating the 32-bit adjustment of each knob with far fewer bits (8 or 4 of them).
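To make the size arithmetic concrete, here is a small sketch (decimal gigabytes, ignoring file-format overhead). It answers the size half of the question above: a 70B INT4 model is actually smaller on disk than a 30B FP32 one, and the INT4 file is 1/8 the size of its FP32 counterpart:

```python
# Bits per parameter for common precisions
BITS = {"FP32": 32, "FP16": 16, "INT8": 8, "INT4": 4}

def model_size_gb(params_billion, precision):
    """Model size = parameter count x bits per parameter, in decimal GB."""
    bytes_total = params_billion * 1e9 * BITS[precision] / 8
    return bytes_total / 1e9

print(model_size_gb(70, "INT4"))   # 35.0  (GB)
print(model_size_gb(30, "FP32"))   # 120.0 (GB)
print(model_size_gb(70, "FP32") / model_size_gb(70, "INT4"))  # 8.0
```

Whether the smaller 70B INT4 model is also *smarter* is a separate question; quantization trades quality for size, as described above.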
- RAM
- Even in an era that glorifies GPU VRAM, parts of ComfyUI, and even some models, still need RAM.
- If your workflow contains classification models (say, face detection models) or upscale models, they will mostly run in RAM by default. Watch out for those models, as they can also cause OOM (out of memory) crashes. Although ComfyUI is getting better at managing RAM and VRAM, this is by far the most common crash we see.
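One quick way to check whether such a CPU-side model will fit before loading it is to read MemAvailable from Linux’s /proc/meminfo (Linux-specific; other platforms need a different source). The sketch below parses a sample snippet so it stays self-contained, and the 1.5x headroom factor is our assumption:

```python
def mem_available_gb(meminfo_text):
    """Parse the MemAvailable line from Linux /proc/meminfo (values in kB)."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            kb = int(line.split()[1])
            return kb / 1e6  # kB -> decimal GB
    raise ValueError("MemAvailable not found")

def fits_in_ram(model_gb, meminfo_text, headroom=1.5):
    """Rough pre-flight check before loading a CPU-side model
    (face detector, upscaler, ...). Headroom factor is an assumption."""
    return model_gb * headroom <= mem_available_gb(meminfo_text)

# On a real Linux box you would use open("/proc/meminfo").read();
# here a sample snippet keeps the sketch self-contained.
sample = "MemTotal:       32000000 kB\nMemAvailable:   16000000 kB"
print(mem_available_gb(sample))   # 16.0
print(fits_in_ram(4.0, sample))   # True
print(fits_in_ram(12.0, sample))  # False
```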
- Computing Power
- If a larger machine still can’t meet your performance requirements, consider hiring an AI expert who can help speed up generation.
- FLOPs: (WIP)
- Network speed
- The ComfyUI frontend downloads lots of small files, if you watch closely.
- Certain nodes rewrite large parts of the frontend UI and components with their own files, which makes the initial load of the ComfyUI web page very slow.
- Once you queue the prompt, ComfyUI shows a nice greenish box that moves from node to node and even shows progress. This disappears once you close the web browser or your computer, or once your browser disconnects from the ComfyUI backend (which happens far more often than you would think when working remotely with cloud machines).
- Check the ComfyUI logs if you want to know whether it is still generating your images/videos.
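If all you have is the raw log text, a crude heuristic is to compare the last “got prompt” line (printed when a job is queued) against the last “Prompt executed” line (printed on completion). Exact message strings can vary between ComfyUI versions, so treat this as a sketch and adjust the markers to match your own logs:

```python
def still_generating(log_text):
    """Return True if a prompt was queued after the last completed one,
    i.e. the backend likely still has work in flight.
    'got prompt' / 'Prompt executed' are the markers we assume here."""
    last_start = log_text.rfind("got prompt")
    last_done = log_text.rfind("Prompt executed")
    return last_start > last_done

sample = "got prompt\nPrompt executed in 42.1 seconds\ngot prompt\n"
print(still_generating(sample))  # True: a new prompt started, none finished since
```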
- Storage
- (WIP)