Apply in common UIs #2
So the LoRA adapters we trained are for the visual layers in the UNet (all layers except cross-attentions). I believe such adapters are not supported by A1111 or ComfyUI. We can get in touch with the developers to discuss potential support for our sliders. For now, we are working on a Colab demo for training and using our sliders.
Hi, great work! I've tried to load the pre-trained XL sliders as LoRAs in the Web UI, and it seems to work normally. This was also verified in ComfyUI. comfyanonymous/ComfyUI#2028
Yes, but the inference method we introduce might not be implemented in the UIs. The technique we use at inference time is quite useful for precise editing. Edit: I commented the same on the ComfyUI issue you shared.
@rohitgandikota By comparing results from your inference code and the Web UI, it is indeed confirmed that the SDEdit trick better maintains the stability of other attributes across different editing scales.
Yes, and the way we do it is:
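A minimal sketch of the idea, assuming a diffusers-style denoising loop: keep the slider LoRA disabled for the first few denoising steps so the base model fixes the global structure, then switch it on. The function names and the `set_adapters` call in the comment are illustrative, not the repo's actual code.

```python
# SDEdit-style slider inference sketch: the base model alone handles the
# early denoising steps (fixing global structure); the slider LoRA is
# only enabled from `start_step` onward. Names are illustrative.

def slider_scale_at(step: int, start_step: int, scale: float) -> float:
    """LoRA scale to use at a given denoising step."""
    return scale if step >= start_step else 0.0

def run_denoising(total_steps: int, start_step: int, scale: float):
    """Toy loop recording which LoRA scale each step would use."""
    schedule = []
    for step in range(total_steps):
        s = slider_scale_at(step, start_step, scale)
        # in a real pipeline, something like: pipe.set_adapters(["slider"], [s])
        schedule.append(s)
    return schedule

# e.g. 50 denoising steps, slider turned on at step 10 with scale 3
schedule = run_denoising(50, 10, 3.0)
```

The exact step at which the slider turns on is the tunable "SDEdit" knob discussed later in this thread.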
This did the trick.
@rohitgandikota, a more intuitive way to describe this technique is to assume that the original scheduler has 1000 timesteps. If you set the
Perfect. Yes!
@rohitgandikota, great! I have a rough idea of how to implement the SDEdit technique in the Web UI, and I'll spend some time tomorrow working on it.
That is awesome! Thanks for looking into this. Really appreciate it!
For reference, here's the issue I opened on a1111 as well: |
This plugin cheald/sd-webui-loractl is able to approximate the inference process used in concept sliders with the SDEdit technique. Taking
This is awesome! So does this apply to the a1111 feature implementation request?
Oh, wait a minute: if the slider is changing the structure drastically at scale 3, maybe there is not enough SDEdit? Could you tune the timestep at which the slider is turned on? In our experiments we found that without SDEdit, our sliders can be controlled in the range -1 to 1, but with SDEdit we can go up to 4 without altering the prior structure.
@rohitgandikota I've uploaded the picture "LoRA weight in all steps". This should be the same as what I said here in #2 (comment), if I'm not mistaken.
Oh yeah, I meant: when you are using large LoRA scales, you can maybe do the transition at step 15 or 20. It would preserve the structure more and increase the edit strength.
@rohitgandikota The timestep at which the slider is turned on can certainly be tuned; I just provided a simple example. It seems the third picture misled you; I have already edited its content.
I see. I just wanted to check whether the slider plugin you have can recreate our results, out of curiosity. Would you mind providing an example with alpha = 1, 2, 3 and start noise at time = 20? Thank you so much! This is very cool.
Awesome! So the 0.15 version seems to work with scale 3. Thanks for the update.
@hkunzhe, I am not very familiar with the UIs. Do you think this can be easily applied to ComfyUI as well?
@rohitgandikota, I am also completely unfamiliar with ComfyUI. I don't think this plugin can be used in ComfyUI; that depends on the ComfyUI community.
@hkunzhe Let's say I have installed this extension: /~https://github.com/cheald/sd-webui-loractl and downloaded pixar_style.pt into my LoRA folder. How do I prompt with that extension?
Awesome! @mofos, can you describe how you were able to achieve this? Did you implement the inference method we introduced with sliders in ComfyUI?
It's simple: I used a node called KSampler Advanced and did a 20-step process, with steps 0 to 5 running the base model (Juggernaut-XL) and steps 5 to 20 running the LoRA + base model. BTW, eyesize.pt seems to be a LoRA for making eyes smaller; I gave it a strength of -2 and it gave me big eyes. If you don't have ComfyUI running, just duplicate my Hugging Face space, drag the image above into the interface, and press Queue Prompt at the top-right corner to see the flow (https://huggingface.co/spaces/zac/ComfyUI).
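For anyone rebuilding this graph by hand: the split described above corresponds to giving the two KSampler (Advanced) nodes complementary step ranges, with the first pass handing its leftover noise to the second. This is a sketch of that wiring as plain data; the parameter names follow ComfyUI's node, but the split point and step count are just the values from the comment above.

```python
# Sketch of the two-pass KSampler (Advanced) split: pass 1 runs the base
# model for steps [0, switch_step) and returns a still-noisy latent; pass 2
# (base model + slider LoRA) finishes steps [switch_step, total_steps).

def split_sampler_configs(total_steps: int, switch_step: int):
    base_pass = {
        "steps": total_steps,
        "start_at_step": 0,
        "end_at_step": switch_step,
        "add_noise": "enable",
        "return_with_leftover_noise": "enable",  # hand noisy latent to pass 2
    }
    lora_pass = {
        "steps": total_steps,
        "start_at_step": switch_step,
        "end_at_step": total_steps,
        "add_noise": "disable",  # latent is already noisy
        "return_with_leftover_noise": "disable",
    }
    return base_pass, lora_pass

# the 20-step / switch-at-5 setup from the comment above
base, lora = split_sampler_configs(20, 5)
```

The earlier the switch step, the more the slider can reshape the image; a later switch preserves more of the base model's structure.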
This is so cool! The URL you shared doesn't seem to work; I am getting a 404 error.
My bad, it was a private space, but I just made it public and also added eyesize.pt to the Dockerfile. I'd suggest making some changes, though, since the Docker build downloads a lot of models (I know it's really bad practice). Alternatively, you can check this space (https://huggingface.co/spaces/SpacesExamples/ComfyUI), but you'll need to duplicate it and add your LoRA to the space and Dockerfile.
Awesome! Thanks @mofos!
@hkunzhe, could you please let us know how you used a1111 for the sliders?
I am waiting for this info too; I want to test on Automatic1111.
Thank you so much! What do 0@0, 0@0.2, 3@0.2, and 3@1 mean?
<lora:pixar_style:0@0, 0@0.2, 3@0.2, 3@1> Could you tell me what these mean? @rohitgandikota
Oh, I am not completely sure since I am not very familiar with A1111, but this sounds like the most probable explanation to me.
I plan to make a tutorial for this, so I need to understand how it works and how to apply sliders in Automatic1111. Thank you!
That would be amazing! Once you publish it, share it with us and we will include it on our website.
@filliptm awesome! text sliders or image sliders? |
this was text, i def wanna try out the image next |
@FurkanGozukara those come from /~https://github.com/cheald/sd-webui-loractl, they are extensively documented there in the Readme |
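For anyone who doesn't want to dig through that README right away, my reading of the syntax is that each comma-separated `w@p` entry is a (weight, position) keyframe, with the position given as a fraction of sampling progress and weights interpolated between keyframes. The sketch below implements that interpretation; it is not the plugin's actual code.

```python
# My reading of the loractl schedule syntax "w@p, w@p, ...":
# each entry is a (weight, position) keyframe; repeating a position
# (e.g. "0@0.2, 3@0.2") produces a step change at that point.

def parse_schedule(spec: str):
    keyframes = []
    for entry in spec.split(","):
        weight, pos = entry.strip().split("@")
        keyframes.append((float(pos), float(weight)))
    return sorted(keyframes)

def weight_at(keyframes, t: float) -> float:
    """LoRA weight at sampling progress t in [0, 1], linearly interpolated."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (x0, y0), (x1, y1) in zip(keyframes, keyframes[1:]):
        if x0 <= t <= x1:
            if x1 == x0:  # repeated position: jump to the later value
                return y1
            return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
    return keyframes[-1][1]

# the schedule from the prompt above: weight 0 for the first 20% of
# steps, then weight 3 for the rest (the SDEdit-style delayed start)
kf = parse_schedule("0@0, 0@0.2, 3@0.2, 3@1")
```

Under this reading, `<lora:pixar_style:0@0, 0@0.2, 3@0.2, 3@1>` keeps the slider off while the base model lays down structure, then applies it at strength 3, which matches the inference trick discussed earlier in the thread.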
Hey @filliptm
Is there a way to load this into A1111 or ComfyUI, or would this need a special plugin to work? I tried to load it as an embedding or a hypernetwork, but neither worked. Thanks!