[Bug]: Unusable on Mac after updating to 1.6.0 — TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead. #12907
Comments
Possibly related: #12526
Same here :(
Yes, same error (MacBook Pro, M2 Max, 96GB memory): it was working up until this commit.
I've updated this thread: forcing the CPU was only a temporary workaround. You should try @ericwagner101's resolution instead.
FWIW, on v1.6.0-78-gd39440bf
Thanks @akx but my console output shows those startup settings are executed (14 lines below "Launching launch.py..." in my comment above).
@ngtongsheng Yes that works, BUT now it's generating using the actual CPUs and not Apple's Metal GPU! In this screenshot of Activity Monitor you can see 'Python' with 338% CPU and 0% GPU, and the CPU and GPU graphs confirming this, during the creation of one 512x512 image that took 3.5 seconds per iteration (not bad for a CPU, but overall very slow!).
@nicklansley Well, for one, you're evidently pulling in packages from the global environment. I would honestly recommend uninstalling all extra packages from that global Python environment, and then ensuring you're using a virtualenv going forward.
Thanks @akx but I promise you I am using a virtualenv! However... it may be that it is mixing up the virtualenv and the global site-packages? OK, that's a way forward; thank you.
You are using a virtualenv, but you're clearly using a torch from outside that virtualenv. Without a venv activated, run
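The truncated suggestion above was presumably a command to check which torch is actually being imported. A minimal stdlib-only diagnostic in the same spirit (hypothetical, not the exact command being suggested) that reports whether a venv is active and which file a module would be loaded from:

```python
import importlib.util
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the venv while sys.base_prefix
    # still points at the base interpreter; outside one, they match.
    return sys.prefix != sys.base_prefix

def module_origin(name: str):
    # Where would `import name` load the module from? None if not installed.
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

if __name__ == "__main__":
    print("virtualenv active:", in_virtualenv())
    print("torch resolves to:", module_origin("torch"))
```

If "torch resolves to" points at a global site-packages path while the venv is supposedly active, that mismatch would explain the behaviour described in this comment.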
I wiped everything but the model directory, performed a git clone, then let webui.sh create the virtual env and install the packages. Still the same error but, because I know all the models worked before this commit, I will roll back a couple of weeks and step forward again. My new run of webui.sh is below, with references to /venv/ packages clearly showing in the execution, except for access to /opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py. Perhaps core packages such as threading.py are not copied into the virtual environment when a full path including the major.minor.patch (3.10.13) version of Python is used? Here's the run:
They should be, and that shouldn't be the issue. Can you rename your ui-config.json?
I tried that, and even a new clean install:
Forgot to mention that even with the errors, basic generation works in the second instance, but when trying hi-res it goes out of memory when it previously wouldn't; that's a 512x768 image I upscaled last week :<
@akx yes that worked! Renaming the ui-config.json to ui-config.json.backup fixed the issue and now it is working fine, including using the GPU (and the MacBook fans!). See screenshot of Activity Monitor: GPU at 95.8% on the first line (Python):
@nicklansley Great to hear! It'd be nice to see the backup and the newly generated config so we can figure out what went wrong from the diff. If you don't feel like sharing the files here, you can send them as attachments to (my github username) at iki dot fi.
@akx - I have just run 'diff ui-config.json ui-config.json.backup' and there are no differences! I am still using absolutereality_v16.safetensors and can change to other models, and they work too. So, the act of the application not being able to find ui-config.json and having to rebuild it may fix some issue? Anyway, here is my ui-config.json that works, bearing in mind it is identical to ui-config.json.backup
@nicklansley Did you by any chance rename the file while webui was running? I'm trying to think of how the files could be identical and still have one work and the other not...
I'll jump into your discussion :) I am facing the same issue as nicklansley mentioned. I have followed your instructions and made sure to use the venv only. The stack trace is very similar, including the final message.
I've noticed that if I switch between two models from the UI dropdown, it works.
@akx I have done some further testing to understand why renaming the file seemed to solve the problem. It turns out the issue is not related to the file name, but to the first model that the application loads from the models/Stable-diffusion directory.

When I started the application with webui.sh, it automatically loaded the first model in alphabetical order, which was absolutereality_v16.safetensors. This model caused a stream of errors in the terminal, saying that it could not convert a MPS Tensor to float64 dtype (TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead. Stable diffusion model failed to load).

However, when I renamed this model to absolutereality_v16.safetensors.bak and restarted webui.sh, it loaded the next model in alphabetical order, epicrealism_pureEvolutionV3.safetensors. This model worked fine and did not cause any errors. I then renamed absolutereality_v16.safetensors.bak back to absolutereality_v16.safetensors and restarted webui.sh again. This time, it did not load this model by default, but remembered the last model I used (epicrealism_pureEvolutionV3.safetensors), which continued to work fine. I then switched to absolutereality_v16.safetensors from the UI, without restarting anything, and it also worked fine. No errors were shown in the terminal, so there does not seem to be anything wrong with this model that would cause a specific error.

However, when I restarted webui.sh again with absolutereality_v16.safetensors as the currently chosen model, the errors reappeared. I also followed @yanis-git's suggestion and switched between all the different models from the UI, without restarting anything, and they all worked fine. So, there is a bug in loading the first model on startup, even when that model is compatible with the application.
Of course, there may be some specific issue with absolutereality_v16.safetensors, but the crucial point is that it had been working fine up until this commit. Has the application code changed to perform extra compatibility tests that this model fails on startup? For example, I assume you must detect the SD version (1.4, 1.5, 2.x, XL) of the model, and that absolutereality_v16.safetensors is misrepresenting itself in some way? Just some thoughts... If you want to try the model yourself (Absolute Reality v1.6), you can download it from the CivitAI catalogue here: https://civitai.com/models/81458?modelVersionId=108576, with a note that it has been superseded by v1.8, which does not have this issue as the startup model.
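As background on the error message itself: PyTorch's MPS backend has no float64 kernels, so any float64 tensor must be downcast to float32 before it reaches the device. A minimal sketch of that kind of guard (a hypothetical helper for illustration, not the actual webui fix):

```python
import torch

def mps_safe(t: torch.Tensor) -> torch.Tensor:
    """Downcast float64 tensors so they can be moved to MPS.

    Hypothetical helper: MPS does not implement float64, so moving such a
    tensor raises "Cannot convert a MPS Tensor to float64 dtype". float32
    is the widest float dtype MPS supports.
    """
    if t.dtype == torch.float64:
        return t.to(torch.float32)
    return t

x = torch.ones(2, dtype=torch.float64)
print(mps_safe(x).dtype)  # torch.float32
```

The thread's symptom (only the *first* model loaded at startup fails) suggests the coercion happens on one code path but not the other, which is consistent with the startup-only bug described above.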
@nicklansley @akx IMHO, I don't think the reasoning for blaming a checkpoint is correct. In my installations, where I got rid of the errors just by cycling models, I never had version 1.6 of Absolute Reality. If I had to guess: hi-res makes two passes, so how is memory managed there?
@0-vortex thanks for the update. I seem to be running hires OK, with the application using just over 6GB of memory for 512x512, then switching up to just over 10GB for hires 1024x1024 (according to Activity Monitor). If you run Activity Monitor and display the GPU History window while running the app, does it show the GPU maxing out during generation, or not being used at all? When I followed the advice from @ngtongsheng it worked, but it only used the CPU, and I was getting the same seconds-per-iteration performance that you experienced (surprisingly good for a CPU; all hail RISC architecture!) but way too slow for normal use. So just check your GPU is being engaged.
It spikes to 75-80% just generating the 512x512 image for me, 95%+ on hi-res :<
I have what seems to be the same issue on my Intel Mac with a GPU, and renaming ui-config.json to ui-config.json.backup seems to have no effect. I don't know much about Stable Diffusion because I just started using it today, but the last line says "TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead."
I had the same issue, and at least for me it was related to a messed-up virtualenv. I had to do this:
After this, the issue with certain older models went away.
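The exact commands were not captured above, but a plausible sketch of a virtualenv reset, assuming the standard webui layout (paths and steps are assumptions, not the poster's verbatim commands):

```shell
# Hypothetical sketch of resetting the webui virtualenv (paths assumed;
# the original commands were not captured in the thread).
cd stable-diffusion-webui 2>/dev/null || true  # repo checkout, if present
[ -d venv ] && mv venv venv.bak                # keep the old env just in case
python3 -m venv venv                           # fresh env from the system Python
. venv/bin/activate
# ./webui.sh   # on next launch, webui.sh reinstalls torch etc. into the new venv
```

Keeping the old environment as venv.bak makes it easy to roll back if the fresh install regresses something else.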
@ericwagner101 Thank you very much. I tried it on my MacBook Pro 13-inch (M1 chip) and confirmed it works. 512x512 took 38 seconds; not too bad.
Works for me.
I haven't been able to reproduce this issue, but #13099 will hopefully fix it. Please let me know if there are any further issues with that fix applied. |
Thanks. I applied the fix and tested it. It did change something, as the original error no longer occurs, but there is a new set of errors and the model still does not load. A snip of the errors: File "/Users/eric/ai/stable-diffusion-webui/modules/sd_disable_initialization.py", line 225, in
I was getting the same error after a git pull which brought me to 1.6.0. I'm on a 2019 Intel Mac Pro with an AMD RX 6800 XT, Python 3.10, torch 1.13 (it also works with torch 1.12; I still can't seem to get anything working with GPU using torch 2.x, or SDXL anywhere, but that's outside the scope of this thread).

I first tried --disable-model-loading-ram-optimization when launching. While that did not produce the error, models were still not loading properly: I was getting a bunch of messages about producing NaNs, and it would only create an all-black image.

After making the changes in #13099, and omitting --disable-model-loading-ram-optimization from the launch command, I'm no longer seeing that error, and so far I seem to be able to load models without any issues and generate images with GPU acceleration. So... nice one =]

One thing I did notice was that apparently the new default for attention optimization is now sub-quadratic, whereas it was InvokeAI before the upgrade. I also noticed that when switching to a different model, the console says: Reusing loaded model to load

In any case, thanks.
Works for me |
Mac M1 here. I started to find this annoying issue about a week ago. None of the solutions in this thread worked for me. After multiple tests with clean A1111 installations, I'm convinced the issue is triggered by some checkpoints but not others. I can't find a reason for that, but in my case, tests are conclusive. I pulled the #13099 PR and tested it, but it didn't make any difference, unfortunately.
I tried the fix #13099 again and it's now working. |
The solutions don't seem to work for me; the error always occurs at the last step of generation. I'm not sure how to do a clean reinstall: I deleted my webui folder and followed the install instructions, but how do I clean the global env, etc.? Also, ComfyUI is running on the other end; will this kill ComfyUI?
I had the same MPS issue with my Mac M2 16GB and v1.6. I did the clean reinstall steps, and it seems to be working fine with the checkpoints I had issues with. I also tried adding the 2 lines in the dependency.py file, but this generated another set of errors. EDIT: it worked the first time; after I closed the browser and restarted from the terminal, the error showed again.
It seems the problem is related to not loading configs/v1-inference.yaml for checkpoints. If I get this error at startup, I switch to the base SD 1.5 model, and then it says it loads that config; after that I can switch to another model.

A related issue: I can't use an inpainting model I make (following the steps on the wiki, where you merge three models). It tries loading configs/v1-inference.yaml instead of v1-inpainting-inference. Selecting the SD 1.5 inpainting model does load the inpainting config, but it doesn't maintain it when I select my own inpainting model; it reverts to the non-inpainting config. I get the same problem when I switch to the SD XL 1.0 checkpoint and then to an SD 1.5 one: I have to switch to the base SD 1.5 model so it loads v1-inference.yaml again, and then select the checkpoint I want to use.
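The behaviour described above can be pictured as a per-checkpoint config-selection step that sometimes isn't re-applied on switch. A minimal sketch of what correct selection would look like (the function name and flags are hypothetical illustrations, not the actual webui API):

```python
# Hypothetical sketch of per-checkpoint inference-config selection.
# The helper name, flags, and the SDXL config path are assumptions for
# illustration; only the v1 config filenames appear in the comment above.
def pick_inference_config(is_sdxl: bool, is_inpainting: bool) -> str:
    if is_sdxl:
        return "configs/sd_xl_base.yaml"  # assumed SDXL config path
    if is_inpainting:
        return "configs/v1-inpainting-inference.yaml"
    return "configs/v1-inference.yaml"

# The reported bug is that a selection like this is not re-applied when
# switching checkpoints, so the previously loaded config sticks.
print(pick_inference_config(is_sdxl=False, is_inpainting=True))
```

If the config from the previous checkpoint is simply reused, an inpainting or SDXL checkpoint ends up paired with the wrong yaml, which matches the symptoms in this comment.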
That was the perfect solution for me; so silly and simple. Thanks!
I tried this and it works, but now the whole thing is much slower than before. Any ideas on how to solve this? Edit: I did a git reset to the previous version, and everything works fine and fast. Lesson learned: don't git pull anything that's not broken.
@cursed-github As said multiple times in this thread, |
I had the same issue and temporarily resolved it by switching models. |
working for me |
Can you show how to fix this issue in a video, please?
In Settings, check ☑️ "Upcast cross attention layer to float32".
BASED AND GOATED |
+1 it works fine |
Yes, I ran into the same problem, but it works normally after switching models. The first model chosen is saved for each startup; I find it hard to understand why this happens.
I tried the solutions given, and this works for me. Two steps:
More info: workaround, FYI. I got the solution based on this hint when I clicked the 'Generate' button in the 'txt2img' tab:
I also encountered the same issue, and I believe it was caused by loading the model during the initial startup. Here's how I resolved it:
Switching the models can also solve this problem temporarily. |
Yes, it works for me. |
This is an error that appears when some models are loaded, which prevents them from loading.
Worked for me too. The error is gone. |
Hey, but what if I get this error? "source venv/bin/activate
it works for me too, thanks. |
May I ask what a1111 is?
@ukateki a1111 = Automatic1111 = this repo |
Is there an existing issue for this?
What happened?
Unable to generate images after launching webui
Steps to reproduce the problem
1. Launch webui
2. Enter a prompt
3. Start generating an image
What should have happened?
Images should be generated normally
Sysinfo
{
  "date": "Thu Aug 31 22:36:37 2023",
  "timestamp": "22:36:52",
  "uptime": "Thu Aug 31 22:18:16 2023",
  "version": {
    "app": "stable-diffusion-webui",
    "updated": "2023-08-31",
    "hash": "5ef669de",
    "url": "/~https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/master"
  },
  "torch": "2.0.1 autocast half",
  "gpu": {},
  "state": {
    "started": "Thu Aug 31 22:36:52 2023",
    "step": "0 / 0",
    "jobs": "0 / 0",
    "flags": "",
    "job": "",
    "text-info": ""
  },
  "memory": {
    "ram": {
      "free": 29.98,
      "used": 2.02,
      "total": 32
    }
  },
  "optimizations": [
    "none"
  ],
  "libs": {
    "xformers": "",
    "diffusers": "",
    "transformers": "4.30.2"
  },
  "repos": {
    "Stable Diffusion": "[cf1d67a] 2023-03-25",
    "Stable Diffusion XL": "[45c443b] 2023-07-26",
    "CodeFormer": "[c5b4593] 2022-09-09",
    "BLIP": "[48211a1] 2022-06-07",
    "k_diffusion": "[ab527a9] 2023-08-12"
  },
  "device": {
    "active": "mps",
    "dtype": "torch.float16",
    "vae": "torch.float32",
    "unet": "torch.float16"
  }
}
What browsers do you use to access the UI ?
Google Chrome
Console logs
Additional information
No response