novelai吧
Error completing request
Arguments: ('a2', '0.005', 1, 'E:\\AI\\train\\Z3', 'textual_inversion', 512, 512, 100000, 500, 500, 'E:\\AI\\textual_inversion_templates\\style_filewords.txt', True, True, 'masterpiece, best quality,', 'lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry', 28, 0, 7, -1.0, 512, 512) {}
Traceback (most recent call last):
  File "E:\AI\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "E:\AI\webui.py", line 57, in f
    res = func(*args, **kwargs)
  File "E:\AI\modules\textual_inversion\ui.py", line 33, in train_embedding
    embedding, filename = modules.textual_inversion.textual_inversion.train_embedding(*args)
  File "E:\AI\modules\textual_inversion\textual_inversion.py", line 309, in train_embedding
    loss.backward()
  File "E:\AI\py310\lib\site-packages\torch\_tensor.py", line 396, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "E:\AI\py310\lib\site-packages\torch\autograd\__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "E:\AI\py310\lib\site-packages\torch\autograd\function.py", line 253, in apply
    return user_fn(self, *args)
  File "E:\AI\repositories\stable-diffusion\ldm\modules\diffusionmodules\util.py", line 139, in backward
    input_grads = torch.autograd.grad(
  File "E:\AI\py310\lib\site-packages\torch\autograd\__init__.py", line 276, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 8.00 GiB total capacity; 5.10 GiB already allocated; 0 bytes free; 5.83 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
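
The last line is the actual failure: textual inversion training at 512x512 on top of the loaded model exceeds the free VRAM on this 8 GiB card. The error text itself suggests tuning the allocator's max_split_size_mb through PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of one way to apply that, assuming the variable is set before the CUDA allocator initializes (safest: before importing torch); the 128 MiB value is an illustrative guess to tune, not a tested recommendation:

    import os

    # The caching allocator reads PYTORCH_CUDA_ALLOC_CONF at CUDA init,
    # so set it before the first CUDA call (here: before importing torch).
    # 128 MiB is an arbitrary starting point; smaller splits reduce
    # fragmentation at some throughput cost.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch

Note that fragmentation tuning mostly helps when reserved memory is far above allocated memory; here 5.83 GiB reserved versus 5.10 GiB allocated is close, so the card may simply be short on VRAM for these settings. If this is the AUTOMATIC1111 webui (the module paths suggest it is), launching with its --medvram or --lowvram flags, closing other GPU processes, or training at a lower resolution are more likely to get past the error.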


#1 2022-11-22 21:33


#2 2022-11-22 22:04