Janus Model Loader errors out at runtime
got prompt
Python version is above 3.10, patching the collections module.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: sft_format, add_special_token, ignore_id, mask_prompt, image_tag, num_image_tokens.
Press any key to continue . . .
Service failed to start.