from transformers import LlamaForCausalLM, LlamaTokenizer

# Load pre-trained model and tokenizer
model = LlamaForCausalLM.from_pretrained('decapoda-research/llama-3.2-hf')
tokenizer = LlamaTokenizer.from_pretrained('decapoda-research/llama-3.2-hf')