206 Downloads · Updated 2 years ago
ollama run eas/neuralbeagle14
curl http://localhost:11434/api/chat \
  -d '{
    "model": "eas/neuralbeagle14",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
from ollama import chat

response = chat(
    model='eas/neuralbeagle14',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response.message.content)
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'eas/neuralbeagle14',
  messages: [{role: 'user', content: 'Hello!'}],
})
console.log(response.message.content)
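For longer replies, the Python client can also stream tokens as they are generated by passing stream=True, which returns an iterator of partial messages. A minimal sketch using the same model as above:

from ollama import chat

# Stream the response instead of waiting for the full reply.
stream = chat(
    model='eas/neuralbeagle14',
    messages=[{'role': 'user', 'content': 'Hello!'}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries a partial message; print it as it arrives.
    print(chunk.message.content, end='', flush=True)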
f07bf0818961 · 4.4GB
q4_K_M, q6_K, and q8_0 quantized versions of mlabonne/NeuralBeagle14-7B.
Supports up to an 8K context window; the Modelfile is configured for 4K by default.
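To use the full 8K window without editing the Modelfile, the context length can be raised per request through the num_ctx option. A minimal sketch with the Python client; the ':q6_K' tag is an assumption based on the quantizations listed above, so check the model's tag list for the exact names:

from ollama import chat

# Assumption: the q6_K quantization is published under the ':q6_K' tag;
# see the model's tag list for the exact tag names.
response = chat(
    model='eas/neuralbeagle14:q6_K',
    messages=[{'role': 'user', 'content': 'Hello!'}],
    # Override the Modelfile's 4K default to use the full 8K context window.
    options={'num_ctx': 8192},
)
print(response.message.content)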