Phi-2: a 2.7B language model by Microsoft Research that demonstrates outstanding reasoning and language understanding capabilities.
2.7b · 427.3K Pulls · Updated 11 months ago
79d7065d8c0f · 1.8GB

model · 1.8GB
arch phi2 · parameters 2.78B · quantization Q4_K_M
params · 42B
{
  "stop": [
    "User:",
    "Assistant:",
    "System:"
  ]
}
template · 77B
{{ if .System }}System: {{ .System }}{{ end }}
User: {{ .Prompt }}
Assistant:
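For illustration, with the default system message below and a user prompt of "Hello" (a placeholder, not taken from the model card), this template would render the model input roughly as:

System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful …
User: Hello
Assistant: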
system · 132B
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful …
license · 1.0kB
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this so…
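These defaults (parameters, template, system prompt and license) can also be inspected locally. As a rough sketch, assuming a standard Ollama install with the model already pulled, either of the following should print them:

ollama show phi --modelfile

curl http://localhost:11434/api/show -d '{
  "name": "phi"
}'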
Readme
Phi-2 is a small language model capable of common-sense reasoning and language understanding. It showcases “state-of-the-art performance” among language models with fewer than 13 billion parameters.
Example prompt
By default, phi includes a chat prompt template designed for multi-turn conversations:
% ollama run phi
>>> Hello, can you help me find my way to Toronto?
Certainly! What is the exact location in Toronto that you are looking for?
>>> Yonge & Bloor
Sure, Yonge and Bloor is a busy intersection in downtown Toronto. Would you like to take public transportation or drive there?
>>> Public transportation
Great! The easiest way to get there is by taking the TTC subway. You can take Line 1, which runs along Yonge Street and passes through downtown Toronto.
Using Ollama’s API:
curl http://localhost:11434/api/chat -d '{
  "model": "phi",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
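Because /api/chat takes the full message list, multi-turn conversations are handled by passing earlier turns back with each request. A minimal sketch, reusing the Toronto exchange from above:

curl http://localhost:11434/api/chat -d '{
  "model": "phi",
  "messages": [
    { "role": "user", "content": "Hello, can you help me find my way to Toronto?" },
    { "role": "assistant", "content": "Certainly! What is the exact location in Toronto that you are looking for?" },
    { "role": "user", "content": "Yonge & Bloor" }
  ]
}'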
Example prompts (raw mode)
Phi also responds well to a wide variety of prompt formats when using raw mode in Ollama’s API, which bypasses all default prompt templating:
Instruct
curl http://localhost:11434/api/generate -d '{
  "model": "phi",
  "prompt": "Instruct: Write a detailed analogy between mathematics and a lighthouse.\nOutput:",
  "options": {
    "stop": ["Instruct:", "Output:"]
  },
  "raw": true,
  "stream": false
}'
Code completion
curl http://localhost:11434/api/generate -d '{
  "model": "phi",
  "prompt": "def print_prime(n):\n ",
  "raw": true,
  "stream": false
}'
Text completion
curl http://localhost:11434/api/generate -d '{
  "model": "phi",
  "prompt": "There once was a mouse named",
  "raw": true,
  "stream": false
}'
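With "stream": false, each /api/generate call returns a single JSON object whose "response" field holds the generated text. As a small usage sketch (assuming jq is available), the completion alone can be extracted like this:

curl -s http://localhost:11434/api/generate -d '{
  "model": "phi",
  "prompt": "There once was a mouse named",
  "raw": true,
  "stream": false
}' | jq -r .response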