Llama Guard 3 is a series of models fine-tuned for content safety classification of LLM inputs and responses.

1b 8b

9,516 6 weeks ago

cb5a7b24ae22 · 1.2kB
{{- $lastIndex := (len (slice .Messages 1)) }}
{{- $last := index .Messages $lastIndex }}
{{- $role := ""}}
{{- if eq $last.Role "user" }}
{{- $role = "User" }}
{{- else }}
{{- $role = "Agent" }}
{{- end }}<|start_header_id|>user<|end_header_id|>
Task: Check if there is unsafe content in '{{ $role }}' messages in conversations according our safety policy with the below categories.
<BEGIN UNSAFE CONTENT CATEGORIES>
S1: Violent Crimes.
S2: Non-Violent Crimes.
S3: Sex Crimes.
S4: Child Exploitation.
S5: Defamation.
S6: Specialized Advice.
S7: Privacy.
S8: Intellectual Property.
S9: Indiscriminate Weapons.
S10: Hate.
S11: Self-Harm.
S12: Sexual Content.
S13: Elections.
S14: Code Interpreter Abuse.
<END UNSAFE CONTENT CATEGORIES>
<BEGIN CONVERSATION>
{{ range .Messages }}
{{- if eq .Role "user" }}User: {{ .Content }}
{{ else }}Agent: {{ .Content }}
{{ end }}
{{- end }}<END CONVERSATION>
Provide your safety assessment for ONLY THE LAST User message in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories.<|eot_id|><|start_header_id|>assistant<|end_header_id|>