Llama 3 is not very censored

April 19, 2024

Compared to Llama 2, Llama 3 feels much less censored. Meta has substantially lowered false refusal rates: Llama 3 refuses fewer than a third of the prompts that Llama 2 previously refused.

Llama 3

April 18, 2024

Llama 3 is now available to run on Ollama. This model is the next generation of Meta's state-of-the-art large language model, and is the most capable openly available LLM to date.
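Getting the model running locally is a single command, assuming Ollama is already installed:

```shell
# Download (if needed) and start an interactive session with Llama 3
ollama run llama3

# Or request a specific size explicitly, e.g. the 70B variant
ollama run llama3:70b
```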

Embedding models

April 8, 2024

Embedding models are available in Ollama, making it easy to generate vector embeddings for use in search and retrieval augmented generation (RAG) applications.
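As a sketch of how embeddings slot into a retrieval step, the following assumes a local Ollama server on the default port and an embedding model such as `all-minilm`; the endpoint path and `embedding` response field follow the Ollama REST API. The similarity math is standard cosine ranking, not anything Ollama-specific:

```python
import json
import urllib.request

# Default local endpoint for the embeddings API (assumption: standard install).
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def embed(text: str, model: str = "all-minilm") -> list[float]:
    """Request an embedding vector from a locally running Ollama server."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, the usual ranking metric for RAG retrieval."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

if __name__ == "__main__":
    docs = ["Llamas are members of the camelid family", "The sky is blue"]
    vectors = [embed(d) for d in docs]
    query = embed("What animal is a llama related to?")
    best = max(range(len(docs)), key=lambda i: cosine(query, vectors[i]))
    print(docs[best])
```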

Ollama now supports AMD graphics cards

March 14, 2024

Ollama now supports AMD graphics cards, in preview, on Windows and Linux. All of Ollama's features can now be accelerated by AMD GPUs on both platforms.

Windows preview

February 15, 2024

Ollama is now available on Windows in preview, making it possible to pull, run and create large language models in a new native Windows experience. Ollama on Windows includes built-in GPU acceleration, access to the full model library, and serves the Ollama API including OpenAI compatibility.

OpenAI compatibility

February 8, 2024

Ollama now has initial compatibility with the OpenAI Chat Completions API, making it possible to use existing tooling built for OpenAI with local models via Ollama.
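As a minimal standard-library sketch of what that compatibility layer looks like on the wire: it assumes a local server on the default port and a pulled model named `llama2`. Existing OpenAI client libraries work the same way once pointed at `http://localhost:11434/v1`:

```python
import json
import urllib.request

# Default address of Ollama's OpenAI-compatible endpoint on a local install.
BASE_URL = "http://localhost:11434/v1"

def build_chat_body(model: str, messages: list[dict]) -> dict:
    """Assemble an OpenAI-style Chat Completions request body."""
    return {"model": model, "messages": messages}

def chat(messages: list[dict], model: str = "llama2") -> str:
    """POST a chat completion to a local Ollama server, return the reply text."""
    data = json.dumps(build_chat_body(model, messages)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Response follows the OpenAI shape: choices[0].message.content
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat([{"role": "user", "content": "Say hello in one word."}]))
```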

Vision models

February 2, 2024

New vision models are now available: LLaVA 1.6, in 7B, 13B, and 34B parameter sizes. These models support higher-resolution images and offer improved text recognition and logical reasoning.
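Trying a vision model from the CLI is a single command; for multimodal models, the Ollama CLI lets you reference a local image by including its path in the prompt:

```shell
# Chat with the default 7B LLaVA model; larger tags: llava:13b, llava:34b
ollama run llava "What is in this image? ./picture.png"
```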

Python & JavaScript Libraries

January 23, 2024

The initial versions of the Ollama Python and JavaScript libraries are now available, making it easy to integrate a Python, JavaScript, or TypeScript app with Ollama in a few lines of code. Both libraries cover all the features of the Ollama REST API, follow familiar designs, and are compatible with both new and previous versions of Ollama.
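A minimal chat example with the Python library (`pip install ollama`); the import is deferred inside the helper so this sketch loads even where the package is absent, and the call shape mirrors the REST API's chat endpoint:

```python
def make_messages(prompt: str) -> list[dict]:
    """Build the messages list expected by ollama.chat."""
    return [{"role": "user", "content": prompt}]

def ask(prompt: str, model: str = "llama2") -> str:
    """One-call chat helper using the official Ollama Python library."""
    import ollama  # pip install ollama; deferred so the sketch imports without it
    response = ollama.chat(model=model, messages=make_messages(prompt))
    return response["message"]["content"]

if __name__ == "__main__":
    print(ask("Why is the sky blue?"))
```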

Building LLM-Powered Web Apps with Client-Side Technology

October 13, 2023

Recreate one of the most popular LangChain use cases with open-source, locally running software: a chain that performs retrieval-augmented generation (RAG) and lets you "chat with your documents."

Ollama is now available as an official Docker image

October 5, 2023

Ollama can now run with Docker Desktop on the Mac, and run inside Docker containers with GPU acceleration on Linux.
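The documented quick start looks like this (CPU-only shown; GPU acceleration on Linux additionally requires the NVIDIA Container Toolkit and a `--gpus=all` flag):

```shell
# Start the Ollama server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run a model inside the running container
docker exec -it ollama ollama run llama2
```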

Leveraging LLMs in your Obsidian Notes

September 21, 2023

This post walks through how to incorporate a local LLM, via Ollama, into Obsidian — or potentially any note-taking tool.

How to prompt Code Llama

September 9, 2023

This guide walks through the different ways to structure prompts for Code Llama and its variants and features, including instruction following, code completion, and fill-in-the-middle (FIM).
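For the fill-in-the-middle case, the code variants of Code Llama expect special `<PRE>`/`<SUF>`/`<MID>` tokens in the prompt; the model then generates the text that belongs between the prefix and suffix. A small helper to assemble such a prompt — the exact spacing follows the format published for Code Llama, treated here as an assumption:

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a Code Llama fill-in-the-middle (infill) prompt.

    The model generates what belongs between `prefix` and `suffix`,
    emitting an end-of-text token when the infill is complete.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: ask the model to fill in a function body
prompt = fim_prompt(prefix="def add(a, b):\n", suffix="\nprint(add(1, 2))")
```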

Run Code Llama locally

August 24, 2023

Meta's Code Llama is now available to try on Ollama.

Run WizardMath model for math problems

August 14, 2023

The WizardMath model, fine-tuned for math and logic problems, is now available to run on Ollama.

Run Llama 2 uncensored locally

August 1, 2023

This post gives some example comparisons between the uncensored Llama 2 model and its standard, censored counterpart.