MemGPT-Compatible LLM: Optimized language model for MemGPT integration, enabling extended memory and context handling in AI conversations. :D


Model: llama · 12.2B parameters · Q8_0 quantization · 13GB (digest 600c46480b3c)


Welcome to MemGPT!

This repository contains a Large Language Model (LLM) specifically designed and optimized for use with MemGPT (Memory-GPT). Our model leverages the extended capabilities of MemGPT to provide enhanced performance in long-term memory retention, dynamic information updating, and expanded context processing.

Details:

  • This model uses the ChatML wrapper; noforce-roles works better with this model than the default ChatML template (see the prompt-assembly sketch after the template):
<|im_start|>system
{system_instruction}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
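
To make the template concrete, here is a minimal sketch of assembling a prompt in this format; the system and user strings below are placeholders, not values prescribed by this model card:

# Minimal sketch: build a ChatML-style prompt matching the template above.
# The instruction and message contents are placeholders.
def build_chatml_prompt(system_instruction: str, user_message: str) -> str:
    """Fill in the template, leaving the assistant turn open so the
    model generates the response."""
    return (
        f"<|im_start|>system\n{system_instruction}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are MemGPT, an assistant with persistent memory.",  # placeholder
    "What did we talk about yesterday?",                      # placeholder
)
print(prompt)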

Features:

  • MemGPT Optimization: Fine-tuned to work seamlessly with MemGPT (a usage sketch follows this list)
  • Extended Context Window: Capable of processing and generating responses based on much longer contexts
  • Dynamic Memory Integration: Designed to efficiently utilize MemGPT’s external memory mechanisms
  • Persistent Knowledge: Maintains consistent information across multiple interactions
  • Scalable Architecture: Adaptable to various deployment scenarios, from personal assistants to large-scale applications
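
As a quick usage sketch (not an official instruction from this card): since the model expects the ChatML wrapper shown above, one option is to send the prompt through Ollama's /api/generate endpoint with raw mode enabled, so Ollama does not apply its own template. The model tag below is a placeholder; substitute the name you pulled this model under.

# Usage sketch: send a raw ChatML prompt to a local Ollama server.
# "your-memgpt-model" is a placeholder tag, not the real model name.
import json
import urllib.request

prompt = (
    "<|im_start|>system\nYou are MemGPT.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "your-memgpt-model",  # placeholder tag
        "prompt": prompt,
        "raw": True,       # skip Ollama's built-in template; send ChatML as-is
        "stream": False,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])

When wiring this model into MemGPT itself, point MemGPT's Ollama backend at the same local server and model tag; see the MemGPT documentation for the exact configuration steps.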

Contacts:

Contact @starsnatched on Discord with any questions! I'll be in both the Ollama and MemGPT Discord servers.