Turkish BERT Embedding Model

A Turkish-language embedding model accessible through an Ollama-compatible API.

Quick Start

Installation

pip install sentence-transformers requests

Usage

import requests

response = requests.post(
    "http://your-server:11435/api/embeddings",
    json={"prompt": "Türkçe metin"}
)

embedding = response.json()["embedding"]
print(f"Dimension: {len(embedding)}")

cURL

curl -X POST http://your-server:11435/api/embeddings \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Merhaba dünya"}'

Deployment Options

Option 1: Colab/VM (Bridge Mode)

Run the provided pipeline script. It will:

  • Download trmteb/turkish-embedding-model from Hugging Face
  • Start the Flask bridge API on port 11435
  • Expose the /api/embeddings endpoint

Endpoint: http://<your-ip>:11435/api/embeddings
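
A minimal sketch of such a bridge (the actual pipeline script is not reproduced here; this assumes the model loads with sentence-transformers and mirrors the API specification below):

from flask import Flask, jsonify, request
from sentence_transformers import SentenceTransformer

app = Flask(__name__)
model = SentenceTransformer("trmteb/turkish-embedding-model")

@app.route("/api/embeddings", methods=["POST"])
def embeddings():
    data = request.get_json(silent=True) or {}
    prompt = data.get("prompt")
    if not prompt:
        # Matches the 400 status code in the API specification
        return jsonify({"error": "missing prompt"}), 400
    vector = model.encode(prompt)
    return jsonify({"embedding": vector.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=11435)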

Option 2: Ollama Native (Fallback Mode)

If bridge mode is not suitable, use the GGUF fallback:

  • Model registered in Ollama as turkish-embed
  • Standard Ollama API on port 11434

curl -X POST http://your-server:11434/api/embeddings \
  -d '{"model": "turkish-embed", "prompt": "Türkçe metin"}'

API Specification

Endpoint

POST /api/embeddings

Request

{
  "prompt": "Your text here"
}

Response

{
  "embedding": [0.123, -0.456, 0.789, ...]
}

Status Codes

  • 200: Success
  • 400: Missing prompt
  • 500: Server error
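
For example, a request that omits the prompt field should come back as 400 (server URL is a placeholder):

import requests

# Omitting "prompt" exercises the 400 path
resp = requests.post("http://your-server:11435/api/embeddings", json={})
print(resp.status_code)  # expected: 400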

Integration Examples

Python - Semantic Search

import requests
import numpy as np

def get_embedding(text):
    resp = requests.post(
        "http://your-server:11435/api/embeddings",
        json={"prompt": text}
    )
    return np.array(resp.json()["embedding"])

# Calculate similarity
query_emb = get_embedding("Yapay zeka nedir?")
doc_emb = get_embedding("Yapay zeka, makinelerin öğrenmesidir.")

similarity = np.dot(query_emb, doc_emb) / (
    np.linalg.norm(query_emb) * np.linalg.norm(doc_emb)
)
print(f"Similarity: {similarity:.4f}")

JavaScript/Node.js

const axios = require('axios');

async function getEmbedding(text) {
  const response = await axios.post(
    'http://your-server:11435/api/embeddings',
    { prompt: text }
  );
  return response.data.embedding;
}

// Usage (CommonJS has no top-level await, so wrap in an async function)
(async () => {
  const embedding = await getEmbedding('Merhaba dünya');
  console.log(`Dimension: ${embedding.length}`);
})();

LangChain

from langchain.embeddings.base import Embeddings  # newer versions: from langchain_core.embeddings import Embeddings
import requests

class TurkishBERTEmbeddings(Embeddings):
    def __init__(self, api_url="http://your-server:11435/api/embeddings"):
        self.api_url = api_url
    
    def embed_documents(self, texts):
        return [self.embed_query(text) for text in texts]
    
    def embed_query(self, text):
        resp = requests.post(
            self.api_url,
            json={"prompt": text}
        )
        return resp.json()["embedding"]

# Usage
embeddings = TurkishBERTEmbeddings()
vector = embeddings.embed_query("Türkçe metin")

LlamaIndex

from llama_index.embeddings import BaseEmbedding  # newer versions: from llama_index.core.embeddings import BaseEmbedding
import requests

class TurkishBERTEmbedding(BaseEmbedding):
    # BaseEmbedding is a pydantic model, so declare the field rather than
    # assigning self.api_url before super().__init__()
    api_url: str = "http://your-server:11435/api/embeddings"

    def _get_query_embedding(self, query: str):
        resp = requests.post(self.api_url, json={"prompt": query})
        return resp.json()["embedding"]

    def _get_text_embedding(self, text: str):
        return self._get_query_embedding(text)

    async def _aget_query_embedding(self, query: str):
        # BaseEmbedding also declares an async variant as abstract
        return self._get_query_embedding(query)

# Usage
embed_model = TurkishBERTEmbedding()
vector = embed_model.get_query_embedding("Türkçe sorgu")

Performance

Latency

  • CPU: 100-500ms per request
  • GPU: 20-100ms per request (if available)

Throughput

  • Single request processing (no batching)
  • For high-throughput needs, consider a client-side batching wrapper (see the sketch below)
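
One possible batching wrapper, sketched here as a client-side thread pool (the endpoint URL is a placeholder and get_embeddings is an illustrative name):

import requests
from concurrent.futures import ThreadPoolExecutor

API_URL = "http://your-server:11435/api/embeddings"

def get_embedding(text):
    resp = requests.post(API_URL, json={"prompt": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()["embedding"]

def get_embeddings(texts, max_workers=8):
    # Since each instance handles one request at a time, this mainly helps
    # when several instances sit behind a load balancer
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(get_embedding, texts))

vectors = get_embeddings(["Merhaba dünya", "Yapay zeka nedir?"])
print(len(vectors), len(vectors[0]))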

Memory

  • Model weights: ~1-2GB
  • Runtime: ~500MB

Production Considerations

Scaling

  • Run multiple instances behind a load balancer (see the round-robin sketch below)
  • Each instance handles one request at a time
  • Consider adding a request queue for high load
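
A minimal client-side round-robin sketch (the instance URLs are hypothetical; a production setup would more likely use nginx or a cloud load balancer):

import itertools
import requests

# Hypothetical instance addresses
INSTANCES = itertools.cycle([
    "http://server-1:11435/api/embeddings",
    "http://server-2:11435/api/embeddings",
])

def get_embedding(text):
    url = next(INSTANCES)  # rotate across instances per request
    resp = requests.post(url, json={"prompt": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()["embedding"]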

Monitoring

import time

start = time.time()
response = requests.post(url, json={"prompt": text})
latency = time.time() - start
print(f"Latency: {latency:.3f}s")

Error Handling

try:
    response = requests.post(url, json={"prompt": text}, timeout=10)
    response.raise_for_status()
    embedding = response.json()["embedding"]
except requests.exceptions.Timeout:
    print("Request timed out")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")

Docker Deployment

FROM python:3.11-slim

WORKDIR /app

RUN pip install sentence-transformers flask huggingface_hub

COPY bridge_api.py .

# Prefer passing the token at runtime (docker run -e HF_TOKEN=...) instead of baking it into the image
ENV HF_TOKEN=your_token_here
ENV MODEL_NAME=trmteb/turkish-embedding-model

EXPOSE 11435

CMD ["python", "bridge_api.py"]

Troubleshooting

Port 11435 not accessible

  • Check firewall rules
  • Verify service is running: netstat -tuln | grep 11435
  • Check Colab/VM external IP settings
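
A quick reachability check from Python when netstat is unavailable (the host is a placeholder):

import socket

try:
    # Succeeds only if something is listening on the port
    socket.create_connection(("your-server", 11435), timeout=5).close()
    print("Port 11435 reachable")
except OSError as e:
    print(f"Not reachable: {e}")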

Slow response times

  • Model running on CPU (expected in Colab free tier)
  • Consider upgrading to GPU runtime
  • Reduce context length if possible

Memory errors

  • Model requires ~2GB RAM
  • Close other applications
  • Use smaller batch sizes

Model Details

  • Base: trmteb/turkish-embedding-model
  • Architecture: BERT (encoder-only)
  • Embedding Dimension: 768
  • Max Sequence Length: 512 tokens (longer inputs are truncated; see the chunking sketch below)
  • Language: Turkish
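
Because inputs beyond 512 tokens are truncated, a common workaround is to split long documents and average the chunk embeddings. A rough sketch, splitting on words as a proxy for tokens (embed_long_text is illustrative, not part of the bridge):

import numpy as np
import requests

API_URL = "http://your-server:11435/api/embeddings"

def embed_long_text(text, words_per_chunk=300):
    # Word-based chunking only approximates the 512-token limit;
    # a tokenizer would be more precise
    words = text.split()
    chunks = [" ".join(words[i:i + words_per_chunk])
              for i in range(0, len(words), words_per_chunk)] or [text]
    vectors = [requests.post(API_URL, json={"prompt": c}).json()["embedding"]
               for c in chunks]
    return np.mean(np.array(vectors), axis=0)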

Alternatives

If the bridge API doesn't fit your use case:

Direct Usage

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('trmteb/turkish-embedding-model')
embedding = model.encode("Türkçe metin")

GGUF Fallback

Use Ollama's native embeddings API (embedding-only models cannot be used with ollama run), as shown under Option 2 above:

curl -X POST http://your-server:11434/api/embeddings \
  -d '{"model": "turkish-embed", "prompt": "Türkçe metin"}'

License

Check the original model repository for licensing terms.

Support

For issues related to:

  • Model quality: Contact original model authors
  • API/Bridge: Open issue in this repository
  • Ollama integration: Check Ollama documentation

Changelog

v1.0.0

  • Initial release with bridge API
  • Support for Turkish BERT model
  • GGUF fallback mode
  • Ollama-compatible API