# About

This is a minimal implementation of the OpenAI Embeddings API meant to be used with the QdrantSearch backend.
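For a sense of what the API surface looks like, here is a minimal client sketch. The port (11345), the `/v1/embeddings` path, and the example model name are assumptions based on the OpenAI Embeddings API convention; check `compose.yml` and `fastembed-server.py` for the values actually used in this setup.

```python
# Minimal sketch of a client request against the embeddings endpoint.
# The port (11345), the /v1/embeddings path, and the model name are
# assumptions for illustration; see compose.yml and fastembed-server.py
# for the actual configuration.
import json
import urllib.request

payload = {
    "model": "BAAI/bge-small-en-v1.5",   # any model supported by fastembed
    "input": ["the cat sat on the mat"],
}

req = urllib.request.Request(
    "http://localhost:11345/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# OpenAI-style responses carry the vectors under data[i]["embedding"].
print(len(result["data"][0]["embedding"]))
```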

# Usage

The easiest way to run it is to use docker compose with `docker compose up`. This starts the server on the default configured port. Different models can be used; for a full list of supported models, check the fastembed documentation. The first time a model is requested, it will be downloaded, which can take a few seconds.
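Besides the fastembed documentation, the installed fastembed package (pulled in through requirements.txt) can enumerate its supported models directly. This is a sketch assuming fastembed's `TextEmbedding.list_supported_models()` helper; the exact dictionary keys may differ between fastembed versions.

```python
# Print the embedding models supported by the installed fastembed version.
from fastembed import TextEmbedding

for entry in TextEmbedding.list_supported_models():
    # Each entry is a dict describing one model (name, output dimension, size, ...).
    print(entry["model"], "-", entry.get("dim"))
```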