NekoBot supports multiple LLM (Large Language Model) providers, including OpenAI, Google Gemini, and Zhipu AI.
| Provider | Status | Description |
|---|---|---|
| OpenAI | ✅ | GPT-4, GPT-3.5, etc. |
| Google Gemini | ✅ | Gemini Pro, Gemini Ultra, etc. |
| Zhipu AI | ✅ | GLM-4, GLM-3, etc. |
| DeepSeek | ✅ | DeepSeek series |
| Moonshot AI | ✅ | Kimi series |
| Ollama | ✅ | Local deployment |
| LM Studio | ✅ | Local deployment |
| Other compatible APIs | ✅ | Custom Base URL |
LLM configuration is stored in `data/llm_providers.json`:

```json
{
  "openai": {
    "type": "openai",
    "enable": true,
    "id": "openai",
    "api_key": "your-api-key-here",
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4"
  },
  "gemini": {
    "type": "gemini",
    "enable": true,
    "id": "gemini",
    "api_key": "your-gemini-key",
    "model": "gemini-pro"
  }
}
```

### OpenAI

Configuration example:

```json
{
  "type": "openai",
  "enable": true,
  "id": "openai",
  "api_key": "sk-...",
  "base_url": "https://api.openai.com/v1",
  "model": "gpt-4"
}
```

Available models:

| Model | Description |
|---|---|
| gpt-4 | Most powerful model |
| gpt-4-turbo | Cost-effective |
| gpt-3.5-turbo | Fast and economical |
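Going back to `data/llm_providers.json`: its structure can be read programmatically, for example to list which providers are enabled. The helper below is a hypothetical sketch (not part of NekoBot's API) that assumes only the field names shown in the example file:

```python
import json
from pathlib import Path


def load_enabled_providers(path="data/llm_providers.json"):
    """Return {provider_id: config} for every entry whose "enable" flag is true."""
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    return {pid: cfg for pid, cfg in data.items() if cfg.get("enable")}
```

Entries with `"enable": false` are simply skipped, so a provider can be kept in the file but switched off.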
Other services that expose an OpenAI-compatible API can be used by setting a custom `base_url` (the example below targets Azure OpenAI):

```json
{
  "type": "openai",
  "enable": true,
  "id": "azure-openai",
  "api_key": "your-azure-key",
  "base_url": "https://your-resource.openai.azure.com",
  "model": "gpt-4"
}
```

### Google Gemini

Configuration example:

```json
{
  "type": "gemini",
  "enable": true,
  "id": "gemini",
  "api_key": "your-gemini-key",
  "model": "gemini-pro"
}
```

Available models:

| Model | Description |
|---|---|
| gemini-pro | General-purpose model |
| gemini-pro-vision | Multimodal model |
| gemini-ultra | Most powerful model |
### Zhipu AI

Configuration example:

```json
{
  "type": "glm",
  "enable": true,
  "id": "glm",
  "api_key": "your-glm-key",
  "model": "glm-4"
}
```

Available models:

| Model | Description |
|---|---|
| glm-4 | Latest model |
| glm-3-turbo | Fast model |
| glm-3-130b | Large parameter model |
### DeepSeek

DeepSeek exposes an OpenAI-compatible API, so it uses the `openai` provider type with a custom `base_url`:

```json
{
  "type": "openai",
  "enable": true,
  "id": "deepseek",
  "api_key": "your-deepseek-key",
  "base_url": "https://api.deepseek.com/v1",
  "model": "deepseek-chat"
}
```

### Ollama

Ollama is a tool for running large models locally. Pull a model first:
```bash
ollama pull llama2
```

Then configure it through its OpenAI-compatible endpoint:

```json
{
  "type": "openai",
  "enable": true,
  "id": "ollama",
  "api_key": "ollama",
  "base_url": "http://localhost:11434/v1",
  "model": "llama2"
}
```

### LM Studio

LM Studio provides a GUI and a local API service.
```json
{
  "type": "openai",
  "enable": true,
  "id": "lm-studio",
  "api_key": "lm-studio",
  "base_url": "http://localhost:1234/v1",
  "model": "local-model"
}
```

### Using the LLM in Plugins

Call the LLM from a plugin:
```python
from packages.backend.llm import llm_manager


class MyPlugin(BasePlugin):
    async def on_load(self):
        llm_manager.register_provider(...)

    async def ask_llm(self, prompt, message):
        result = await llm_manager.text_chat(
            provider_id="openai",
            prompt=prompt,
            session_id=str(message["user_id"])
        )
        return result.get("text", "")
```

Streaming responses:

```python
    async def ask_llm_stream(self, prompt, message):
        async for chunk in llm_manager.text_chat_stream(
            provider_id="openai",
            prompt=prompt,
            session_id=str(message["user_id"])
        ):
            await self.send_group_message(
                message["group_id"],
                message["user_id"],
                chunk
            )
```

Multimodal (image) input:

```python
    async def ask_llm_with_image(self, prompt, image_url, message):
        result = await llm_manager.text_chat(
            provider_id="gemini",
            prompt=prompt,
            image_urls=[image_url],
            session_id=str(message["user_id"])
        )
        return result.get("text", "")
```

Use a session ID that uniquely identifies the conversation, for example:

```python
session_id = f"user_{user_id}_{group_id}"
```

NekoBot automatically manages conversation context, keeping the last 10 messages per session by default.
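The rolling context window can be pictured with a small sketch. This `ContextStore` is illustrative only, not NekoBot's actual implementation; the only detail taken from the docs is the 10-message default:

```python
from collections import defaultdict, deque


class ContextStore:
    """Hypothetical sketch of a per-session rolling context window."""

    def __init__(self, max_messages=10):  # 10 = assumed default window size
        self._history = defaultdict(lambda: deque(maxlen=max_messages))

    def append(self, session_id, role, content):
        # Once the deque is full, appending silently evicts the oldest message.
        self._history[session_id].append({"role": role, "content": content})

    def get(self, session_id):
        return list(self._history[session_id])
```

Because the deque caps itself at `max_messages`, the eleventh message evicts the first, matching the "last 10 messages" behavior described above, and each `session_id` keeps its own independent history.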
To set a system prompt:

```python
result = await llm_manager.text_chat(
    provider_id="openai",
    prompt=prompt,
    system_prompt="You are a helpful assistant.",
    session_id=session_id
)
```

You can also configure different personalities in the Web dashboard and select one by its ID:

```python
result = await llm_manager.text_chat(
    provider_id="openai",
    prompt=prompt,
    persona_id="helpful_assistant",
    session_id=session_id
)
```