AshnaAI Agent Platform API
The AshnaAI HTTP API is OpenAI-compatible: you can call it the same way you would the OpenAI API—same authentication style (Bearer API key), chat completions, streaming, and request/response shapes you already know. Use it to run Ashna agents and models by passing the appropriate model identifier (for example your agent or model id from the Ashna app).
Base URL (OpenAI-compatible root)
Use this value as baseURL (JavaScript) or base_url (Python) for OpenAI-compatible clients. Endpoints such as /chat/completions are resolved under this root.
https://api.ashna.ai/v1/api
API keys
Create and manage keys in Account → API. Send the key in the Authorization header as Bearer <your_key>.
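As a minimal sketch, the two headers every request needs can be built once and reused (the helper name and placeholder key are ours, not part of the API):

```python
def ashna_headers(api_key: str) -> dict:
    """Build the headers every AshnaAI request needs.

    `api_key` comes from Account -> API in the Ashna app.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Placeholder key for illustration only
headers = ashna_headers("sk_live_abc123")
```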
Platform Overview
Beyond the OpenAI-compatible surface, Ashna provides an agent-based architecture so you can:
- Select from multiple AI models (GPT-4, GPT-3.5, Gemini Pro, Gemini Flash)
- Create custom agents with specific instructions and behaviors
- Attach tools and capabilities to agents for specialized tasks
- Configure memory and context management for conversations
- Stream responses in real-time for better UX
Quickstart
Get started with the AshnaAI API in under 5 minutes.
1. Get Your API Key
Open Account → API on app.ashna.ai and create an API key. Use that key with any OpenAI-compatible client and the base URL shown in the introduction.
2. Make Your First Request
OpenAI-compatible chat completion (same route shape as OpenAI /v1/chat/completions under the base URL above):
curl https://api.ashna.ai/v1/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-agent-or-model-id",
    "messages": [
      { "role": "user", "content": "Hello, can you help me?" }
    ]
  }'
Authentication
All API requests require authentication using an API key passed in the Authorization header.
Headers
- Authorization (string): Bearer token with your API key. Format: Bearer YOUR_API_KEY
- Content-Type (string): Should be set to application/json
curl https://api.ashna.ai/v1/api/chat \
  -H "Authorization: Bearer sk_live_abc123..." \
  -H "Content-Type: application/json"
Errors
The API uses standard HTTP response codes. Error responses include a JSON body with details.
- 400: Invalid request parameters or malformed JSON
- 401: Missing or invalid API key
- 404: Resource does not exist
- 429: Too many requests in a given time period
- 500: Something went wrong on our end
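A hedged sketch of how a client might act on these codes (the retry policy is our suggestion, not mandated by the API): retry the transient codes with backoff, and surface the rest so the request or credentials can be fixed.

```python
RETRYABLE = {429, 500}        # rate limiting and server errors: retry with backoff
CLIENT_ERRORS = {400, 401, 404}  # fix the request or credentials instead of retrying

def should_retry(status_code: int) -> bool:
    """Return True for status codes worth retrying, per the table above."""
    return status_code in RETRYABLE

```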
Error Response Format
{
  "error": {
    "type": "invalid_request",
    "message": "Agent ID is required",
    "code": "missing_parameter"
  }
}
Webhooks & Events
COMING SOON
Configure webhooks to receive real-time notifications when events occur in your agents.
Supported Events
- agent.created: New agent created
- agent.execution.started: Agent execution began
- agent.execution.completed: Agent execution finished
- tool.invoked: Tool was called by agent
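Since webhooks are not live yet, any handler is necessarily hypothetical; this sketch assumes deliveries carry top-level `event` and `data` fields, as in the documented sample payload, and routes on the event name:

```python
import json

def dispatch_event(raw_body: str) -> str:
    """Route a webhook delivery by its `event` field.

    Hypothetical handler: the payload shape is assumed from the
    documented sample and may change before webhooks ship.
    """
    payload = json.loads(raw_body)
    event = payload.get("event", "")
    if event == "agent.execution.completed":
        data = payload["data"]
        return f"execution {data['execution_id']} finished: {data['status']}"
    if event in {"agent.created", "agent.execution.started", "tool.invoked"}:
        return f"unhandled event: {event}"
    return "unknown event"

sample = '{"event": "agent.execution.completed", "data": {"execution_id": "exec_xyz789", "status": "success"}}'
result = dispatch_event(sample)
```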
{
  "event": "agent.execution.completed",
  "timestamp": "2026-01-04T10:30:00Z",
  "data": {
    "agent_id": "agent_abc123",
    "execution_id": "exec_xyz789",
    "status": "success",
    "duration_ms": 1450
  }
}
API Endpoints
AshnaAI provides multiple endpoints for different use cases, from high-level chat interfaces to low-level completion APIs.
Chat API
The high-level, stateful chat endpoint designed for UI-based conversations. This endpoint manages conversation state, injects bot configurations, and handles streaming responses.
Endpoint
/v1/api/chat
Key Parameters
- chatId (string): Unique identifier for the conversation. Used to maintain state across multiple messages.
- messages (UIMessage[]): Array of UIMessage objects representing the conversation history.
- model (string): Model identifier (e.g., "ashnaai", "gemini-pro").
- stream (boolean): Enable streaming responses for real-time output.
- streamType (text | data): Stream format: "text" for simple text streaming, "data" for structured event streaming.
- temperature (number): Controls randomness (0.0-2.0). Lower values are more deterministic.
- maxTokens (number): Maximum number of tokens to generate in the response.
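Putting these parameters together, a request body for /v1/api/chat can be assembled like this (a sketch: field names follow the table above, but the helper function and the message `id` value are ours):

```python
def build_chat_request(chat_id: str, user_text: str, model: str = "ashnaai",
                       stream: bool = True, stream_type: str = "data",
                       temperature: float = 0.7, max_tokens: int = 500) -> dict:
    """Assemble a /v1/api/chat body containing one user UIMessage."""
    return {
        "chatId": chat_id,
        "messages": [
            {
                "id": "msg_1",  # illustrative message id
                "role": "user",
                "parts": [{"type": "text", "text": user_text}],
            }
        ],
        "model": model,
        "temperature": temperature,
        "maxTokens": max_tokens,
        "stream": stream,
        "streamType": stream_type,
    }

body = build_chat_request("chat_abc123", "What is machine learning?")
```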
Request Example
curl https://api.ashna.ai/v1/api/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "chatId": "chat_abc123",
    "messages": [
      {
        "id": "msg_1",
        "role": "user",
        "parts": [
          {
            "type": "text",
            "text": "What is machine learning?"
          }
        ]
      }
    ],
    "model": "ashnaai",
    "temperature": 0.7,
    "maxTokens": 500,
    "stream": true,
    "streamType": "data"
  }'
Response Example
{
  "id": "msg_2",
  "role": "assistant",
  "parts": [
    {
      "type": "text",
      "text": "Machine learning is a subset of artificial intelligence..."
    }
  ],
  "metadata": {
    "model": "ashnaai",
    "tokensUsed": 245,
    "finishReason": "stop"
  }
}
Chat Completions
Same idea as OpenAI's /v1/chat/completions: a stateless chat completion with messages, model, and optional sampling parameters. Point the official OpenAI SDK or Vercel AI SDK at the base URL above and use this path (or let the SDK add /chat/completions under that base). Set model to your agent or model id.
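Because responses follow OpenAI's chat.completion shape, the assistant text can be pulled out the usual way; a minimal helper (the function name and sample dict are ours, trimmed from the documented response fields):

```python
def completion_text(response: dict) -> str:
    """Extract the assistant message from an OpenAI-style chat.completion dict."""
    return response["choices"][0]["message"]["content"]

# Trimmed sample in the documented response shape
sample = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Quantum computing leverages quantum mechanics...",
            },
            "finish_reason": "stop",
        }
    ]
}
text = completion_text(sample)
```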
Endpoint
/v1/api/chat/completions
Key Parameters
- messages (array): Array of message objects with "role" (system, user, assistant) and "content".
- model (string): Model identifier (e.g., "ashnaai", "gemini-pro").
- temperature (number): Sampling temperature between 0 and 2.
Request Example
curl https://api.ashna.ai/v1/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ashnaai",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Explain quantum computing"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'
Response Example
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704369000,
  "model": "ashnaai",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing leverages quantum mechanics..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 245,
    "total_tokens": 270
  }
}
Text Completions
Legacy single-prompt text completion endpoint. Included for backward compatibility with simple one-shot text generation use cases.
Endpoint
/v1/api/completions
Key Parameters
- prompt (string): The text prompt to generate a completion for.
- model (string): Model identifier (e.g., "ashnaai").
Request Example
curl https://api.ashna.ai/v1/api/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ashnaai",
    "prompt": "Explain quantum computing in simple terms",
    "max_tokens": 500,
    "temperature": 0.7
  }'
Response Example
{
  "id": "cmpl-abc123",
  "object": "text_completion",
  "created": 1704369000,
  "model": "ashnaai",
  "choices": [
    {
      "text": "Quantum computing is a revolutionary approach...",
      "index": 0,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 245,
    "total_tokens": 257
  }
}
Streaming Responses
Stream responses in real-time for better user experience.
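A client reassembles the streamed text by reading the `data:` lines of the event stream. This sketch assumes the "data" streamType event format documented in this section (`content` events carrying a `delta`, then a `done` event); a production client would read lines off the HTTP response incrementally rather than from a list:

```python
import json

def accumulate_stream(lines) -> str:
    """Concatenate `delta` chunks from `data:` lines until a `done` event."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives or blank separator lines
        event = json.loads(line[len("data: "):])
        if event.get("type") == "done":
            break
        if event.get("type") == "content":
            text.append(event["delta"])
    return "".join(text)

# Sample lines in the documented stream event format
sample_lines = [
    'data: {"type": "content", "delta": "Once upon"}',
    'data: {"type": "content", "delta": " a time"}',
    'data: {"type": "done", "execution_id": "exec_123"}',
]
story = accumulate_stream(sample_lines)
```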
curl https://api.ashna.ai/v1/api/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Write a story about AI",
    "stream": true
  }'
Stream Event Format
data: {"type": "content", "delta": "Once upon"}
data: {"type": "content", "delta": " a time"}
data: {"type": "content", "delta": " there was"}
data: {"type": "done", "execution_id": "exec_123"}
SDKs & Integrations
Because the API is OpenAI-compatible, use the same clients you would for OpenAI—only the base URL and API key change. Keys come from Account → API. Below: Vercel AI SDK (ai + @ai-sdk/openai) with createOpenAI, then the official OpenAI SDKs for Python and Node.
Vercel AI SDK (TypeScript / Next.js)
npm: ai, @ai-sdk/openai
Use createOpenAI from @ai-sdk/openai with baseURL set to the Ashna base URL and your key. The returned provider works with generateText, streamText, etc. from the ai package.
npm install ai @ai-sdk/openai

import { createOpenAI } from '@ai-sdk/openai';
import { generateText, streamText } from 'ai';

const ashna = createOpenAI({
  baseURL: 'https://api.ashna.ai/v1/api',
  apiKey: process.env.ASHNA_API_KEY, // key from app.ashna.ai → Account → API
});

// Replace with your agent or model id from Ashna
const modelId = 'your-agent-or-model-id';

// Non-streaming
const { text } = await generateText({
  model: ashna(modelId),
  prompt: 'Hello!',
});

// Chat-style messages (OpenAI-compatible)
const result = streamText({
  model: ashna(modelId),
  messages: [{ role: 'user', content: 'Hello!' }],
});
OpenAI Python SDK
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASHNA_API_KEY",
    base_url="https://api.ashna.ai/v1/api",
)

response = client.chat.completions.create(
    model="your-agent-or-model-id",
    messages=[
        {"role": "user", "content": "Hello!"},
    ],
)
OpenAI JavaScript / TypeScript SDK
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ASHNA_API_KEY,
  baseURL: 'https://api.ashna.ai/v1/api',
});

const response = await client.chat.completions.create({
  model: 'your-agent-or-model-id',
  messages: [{ role: 'user', content: 'Hello!' }],
});