AshnaAI Agent Platform API

The AshnaAI HTTP API is OpenAI-compatible: you can call it the same way you would the OpenAI API—same authentication style (Bearer API key), chat completions, streaming, and request/response shapes you already know. Use it to run Ashna agents and models by passing the appropriate model identifier (for example your agent or model id from the Ashna app).

Base URL (OpenAI-compatible root)

Use this value as baseURL (JavaScript) or base_url (Python) for OpenAI-compatible clients. Endpoints such as /chat/completions are resolved under this root.

https://api.ashna.ai/v1/api

API keys

Create and manage keys in Account → API. Send the key in the Authorization header as Bearer <your_key>.

Platform Overview

Beyond the OpenAI-compatible surface, Ashna provides an agent-based architecture so you can:

  • Select from multiple AI models (GPT-4, GPT-3.5, Gemini Pro, Gemini Flash)
  • Create custom agents with specific instructions and behaviors
  • Attach tools and capabilities to agents for specialized tasks
  • Configure memory and context management for conversations
  • Stream responses in real-time for better UX

Quickstart

Get started with the AshnaAI API in under 5 minutes.

1. Get Your API Key

Open Account → API on app.ashna.ai and create an API key. Use that key with any OpenAI-compatible client and the base URL shown in the introduction.

2. Make Your First Request

OpenAI-compatible chat completion (same route shape as OpenAI /v1/chat/completions under the base URL above):

cURL
curl https://api.ashna.ai/v1/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-agent-or-model-id",
    "messages": [
      { "role": "user", "content": "Hello, can you help me?" }
    ]
  }'
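If you prefer plain Python over cURL, the same first request can be built with only the standard library. A minimal sketch; the endpoint and payload shape are taken from the cURL example above, and the helper name is illustrative:

```python
import json
import urllib.request

BASE_URL = "https://api.ashna.ai/v1/api"

def build_chat_request(api_key: str, model: str, user_text: str) -> urllib.request.Request:
    """Build the POST /chat/completions request shown in the cURL example."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY", "your-agent-or-model-id", "Hello, can you help me?")
# To send: response = urllib.request.urlopen(req); print(response.read().decode())
```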

Authentication

All API requests require authentication using an API key passed in the Authorization header.

Headers

Authorization (string)

Bearer token with your API key. Format: Bearer YOUR_API_KEY

Content-Type (string)

Should be set to application/json

Example Request
curl https://api.ashna.ai/v1/api/chat \
  -H "Authorization: Bearer sk_live_abc123..." \
  -H "Content-Type: application/json"

Errors

The API uses standard HTTP response codes. Error responses include a JSON body with details.

400 Bad Request
Invalid request parameters or malformed JSON

401 Unauthorized
Missing or invalid API key

404 Not Found
Resource does not exist

429 Too Many Requests
Rate limit exceeded: too many requests in a given time period

500 Internal Server Error
Something went wrong on our end

Error Response Format

JSON
{
  "error": {
    "type": "invalid_request",
    "message": "Agent ID is required",
    "code": "missing_parameter"
  }
}
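On the client side these errors can be mapped onto exceptions. A minimal sketch in Python; the error envelope follows the format above, while the exception class is illustrative and not part of any Ashna SDK:

```python
import json

class AshnaAPIError(Exception):
    """Raised for non-2xx responses; fields mirror the documented error body."""
    def __init__(self, status: int, type_: str, message: str, code: str):
        super().__init__(f"{status} {type_}: {message} ({code})")
        self.status = status
        self.type = type_
        self.code = code

def raise_for_error(status: int, body: str) -> None:
    """Parse the documented error envelope and raise if the status is an error."""
    if 200 <= status < 300:
        return
    err = json.loads(body).get("error", {})
    raise AshnaAPIError(
        status,
        err.get("type", "unknown"),
        err.get("message", "no message"),
        err.get("code", "unknown"),
    )
```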

Webhooks & Events

COMING SOON

Configure webhooks to receive real-time notifications when events occur in your agents.

Supported Events

  • agent.created - New agent created
  • agent.execution.started - Agent execution began
  • agent.execution.completed - Agent execution finished
  • tool.invoked - Tool was called by agent

Webhook Payload Example
{
  "event": "agent.execution.completed",
  "timestamp": "2026-01-04T10:30:00Z",
  "data": {
    "agent_id": "agent_abc123",
    "execution_id": "exec_xyz789",
    "status": "success",
    "duration_ms": 1450
  }
}
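Once webhooks ship, a receiver will need to parse and validate these payloads. A minimal parsing sketch in Python, assuming the payload shape shown above; no signature-verification scheme is documented yet, so none is implemented here:

```python
import json

# Event names from the "Supported Events" list above.
HANDLED_EVENTS = {
    "agent.created",
    "agent.execution.started",
    "agent.execution.completed",
    "tool.invoked",
}

def parse_webhook(raw_body: str) -> dict:
    """Validate the event name and flatten the payload's data section."""
    payload = json.loads(raw_body)
    event = payload.get("event")
    if event not in HANDLED_EVENTS:
        raise ValueError(f"unknown event: {event!r}")
    return {
        "event": event,
        "timestamp": payload.get("timestamp"),
        **payload.get("data", {}),
    }
```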

API Endpoints

AshnaAI provides multiple endpoints for different use cases, from high-level chat interfaces to low-level completion APIs.

Chat API

The high-level, stateful chat endpoint designed for UI-based conversations. This endpoint manages conversation state, injects bot configurations, and handles streaming responses.

Endpoint

POST /v1/api/chat

Key Parameters

chatId (string)

Unique identifier for the conversation. Used to maintain state across multiple messages.

messages (UIMessage[])

Array of UIMessage objects representing the conversation history.

model (string)

Model identifier (e.g., "ashnaai", "gemini-pro").

stream (boolean)

Enable streaming responses for real-time output.

streamType ("text" | "data")

Stream format: "text" for simple text streaming, "data" for structured event streaming.

temperature (number)

Controls randomness (0.0-2.0). Lower values are more deterministic.

maxTokens (number)

Maximum number of tokens to generate in the response.

Request Example

cURL
curl https://api.ashna.ai/v1/api/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "chatId": "chat_abc123",
    "messages": [
      {
        "id": "msg_1",
        "role": "user",
        "parts": [
          {
            "type": "text",
            "text": "What is machine learning?"
          }
        ]
      }
    ],
    "model": "ashnaai",
    "temperature": 0.7,
    "maxTokens": 500,
    "stream": true,
    "streamType": "data"
  }'

Response Example

JSON
{
  "id": "msg_2",
  "role": "assistant",
  "parts": [
    {
      "type": "text",
      "text": "Machine learning is a subset of artificial intelligence..."
    }
  ],
  "metadata": {
    "model": "ashnaai",
    "tokensUsed": 245,
    "finishReason": "stop"
  }
}
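The parts-based UIMessage body can be assembled programmatically. A minimal sketch in Python; field names mirror the request example above, and the helper functions themselves are illustrative:

```python
import json

def ui_message(msg_id: str, role: str, text: str) -> dict:
    """Build one UIMessage in the parts-based shape used by POST /v1/api/chat."""
    return {"id": msg_id, "role": role, "parts": [{"type": "text", "text": text}]}

def chat_request(chat_id: str, messages: list, model: str = "ashnaai",
                 temperature: float = 0.7, max_tokens: int = 500,
                 stream: bool = True, stream_type: str = "data") -> str:
    """Serialize a Chat API request body using the documented key parameters."""
    return json.dumps({
        "chatId": chat_id,
        "messages": messages,
        "model": model,
        "temperature": temperature,
        "maxTokens": max_tokens,
        "stream": stream,
        "streamType": stream_type,
    })

body = chat_request("chat_abc123",
                    [ui_message("msg_1", "user", "What is machine learning?")])
```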

Chat Completions

Same idea as OpenAI's /v1/chat/completions: a stateless chat completion with messages, model, and optional sampling parameters. Point the official OpenAI SDK or Vercel AI SDK at the base URL above and use this path (or let the SDK add /chat/completions under that base). Set model to your agent or model id.

Endpoint

POST /v1/api/chat/completions

Key Parameters

messages (array)

Array of message objects with "role" (system, user, assistant) and "content".

model (string)

Model identifier (e.g., "ashnaai", "gemini-pro").

temperature (number)

Sampling temperature between 0 and 2.

Request Example

cURL
curl https://api.ashna.ai/v1/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ashnaai",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Explain quantum computing"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'

Response Example

JSON
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704369000,
  "model": "ashnaai",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing leverages quantum mechanics..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 245,
    "total_tokens": 270
  }
}

Text Completions

Legacy single-prompt text completion endpoint. Included for backward compatibility with simple one-shot text generation use cases.

Endpoint

POST /v1/api/completions

Key Parameters

prompt (string)

The text prompt to generate completion for.

model (string)

Model identifier (e.g., "ashnaai").

Request Example

cURL
curl https://api.ashna.ai/v1/api/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ashnaai",
    "prompt": "Explain quantum computing in simple terms",
    "max_tokens": 500,
    "temperature": 0.7
  }'

Response Example

JSON
{
  "id": "cmpl-abc123",
  "object": "text_completion",
  "created": 1704369000,
  "model": "ashnaai",
  "choices": [
    {
      "text": "Quantum computing is a revolutionary approach...",
      "index": 0,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 245,
    "total_tokens": 257
  }
}

Streaming Responses

Stream responses in real-time for better user experience.

cURL
curl https://api.ashna.ai/v1/api/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "chatId": "chat_abc123",
    "messages": [
      {
        "id": "msg_1",
        "role": "user",
        "parts": [{ "type": "text", "text": "Write a story about AI" }]
      }
    ],
    "stream": true,
    "streamType": "data"
  }'

Stream Event Format

Server-Sent Events
data: {"type": "content", "delta": "Once upon"}
data: {"type": "content", "delta": " a time"}
data: {"type": "content", "delta": " there was"}
data: {"type": "done", "execution_id": "exec_123"}
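Consuming the data stream amounts to reading the data: lines and concatenating the content deltas. A minimal parser sketch in Python, assuming the event shapes shown above:

```python
import json

def parse_stream(lines):
    """Accumulate content deltas from the data-stream events shown above.

    Returns (full_text, execution_id); execution_id stays None until a
    "done" event arrives.
    """
    text_parts = []
    execution_id = None
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content":
            text_parts.append(event.get("delta", ""))
        elif event.get("type") == "done":
            execution_id = event.get("execution_id")
    return "".join(text_parts), execution_id
```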

SDKs & Integrations

Because the API is OpenAI-compatible, use the same clients you would for OpenAI; only the base URL and API key change. Keys come from Account → API. The sections below cover the Vercel AI SDK (ai + @ai-sdk/openai) via createOpenAI, then the official OpenAI SDKs for Python and Node.


Vercel AI SDK (TypeScript / Next.js)

npm: ai, @ai-sdk/openai

Use createOpenAI from @ai-sdk/openai with baseURL set to the Ashna base URL and your key. The returned provider works with generateText, streamText, etc. from the ai package.

npm install ai @ai-sdk/openai

import { createOpenAI } from '@ai-sdk/openai';
import { generateText, streamText } from 'ai';

const ashna = createOpenAI({
  baseURL: 'https://api.ashna.ai/v1/api',
  apiKey: process.env.ASHNA_API_KEY, // key from app.ashna.ai → Account → API
});

// Replace with your agent or model id from Ashna
const modelId = 'your-agent-or-model-id';

// Non-streaming
const { text } = await generateText({
  model: ashna(modelId),
  prompt: 'Hello!',
});

// Chat-style messages (OpenAI-compatible)
const result = streamText({
  model: ashna(modelId),
  messages: [{ role: 'user', content: 'Hello!' }],
});

OpenAI Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASHNA_API_KEY",
    base_url="https://api.ashna.ai/v1/api",
)

response = client.chat.completions.create(
    model="your-agent-or-model-id",
    messages=[
        {"role": "user", "content": "Hello!"},
    ],
)

OpenAI JavaScript / TypeScript SDK

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ASHNA_API_KEY,
  baseURL: 'https://api.ashna.ai/v1/api',
});

const response = await client.chat.completions.create({
  model: 'your-agent-or-model-id',
  messages: [{ role: 'user', content: 'Hello!' }],
});