Waymore Docs

Guides

Step-by-step tutorials to help you get the most out of LLM Portal.

Quick Start

5 min

Create your account and start chatting with AI in under 5 minutes.

1. Sign Up

Go to https://chat.waymore.ai/register and create your account using email, Google, or GitHub.

2. Verify Your Email

Check your inbox for a verification email and click the link to activate your account. If you signed up with Google or GitHub, this step is automatic.

3. Start a Conversation

After signing in, you will see the chat interface. Type a message in the input box and press Enter. The AI responds in real time with streaming text.

4. Explore Features

Try attaching a file to your message, creating a new chat session from the sidebar, or enabling two-factor authentication in your profile settings.

Making Your First API Call

10 min

Generate an API key and send your first chat completion request programmatically.

1. Generate an API Key

Navigate to your API Keys page in the dashboard. Click "Create New Key" and give it a descriptive name.

POST /api/keys
{
  "name": "My First API Key",
  "permissions": ["chat"]
}

Important: Copy the API key immediately. It will only be shown once.

2. Send a Chat Request

Use cURL or any HTTP client to send a chat completion request:

cURL
curl https://chat.waymore.ai/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Waymore-A1-Instruct-1011",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is machine learning?"}
    ]
  }'

3. Parse the Response

The response follows the OpenAI-compatible format. The assistant's message is in choices[0].message.content:

Response
{
  "id": "chatcmpl-abc123",
  "model": "Waymore-A1-Instruct-1011",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Machine learning is a subset of artificial intelligence..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}
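In Python, extracting the fields named above might look like the following sketch (the key paths follow the sample response; error handling is omitted for brevity):

```python
def extract_reply(response_json: dict) -> str:
    """Return the assistant's text from an OpenAI-compatible completion response."""
    return response_json["choices"][0]["message"]["content"]

def total_tokens(response_json: dict) -> int:
    """Return the total token count reported in the usage block."""
    return response_json["usage"]["total_tokens"]
```

Call these on the decoded JSON body of the completion response, e.g. `extract_reply(resp.json())` when using the `requests` library.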
4. Track Your Usage

The usage field in the response shows token consumption. Monitor your overall usage from the dashboard or via the /api/usage/summary endpoint.

Migrate from OpenAI

5 min

LLM Portal provides an OpenAI-compatible API. If you are using the OpenAI SDK or REST API, you can switch with minimal code changes.

1. What Stays the Same

The following are fully compatible — no code changes needed for these:

  • Request format: messages, model, stream, temperature
  • Response format: choices, usage, finish_reason
  • Streaming via Server-Sent Events (SSE)
  • Function calling / Tools (tools, tool_choice, parallel tool calls)
  • Message roles: system, user, assistant, tool
  • Bearer token authentication
2. Change the Base URL

Replace the OpenAI base URL with the LLM Portal endpoint:

Before (OpenAI)
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer sk-openai-key..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'

After (LLM Portal)

curl https://chat.waymore.ai/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "Waymore-A1-Instruct-1011", "messages": [{"role": "user", "content": "Hello"}]}'

3. Update the OpenAI SDK Configuration

If you use the OpenAI Python or Node.js SDK, override the base_url and api_key:

Python SDK
from openai import OpenAI

client = OpenAI(
    base_url="https://chat.waymore.ai/api",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="Waymore-A1-Instruct-1011",
    messages=[{"role": "user", "content": "Hello"}]
)

Node.js SDK

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://chat.waymore.ai/api",
  apiKey: "YOUR_API_KEY"
});

const response = await client.chat.completions.create({
  model: "Waymore-A1-Instruct-1011",
  messages: [{ role: "user", content: "Hello" }]
});

4. Update the Model Name

Replace OpenAI model names (gpt-4o, gpt-4-turbo, gpt-3.5-turbo) with Waymore-A1-Instruct-1011. Use the /api/chat/models endpoint to list all available models.
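If your codebase references several OpenAI model names, a small helper can centralize the rename during migration. This is an illustrative sketch: the mapping below is an assumption, so verify it against the live /api/chat/models list.

```python
# Illustrative mapping; confirm against the /api/chat/models endpoint.
OPENAI_TO_PORTAL = {
    "gpt-4o": "Waymore-A1-Instruct-1011",
    "gpt-4-turbo": "Waymore-A1-Instruct-1011",
    "gpt-3.5-turbo": "Waymore-A1-Instruct-1011",
}

def migrate_model_name(name: str) -> str:
    """Map an OpenAI model name to its LLM Portal equivalent; pass unknowns through."""
    return OPENAI_TO_PORTAL.get(name, name)
```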

Migration Checklist
  • ☐ Base URL: https://chat.waymore.ai/api
  • ☐ API key: Generate from LLM Portal dashboard
  • ☐ Model name: Waymore-A1-Instruct-1011
  • ☐ Remove any OpenAI-specific parameters not listed in our API Reference

Migrate from Claude (Anthropic)

5 min

LLM Portal natively supports the Anthropic Claude API format. You can use Claude-style requests — including the system field, input_schema tool definitions, and tool_result messages — with no format conversion. The API auto-detects the format and responds accordingly.

1. Change the Endpoint and Auth

The only changes required are the URL, auth header, and model name. Your existing request body works as-is:

Before (Anthropic)
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: sk-ant-..." \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

After (LLM Portal)

curl https://chat.waymore.ai/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Waymore-A1-Instruct-1011",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

2. Claude Format Is Natively Supported

The API auto-detects the Anthropic format and responds in the same style. All of the following Claude features work without any conversion:

  • Top-level system field for system prompts
  • Anthropic tool format with input_schema
  • Tool use responses with tool_use content blocks
  • Tool results via tool_result content blocks
  • Anthropic response format (content[], stop_reason, input_tokens/output_tokens)
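When parsing responses in this Anthropic format, it can help to split the content array into its text and tool_use blocks. A minimal sketch, assuming the block shapes listed above:

```python
def split_content_blocks(message: dict):
    """Split an Anthropic-style content[] array into joined text and tool_use blocks."""
    blocks = message.get("content", [])
    text = "".join(b["text"] for b in blocks if b["type"] == "text")
    tool_uses = [b for b in blocks if b["type"] == "tool_use"]
    return text, tool_uses
```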
3. Tools Work in Claude Format

You can use Anthropic-style tool definitions directly. The API detects the format and returns tool calls in Anthropic style:

Request (Anthropic format)
{
  "model": "Waymore-A1-Instruct-1011",
  "max_tokens": 256,
  "messages": [
    {"role": "user", "content": "What is the weather in Athens?"}
  ],
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather for a location",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {"type": "string"}
        },
        "required": ["location"]
      }
    }
  ]
}
Response (Anthropic format)
{
  "type": "message",
  "role": "assistant",
  "model": "Waymore-A1-Instruct-1011",
  "content": [
    {"type": "text", "text": "I'll check the weather in Athens for you."},
    {
      "type": "tool_use",
      "id": "toolu_01bt6yap31yeflmplapypx4",
      "name": "get_weather",
      "input": {"location": "Athens, Greece"}
    }
  ],
  "stop_reason": "tool_use",
  "usage": {"input_tokens": 855, "output_tokens": 30}
}

4. Submit Tool Results (Claude Format)

Send tool results back using the Anthropic tool_result content block:

Follow-up request
{
  "model": "Waymore-A1-Instruct-1011",
  "max_tokens": 256,
  "messages": [
    {"role": "user", "content": "What is the weather in Athens?"},
    {"role": "assistant", "content": [
      {"type": "text", "text": "I'll check the weather in Athens for you."},
      {"type": "tool_use", "id": "toolu_01bt6yap31yeflmplapypx4", "name": "get_weather", "input": {"location": "Athens, Greece"}}
    ]},
    {"role": "user", "content": [
      {"type": "tool_result", "tool_use_id": "toolu_01bt6yap31yeflmplapypx4", "content": "{\"temperature\": 22, \"condition\": \"Sunny\"}"}
    ]}
  ],
  "tools": [...]
}
Response
{
  "type": "message",
  "role": "assistant",
  "content": [
    {"type": "text", "text": "The current weather in Athens is sunny with a temperature of 22°C (about 72°F)."}
  ],
  "stop_reason": "end_turn"
}

5. Update the Anthropic SDK (Optional)

If you use the Anthropic Python SDK, override the base_url and use your LLM Portal API key:

Python SDK
import anthropic

client = anthropic.Anthropic(
    base_url="https://chat.waymore.ai/api",
    api_key="YOUR_API_KEY"
)

response = client.messages.create(
    model="Waymore-A1-Instruct-1011",
    max_tokens=1024,
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello"}]
)
Migration Checklist
  • ☐ Base URL: https://chat.waymore.ai/api
  • ☐ Auth: Authorization: Bearer YOUR_API_KEY
  • ☐ Model name: Waymore-A1-Instruct-1011
  • ☐ Remove anthropic-version header (not required)
  • ☑ Message format — no changes needed
  • ☑ Tool definitions — no changes needed
  • ☑ Tool results — no changes needed
  • ☑ Response parsing — no changes needed

Streaming Responses

10 min

Enable real-time streaming to receive tokens as they are generated, reducing perceived latency.

1. Enable Streaming

Set stream: true in your request body:

cURL
curl https://chat.waymore.ai/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Waymore-A1-Instruct-1011",
    "messages": [{"role": "user", "content": "Write a poem about coding."}],
    "stream": true
  }'

2. Process the Event Stream

The response is a Server-Sent Events (SSE) stream. Each event contains a JSON chunk with the next token:

Event Stream
data: {"id":"chatcmpl-abc123","choices":[{"delta":{"role":"assistant"},"index":0}]}

data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":"In"},"index":0}]}

data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":" lines"},"index":0}]}

data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":" of"},"index":0}]}

data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":" code"},"index":0}]}

data: [DONE]

3. Concatenate Tokens

Each delta.content field contains a text fragment. Concatenate all fragments to build the complete response. The stream ends with data: [DONE].
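The loop described above might look like the following Python sketch, which consumes decoded SSE lines (e.g. from `requests.post(..., stream=True).iter_lines()`) and joins the fragments:

```python
import json

def collect_stream(lines) -> str:
    """Accumulate delta.content fragments from an SSE chat stream until [DONE]."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```

For a UI, you would typically render each fragment as it arrives instead of joining them at the end.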

Function Calling (Tools)

12 min

Give the model the ability to call custom functions, enabling it to fetch real-time data, interact with external services, or perform calculations. The API supports both OpenAI and Anthropic tool formats — the format is auto-detected from your request.

1. Define Your Tools

Create a tool definition describing your function's name, purpose, and parameters using JSON Schema:

Tool Definition
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a given location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "The city and country, e.g. Athens, Greece"
        },
        "unit": {
          "type": "string",
          "enum": ["celsius", "fahrenheit"],
          "description": "Temperature unit"
        }
      },
      "required": ["location"]
    }
  }
}

Tip: Write clear descriptions for both the function and each parameter. This helps the model decide when and how to use the tool.

2. Send a Request with Tools

Include the tools array and set tool_choice in your completion request:

cURL
curl https://chat.waymore.ai/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Waymore-A1-Instruct-1011",
    "messages": [
      {"role": "user", "content": "What is the weather in Athens, Greece?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {"type": "string"},
              "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'

3. Handle the Tool Call Response

When the model wants to use a tool, the response has finish_reason: "tool_calls" and includes a tool_calls array:

Response
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "I'll check the current weather in Athens, Greece for you.",
        "tool_calls": [
          {
            "id": "functions.get_weather:0",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\": \"Athens, Greece\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ]
}

Parse the arguments field (it's a JSON string) and execute your function with those parameters.
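A sketch of that dispatch step in Python; `registry` is a hypothetical helper mapping tool names to your own callables, not part of the API:

```python
import json

def run_tool_calls(assistant_message: dict, registry: dict) -> list:
    """Execute each requested tool call and build the matching tool-role messages."""
    tool_messages = []
    for call in assistant_message.get("tool_calls", []):
        fn = registry[call["function"]["name"]]
        # arguments is a JSON *string*, so decode it before calling the function
        args = json.loads(call["function"]["arguments"])
        tool_messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(fn(**args)),
        })
    return tool_messages
```

Because it loops over every entry in tool_calls, the same helper also covers parallel tool calls: each result comes back as its own tool message.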

4. Execute the Function

Run your function using the arguments the model provided. In this example, you would call your weather API with location: "Athens, Greece" and get back the current conditions.

5. Submit the Tool Result

Send the function result back to the model in a follow-up request. Include the full conversation history plus a tool role message with the matching tool_call_id:

POST /api/chat/completions
{
  "model": "Waymore-A1-Instruct-1011",
  "messages": [
    {"role": "user", "content": "What is the weather in Athens, Greece?"},
    {
      "role": "assistant",
      "content": "I'll check the current weather in Athens, Greece for you.",
      "tool_calls": [{
        "id": "functions.get_weather:0",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\"location\": \"Athens, Greece\"}"
        }
      }]
    },
    {
      "role": "tool",
      "tool_call_id": "functions.get_weather:0",
      "content": "{\"temperature\": 18, \"unit\": \"celsius\", \"condition\": \"Partly cloudy\"}"
    }
  ],
  "tools": [...]
}
6. Receive the Final Response

The model uses the tool result to generate a natural language answer:

Response
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "In Athens, Greece it's currently partly cloudy with a temperature of 18°C — quite pleasant conditions overall."
      },
      "finish_reason": "stop"
    }
  ]
}

Parallel Tool Calls

The model can call multiple tools in one response (e.g., fetching weather for two cities at once). Each call has a unique id. Execute all functions and submit all results as separate tool messages before making the next request.

tool_choice Options

"auto" (default) — model decides. "none" — never call tools. "required" — must call at least one tool.

Managing Chat Sessions

8 min

Organize conversations into sessions to maintain context and history.

1. Create a Session

Create a named session to group related messages:

cURL
curl -X POST https://chat.waymore.ai/api/chat/sessions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Project Research",
    "model": "Waymore-A1-Instruct-1011"
  }'
Response 201
{
  "id": "clx_session_abc123",
  "title": "Project Research",
  "model": "Waymore-A1-Instruct-1011",
  "messages": []
}

2. Send Messages in a Session

Include the session_id in your completion request to associate it with the session:

POST /api/chat/completions
{
  "model": "Waymore-A1-Instruct-1011",
  "messages": [
    {"role": "user", "content": "Summarize the latest trends in AI."}
  ],
  "session_id": "clx_session_abc123"
}

3. Retrieve Session History

List all messages in a session to review the conversation history:

cURL
curl https://chat.waymore.ai/api/chat/sessions/clx_session_abc123/messages \
  -H "Authorization: Bearer YOUR_API_KEY"

4. Update or Delete Sessions

Rename a session with PATCH or delete it entirely. Deleting a session removes all its messages.

cURL
# Rename a session
curl -X PATCH https://chat.waymore.ai/api/chat/sessions/clx_session_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"title": "AI Research 2025"}'

# Delete a session
curl -X DELETE https://chat.waymore.ai/api/chat/sessions/clx_session_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"

API Key Best Practices

8 min

Secure your API keys and follow best practices for production deployments.

1. Use Separate Keys per Environment

Create separate API keys for development, staging, and production. This limits the blast radius if a key is compromised and makes it easy to revoke access for a single environment.

2. Set Appropriate Permissions

Only grant the permissions each key actually needs. A key used only for chat completions should not have image or vision permissions:

Minimal Permissions
{
  "name": "Chat Only Key",
  "permissions": ["chat"],
  "rpmLimit": 30
}

3. Enable IP Whitelisting

For production keys, restrict access to your server's IP addresses:

IP Restricted Key
{
  "name": "Production Server",
  "permissions": ["chat", "vision"],
  "ipWhitelist": ["203.0.113.10", "203.0.113.11"],
  "monthlyLimit": 500000
}

4. Set Rate Limits and Expiration

Configure rate limits to prevent runaway costs and set expiration dates for temporary access. Use rpmLimit for requests per minute, dailyLimit and monthlyLimit for token caps, and expiresInDays for auto-expiration.
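As a sketch, a key-creation body combining these fields might be assembled like this (the field names come from the step above; the default values are illustrative, not recommendations):

```python
def build_key_config(name: str, permissions: list,
                     rpm_limit: int = 30,
                     daily_limit: int = 50_000,
                     monthly_limit: int = 500_000,
                     expires_in_days: int = 90) -> dict:
    """Assemble a POST /api/keys body using the limit fields described above."""
    return {
        "name": name,
        "permissions": permissions,
        "rpmLimit": rpm_limit,          # requests per minute
        "dailyLimit": daily_limit,      # daily token cap
        "monthlyLimit": monthly_limit,  # monthly token cap
        "expiresInDays": expires_in_days,
    }
```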

5. Rotate Keys Regularly

Regenerate API key secrets periodically. The regenerate endpoint gives you a new secret while keeping the same configuration:

cURL
curl -X POST https://chat.waymore.ai/api/keys/YOUR_KEY_ID/regenerate \
  -H "Authorization: Bearer YOUR_API_KEY"

Note: The old key secret is immediately invalidated. Update your applications before or immediately after regenerating.

6. Monitor Key Usage

Regularly review each key's usage to detect anomalies:

cURL
# Check usage for a specific key
curl "https://chat.waymore.ai/api/keys/YOUR_KEY_ID/usage?days=7" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Check request history
curl "https://chat.waymore.ai/api/keys/YOUR_KEY_ID/requests?limit=20" \
  -H "Authorization: Bearer YOUR_API_KEY"

Working with File Attachments

8 min

Attach files to your chat messages for the AI to analyze, summarize, or extract information from.

1. Upload a File

Upload the file using the attachments endpoint. This returns a file URL you can reference in a message:

cURL
curl -X POST https://chat.waymore.ai/api/chat/attachments \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "files=@report.pdf"

2. Create a Message with Attachments

Include the attachment metadata when creating a message:

POST /api/chat/messages
{
  "sessionId": "clx_session_abc123",
  "role": "user",
  "content": "Summarize the key points from this report.",
  "model": "Waymore-A1-Instruct-1011",
  "attachments": [
    {
      "name": "report.pdf",
      "type": "application/pdf",
      "size": 245000,
      "url": "/uploads/abc123/report.pdf"
    }
  ]
}

3. File Size and Format Limits

Each file can be up to 50MB. You can attach up to 5 files per message. Supported formats include images (PNG, JPG, GIF, WebP), documents (PDF, DOC, DOCX, TXT), data files (CSV, JSON, XML), and code files.
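A client-side pre-check against these limits can save a failed upload round-trip. A minimal sketch (the extension list is a subset of the formats named above, not the authoritative server-side list):

```python
import os

MAX_FILE_BYTES = 50 * 1024 * 1024   # 50MB per file
MAX_FILES_PER_MESSAGE = 5
# Subset of the supported formats listed above (illustrative, not exhaustive).
ALLOWED_EXTENSIONS = {".png", ".jpg", ".gif", ".webp", ".pdf", ".doc",
                      ".docx", ".txt", ".csv", ".json", ".xml"}

def validate_attachments(paths, sizes):
    """Check a batch of attachments against the documented limits.

    `paths` are filenames; `sizes` are their sizes in bytes.
    Returns a list of problems (an empty list means the batch is acceptable).
    """
    problems = []
    if len(paths) > MAX_FILES_PER_MESSAGE:
        problems.append(f"too many files ({len(paths)} > {MAX_FILES_PER_MESSAGE})")
    for path, size in zip(paths, sizes):
        ext = os.path.splitext(path)[1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            problems.append(f"{path}: unsupported format {ext or '(none)'}")
        if size > MAX_FILE_BYTES:
            problems.append(f"{path}: exceeds 50MB")
    return problems
```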

Organizing Content with Collections

10 min

Save content from conversations and organize it into collections for easy retrieval.

1. Save Content to Your Library

Save images, code snippets, notes, or any valuable content from your conversations:

POST /api/stuff
{
  "title": "Python Data Processing Script",
  "type": "CODE",
  "content": "import pandas as pd\n\ndf = pd.read_csv('data.csv')\nresult = df.groupby('category').sum()",
  "description": "Pandas script for aggregating data by category",
  "tags": ["python", "pandas", "data"]
}

2. Create a Collection

Group related items into a collection with a custom name and color:

POST /api/collections
{
  "name": "Data Science",
  "description": "Code snippets and notes for data analysis",
  "color": "#8B5CF6",
  "icon": "database"
}

3. Add Items to the Collection

Add content items to your new collection:

cURL
curl -X POST "https://chat.waymore.ai/api/collections/COLLECTION_ID/items" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"itemId": "CONTENT_ITEM_ID"}'

4. Search and Filter Your Library

Search by keyword, filter by type or collection, and sort or limit results to find content quickly:

cURL
# Search by keyword
curl "https://chat.waymore.ai/api/stuff?search=pandas&type=CODE" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Filter by collection
curl "https://chat.waymore.ai/api/stuff?collectionId=COLLECTION_ID&sortBy=newest" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get only favorites
curl "https://chat.waymore.ai/api/stuff?favorite=true&limit=10" \
  -H "Authorization: Bearer YOUR_API_KEY"

Monitoring Your Usage

5 min

Track your API consumption, costs, and performance to stay within budget.

1. Check Overall Usage

Query the usage summary endpoint for an overview of your consumption:

cURL
curl "https://chat.waymore.ai/api/usage/summary?period=30d" \
  -H "Authorization: Bearer YOUR_API_KEY"
Response
{
  "stats": {
    "totalRequests": 1250,
    "totalTokens": 320000,
    "inputTokens": 120000,
    "outputTokens": 200000,
    "totalCost": 4.85,
    "avgResponseTime": 920,
    "errorRate": 0.8
  }
}

2. Filter by Model or API Key

Narrow down usage data using query parameters:

cURL
# Filter by model
curl "https://chat.waymore.ai/api/usage/summary?period=7d&model=Waymore-A1-Instruct-1011" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get available filter options
curl "https://chat.waymore.ai/api/usage/filters" \
  -H "Authorization: Bearer YOUR_API_KEY"

3. Review Cost Breakdown

The usage summary includes a cost breakdown by token type (input vs output) and by model. Use this to understand where your budget is going and optimize your prompts for cost efficiency.
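Given the stats block returned by the summary endpoint, the input/output split can be computed directly. A minimal sketch using the field names shown above:

```python
def token_breakdown(stats: dict) -> dict:
    """Derive input/output token shares and blended cost per 1K tokens from usage stats."""
    total = stats["inputTokens"] + stats["outputTokens"]
    return {
        "input_share": stats["inputTokens"] / total,
        "output_share": stats["outputTokens"] / total,
        "cost_per_1k_tokens": stats["totalCost"] / (stats["totalTokens"] / 1000),
    }
```

A rising output share or cost per 1K tokens over time is a hint to tighten prompts or cap max_tokens.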

4. Set Up Key-Level Monitoring

Monitor individual API keys to track which integrations consume the most tokens:

cURL
# Per-key usage
curl "https://chat.waymore.ai/api/keys/KEY_ID/usage?days=7" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Per-key request history
curl "https://chat.waymore.ai/api/keys/KEY_ID/requests?limit=10" \
  -H "Authorization: Bearer YOUR_API_KEY"

Updated February 2026