Waymore Docs

Documentation

Everything you need to integrate and use the LLM Portal platform.

LLM Portal API v1.0

Launch production-ready AI experiences

Reference documentation, live examples, and platform policies—all in one place. Authenticate, select a model, and ship in minutes.

Base URL

https://chat.waymore.ai/api

REST + SSE endpoints

Authentication

Bearer & OAuth SSO

API keys, Google, GitHub

Uptime

99.9% SLA

Monitored 24/7

Compliance

SOC 2 Type II

Data residency EU & US

Status

All systems normal

Last incident 21 days ago

SDKs

Node.js · Python · Go

REST + SSE compatible

Requests

40k / min

Burst limit per org

Support

Enterprise SLA

24/7 pager rotation


id: getting-started
group: Setup
title: Getting Started
summary: Get up and running with a hosted chat or the OpenAI-compatible API in a few minutes.
keywords:
  • quickstart
  • signup
  • api key
  • onboarding
featured: true
quickLinkDescription: Launch chat + send your first completion.
search:
  • Step-by-step onboarding: sign up, chat, create an API key, and send your first completion.
  • Includes cURL, Node.js, and Python examples for calling the chat completions endpoint.

Getting Started

Get up and running with LLM Portal in a few simple steps. Use the hosted chat or integrate via REST.

Prerequisites

Create an account, verify email, and issue your first API key.

Recommended

Install the CLI or SDK to manage keys and sessions locally.

Step 1

Create an account

Sign up at https://chat.waymore.ai/register or use Google/GitHub SSO, then verify your email.

Step 2

Start chatting

You're dropped into the chat UI with streaming answers, conversation history, and ratings out of the box.

Step 3

Generate an API key

Visit the API Keys page, issue a scoped key, and copy it immediately — it is only shown once.

Step 4

Ship your first request

Use the OpenAI-compatible endpoint and the language tabs below to test.

cURL
curl https://chat.waymore.ai/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Waymore-A1-Instruct-1011",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
Response
{
  "id": "chatcmpl-abc123",
  "model": "Waymore-A1-Instruct-1011",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 18,
    "total_tokens": 30
  }
}
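
The same request can be sent from Python using only the standard library. This is a minimal sketch of the cURL call above; YOUR_API_KEY is a placeholder, and the network call itself is shown as a comment:

```python
import json
import urllib.request

BASE_URL = "https://chat.waymore.ai/api"

def build_completion_request(api_key: str, content: str) -> urllib.request.Request:
    """Build a chat completion request for the OpenAI-compatible endpoint."""
    payload = {
        "model": "Waymore-A1-Instruct-1011",
        "messages": [{"role": "user", "content": content}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_completion_request("YOUR_API_KEY", "Hello, how are you?")
# To send it: urllib.request.urlopen(req) returns the JSON response shown above.
print(req.full_url)
```

In production code you would typically use an HTTP client such as `requests` or the OpenAI SDK instead, but the request shape is identical.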

id: migration
group: Setup
title: Migration
summary: Drop-in compatible with OpenAI and Anthropic clients — swap only the base URL, model ID, and auth header.
keywords:
  • openai
  • anthropic
  • compatibility
  • base_url
search:
  • Explains the minimal changes required when moving from OpenAI or Anthropic endpoints.
  • Comparison tables cover base URLs, auth headers, endpoints, and model identifiers.

Migration

LLM Portal provides an OpenAI-compatible API. If you are migrating from OpenAI or Anthropic Claude, the transition requires minimal changes.

OpenAI clients

Swap base URL, API key, and model ID. No schema changes.

Anthropic clients

Native Claude payloads are supported, including tool_use blocks.

From OpenAI

The API is fully compatible with the OpenAI format. You only need to change three things:

Setting | OpenAI | LLM Portal
Base URL | https://api.openai.com/v1 | https://chat.waymore.ai/api
API Key | sk-... | Generate from dashboard
Model | gpt-4o | Waymore-A1-Instruct-1011

Everything else — request/response format, streaming, function calling, message roles — works identically. You can use the OpenAI Python or Node.js SDK by simply overriding the base_url.
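
As a sketch, the three substitutions can be expressed as a small helper that rewrites an OpenAI-format request for LLM Portal. The payload values here are illustrative; the message schema itself is passed through untouched:

```python
OPENAI_BASE = "https://api.openai.com/v1"
PORTAL_BASE = "https://chat.waymore.ai/api"

def migrate_request(payload: dict, portal_key: str):
    """Apply the three documented changes: base URL, API key, and model ID."""
    migrated = dict(payload)
    if migrated.get("model") == "gpt-4o":
        migrated["model"] = "Waymore-A1-Instruct-1011"
    url = f"{PORTAL_BASE}/chat/completions"
    headers = {
        "Authorization": f"Bearer {portal_key}",
        "Content-Type": "application/json",
    }
    return url, headers, migrated

url, headers, payload = migrate_request(
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
    "PORTAL_KEY",
)
```

With the OpenAI SDKs, the equivalent change is passing the new base URL and key to the client constructor; no request bodies need rewriting.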

From Claude (Anthropic)

LLM Portal natively supports the Anthropic Claude API format. The API auto-detects the format and responds accordingly. Your existing Claude request bodies — including the top-level system field, input_schema tool definitions, tool_use/tool_result content blocks, and Anthropic-style responses — work without any conversion. You only need to change:

Setting | Anthropic | LLM Portal
Base URL | https://api.anthropic.com | https://chat.waymore.ai/api
Auth | x-api-key: sk-ant-... | Authorization: Bearer ...
Endpoint | /v1/messages | /api/chat/completions
Model | claude-sonnet-4-5-* | Waymore-A1-Instruct-1011

See the Migrate from OpenAI and Migrate from Claude step-by-step guides for detailed examples and code samples.


id: authentication
group: Setup
title: Authentication
summary: API keys, cookie-based sessions, OAuth, and TOTP-based two-factor authentication.
keywords:
  • oauth
  • '2fa'
  • sso
  • session
search:
  • Details API key bearer auth plus session cookies managed by NextAuth.
  • Covers OAuth providers, TOTP setup commands, and password requirements.

Authentication

LLM Portal supports multiple authentication methods depending on your use case.

API

Use Bearer tokens scoped per key, with optional IP rules.

Web access

Sessions rely on secure cookies, OAuth SSO, and optional TOTP.

API Key Authentication

For programmatic access, include your API key in the Authorization header as a Bearer token. API keys can be scoped with specific permissions and rate limits.

Authorization Header
Authorization: Bearer YOUR_API_KEY
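
A common pattern (not a platform requirement) is to keep the key out of source code and build the header from an environment variable. The variable name `WAYMORE_API_KEY` here is illustrative:

```python
import os

def auth_headers() -> dict:
    """Build the Authorization header from the WAYMORE_API_KEY environment variable."""
    key = os.environ.get("WAYMORE_API_KEY")
    if not key:
        raise RuntimeError("Set WAYMORE_API_KEY before calling the API")
    return {"Authorization": f"Bearer {key}"}

os.environ["WAYMORE_API_KEY"] = "demo-key"  # for illustration only
headers = auth_headers()
```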

Session Authentication

Browser-based access uses secure HTTP-only cookies managed by NextAuth. Sessions are created automatically when you sign in through the web interface. Session tokens are JWT-based and refresh automatically.

OAuth Providers

Sign in with your existing Google or GitHub account. OAuth accounts are automatically linked if the email address matches an existing account.

Two-Factor Authentication (2FA)

Enhance your account security by enabling TOTP-based two-factor authentication. Once enabled, you will need to provide a 6-digit code from your authenticator app (e.g., Google Authenticator, Authy) each time you sign in. Backup codes are provided during setup for account recovery.

Enable 2FA
# Step 1: Generate TOTP secret and QR code
curl -X POST https://chat.waymore.ai/api/auth/2fa/setup \
  -H "Authorization: Bearer YOUR_API_KEY"

# Step 2: Scan the QR code with your authenticator app, then verify
curl -X POST https://chat.waymore.ai/api/auth/2fa/verify \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"token": "123456"}'
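
The 6-digit code your authenticator app produces is a standard RFC 6238 TOTP. For intuition, here is a sketch of the computation using only the Python standard library (this is what apps like Google Authenticator do internally, not an LLM Portal API):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 TOTP code from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 reference secret ("12345678901234567890" in base32)
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))  # -> 287082 (matches the RFC test vector at T=59)
```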

Password Requirements

Passwords must be at least 12 characters and include uppercase letters, lowercase letters, numbers, and special characters. Password changes require your current password for verification.


id: chat-interface
group: Features
title: Chat Interface
summary: "Explore the hosted chat features: streaming responses, code highlighting, attachments, ratings, and voice input."
keywords:
  • ui
  • messages
  • streaming
  • attachments
featured: true
quickLinkDescription: Real-time streaming, uploads, ratings.
search:
  • Describes conversations, streaming tokens, syntax highlighting, and attachments.
  • Mentions rating controls and microphone-based voice input.

Chat Interface

The web-based chat interface provides a rich conversational experience with real-time streaming, code highlighting, file attachments, and more.

Best for

Teams that need live previews, streaming tokens, and quick file drops.

Highlights

Auto-titling, ratings, voice input, and syntax-highlighted responses.

Conversations

Messages are organized into sessions (conversations). Each session maintains its own context and message history. You can create new sessions, rename them, and switch between them from the sidebar. Sessions are automatically titled based on the first message.

Streaming Responses

Responses stream in real-time using Server-Sent Events (SSE). You see each token as it is generated, providing immediate feedback. Streaming can be used both in the web interface and via the API by setting stream: true.
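
When consuming the stream via the API, each SSE event carries a JSON chunk. Assuming the usual OpenAI-compatible convention (`data:` lines with a `choices[0].delta.content` field, terminated by `data: [DONE]`), the tokens can be accumulated like this:

```python
import json

def collect_stream(lines):
    """Accumulate assistant text from OpenAI-style SSE 'data:' lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and keep-alive comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        text.append(delta.get("content", ""))
    return "".join(text)

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # -> Hello!
```

In a real client the `lines` iterable would come from the HTTP response body of a request with `stream: true`.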

Code Highlighting

Code blocks in AI responses are automatically syntax-highlighted with a VS Code-inspired theme. Supported languages include JavaScript, TypeScript, Python, Java, Go, Rust, SQL, HTML, CSS, and many more. Each code block includes a copy button and line numbers.

File Attachments

Attach files to your messages for the AI to analyze. Supported file types include images (PNG, JPG, GIF, WebP), PDFs, documents, spreadsheets, and code files. Each file can be up to 50MB, with a maximum of 5 files per message.

Message Ratings

Rate AI responses with thumbs up or thumbs down to help improve response quality. Ratings are tracked and can be used for analytics.

Voice Input

Use the microphone button to dictate messages using speech-to-text. Voice input uses a WebSocket connection for real-time transcription.


id: api-keys
group: Features
title: API Keys
summary: Manage scoped API credentials with permissions, rate limits, and IP whitelists.
keywords:
  • keys
  • scopes
  • rate limits
  • rotation
featured: true
quickLinkDescription: Create scoped keys and rotate safely.
search:
  • Shows JSON payload for creating keys with permissions and limits.
  • Explains permissions matrix, RPM limits, IP whitelisting, and lifecycle states.

API Keys

API keys provide programmatic access to the LLM Portal API. Each key can be configured with specific permissions, rate limits, and IP restrictions.

Rotation

Instantly revoke or regenerate keys without redeploying clients.

Controls

Fine-grained scopes, RPM caps, IP whitelists, and expirations.

Creating Keys

Create API keys from the web dashboard or via the API. The number of keys you can create depends on your subscription plan (1 for Free, 10 for Pro, unlimited for Enterprise). The full key value is shown only once at creation — store it securely.

POST /api/keys
{
  "name": "Production Key",
  "description": "Backend integration key",
  "permissions": ["chat", "images", "vision"],
  "ipWhitelist": ["203.0.113.0/24"],
  "dailyLimit": 10000,
  "monthlyLimit": 250000,
  "rpmLimit": 60,
  "expiresInDays": 90
}

Permissions

API keys support granular permissions to restrict what operations the key can perform:

Permission | Description
chat | Send chat completion requests
images | Generate and process images
vision | Analyze images and visual content

Rate Limits

Each key can have configurable rate limits: requests per minute (RPM), daily token limit, and monthly token limit. When a limit is exceeded, the API returns a 429 Too Many Requests response.
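
A client should treat 429 as retryable. A minimal retry sketch that honors a `Retry-After` header, shown here with a fake transport function rather than a live HTTP call:

```python
import time

def with_retries(send, max_attempts=4):
    """Retry a request on 429, honoring Retry-After, else exponential backoff."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        delay = float(headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("rate limit: retries exhausted")

# Fake transport: two 429s, then success.
responses = iter([
    (429, {"Retry-After": "0"}, ""),
    (429, {"Retry-After": "0"}, ""),
    (200, {}, "ok"),
])
status, body = with_retries(lambda: next(responses))
```

In practice `send` would perform the HTTP request and return the status, response headers, and body.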

IP Whitelisting

Restrict API key usage to specific IP addresses or CIDR ranges. Requests from non-whitelisted IPs will be rejected with a 403 Forbidden response.
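
The CIDR matching the whitelist performs can be reproduced with the standard `ipaddress` module, which is also a handy way to sanity-check a whitelist entry before saving it:

```python
import ipaddress

def ip_allowed(client_ip: str, whitelist: list[str]) -> bool:
    """True if client_ip falls inside any whitelisted address or CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(entry, strict=False) for entry in whitelist)

whitelist = ["203.0.113.0/24", "198.51.100.7"]
print(ip_allowed("203.0.113.42", whitelist))  # True: inside the /24 range
print(ip_allowed("192.0.2.1", whitelist))     # False: would get a 403
```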

Key Lifecycle

API keys can be in one of three states: Active, Revoked, or Expired. You can regenerate a key to get a new secret while keeping the same configuration. Revoked keys can be reactivated if needed. Keys can also be set to auto-expire after a specified number of days.


id: models
group: Features
title: Models
summary: List available model IDs and tune sampling parameters for completions.
keywords:
  • models
  • temperature
  • sampling
featured: true
quickLinkDescription: Available IDs and sampling parameters.
search:
  • Includes cURL example for listing models and JSON response sample.
  • Table documents parameters like temperature, messages, stream flag, and session_id.

Models

LLM Portal provides access to AI models for text generation, analysis, and conversation.

Available Models

Query the models endpoint to see all available models. Each model has an ID and provider.

cURL
curl https://chat.waymore.ai/api/chat/models \
  -H "Authorization: Bearer YOUR_API_KEY"
Response
{
  "data": [
    {
      "id": "Waymore-A1-Instruct-1011",
      "owned_by": "waymore"
    }
  ]
}

Model Parameters

When sending a chat completion request, you can configure the following parameters:

Parameter | Type | Description
model | string | Model ID to use for completion
messages | array | Array of message objects with role and content
stream | boolean | Enable SSE streaming (default: false)
temperature | number | Sampling temperature (0-2, default: 0.7)
session_id | string | Optional session to associate the completion with
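
A small payload builder that applies the defaults and range from the table above can catch mistakes before the request leaves your code; this is a sketch, not an official SDK helper:

```python
def build_completion(model, messages, stream=False, temperature=0.7, session_id=None):
    """Assemble a chat completion payload, enforcing the documented temperature range."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    payload = {
        "model": model,
        "messages": messages,
        "stream": stream,
        "temperature": temperature,
    }
    if session_id is not None:
        payload["session_id"] = session_id
    return payload

payload = build_completion(
    "Waymore-A1-Instruct-1011",
    [{"role": "user", "content": "Summarize SSE in one line."}],
    temperature=0.2,
)
```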

id: function-calling
group: Features
title: Function Calling
summary: Define tools the model can invoke, with both OpenAI- and Anthropic-style payloads.
keywords:
  • tools
  • functions
  • tool_choice
search:
  • Explains tool definition differences between OpenAI and Anthropic formats.
  • Covers tool_choice options, parallel tool calls, and multi-turn workflow.

Function Calling

Extend the model's capabilities by defining custom functions (tools) that it can invoke during a conversation. The API supports both OpenAI and Anthropic tool formats natively — the format is auto-detected based on your request structure.

How It Works

You define tools in your API request. When the model determines a tool should be used, it returns the function name and structured arguments. You execute the function on your side, then send the result back in a follow-up request. The model then generates a final response using the tool output.

Supported Formats

The API auto-detects which format you are using and responds in the same style:

Feature | OpenAI Format | Anthropic Format
Tool definition | {type: "function", function: {parameters: ...}} | {name: ..., input_schema: ...}
Tool call response | tool_calls array | tool_use content block
Tool result | role: "tool" message | tool_result content block
Stop reason | finish_reason: "tool_calls" | stop_reason: "tool_use"

Tool Definitions

Each tool has a type of "function" and a function object containing the name, description, and JSON Schema parameters:

OpenAI Tool Format
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City and country, e.g. Athens, Greece"
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}

Controlling Tool Use

Option | Behavior
"auto" | Model decides whether to call a tool (default)
"none" | Model will not call any tools
"required" | Model must call at least one tool

Parallel Tool Calls

The model can call multiple tools in a single response. For example, asking "What's the weather in Athens and London?" may produce two tool calls in the same response. Each has a unique ID — submit results for all of them before making the next completion request.

Multi-Turn Flow

The complete function calling flow involves three steps:

  1. Send a request with tools defined. The model returns tool calls with function names and arguments.
  2. Execute the function(s) on your side using the arguments provided by the model.
  3. Send the results back (as tool role messages in OpenAI format, or tool_result content blocks in Anthropic format). The model generates a natural language response.
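
Steps 2 and 3 can be sketched as a local dispatcher in OpenAI format. `get_weather` and its return value are illustrative; the shape of the `tool_calls` array follows the OpenAI convention the API mirrors:

```python
import json

def get_weather(location: str) -> dict:
    """Illustrative local implementation that a model tool call dispatches to."""
    return {"location": location, "temp_c": 24, "conditions": "sunny"}

LOCAL_TOOLS = {"get_weather": get_weather}

def run_tool_calls(tool_calls):
    """Execute each tool call and build role:'tool' messages for the follow-up request."""
    results = []
    for call in tool_calls:
        fn = LOCAL_TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(fn(**args)),
        })
    return results

# A tool_calls array as it might appear in a response with finish_reason "tool_calls"
calls = [{"id": "call_1", "function": {"name": "get_weather",
                                       "arguments": '{"location": "Athens, Greece"}'}}]
messages = run_tool_calls(calls)
```

Appending these messages to the conversation and sending a second completion request yields the model's final natural-language answer.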

See the API Reference for complete request/response examples, and the Function Calling Guide for a step-by-step tutorial.


id: usage-analytics
group: Account
title: Usage & Analytics
summary: Track token consumption, latency, and trends via dashboards or the usage API.
keywords:
  • metrics
  • charts
  • dashboard
search:
  • Describes dashboard charts plus usage API filters by model and key.
  • Lists metrics such as tokens, cost, latency, and shows per-key usage endpoint.

Usage & Analytics

Monitor your API consumption, track costs, and analyze usage patterns through the usage dashboard or API endpoints.

Usage Dashboard

The web dashboard provides visual charts showing your request volume, token consumption, cost breakdown, and error rates over time. Filter by date range, API key, model, and request status.

Querying Usage via API

Use the usage API to programmatically retrieve your consumption data. Supports filtering by time period, API key, and model.

cURL
# Get usage summary for the last 30 days
curl "https://chat.waymore.ai/api/usage/summary?period=30d" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Filter by model
curl "https://chat.waymore.ai/api/usage/summary?period=7d&model=Waymore-A1-Instruct-1011" \
  -H "Authorization: Bearer YOUR_API_KEY"

Metrics Tracked

Metric | Description
Total Requests | Number of API calls made
Input Tokens | Tokens sent in prompts
Output Tokens | Tokens generated by the model
Cost | Estimated spend for the selected period
Latency | P50/P95 response times

Alerts

Set up usage alerts to receive notifications when you approach or exceed token or cost thresholds. Alerts can be sent via email or Slack.

Per-Key Usage

View usage statistics for individual API keys, including daily breakdowns and request history:

cURL
curl "https://chat.waymore.ai/api/keys/YOUR_KEY_ID/usage?days=30" \
  -H "Authorization: Bearer YOUR_API_KEY"

id: billing
group: Account
title: Billing & Plans
summary: Compare plans, manage payment methods, and retrieve invoices via the API.
keywords:
  • pricing
  • plans
  • invoice
search:
  • Plan comparison table plus Stripe-based payment method guidance.
  • API endpoints for listing invoices and previewing plan changes, plus cancellation policy.

Billing & Plans

LLM Portal offers flexible subscription plans to match your usage needs. Billing is securely handled through Stripe.

Subscription Plans

Choose between Free, Pro, and Enterprise plans. Each plan includes a different allocation of monthly tokens, API keys, and features. Plans can be changed at any time with prorated billing.

Feature | Free | Pro | Enterprise
Monthly Tokens | 1,000 | 50,000 | Custom
API Keys | 1 | 10 | Unlimited
Support | Community | Priority | Dedicated
Billing | - | Monthly/Yearly | Custom
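
The docs do not publish the exact proration formula, but the common daily-proration model gives a feel for what a mid-cycle plan change costs. Both the formula and the prices below are illustrative assumptions, not platform pricing:

```python
def prorated_charge(old_price: float, new_price: float,
                    days_left: int, days_in_cycle: int = 30) -> float:
    """Daily proration sketch: credit unused days of the old plan,
    charge those same days at the new plan's rate."""
    fraction = days_left / days_in_cycle
    return round((new_price - old_price) * fraction, 2)

# Hypothetical prices: upgrading mid-cycle with 15 of 30 days remaining.
print(prorated_charge(old_price=0.0, new_price=20.0, days_left=15))  # -> 10.0
```

The billing preview endpoint shown below returns the authoritative amount for your account.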

Payment Methods

Add credit or debit cards as payment methods through the billing dashboard. You can have multiple cards on file and set a default for recurring charges. All payment processing is handled securely by Stripe — card details never touch our servers.

Invoices

View and download invoices for all past payments. Invoices include a breakdown of token usage, plan charges, and any prorated adjustments from plan changes.

cURL
# List recent invoices
curl "https://chat.waymore.ai/api/billing/invoices" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Preview a plan change
curl "https://chat.waymore.ai/api/billing/preview?plan=pro" \
  -H "Authorization: Bearer YOUR_API_KEY"

Cancellation

You can cancel your subscription at any time. Cancellation takes effect at the end of your current billing period — you retain access to Pro features until then. If you exceed the included tokens for your plan, overage is billed at the end of the billing cycle. Configure alerts to be notified before you reach your token allotment.


id: content-library
group: Content
title: Content Library
summary: Archive assets from conversations—images, files, code, and notes—with tags and search.
keywords:
  • files
  • canvas
  • library
search:
  • Outlines content types (images, video, documents, code, notes) and tagging.
  • Shows search and stats API calls for filtering favorites and checking storage usage.

Content Library

Save and organize content from your AI conversations into a personal library. Content items include images, videos, files, code snippets, conversations, and notes.

Content Types

Type | Description
IMAGE | Generated or uploaded images
VIDEO | Video files and recordings
FILE | Documents, PDFs, spreadsheets
CODE | Code snippets with syntax highlighting
CONVERSATION | Saved chat conversations
NOTE | Text notes and annotations

Organizing Content

Use tags, favorites, and collections to keep your library organized. Content items support full-text search across titles, descriptions, and tags. Filter by type, source, favorite status, and archive status.

cURL
# Search for favorite images
curl "https://chat.waymore.ai/api/stuff?type=IMAGE&favorite=true&search=landscape" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get content statistics
curl "https://chat.waymore.ai/api/stuff/stats" \
  -H "Authorization: Bearer YOUR_API_KEY"

Storage

Each account has a storage limit based on your subscription plan. Track your storage usage through the content statistics endpoint. File uploads are limited to 50MB per file.


id: collections
group: Content
title: Collections
summary: Group related content items with custom names, colors, and icons.
keywords:
  • organization
  • tags
  • folders
search:
  • Covers creation payload for collections and managing membership via REST calls.
  • Mentions sharing options with role-based permissions.

Collections

Group related content items into collections for better organization. Each collection can have a custom name, description, color, and icon.

Creating Collections

Create collections from the content library page or via the API. A default collection is created automatically for each new account.

POST /api/collections
{
  "name": "Research Papers",
  "description": "Academic papers and references",
  "color": "#3B82F6",
  "icon": "book"
}

Managing Items

Add or remove content items from collections. An item can belong to multiple collections. Default collections cannot be deleted.

cURL
# Add an item to a collection
curl -X POST "https://chat.waymore.ai/api/collections/COLLECTION_ID/items" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"itemId": "ITEM_ID"}'

# Remove an item from a collection
curl -X DELETE "https://chat.waymore.ai/api/collections/COLLECTION_ID/items?itemId=ITEM_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

Sharing

Collections can be shared with teammates on Enterprise plans. Shared collections support role-based permissions (viewer, editor, owner) and activity history.


id: canvas
group: Content
title: Canvas Documents
summary: Pair each chat session with a collaborative Canvas document that supports version history.
keywords:
  • canvas
  • documents
  • editor
search:
  • Explains real-time collaboration, presence, and version snapshot limits.
  • Provides API requests for fetching, updating, and snapshotting canvas documents.

Canvas Documents

Canvas is a collaborative document editor linked to chat sessions. Use it to draft, edit, and iterate on content alongside your AI conversations.

How Canvas Works

Each chat session can have an associated canvas document. The canvas stores content alongside the conversation, allowing you to work on a document while discussing it with the AI. Canvas documents support version history with up to 50 snapshots.

Live Collaboration

Multiple collaborators can edit the same canvas in real time. Presence indicators show who is currently editing, along with their cursor position. Changes are synced instantly through our WebSocket infrastructure.

Version History

Canvas automatically tracks changes through version snapshots. You can view, compare, and restore previous versions. When the maximum of 50 versions is reached, the oldest versions are automatically removed.
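
The "keep the newest 50" behavior is a bounded history, which in Python is naturally modeled with a fixed-size deque. This is a local sketch of the pruning semantics, not the server implementation:

```python
from collections import deque

MAX_VERSIONS = 50

class VersionHistory:
    """Keep at most MAX_VERSIONS snapshots, discarding the oldest first."""
    def __init__(self):
        self._snapshots = deque(maxlen=MAX_VERSIONS)

    def snapshot(self, content: str):
        self._snapshots.append(content)

    def versions(self):
        return list(self._snapshots)

history = VersionHistory()
for i in range(60):
    history.snapshot(f"draft v{i}")
# Only the 50 most recent versions survive; v0-v9 were pruned automatically.
```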

cURL
# Get canvas document for a session
curl "https://chat.waymore.ai/api/canvas/SESSION_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Create or update canvas document
curl -X PUT "https://chat.waymore.ai/api/canvas/SESSION_ID" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"title": "Draft Report", "content": "# Report\n\nContent here..."}'

# Create a version snapshot
curl -X POST "https://chat.waymore.ai/api/canvas/SESSION_ID/versions" \
  -H "Authorization: Bearer YOUR_API_KEY"

id: files
group: Content
title: Files & Uploads
summary: Supported file types, upload limits, and multipart endpoints for attaching assets.
keywords:
  • uploads
  • multipart
  • storage
search:
  • Lists supported formats across images, documents, data, code, and video.
  • Shows multipart cURL examples for uploading to chat and the content library.

Files & Uploads

Upload files for AI processing or attach them to chat messages. The platform supports a wide range of file types.

Supported File Types

Category | Formats
Images | PNG, JPG, JPEG, GIF, WebP, SVG
Documents | PDF, DOC, DOCX, TXT, RTF
Data | CSV, JSON, XML, XLSX
Code | JS, TS, PY, JAVA, GO, RS, and more
Video | MP4, WebM, MOV

Upload Limits

Maximum file size is 50MB per file. Chat attachments are limited to 5 files per message. Files are processed through the LLM backend for AI analysis.
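
Validating these limits client-side avoids a failed round trip. A minimal pre-upload check based on the documented limits (50MB per file, 5 files per message):

```python
MAX_FILE_BYTES = 50 * 1024 * 1024  # 50MB per file
MAX_FILES_PER_MESSAGE = 5

def validate_attachments(sizes: list[int]) -> None:
    """Raise before uploading if the documented attachment limits would be exceeded."""
    if len(sizes) > MAX_FILES_PER_MESSAGE:
        raise ValueError(f"at most {MAX_FILES_PER_MESSAGE} files per message")
    for size in sizes:
        if size > MAX_FILE_BYTES:
            raise ValueError("file exceeds the 50MB limit")

validate_attachments([10 * 1024 * 1024, 3 * 1024 * 1024])  # two small files: OK
```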

cURL
# Upload a file
curl -X POST "https://chat.waymore.ai/api/files/upload" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@document.pdf"

# Upload to content library
curl -X POST "https://chat.waymore.ai/api/stuff/upload" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@image.png"

id: security
group: Security
title: Security
summary: Transport encryption, strict headers, rate limiting, and account-hardening defaults.
keywords:
  • tls
  • hsts
  • headers
featured: true
quickLinkDescription: Transport encryption and headers overview.
search:
  • Details TLS/HSTS posture, custom headers, and session hardening guidance.
  • Covers account security best practices and rate limiting rules.

Security

LLM Portal is built with security as a priority. Here is an overview of the security measures in place.

Transport Security

All traffic is encrypted with TLS (HTTPS). HTTP Strict Transport Security (HSTS) is enforced with a max-age of 2 years and includeSubDomains. API keys and session tokens are only transmitted over encrypted connections.

Security Headers

The platform sets comprehensive security headers on all responses:

Header | Value
X-Content-Type-Options | nosniff
X-Frame-Options | DENY
X-XSS-Protection | 1; mode=block
Referrer-Policy | strict-origin-when-cross-origin
Permissions-Policy | camera=(self), microphone=(self)

Account Security

Protect your account with strong passwords (minimum 12 characters), two-factor authentication, and API key IP whitelisting. Session tokens use secure, HTTP-only cookies that are not accessible to JavaScript.

Rate Limiting

Authentication endpoints are rate-limited to prevent brute-force attacks. API keys support configurable per-minute, daily, and monthly rate limits. Excessive requests return a 429 status code with a Retry-After header.

Updated February 2026