Waymore Docs
Everything you need to integrate and use the LLM Portal platform.
Reference documentation, live examples, and platform policies—all in one place. Authenticate, select a model, and ship in minutes.
Base URL
https://chat.waymore.ai/api
REST + SSE endpoints
Authentication
Bearer & OAuth SSO
API keys, Google, GitHub
Uptime
99.9% SLA
Monitored 24/7
Compliance
SOC 2 Type II
Data residency EU & US
Status
All systems normal
Last incident 21 days ago
SDKs
Node.js · Python · Go
REST + SSE compatible
Requests
40k / min
Burst limit per org
Support
Enterprise SLA
24/7 pager rotation
---
id: getting-started
group: Setup
title: Getting Started
summary: Get up and running with a hosted chat or the OpenAI-compatible API in a few minutes.
keywords:
---
Get up and running with LLM Portal in a few simple steps. Use the hosted chat or integrate via REST.
Prerequisites
Create an account, verify email, and issue your first API key.
Recommended
Install the CLI or SDK to manage keys and sessions locally.
Step 1
Sign up at https://chat.waymore.ai/register or use Google/GitHub SSO, then verify your email.
Step 2
After signing in, you land in the chat UI with streaming answers, conversation history, and ratings out of the box.
Step 3
Visit the API Keys page, issue a scoped key, and copy it immediately — it is only shown once.
Step 4
Use the OpenAI-compatible endpoint and the language tabs below to test.
```shell
curl https://chat.waymore.ai/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Waymore-A1-Instruct-1011",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
```
```json
{
  "id": "chatcmpl-abc123",
  "model": "Waymore-A1-Instruct-1011",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 18,
    "total_tokens": 30
  }
}
```
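The same Step 4 request can be assembled from Python's standard library. A minimal sketch that builds the request shown above without sending it (uncomment the last lines to make the actual call with a real key):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # issued once on the API Keys page

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Assemble the same POST as the curl example above."""
    body = {
        "model": "Waymore-A1-Instruct-1011",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://chat.waymore.ai/api/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Hello, how are you?")
# with urllib.request.urlopen(req) as resp:  # network call; needs a real key
#     print(json.load(resp)["choices"][0]["message"]["content"])
```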
---
id: migration
group: Setup
title: Migration
summary: Drop-in compatible with OpenAI and Anthropic clients — swap only the base URL, model ID, and auth header.
keywords:
---
LLM Portal provides an OpenAI-compatible API. If you are migrating from OpenAI or Anthropic Claude, the transition requires minimal changes.
OpenAI clients
Swap base URL, API key, and model ID. No schema changes.
Anthropic clients
Native Claude payloads are supported, including tool_use blocks.
The API is fully compatible with the OpenAI format. You only need to change three things:
| Setting | OpenAI | LLM Portal |
|---|---|---|
| Base URL | https://api.openai.com/v1 | https://chat.waymore.ai/api |
| API Key | sk-... | Generate from dashboard |
| Model | gpt-4o | Waymore-A1-Instruct-1011 |
Everything else — request/response format, streaming, function calling, message roles — works identically. You can use the OpenAI Python or Node.js SDK by simply overriding the base_url.
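With the official OpenAI Python SDK, the override amounts to two constructor arguments plus a different model ID. A sketch of just the settings that change (the SDK's `base_url` and `api_key` parameters; everything else in your code stays as-is):

```python
def portal_client_kwargs(api_key: str) -> dict:
    """The only client settings that change when pointing an OpenAI SDK
    client at LLM Portal, e.g. OpenAI(**portal_client_kwargs(key))."""
    return {
        "base_url": "https://chat.waymore.ai/api",  # was https://api.openai.com/v1
        "api_key": api_key,                         # dashboard-issued key, not sk-...
    }

PORTAL_MODEL = "Waymore-A1-Instruct-1011"  # was e.g. gpt-4o
```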
LLM Portal natively supports the Anthropic Claude API format. The API auto-detects the format and responds accordingly. Your existing Claude request bodies — including the top-level system field, input_schema tool definitions, tool_use/tool_result content blocks, and Anthropic-style responses — work without any conversion. You only need to change:
| Setting | Anthropic | LLM Portal |
|---|---|---|
| Base URL | https://api.anthropic.com | https://chat.waymore.ai/api |
| Auth | x-api-key: sk-ant-... | Authorization: Bearer ... |
| Endpoint | /v1/messages | /api/chat/completions |
| Model | claude-sonnet-4-5-* | Waymore-A1-Instruct-1011 |
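The same swap for an Anthropic-style client can be sketched as a helper that rewrites only the four settings in the table while leaving the Claude request body untouched (the function name is illustrative):

```python
def to_portal(anthropic_body: dict, portal_key: str):
    """Return (url, headers, body) for LLM Portal from an unchanged Claude payload."""
    url = "https://chat.waymore.ai/api/chat/completions"  # was api.anthropic.com/v1/messages
    headers = {
        "Authorization": f"Bearer {portal_key}",          # was x-api-key: sk-ant-...
        "Content-Type": "application/json",
    }
    # Keep system, tools, content blocks, etc.; change only the model ID.
    body = dict(anthropic_body, model="Waymore-A1-Instruct-1011")
    return url, headers, body

url, headers, body = to_portal(
    {"model": "claude-sonnet-4-5", "system": "Be brief.",
     "messages": [{"role": "user", "content": "Hi"}], "max_tokens": 256},
    "YOUR_API_KEY",
)
```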
See the Migrate from OpenAI and Migrate from Claude step-by-step guides for detailed examples and code samples.
---
id: authentication
group: Setup
title: Authentication
summary: API keys, cookie-based sessions, OAuth, and TOTP-based two-factor authentication.
keywords:
---
LLM Portal supports multiple authentication methods depending on your use case.
API
Use Bearer tokens scoped per key, with optional IP rules.
Web access
Sessions rely on secure cookies, OAuth SSO, and optional TOTP.
For programmatic access, include your API key in the Authorization header as a Bearer token. API keys can be scoped with specific permissions and rate limits.
```
Authorization: Bearer YOUR_API_KEY
```
Browser-based access uses secure HTTP-only cookies managed by NextAuth. Sessions are created automatically when you sign in through the web interface. Session tokens are JWT-based and refresh automatically.
Sign in with your existing Google or GitHub account. OAuth accounts are automatically linked if the email address matches an existing account.
Enhance your account security by enabling TOTP-based two-factor authentication. Once enabled, you will need to provide a 6-digit code from your authenticator app (e.g., Google Authenticator, Authy) each time you sign in. Backup codes are provided during setup for account recovery.
```shell
# Step 1: Generate TOTP secret and QR code
curl -X POST https://chat.waymore.ai/api/auth/2fa/setup \
  -H "Authorization: Bearer YOUR_API_KEY"

# Step 2: Scan the QR code with your authenticator app, then verify
curl -X POST https://chat.waymore.ai/api/auth/2fa/verify \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"token": "123456"}'
```
Passwords must be at least 12 characters and include uppercase letters, lowercase letters, numbers, and special characters. Password changes require your current password for verification.
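The stated policy expressed as a client-side validation function; a sketch of the rules, not the server's exact implementation:

```python
import re

def meets_password_policy(pw: str) -> bool:
    """Minimum 12 characters with uppercase, lowercase, digit, and special char."""
    checks = [r"[A-Z]", r"[a-z]", r"\d", r"[^A-Za-z0-9]"]
    return len(pw) >= 12 and all(re.search(p, pw) for p in checks)
```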
---
id: chat-interface
group: Features
title: Chat Interface
summary: "Explore the hosted chat features: streaming responses, code highlighting, attachments, ratings, and voice input."
keywords:
---
The web-based chat interface provides a rich conversational experience with real-time streaming, code highlighting, file attachments, and more.
Best for
Teams that need live previews, streaming tokens, and quick file drops.
Highlights
Auto-titling, ratings, voice input, and syntax-highlighted responses.
Messages are organized into sessions (conversations). Each session maintains its own context and message history. You can create new sessions, rename them, and switch between them from the sidebar. Sessions are automatically titled based on the first message.
Responses stream in real-time using Server-Sent Events (SSE). You see each token as it is generated, providing immediate feedback. Streaming can be used both in the web interface and via the API by setting stream: true.
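With stream: true, each SSE event carries an incremental chunk; a client-side sketch of accumulating the assistant text, assuming the OpenAI-style streaming delta shape (which the compatible API implies):

```python
import json

def accumulate_sse(lines) -> str:
    """Join assistant text from 'data: {...}' SSE lines of a streamed completion."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comment lines
        payload = line[5:].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```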
Code blocks in AI responses are automatically syntax-highlighted with a VS Code-inspired theme. Supported languages include JavaScript, TypeScript, Python, Java, Go, Rust, SQL, HTML, CSS, and many more. Each code block includes a copy button and line numbers.
Attach files to your messages for the AI to analyze. Supported file types include images (PNG, JPG, GIF, WebP), PDFs, documents, spreadsheets, and code files. Each file can be up to 50MB, with a maximum of 5 files per message.
Rate AI responses with thumbs up or thumbs down to help improve response quality. Ratings are tracked and can be used for analytics.
Use the microphone button to dictate messages using speech-to-text. Voice input uses a WebSocket connection for real-time transcription.
---
id: api-keys
group: Features
title: API Keys
summary: Manage scoped API credentials with permissions, rate limits, and IP whitelists.
keywords:
---
API keys provide programmatic access to the LLM Portal API. Each key can be configured with specific permissions, rate limits, and IP restrictions.
Rotation
Instantly revoke or regenerate keys without redeploying clients.
Controls
Fine-grained scopes, RPM caps, IP whitelists, and expirations.
Create API keys from the web dashboard or via the API. The number of keys you can create depends on your subscription plan (1 for Free, 10 for Pro, unlimited for Enterprise). The full key value is shown only once at creation — store it securely.
```json
{
  "name": "Production Key",
  "description": "Backend integration key",
  "permissions": ["chat", "images", "vision"],
  "ipWhitelist": ["203.0.113.0/24"],
  "dailyLimit": 10000,
  "monthlyLimit": 250000,
  "rpmLimit": 60,
  "expiresInDays": 90
}
```
API keys support granular permissions to restrict what operations the key can perform:
| Permission | Description |
|---|---|
| chat | Send chat completion requests |
| images | Generate and process images |
| vision | Analyze images and visual content |
Each key can have configurable rate limits: requests per minute (RPM), daily token limit, and monthly token limit. When a limit is exceeded, the API returns a 429 Too Many Requests response.
Restrict API key usage to specific IP addresses or CIDR ranges. Requests from non-whitelisted IPs will be rejected with a 403 Forbidden response.
API keys can be in one of three states: Active, Revoked, or Expired. You can regenerate a key to get a new secret while keeping the same configuration. Revoked keys can be reactivated if needed. Keys can also be set to auto-expire after a specified number of days.
---
id: models
group: Features
title: Models
summary: List available model IDs and tune sampling parameters for completions.
keywords:
---
LLM Portal provides access to AI models for text generation, analysis, and conversation.
Query the models endpoint to see all available models. Each model has an ID and provider.
```shell
curl https://chat.waymore.ai/api/chat/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```
```json
{
  "data": [
    {
      "id": "Waymore-A1-Instruct-1011",
      "owned_by": "waymore"
    }
  ]
}
```
When sending a chat completion request, you can configure the following parameters:
| Parameter | Type | Description |
|---|---|---|
| model | string | Model ID to use for completion |
| messages | array | Array of message objects with role and content |
| stream | boolean | Enable SSE streaming (default: false) |
| temperature | number | Sampling temperature (0-2, default: 0.7) |
| session_id | string | Optional session to associate the completion with |
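The parameters above assembled into a request body; a sketch whose defaults mirror the table, omitting session_id when unset:

```python
import json

def chat_payload(messages, model="Waymore-A1-Instruct-1011",
                 stream=False, temperature=0.7, session_id=None) -> str:
    """Build a /api/chat/completions body from the documented parameters."""
    body = {"model": model, "messages": messages,
            "stream": stream, "temperature": temperature}
    if session_id is not None:
        body["session_id"] = session_id  # optional session association
    return json.dumps(body)
```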
---
id: function-calling
group: Features
title: Function Calling
summary: Define tools the model can invoke, with both OpenAI- and Anthropic-style payloads.
keywords:
---
Extend the model's capabilities by defining custom functions (tools) that it can invoke during a conversation. The API supports both OpenAI and Anthropic tool formats natively — the format is auto-detected based on your request structure.
You define tools in your API request. When the model determines a tool should be used, it returns the function name and structured arguments. You execute the function on your side, then send the result back in a follow-up request. The model then generates a final response using the tool output.
The API auto-detects which format you are using and responds in the same style:
| Feature | OpenAI Format | Anthropic Format |
|---|---|---|
| Tool definition | {type: "function", function: {parameters: ...}} | {name: ..., input_schema: ...} |
| Tool call response | tool_calls array | tool_use content block |
| Tool result | role: "tool" message | tool_result content block |
| Stop reason | finish_reason: "tool_calls" | stop_reason: "tool_use" |
Each tool has a type of "function" and a function object containing the name, description, and JSON Schema parameters:
```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City and country, e.g. Athens, Greece"
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```
The tool_choice parameter controls whether the model may, must, or must not call a tool:

| Option | Behavior |
|---|---|
| "auto" | Model decides whether to call a tool (default) |
| "none" | Model will not call any tools |
| "required" | Model must call at least one tool |
The model can call multiple tools in a single response. For example, asking "What's the weather in Athens and London?" may produce two tool calls in the same response. Each has a unique ID — submit results for all of them before making the next completion request.
The complete function calling flow involves three steps:
1. Send a chat completion request with your tools defined. The model returns tool calls with function names and arguments.
2. Execute the requested functions on your side with the returned arguments.
3. Send the results back (as tool role messages in OpenAI format, or tool_result content blocks in Anthropic format). The model generates a natural language response.

See the API Reference for complete request/response examples, and the Function Calling Guide for a step-by-step tutorial.
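The middle of the flow (execute tools, build result messages) sketched in Python for the OpenAI format, with a hypothetical local get_weather standing in for your real tool implementations:

```python
import json

# Hypothetical local tool registry; get_weather is an illustrative stand-in.
TOOLS = {"get_weather": lambda location: {"location": location, "temp_c": 24}}

def run_tool_calls(tool_calls):
    """Execute each requested tool and build the role:"tool" result
    messages to send back in the follow-up completion request."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])  # model-provided JSON args
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],  # ties the result to a specific call
            "content": json.dumps(fn(**args)),
        })
    return results
```

Parallel tool calls fall out naturally: the loop produces one result message per call, so all of them can go back in a single follow-up request.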
---
id: usage-analytics
group: Account
title: Usage & Analytics
summary: Track token consumption, latency, and trends via dashboards or the usage API.
keywords:
---
Monitor your API consumption, track costs, and analyze usage patterns through the usage dashboard or API endpoints.
The web dashboard provides visual charts showing your request volume, token consumption, cost breakdown, and error rates over time. Filter by date range, API key, model, and request status.
Use the usage API to programmatically retrieve your consumption data. Supports filtering by time period, API key, and model.
```shell
# Get usage summary for the last 30 days
curl "https://chat.waymore.ai/api/usage/summary?period=30d" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Filter by model
curl "https://chat.waymore.ai/api/usage/summary?period=7d&model=Waymore-A1-Instruct-1011" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
| Metric | Description |
|---|---|
| Total Requests | Number of API calls made |
| Input Tokens | Tokens sent in prompts |
| Output Tokens | Tokens generated by the model |
| Cost | Estimated spend for the selected period |
| Latency | P50/P95 response times |
Set up usage alerts to receive notifications when you approach or exceed token or cost thresholds. Alerts can be sent via email or Slack.
View usage statistics for individual API keys, including daily breakdowns and request history:
```shell
curl "https://chat.waymore.ai/api/keys/YOUR_KEY_ID/usage?days=30" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
---
id: billing
group: Account
title: Billing & Plans
summary: Compare plans, manage payment methods, and retrieve invoices via the API.
keywords:
---
LLM Portal offers flexible subscription plans to match your usage needs. Billing is securely handled through Stripe.
Choose between Free, Pro, and Enterprise plans. Each plan includes a different allocation of monthly tokens, API keys, and features. Plans can be changed at any time with prorated billing.
| Feature | Free | Pro | Enterprise |
|---|---|---|---|
| Monthly Tokens | 1,000 | 50,000 | Custom |
| API Keys | 1 | 10 | Unlimited |
| Support | Community | Priority | Dedicated |
| Billing | - | Monthly/Yearly | Custom |
Add credit or debit cards as payment methods through the billing dashboard. You can have multiple cards on file and set a default for recurring charges. All payment processing is handled securely by Stripe — card details never touch our servers.
View and download invoices for all past payments. Invoices include a breakdown of token usage, plan charges, and any prorated adjustments from plan changes.
```shell
# List recent invoices
curl "https://chat.waymore.ai/api/billing/invoices" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Preview a plan change
curl "https://chat.waymore.ai/api/billing/preview?plan=pro" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
You can cancel your subscription at any time. Cancellation takes effect at the end of your current billing period — you retain access to Pro features until then. If you exceed the included tokens for your plan, overage is billed at the end of the billing cycle. Configure alerts to be notified before you reach your token allotment.
---
id: content-library
group: Content
title: Content Library
summary: Archive assets from conversations—images, files, code, and notes—with tags and search.
keywords:
---
Save and organize content from your AI conversations into a personal library. Content items include images, videos, files, code snippets, conversations, and notes.
| Type | Description |
|---|---|
| IMAGE | Generated or uploaded images |
| VIDEO | Video files and recordings |
| FILE | Documents, PDFs, spreadsheets |
| CODE | Code snippets with syntax highlighting |
| CONVERSATION | Saved chat conversations |
| NOTE | Text notes and annotations |
Use tags, favorites, and collections to keep your library organized. Content items support full-text search across titles, descriptions, and tags. Filter by type, source, favorite status, and archive status.
```shell
# Search for favorite images
curl "https://chat.waymore.ai/api/stuff?type=IMAGE&favorite=true&search=landscape" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get content statistics
curl "https://chat.waymore.ai/api/stuff/stats" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Each account has a storage limit based on your subscription plan. Track your storage usage through the content statistics endpoint. File uploads are limited to 50MB per file.
---
id: collections
group: Content
title: Collections
summary: Group related content items with custom names, colors, and icons.
keywords:
---
Group related content items into collections for better organization. Each collection can have a custom name, description, color, and icon.
Create collections from the content library page or via the API. A default collection is created automatically for each new account.
```json
{
  "name": "Research Papers",
  "description": "Academic papers and references",
  "color": "#3B82F6",
  "icon": "book"
}
```
Add or remove content items from collections. An item can belong to multiple collections. Default collections cannot be deleted.
```shell
# Add an item to a collection
curl -X POST "https://chat.waymore.ai/api/collections/COLLECTION_ID/items" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"itemId": "ITEM_ID"}'

# Remove an item from a collection
curl -X DELETE "https://chat.waymore.ai/api/collections/COLLECTION_ID/items?itemId=ITEM_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Collections can be shared with teammates on Enterprise plans. Shared collections support role-based permissions (viewer, editor, owner) and activity history.
---
id: canvas
group: Content
title: Canvas Documents
summary: Pair each chat session with a collaborative Canvas document that supports version history.
keywords:
---
Canvas is a collaborative document editor linked to chat sessions. Use it to draft, edit, and iterate on content alongside your AI conversations.
Each chat session can have an associated canvas document. The canvas stores content alongside the conversation, allowing you to work on a document while discussing it with the AI. Canvas documents support version history with up to 50 snapshots.
Multiple collaborators can edit the same canvas in real time. Presence indicators show who is currently editing, along with their cursor position. Changes are synced instantly through our WebSocket infrastructure.
Canvas automatically tracks changes through version snapshots. You can view, compare, and restore previous versions. When the maximum of 50 versions is reached, the oldest versions are automatically removed.
```shell
# Get canvas document for a session
curl "https://chat.waymore.ai/api/canvas/SESSION_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Create or update canvas document
curl -X PUT "https://chat.waymore.ai/api/canvas/SESSION_ID" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"title": "Draft Report", "content": "# Report\n\nContent here..."}'

# Create a version snapshot
curl -X POST "https://chat.waymore.ai/api/canvas/SESSION_ID/versions" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
---
id: files
group: Content
title: Files & Uploads
summary: Supported file types, upload limits, and multipart endpoints for attaching assets.
keywords:
---
Upload files for AI processing or attach them to chat messages. The platform supports a wide range of file types.
| Category | Formats |
|---|---|
| Images | PNG, JPG, JPEG, GIF, WebP, SVG |
| Documents | PDF, DOC, DOCX, TXT, RTF |
| Data | CSV, JSON, XML, XLSX |
| Code | JS, TS, PY, JAVA, GO, RS, and more |
| Video | MP4, WebM, MOV |
Maximum file size is 50MB per file. Chat attachments are limited to 5 files per message. Files are processed through the LLM backend for AI analysis.
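The stated limits as a client-side pre-flight check; a sketch (the error strings are illustrative, and the server enforces its own limits regardless):

```python
MAX_FILE_BYTES = 50 * 1024 * 1024  # 50MB per file
MAX_FILES_PER_MESSAGE = 5

def validate_attachments(files):
    """files: list of (name, size_bytes) tuples. Returns a list of error strings."""
    errors = []
    if len(files) > MAX_FILES_PER_MESSAGE:
        errors.append(f"too many files: {len(files)} > {MAX_FILES_PER_MESSAGE}")
    for name, size in files:
        if size > MAX_FILE_BYTES:
            errors.append(f"{name} exceeds 50MB")
    return errors
```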
```shell
# Upload a file
curl -X POST "https://chat.waymore.ai/api/files/upload" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@document.pdf"

# Upload to content library
curl -X POST "https://chat.waymore.ai/api/stuff/upload" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@image.png"
```
---
id: security
group: Security
title: Security
summary: Transport encryption, strict headers, rate limiting, and account-hardening defaults.
keywords:
---
LLM Portal is built with security as a priority. Here is an overview of the security measures in place.
All traffic is encrypted with TLS (HTTPS). HTTP Strict Transport Security (HSTS) is enforced with a max-age of 2 years and includeSubDomains. API keys and session tokens are only transmitted over encrypted connections.
The platform sets comprehensive security headers on all responses:
| Header | Value |
|---|---|
| X-Content-Type-Options | nosniff |
| X-Frame-Options | DENY |
| X-XSS-Protection | 1; mode=block |
| Referrer-Policy | strict-origin-when-cross-origin |
| Permissions-Policy | camera=(self), microphone=(self) |
Protect your account with strong passwords (minimum 12 characters), two-factor authentication, and API key IP whitelisting. Session tokens use secure, HTTP-only cookies that are not accessible to JavaScript.
Authentication endpoints are rate-limited to prevent brute-force attacks. API keys support configurable per-minute, daily, and monthly rate limits. Excessive requests return a 429 status code with a Retry-After header.