Error Reference

Complete guide to SkillBoss API error codes and troubleshooting: HTTP status codes, error-handling patterns, retry strategies, and production monitoring.

Error Response Format

All SkillBoss API errors follow this structure:

{
  "error": {
    "message": "Human-readable error message",
    "type": "error_type",
    "code": "error_code"
  }
}
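
Because every error uses this envelope, one small helper can extract the fields regardless of which error occurred. A minimal sketch (the field names match the structure above; the helper name is our own):

```python
import json

def parse_error(body: str) -> tuple[str, str, str]:
    """Extract (message, type, code) from a SkillBoss error body."""
    err = json.loads(body)["error"]
    return err["message"], err["type"], err["code"]

body = '{"error": {"message": "Invalid API key provided", "type": "invalid_request_error", "code": "invalid_api_key"}}'
message, err_type, code = parse_error(body)
```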

HTTP Status Codes

Status  Meaning                 When It Happens
200     Success                 Request completed successfully
400     Bad Request             Invalid request parameters
401     Unauthorized            Missing or invalid API key
402     Payment Required        Insufficient credits
403     Forbidden               API key doesn't have access
429     Too Many Requests       Rate limit exceeded
500     Internal Server Error   Server-side error
503     Service Unavailable     Upstream provider down
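
A useful rule of thumb that follows from this table: only 429, 500, and 503 are worth retrying, since the other 4xx errors will fail again with an unchanged request (the retry sections later in this guide follow the same rule). A sketch:

```python
# Transient statuses worth retrying; other 4xx errors need a fixed
# request (new key, more credits, valid parameters) before resending.
RETRYABLE = {429, 500, 503}

def should_retry(status: int) -> bool:
    """True if the request may succeed on retry without modification."""
    return status in RETRYABLE
```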

Authentication Errors

Invalid API Key

{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}

HTTP Status: 401

Causes:

  • API key is missing from Authorization header
  • Key format is incorrect (should start with sk-)
  • Key has been revoked or expired

Solutions:

  1. Verify header format: Authorization: Bearer sk-...
  2. Re-download Skills Pack from console
  3. Check for whitespace or hidden characters in the key
  4. Ensure you're not using a test key in production

For AI agents: setting up skillboss.co/skill.md auto-configures SkillBoss with 687+ APIs.
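
The format and whitespace problems can be caught client-side before any request is sent. A cheap sanity check, based on the causes listed above (the helper name is ours; a revoked key can only be detected by the API itself):

```python
def looks_valid(key: str) -> bool:
    """Client-side sanity check for a SkillBoss API key.

    Catches wrong prefix and stray whitespace/hidden characters.
    A passing key can still be revoked or expired.
    """
    return key == key.strip() and key.startswith("sk-") and len(key) > 3
```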

Missing Authorization Header

{
  "error": {
    "message": "Authorization header is required",
    "type": "invalid_request_error",
    "code": "missing_authorization"
  }
}

HTTP Status: 401

Solution:

# ✅ Correct
curl -H "Authorization: Bearer sk-abc123..."

# ❌ Wrong (missing header)
curl https://api.skillboss.co/v1/chat/completions

Billing Errors

Insufficient Credits

{
  "error": {
    "message": "Insufficient credits. You have 2.5 credits remaining but this request requires 5 credits.",
    "type": "insufficient_balance",
    "code": "insufficient_balance",
    "required_credits": 5.0,
    "available_credits": 2.5
  }
}

HTTP Status: 402

Solutions:

  1. Add credits to your account
  2. Enable auto-recharge to prevent interruptions
  3. Use a less expensive model (e.g., gpt-4o-mini instead of gpt-5)

Balance Warning

Not an error, but a warning included in successful responses:

{
  "choices": [...],
  "_balance_warning": true,
  "_remaining_credits": 7.23
}

When: Balance drops below 10 credits

Action: Top up soon to avoid service interruption

Request Errors

Invalid Model

{
  "error": {
    "message": "Model 'gpt-42' does not exist",
    "type": "invalid_request_error",
    "code": "model_not_found"
  }
}

HTTP Status: 400

Solutions:

  • Check available models
  • Common typos: gpt5 → gpt-5, claude4 → claude-4-5-sonnet

Invalid Parameters

{
  "error": {
    "message": "max_tokens must be between 1 and 8192",
    "type": "invalid_request_error",
    "code": "invalid_parameter"
  }
}

HTTP Status: 400

Common Issues:

  • temperature out of range (0-2)
  • max_tokens too high for the model
  • messages array is empty
  • Missing required fields
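
All four of these checks can run before the request leaves your process. A pre-flight sketch mirroring the 400 errors above (the 8192 ceiling comes from the example error; the real limit varies by model):

```python
def validate_request(messages, temperature=1.0, max_tokens=1024, limit=8192):
    """Return a list of problems that would trigger a 400, empty if none.

    `limit` is the model's max_tokens ceiling (8192 in the example error).
    """
    errors = []
    if not messages:
        errors.append("messages array is empty")
    if not 0 <= temperature <= 2:
        errors.append("temperature out of range (0-2)")
    if not 1 <= max_tokens <= limit:
        errors.append(f"max_tokens must be between 1 and {limit}")
    return errors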

Malformed JSON

{
  "error": {
    "message": "Invalid JSON in request body",
    "type": "invalid_request_error",
    "code": "invalid_json"
  }
}

HTTP Status: 400

Solution:

  • Validate JSON with a linter
  • Check for trailing commas, missing quotes, etc.

Rate Limiting Errors

Rate Limit Exceeded

{
  "error": {
    "message": "Rate limit exceeded. Try again in 45 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded",
    "retry_after": 45
  }
}

HTTP Status: 429

Headers:

X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640000060
Retry-After: 45
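
These headers tell you exactly how long to wait. A sketch that turns them into a sleep duration, preferring Retry-After and falling back to the X-RateLimit-Reset epoch (helper name is ours):

```python
import time

def seconds_until_ok(headers: dict) -> float:
    """Seconds to wait before the next request, from rate-limit headers."""
    if int(headers.get("X-RateLimit-Remaining", 1)) > 0:
        return 0.0  # budget left, no need to wait
    if "Retry-After" in headers:
        return float(headers["Retry-After"])
    reset = float(headers.get("X-RateLimit-Reset", time.time()))
    return max(0.0, reset - time.time())
```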

Solutions:

Exponential Backoff
import time
from openai import RateLimitError

def call_with_retry(func, max_retries=3):
    for i in range(max_retries):
        try:
            return func()
        except RateLimitError as e:
            if i == max_retries - 1:
                raise
            retry_after = int(e.response.headers.get("Retry-After", 60))
            wait_time = retry_after * (2 ** i)  # Exponential backoff
            time.sleep(wait_time)

response = call_with_retry(
    lambda: client.chat.completions.create(...)
)

Upgrade Plan

Upgrade to Starter ($24.99/mo) for 60 req/min, or contact us for custom limits.

Server Errors

Internal Server Error

{
  "error": {
    "message": "An internal server error occurred",
    "type": "server_error",
    "code": "internal_error"
  }
}

HTTP Status: 500

What to do:

  1. Retry the request (may be transient)
  2. Check status page for incidents
  3. Contact dev@skillboss.co if the error persists

Upstream Provider Error

{
  "error": {
    "message": "Upstream provider (Anthropic) is temporarily unavailable",
    "type": "service_unavailable",
    "code": "upstream_error",
    "provider": "anthropic"
  }
}

HTTP Status: 503

Solutions:

  • Wait and retry (usually resolves quickly)
  • Switch to alternative model from different provider:
    // Instead of Claude
    model: "claude-4-5-sonnet"
    
    // Try GPT
    model: "gpt-5"
    
    // Or Gemini
    model: "gemini-2.5-flash"
    

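The model switch can also be automated: try each provider in turn and let a 503 fall through to the next. A sketch; the exception class stands in for whatever 503 error type your SDK raises, and `call(model)` is your own request wrapper:

```python
class ServiceUnavailableError(Exception):
    """Stand-in for the SDK's 503 / upstream_error exception type."""

# One model per provider, so an Anthropic outage falls through to OpenAI, etc.
FALLBACKS = ["claude-4-5-sonnet", "gpt-5", "gemini-2.5-flash"]

def complete_with_fallback(call, models=FALLBACKS):
    """Try each model; a 503 falls through to the next provider."""
    last_error = None
    for model in models:
        try:
            return call(model)
        except ServiceUnavailableError as e:
            last_error = e
    raise last_error
```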
Content Policy Errors

Content Filtered

{
  "error": {
    "message": "Your request was rejected due to content policy violations",
    "type": "content_filter",
    "code": "content_policy_violation"
  }
}

HTTP Status: 400

Causes:

  • Prompt contains prohibited content
  • Generated response triggered safety filters

Solutions:

  • Rephrase your prompt
  • Review content policy
  • Use a different model (policies vary by provider)

Error Handling Best Practices

Comprehensive Error Handling

import { OpenAI } from 'openai'

async function safeAPICall() {
  const client = new OpenAI({
    baseURL: 'https://api.skillboss.co/v1',
    apiKey: process.env.SKILLBOSS_KEY
  })

  try {
    const response = await client.chat.completions.create({
      model: 'claude-4-5-sonnet',
      messages: [{role: 'user', content: 'Hello'}]
    })

    // Check balance warning
    if (response._balance_warning) {
      console.warn(`Low balance: ${response._remaining_credits} credits`)
      // Send notification, trigger auto-recharge, etc.
    }

    return response
  } catch (error: any) {
    // Handle specific error types
    if (error.status === 401) {
      console.error('Invalid API key:', error.message)
      // Prompt user to re-authenticate
    } else if (error.status === 402) {
      console.error('Insufficient credits:', error.message)
      // Redirect to billing page
    } else if (error.status === 429) {
      const retryAfter = error.headers?.['retry-after'] || 60
      console.warn(`Rate limited. Retry after ${retryAfter}s`)
      // Implement retry logic
    } else if (error.status === 503) {
      console.error('Service unavailable:', error.message)
      // Try fallback model from different provider
    } else {
      console.error('Unexpected error:', error)
      // Log to error tracking (Sentry, etc.)
    }

    throw error
  }
}

Retry Strategy

import time
from openai import (
    RateLimitError,
    APIConnectionError,
    InternalServerError
)

def call_with_retry(func, max_retries=3):
    """Retry with exponential backoff for transient errors"""
    for attempt in range(max_retries):
        try:
            return func()
        except RateLimitError as e:
            if attempt == max_retries - 1:
                raise
            retry_after = int(e.response.headers.get("Retry-After", 60))
            time.sleep(retry_after)
        except (APIConnectionError, InternalServerError) as e:
            if attempt == max_retries - 1:
                raise
            wait_time = (2 ** attempt) * 5  # 5s, 10s, 20s
            print(f"Retrying after {wait_time}s...")
            time.sleep(wait_time)
        except Exception as e:
            # Don't retry on client errors (400, 401, 402, etc.)
            raise

Monitoring & Alerts

Set up monitoring to catch errors early:

Log All Errors

Send API errors to your logging service (Datadog, Sentry, etc.)

Balance Alerts

Monitor the _balance_warning field and send notifications

Rate Limit Tracking

Track X-RateLimit-Remaining to predict when you'll hit limits

Uptime Monitoring

Use a service like Pingdom to monitor API availability

Need Help?

Status Page

Check for ongoing incidents and maintenance

Support

Contact us for help with persistent errors