POST /chat/completions
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.routeway.ai/v1",
    api_key=os.getenv("ROUTEWAY_API_KEY")  # set this environment variable to your Routeway API key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Write a short story about a robot and a cat."}
    ]
)

print(response.choices[0].message.content)
{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "assistant",
        "content": "<string>"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  }
}
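A response shaped like the schema above can be unpacked with the standard library. Note that each choice carries a single `message` object, which is what the client accessor `response.choices[0].message.content` reads. The sample payload below is illustrative; all values are made up.

```python
import json

# Illustrative payload following the response shape above (values are invented).
raw = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o-mini",
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "Once upon a time..."}}
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 34, "total_tokens": 46}
}
"""

data = json.loads(raw)
content = data["choices"][0]["message"]["content"]
usage = data["usage"]

# total_tokens is the sum of prompt and completion tokens
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(content)
```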


This API endpoint allows you to send text and images as inputs, and the model will generate the next message in the conversation.

Create Chat Completion

To create a chat completion, use the following endpoint: POST /chat/completions
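For reference, the raw request the endpoint expects can be sketched as below. Bearer-token auth is an assumption based on the OpenAI-compatible client shown on this page; the request is only constructed here, not sent.

```python
import json
import os

# Sketch of the raw POST this endpoint expects (built but not sent).
# Bearer auth is an assumption inferred from the OpenAI-compatible client.
BASE_URL = "https://api.routeway.ai/v1"
url = f"{BASE_URL}/chat/completions"

headers = {
    "Authorization": f"Bearer {os.getenv('ROUTEWAY_API_KEY', '<your-key>')}",
    "Content-Type": "application/json",
}

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}

body = json.dumps(payload)
print(url)
```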

Request Body

model (string, required): The model ID to use for the completion (e.g., "gpt-4o", "gpt-4o-mini", "deepseek-r", …).
messages (array, required): An array of message objects that form the conversation.
max_tokens (integer): The maximum number of tokens to generate.
temperature (number): Sampling temperature, a value between 0 and 2.
stream (boolean): Whether to enable streaming responses.
frequency_penalty (integer): Penalizes tokens in proportion to how often they have already appeared, discouraging verbatim repetition.
presence_penalty (integer): Penalizes tokens that have appeared at all, even once, encouraging the model to move on to new topics.
stop (array): Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
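With stream=True, the client yields chunks whose deltas carry content fragments that are concatenated into the final message. The loop below sketches that accumulation using hypothetical stand-in fragments rather than a live stream.

```python
# Stand-ins for streamed content fragments: with stream=True, each chunk from
# the client carries a delta with a piece of the reply. These literals replace
# a live stream purely for illustration.
fake_deltas = ["Once ", "upon ", "a ", "time."]

story = ""
for fragment in fake_deltas:
    # In real streaming code: fragment = chunk.choices[0].delta.content or ""
    story += fragment

print(story)
```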

Body

application/json

model (string, required)
messages (Message · object[], required)
temperature (number | null, default: 0.7)
max_tokens (integer | null)
top_p (integer | null, default: 1)
frequency_penalty (integer | null, default: 0)
presence_penalty (integer | null, default: 0)
stop (any[] | null)
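The defaults above can be applied client-side before a request. The helper below is an illustrative sketch, not part of any client library: it fills in the documented defaults and checks the documented constraints (temperature between 0 and 2, at most 4 stop sequences).

```python
# Illustrative helper: merge the documented defaults into a request body and
# enforce the documented constraints. Not part of the Routeway API or client.
DEFAULTS = {"temperature": 0.7, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0}

def build_body(model, messages, **overrides):
    body = {"model": model, "messages": messages, **DEFAULTS, **overrides}
    if not 0 <= body["temperature"] <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if len(body.get("stop") or []) > 4:
        raise ValueError("at most 4 stop sequences are allowed")
    return body

body = build_body(
    "gpt-4o-mini",
    [{"role": "user", "content": "Hi"}],
    stop=["\n\n"],
)
print(body["temperature"])
```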

Response

Successful Response

id (string, required)
object (string, required)
created (integer, required)
model (string, required)
choices (ChatCompletionChoice · object[], required)
usage (ChatCompletionUsage · object, required)
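The response schema above maps naturally onto small client-side data structures. The dataclasses below are an illustrative model mirroring the documented field names; they are not types shipped by the API or its SDK.

```python
from dataclasses import dataclass, field

# Illustrative client-side models mirroring the documented response schema;
# field names follow the JSON keys listed above.
@dataclass
class ChatCompletionUsage:
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int

@dataclass
class ChatCompletionChoice:
    index: int
    message: dict  # e.g. {"role": "assistant", "content": "..."}

@dataclass
class ChatCompletion:
    id: str
    object: str
    created: int
    model: str
    choices: list = field(default_factory=list)
    usage: ChatCompletionUsage = None

completion = ChatCompletion(
    id="chatcmpl-abc123",
    object="chat.completion",
    created=1700000000,
    model="gpt-4o-mini",
    choices=[ChatCompletionChoice(index=0, message={"role": "assistant", "content": "Hi"})],
    usage=ChatCompletionUsage(prompt_tokens=12, completion_tokens=34, total_tokens=46),
)
print(completion.usage.total_tokens)
```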