API Reference - ChatbotsPlace API

Table of Contents

- API Authorization
- POST /api/public/chat (Send chat request)
- GET /api/public/chat (Retrieve chat response)
- Chatbot versions and pricing

API Authorization

In order to use the ChatbotsPlace API, you must first create an API key. Do NOT share your API key with anyone.

Once you have obtained an API key, you can authenticate to the APIs below using the Authorization request header, as follows:

Authorization: Bearer {API_key}
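For example, in Python you can build this header once and attach it to every request (the placeholder key below is illustrative):

```python
API_KEY = "<your-api-key>"  # Replace with your actual ChatbotsPlace API key

# Every authenticated request carries the Authorization header in this form.
headers = {"Authorization": f"Bearer {API_KEY}"}
```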

POST /api/public/chat (Send chat request)

Send a chat message to our chatbot. This operation can be synchronous or asynchronous. By default, it is asynchronous: the response includes a messageId that you can use to retrieve the chatbot response later. If you set wait: true, the operation is synchronous and waits until the chatbot response is ready before returning.

Please note that there are different versions of our chatbot, each with a different price and context length. Refer to the version pricing table below for more details. The cost of the call is also returned in the response.

Passing in the same conversationId lets users continue a previous conversation. However, if the conversation exceeds the maximum context length of the selected chatbot version, the oldest messages in the conversation are dropped. If you do not pass a conversationId, a random one is generated for you.

Messages in a conversation will be retained on ChatbotsPlace for 90 days. Afterward, they will be automatically removed.
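To continue a conversation, reuse the conversationId returned by the first call in each subsequent request body. A minimal sketch of building such payloads (the helper name build_chat_body is illustrative, not part of the API):

```python
def build_chat_body(message, version="v3.5", conversation_id=None, wait=True):
    """Build a POST /api/public/chat body; omit conversationId to let the
    server generate a random one."""
    body = {"message": message, "version": version, "wait": wait}
    if conversation_id is not None:
        body["conversationId"] = conversation_id
    return body

# First request: no conversationId, so the server creates one.
first = build_chat_body("Tell me a joke")

# Follow-up: reuse the conversationId from the first response
# (e.g. "efgh-6789") to continue the same conversation.
follow_up = build_chat_body("Tell me another one", conversation_id="efgh-6789")
```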

POST Body
| name | type | data type | description |
|---|---|---|---|
| message | required | string | The user message to send to the chatbot. |
| version | required | string | Chatbot version. Accepted values are "v3.5", "v3.5_16K", "v4.0_sm", and "v4.0". |
| wait | optional | boolean | If true, wait for the chat response to finish before returning. Otherwise, this returns a messageId that can be used to retrieve the chat response later using the GET /api/public/chat API. |
| conversationId | optional | string | The conversation ID. Leave blank to generate a random one. |
| systemMessage | optional | string | A system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. |
| temperature | optional | number | Sampling temperature, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |
Responses
| http code | content-type | response |
|---|---|---|
| 200 | application/json | {"messageId": "abcd-1234", "cost": 1, "conversationId": "efgh-6789"} |
| 400 | text/plain | The request is invalid. Please make sure that the POST body is correct. |
| 401 | text/plain | Invalid API key |
| 403 | text/plain | Insufficient gems |
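A client can map the documented error statuses to exceptions before parsing the body. A minimal sketch (the helper name raise_for_chat_error and the exception class are illustrative, not part of the API):

```python
class ChatbotsPlaceError(Exception):
    """Raised for the documented non-200 statuses of POST /api/public/chat."""

# Error statuses and messages from the responses table above.
ERROR_MESSAGES = {
    400: "The request is invalid. Please make sure that the POST body is correct.",
    401: "Invalid API key",
    403: "Insufficient gems",
}

def raise_for_chat_error(status_code):
    """Raise ChatbotsPlaceError for a documented error status; no-op on 200."""
    if status_code in ERROR_MESSAGES:
        raise ChatbotsPlaceError(ERROR_MESSAGES[status_code])
```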
Example cURL
curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer <your-api-key>" --data @post.json https://chatbotsplace.com/api/public/chat
Example JavaScript
const axios = require('axios');

const apiKey = '<your-api-key>'; // Define your API key

async function run() {
  const { data } = await axios.post(
    'https://chatbotsplace.com/api/public/chat',
    {
      version: 'v3.5',
      message: 'Tell me a joke',
      wait: true
    },
    {
      headers: {
        authorization: `Bearer ${apiKey}`
      }
    }
  );
  console.log(data);
}

run();
Example Python
import requests

apiKey = '<your-api-key>'  # Define your apiKey

headers = {
    'content-type': 'application/json',
    'authorization': f'Bearer {apiKey}'
}

data_post = {
    'version': 'v3.5',
    'message': 'Tell me a joke',
    'wait': True
}
response = requests.post('https://chatbotsplace.com/api/public/chat', headers=headers, json=data_post)

data = response.json()
print(data)

GET /api/public/chat (Retrieve chat response)

Retrieve the chat response. This operation is free to call.

The chatbot response may not be finished when this API returns. Check the isFinished boolean field in the response to determine whether the chatbot response is complete. Even if isFinished is false, the message field may be partially filled in, allowing you to show a partial response to the user. However, we recommend waiting at least 1 second between calls to this API to avoid overloading our system.

If the chatbot failed to respond, the isError field will be set to true. If the failure is due to the chatbot receiving too many requests, the tooManyRequests field will also be set to true. In most cases, ChatbotsPlace catches the error and refunds the gems charged at the beginning of the POST request; if so, the isRefunded field will be set to true.

The isFinished field will be set to true either when the chatbot's response is complete or when an error occurs. Calling this API again after isFinished is true will return the same result.

A messageId may be used to retrieve the chat response for up to 5 minutes. Afterward, this API may return an empty response.
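The status fields above can be combined into a small summary check on each poll. A sketch, assuming the response has already been parsed into a dict (the helper name describe_status is illustrative, not part of the API):

```python
def describe_status(resp):
    """Summarize a GET /api/public/chat response dict."""
    if resp.get("isError"):
        # tooManyRequests distinguishes rate-limit failures from other errors.
        reason = "rate limited" if resp.get("tooManyRequests") else "error"
        refund = "refunded" if resp.get("isRefunded") else "not refunded"
        return f"failed ({reason}, {refund})"
    if resp.get("isFinished"):
        return "finished"
    # Not finished and no error: message may hold a partial response.
    return "in progress"
```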

Parameters
| name | type | data type | description |
|---|---|---|---|
| messageId | required | string | The messageId from the POST request, used to retrieve the chat response. |
Responses
| http code | content-type | response |
|---|---|---|
| 200 | application/json | { "message": "chatbot response", "isFinished": true, "isError": false, "tooManyRequests": false, "isRefunded": false } |
Example cURL
curl -X GET -H "Authorization: Bearer <your-api-key>" "https://chatbotsplace.com/api/public/chat?messageId=abcd-1234"
Example JavaScript
const axios = require('axios');

const apiKey = '<your-api-key>'; // Define your API key

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function run() {
  const { data } = await axios.post(
    'https://chatbotsplace.com/api/public/chat',
    {
      version: 'v3.5',
      message: 'Tell me a joke'
    },
    {
      headers: {
        authorization: `Bearer ${apiKey}`
      }
    }
  );
  while (true) {
    const getRes = await fetch(
      `https://chatbotsplace.com/api/public/chat?messageId=${data.messageId}`,
      {
        headers: {
          authorization: `Bearer ${apiKey}`
        }
      }
    );
    const botMessage = await getRes.json();
    if (botMessage.isFinished) {
      console.log('Final message:', botMessage.message);
      return;
    }
    console.log('Partial message:', botMessage.message);
    await delay(2000);
  }
}

run();
Example Python
import requests
import time

apiKey = '<your-api-key>'  # Define your apiKey

headers = {
    'content-type': 'application/json',
    'authorization': f'Bearer {apiKey}'
}

def run():
    session = requests.Session()

    # Post request
    url_post = 'https://chatbotsplace.com/api/public/chat'
    data_post = {
        'version': 'v3.5',
        'message': 'Tell me a joke'
    }
    response = session.post(url_post, headers=headers, json=data_post)

    data = response.json()

    while True:
        url_get = f'https://chatbotsplace.com/api/public/chat?messageId={data.get("messageId")}'
        response_get = session.get(url_get, headers=headers)
        bot_message = response_get.json()

        # Check if it is finished
        if bot_message.get('isFinished'):
            print('Final message:', bot_message.get('message'))
            return
        print('Partial message:', bot_message.get('message'))

        # Sleep for 2 seconds before next call
        time.sleep(2)

run()

Chatbot versions and pricing

| Version | Name | AI Model | Max input | Max output | Pricing (per conversation) |
|---|---|---|---|---|---|
| v3.5 | ChatGPT v3.5 | gpt-3.5-turbo-0125 | 3000 tokens | 1000 tokens | -0.1 💎 |
| v4.0 | ChatGPT v4.0 | gpt-4-turbo | 15000 tokens | 4000 tokens | -15 💎 |
| chatgpt-4-pro | ChatGPT v4 Plus | | 30000 tokens | 4000 tokens | -15 💎 |
| claude_v3_haiku | Claude V3 Haiku | claude-3-haiku-20240307 | 3000 tokens | 1000 tokens | -0.1 💎 |
| claude_v3 | Claude V3 Sonnet | claude-3-sonnet-20240229 | 15000 tokens | 4000 tokens | -15 💎 |
| claude_v3_opus | Claude V3 Opus | claude-3-opus-20240229 | 6000 tokens | 2000 tokens | -20 💎 |
| jurassic_2_ultra | AI21 Labs Jurassic-2 Ultra | ai21.j2-ultra-v1 | 8000 tokens | 4000 tokens | -15 💎 |
| llama2_13b | Llama 2 Small | meta.llama2-13b-chat-v1 | 4000 tokens | 1000 tokens | -0.1 💎 |
| llama2_70b | Llama 2 Medium | meta.llama2-70b-chat-v1 | 4000 tokens | 1000 tokens | -2 💎 |
| llama3_8b | Llama 3 Small | meta.llama3-8b-instruct-v1:0 | 4000 tokens | 1000 tokens | -0.1 💎 |
| llama3_70b | Llama 3 Medium | meta.llama3-70b-instruct-v1:0 | 4000 tokens | 1000 tokens | -3 💎 |
| mistral_7b_instruct | Mistral 7B Instruct | mistral.mistral-7b-instruct-v0:2 | 4000 tokens | 1000 tokens | -0.1 💎 |
| mixtral_8x7b_instruct | Mistral 8X7B Instruct | mistral.mixtral-8x7b-instruct-v0:1 | 4000 tokens | 1000 tokens | -0.1 💎 |
| mistral_large | Mistral Large | mistral.mistral-large-2402-v1:0 | 4000 tokens | 1000 tokens | -5 💎 |
| gemini_pro | Gemini Pro 1.0 | gemini-pro | 15000 tokens | 4000 tokens | -1 💎 |
| gemini_pro_1_5 | Gemini 1.5 Pro | gemini-1.5-pro-preview-0409 | 25000 tokens | 4000 tokens | -5 💎 |
| gemini_flash_1_5 | Gemini 1.5 Flash | gemini-1.5-flash-preview-0514 | 15000 tokens | 4000 tokens | -1 💎 |
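When choosing a version, you can check whether a prompt fits the version's max input budget before sending it. A sketch using a subset of the limits from the table above (token counts are the documented limits; actual tokenization is model-specific, so treat this as a rough estimate, and the helper name fits_context is illustrative):

```python
# Max input token limits taken from the pricing table above (a subset).
MAX_INPUT_TOKENS = {
    "v3.5": 3000,
    "v4.0": 15000,
    "claude_v3_opus": 6000,
    "gemini_pro_1_5": 25000,
}

def fits_context(version, prompt_tokens):
    """Return True if prompt_tokens fits within the version's max input."""
    limit = MAX_INPUT_TOKENS.get(version)
    if limit is None:
        raise ValueError(f"unknown version: {version}")
    return prompt_tokens <= limit
```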