API Reference
This reference describes the API endpoints that you can use to interact with Tiny Talk.
Chat Completions API
The Chat Completions API offers a powerful way to interact with your chatbots using POST requests. You can create a natural back-and-forth conversation experience by sending a series of messages as input and receiving model-generated messages as output. The API handles both user messages and system-level instructions, giving you fine-grained control over the conversation flow.
This API is exclusively available to users on our paid plan, granting you the ability to programmatically engage with your chatbot and bring conversational AI to your applications.
Endpoint
URL | Description |
---|---|
https://api.tinytalk.ai/v1/chat/completions | Completions endpoint that creates a model response for the given chat conversation. |
Request Headers
To make a request to the API, you must include the following headers. Your Tiny Talk Secret API key can be found or generated in your Tiny Talk Dashboard.
Do not share your secret key in publicly accessible areas such as GitHub, client-side code, and so forth. Treat it like a password. If you believe your API key has been compromised, you can revoke the current key and regenerate it in the dashboard.
Key | Value | Description |
---|---|---|
Api-Key | tiny_server_sk_example123 | Tiny Talk Secret API key to use for authentication. |
Origin | https://example.com | Configure this in your Bot Details page to allow the origins you wish to accept requests from. |
Content-Type | application/json | The content type of the request. |
Request Body
Your request body should be a JSON object with the following properties.
Key | Type | Required | Description |
---|---|---|---|
botId | string | Yes | ID of the bot you want to use, you can find this in your Bot Details page in the dashboard. |
messages | array | Yes | List of messages between the user and the assistant. Each message must have at least role and content properties. |
messages.role | string | Yes | Role of the author of this message. One of system, user or assistant. |
messages.content | string | Yes | Content of the message. |
temperature | number | No | Sampling temperature between 0 and 2 that shapes the output of your chatbot. Higher values like 0.8 add randomness, while lower values like 0.2 bring more focus and predictability. |
Example Body
```json
{
  "botId": "a3e8f869-0c00-4861-8c24-dc06098520f0",
  "messages": [
    { "role": "user", "content": "Hi" },
    { "role": "assistant", "content": "Hello, how can I help you?" }
  ],
  "temperature": 0.1
}
```
Example Requests
The Chat Completions API returns a streamed response by default. This matters because an average reply from the OpenAI API can take around ~15 seconds, and we don't want users waiting that long to see a response. With streaming, partial message deltas are sent as data-only server-sent events as they become available, and the stream is terminated by a data: [DONE] message.
JavaScript – Fetch
Remember to replace Origin, Api-Key and botId with your own values.
```javascript
const myHeaders = {
  Origin: 'https://dashboard.tinytalk.ai',
  'Api-Key': '{API_KEY}',
  'Content-Type': 'application/json',
};

const myBody = JSON.stringify({
  botId: '{BOT_ID}',
  messages: [
    {
      role: 'user',
      content: 'Hi',
    },
  ],
  temperature: 0.1,
});

const requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: myBody,
  redirect: 'follow',
};

const response = await fetch('https://api.tinytalk.ai/v1/chat/completions', requestOptions);

if (!response.ok) {
  const errorData = await response.json();
  throw new Error(errorData.message);
}

const data = response.body;

if (!data) {
  // Handle the missing body, e.g. stop here instead of trying to read the stream
  throw new Error('Response has no body to stream');
}

const reader = data.getReader();
const decoder = new TextDecoder();
let done = false;

while (!done) {
  const { value, done: doneReading } = await reader.read();
  done = doneReading;
  const chunk = decoder.decode(value);
  // Log the chunk of data received from the server until we are done
  console.log(chunk);
}
```
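The example above logs each raw chunk as it arrives. Since the response is delivered as data-only server-sent events, you may want to split the decoded chunks into their data: lines and stop when the data: [DONE] terminator appears. The sketch below is one way to do that; it assumes events are newline-delimited and does not interpret the delta payloads themselves.
```javascript
// Rough sketch of parsing the streamed chunks as data-only server-sent events.
// Assumes events are newline-delimited; delta payloads are logged as-is.
let buffer = '';

function handleChunk(chunk) {
  buffer += chunk;
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? ''; // keep a partial trailing line for the next chunk

  for (const line of lines) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;

    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') {
      console.log('Stream finished');
      return true; // tell the caller the stream is complete
    }
    console.log('Delta:', payload);
  }
  return false;
}
```
Inside the while loop above, you would call handleChunk(chunk) instead of console.log(chunk) and break once it returns true.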
cURL
Remember to replace the Origin, Api-Key and botId values with your own.
```bash
curl --location 'https://api.tinytalk.ai/v1/chat/completions' \
--header 'Origin: https://dashboard.tinytalk.ai' \
--header 'Api-Key: {API_KEY}' \
--header 'Content-Type: application/json' \
--data '{
  "botId": "{BOT_ID}",
  "messages": [
    {
      "role": "user",
      "content": "Hi"
    }
  ],
  "temperature": 0.1
}'
```
Context Window
We recommend including previous messages from the conversation history in your requests. By including previous messages in the messages array, your bot can consider a wider context window in its responses. This not only enhances the conversation's depth but also makes it more engaging and natural. If you choose not to include previous messages, the conversation will resemble more of a Q&A format.
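As a sketch of what this might look like in practice, you could keep a local history array and send it in full on every request. The sendChatRequest helper below is hypothetical; it stands in for the fetch call shown earlier and is assumed to resolve with the assistant's complete reply as a string.
```javascript
// Minimal sketch of carrying conversation history across requests.
// `sendChatRequest` is a hypothetical wrapper around the fetch example above
// that resolves with the assistant's complete reply as a string.
const history = [];

async function ask(userMessage) {
  history.push({ role: 'user', content: userMessage });

  const assistantReply = await sendChatRequest({
    botId: '{BOT_ID}',
    messages: history, // the full conversation so far widens the context window
    temperature: 0.1,
  });

  history.push({ role: 'assistant', content: assistantReply });
  return assistantReply;
}
```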
Error Handling
Errors can happen, and the best we can do is handle them properly to provide a smooth user experience and graceful error resolution. If any errors occur during the API request, the server will return an appropriate HTTP status code to indicate the nature of the error, for example when the OpenAI API is overloaded or the request is malformed.
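As a sketch, you might branch on the status code returned by fetch and show the user a friendlier message. The specific codes below are assumptions rather than a documented list, so confirm them against the responses you actually receive; showNotice is a hypothetical UI helper and requestOptions is the object from the fetch example above.
```javascript
// Sketch of mapping HTTP status codes to user-facing behaviour.
// The codes handled here are assumptions, not a documented list.
// `showNotice` is a hypothetical UI helper; `requestOptions` comes from the fetch example above.
const response = await fetch('https://api.tinytalk.ai/v1/chat/completions', requestOptions);

if (!response.ok) {
  if (response.status === 429 || response.status === 503) {
    // Upstream model overloaded or rate limited: suggest retrying shortly.
    showNotice('The assistant is busy right now, please try again in a moment.');
  } else if (response.status === 400) {
    // Bad request: likely a malformed body (missing botId, empty messages, etc.).
    const errorData = await response.json();
    console.error('Bad request:', errorData.message);
  } else {
    console.error('Unexpected error, status:', response.status);
  }
}
```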