
Documentation Index

Fetch the complete documentation index at: https://tinytalk.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

Tools turn your agent from a question-answering assistant into one that can take action. When a visitor asks “what’s the status of order 12345?”, an agent with the right tool attached can call your order API, parse the response, and reply with the live answer — instead of saying “I don’t have that information.” The agent decides when to invoke a tool based on its description. You define the tool once; the AI model handles the orchestration.
Two tool types are available: Custom Tools (HTTP endpoints you define yourself) and Platform Tools (first-party integrations Tiny Talk maintains for you).

How tools work

A tool is something your agent can call during a conversation to do work outside the chat. When the AI model decides a tool is relevant to the current message, it:
  1. Picks the tool based on its name, description, and “when to use” hint
  2. Collects the inputs by extracting them from the conversation or asking follow-up questions
  3. Runs the tool through Tiny Talk’s secure runtime
  4. Reads the result and weaves it into a natural reply
Tools are attached at the agent level — every conversation that agent handles has access to its enabled tools. You can disable a tool without deleting it; disabled tools don't count against your plan limit.
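The four-step loop above can be sketched in miniature. In production the AI model performs steps 1, 2, and 4; in this toy version, simple keyword matching stands in for the model's judgment and `order_status()` stands in for a real HTTPS endpoint — all names here are illustrative, not Tiny Talk APIs.

```python
import re

def order_status(order_id):
    # Stand-in for your real HTTPS endpoint, returning a canned result.
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"order_status": {"fn": order_status, "trigger": "order"}}

def handle_message(message):
    for tool in TOOLS.values():                  # 1. pick a relevant tool
        if tool["trigger"] in message.lower():
            match = re.search(r"\d+", message)   # 2. collect the inputs
            if not match:
                # No usable input yet: ask a follow-up question instead
                return "Which order number should I look up?"
            result = tool["fn"](match.group())   # 3. run the tool
            # 4. weave the result into a natural reply
            return f"Order {result['order_id']} is {result['status']}."
    return "How can I help?"
```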

Tool types

A Custom Tool points Tiny Talk at your own HTTPS endpoint. You define parameters, headers, and secrets, and the agent calls the endpoint whenever the conversation calls for it. Best when the action you want lives behind your own API. A Platform Tool is a first-party integration Tiny Talk maintains for you. You install it in a few clicks, configure what the agent should offer, and let the runtime handle the integration plumbing. Best when the third-party service is one we already support.
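To make the Custom Tool pieces concrete — parameters, headers, secrets — here is a hypothetical definition for the order-lookup example. The field names and `{{secrets.…}}` reference syntax are illustrative, not the actual Tiny Talk schema.

```python
# Hypothetical Custom Tool definition (illustrative shape only).
order_status_tool = {
    "name": "get_order_status",
    "description": "Look up the live status of an order by its ID.",
    "when_to_use": "The visitor asks about the state of a specific order.",
    "endpoint": "https://api.example.com/orders/{order_id}",  # your own HTTPS API
    "method": "GET",
    "parameters": {
        "order_id": {"type": "string", "description": "The order number, e.g. 12345."},
    },
    # Secrets are referenced, never stored inline in the definition.
    "headers": {"X-API-Key": "{{secrets.ORDER_API_KEY}}"},
}
```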

Available tools

Custom Tool

Point the agent at your own HTTPS endpoint to look up records, update systems, or trigger workflows.

Calendly

Let the agent offer the right Calendly event type and embed the booking flow inline in the chat.

Cal.com

Let the agent offer the right Cal.com event type and embed the booking flow inline in the chat.

Plan availability

Plan           Tools per agent
Free
Basic AI       5
Standard AI    10
Pro AI         15
The per-agent limit caps how many tools a single agent can have enabled. Custom Tools and Platform Tools share this limit — for example, on Standard AI you could mix 1 Platform Tool and 9 Custom Tools.
Hitting the per-agent cap usually means it’s time to consolidate: one well-described tool with clear parameters often outperforms two narrow tools the model has to disambiguate between.
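A minimal sketch of the cap check, using the limits from the table above (the Free tier is omitted because its limit isn't listed here; the function name is illustrative):

```python
PLAN_TOOL_LIMITS = {"Basic AI": 5, "Standard AI": 10, "Pro AI": 15}

def can_enable_tool(plan, enabled_count):
    # Custom Tools and Platform Tools share one per-agent cap;
    # disabled tools are not counted.
    return enabled_count < PLAN_TOOL_LIMITS.get(plan, 0)
```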

Security and limits

Tiny Talk’s tool runtime is built to keep your network and your secrets safe. These guarantees apply to every tool type.
Network safety
  • HTTPS-only. The runtime refuses any non-HTTPS URL.
  • IP literals (https://203.0.113.10/...) are rejected. Hostnames must be fully qualified domain names.
  • Private, loopback, link-local, and cloud-metadata IP ranges are blocked. Tools cannot reach localhost, internal cluster addresses, or AWS/GCP metadata endpoints.
  • Redirects (3xx responses) are not followed. The endpoint you configure is the endpoint the request hits.
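The URL-level guards above can be approximated in a few lines. This is a sketch, not the runtime's actual code: the real runtime also resolves DNS and blocks private, loopback, link-local, and metadata IP ranges at connect time, and refuses to follow redirects — checks a pure URL validator cannot perform.

```python
import ipaddress
from urllib.parse import urlparse

def check_tool_url(url):
    parsed = urlparse(url)
    if parsed.scheme != "https":          # HTTPS-only
        return False
    host = parsed.hostname or ""
    try:
        ipaddress.ip_address(host)        # is the host an IP literal?
    except ValueError:
        pass                              # not an IP literal: keep checking
    else:
        return False                      # IP literals are rejected
    if "." not in host:                   # e.g. "localhost" or bare names
        return False                      # must be a fully qualified domain
    return True
```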
Execution limits
  • Request timeout — 10 seconds.
  • Response size — 20 KB (responses are truncated past this).
  • Steps per turn — Up to 8 model-tool round trips per visitor message. Most replies use 1–2.
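The three limits can be expressed as constants, with truncation shown as a sketch (in practice the timeout would be passed to the HTTP client, and the step counter enforced by the runtime's model-tool loop):

```python
TIMEOUT_SECONDS = 10               # per-request timeout
MAX_RESPONSE_BYTES = 20 * 1024     # 20 KB response cap
MAX_STEPS_PER_TURN = 8             # model-tool round trips per visitor message

def truncate_response(body):
    # Anything past the 20 KB cap is dropped before the model sees it.
    return body[:MAX_RESPONSE_BYTES]
```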
Secrets handling
  • Encrypted at rest with AES-256 in the same vault that stores other workspace secrets.
  • Decrypted only at request time, in memory, scoped to the single tool invocation.
  • Never returned to the browser — the dashboard sees only a masked preview.
  • Logger redaction strips Authorization, X-API-Key, and similar headers from request logs.
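The masking and redaction behaviors can be sketched as pure functions (the function names, mask format, and header set here are illustrative, not the runtime's actual implementation):

```python
SENSITIVE_HEADERS = {"authorization", "x-api-key"}

def mask_secret(value, visible=4):
    # Dashboard-style masked preview: only the last few characters survive.
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

def redact_headers(headers):
    # Strip credential-bearing headers before a request is logged.
    return {name: "[REDACTED]" if name.lower() in SENSITIVE_HEADERS else value
            for name, value in headers.items()}
```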
Failure modes
If a tool times out, returns an error status, or hits a security guard, the model receives a deterministic message it can react to (“Tool call timed out”, “Tool could not be reached”, “Tool returned status 404”). The agent can retry, fall back to another tool, or tell the visitor it couldn’t complete the action.
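A sketch of that failure-to-message mapping, using the three deterministic messages quoted above plus a hypothetical catch-all:

```python
def failure_message(failure):
    # Map a tool failure to the deterministic text the model receives.
    if isinstance(failure, TimeoutError):
        return "Tool call timed out"
    if isinstance(failure, ConnectionError):
        return "Tool could not be reached"
    if isinstance(failure, int):          # an HTTP error status code
        return f"Tool returned status {failure}"
    return "Tool call failed"             # illustrative fallback
```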

Credit cost

Tools do not consume extra credits; credit cost is determined solely by your AI model selection. A model that decides to call tools pays no additional credits per call. Network and execution limits (timeouts, response size) apply per tool call but have no effect on your bill.
Heads up on token usage. Each tool you attach adds its name, description, and parameter schema to every prompt the model sees, and each tool call sends the response back through the model. This increases token consumption per conversation — invisible on credit-based plans (you only pay per response), but worth knowing if you’re tracking provider-side usage. Disabling unused tools and switching to Selected fields for chatty endpoints keeps the overhead small.
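A rough way to gauge the per-message overhead of attached tool schemas, assuming the common heuristic of about 4 characters per token for English/JSON text (real tokenizers vary, so treat the number as an estimate only):

```python
import json

def estimated_schema_tokens(tools):
    # Every attached tool's serialized schema rides along with each prompt;
    # ~4 chars/token is a coarse heuristic, not an exact count.
    return sum(len(json.dumps(t)) for t in tools) // 4
```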

Roles and permissions

Tool management follows your workspace’s role-based access control:
Action         Owner   Admin   Editor   Viewer
View tools
Create tools
Update tools
Test tools
Delete tools
See Roles & Permissions for the full matrix.

Next steps

Set up a Custom Tool

Walk through the editor, parameters, secrets, response handling, and test runner.

Install Calendly

Let your agent book meetings with an inline scheduler.

Install Cal.com

Connect Cal.com event types and offer EU-hosted scheduling on cal.eu if you need it.