
Vercel AI SDK


The Vercel AI SDK lets you attach an optional async `execute` function to each tool; the framework invokes this function to run the model's tool calls.

The Vercel provider for Composio formats Composio tools into the AI SDK's tool shape and attaches this `execute` function for you, so tool calls are executed against Composio automatically.
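
In AI SDK terms, each wrapped entry is an ordinary `tool()` definition whose `execute` forwards the call to Composio. Below is a minimal sketch of that shape, assuming a `composio` instance and a `userId` are already in scope; the slug, schema, and result handling are illustrative, not a real wrapped tool.

```typescript
import { tool } from "ai";
import { z } from "zod";

// Illustrative shape of a wrapped tool. VercelProvider generates the
// real definitions for you; the slug and schema below are assumptions
// made for this sketch.
const listGithubIssues = tool({
  description: "List issues in a GitHub repository",
  inputSchema: z.object({ owner: z.string(), repo: z.string() }),
  execute: async (input) =>
    // Forward execution to Composio and return the result to the model.
    composio.tools.execute("GITHUB_LIST_ISSUES", {
      userId,
      arguments: input,
    }),
});
```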

Usage with Tools

Use Composio Tool Router as a native tool with the Vercel AI SDK.

Installation

```bash
npm install dotenv @composio/core @composio/vercel ai @ai-sdk/anthropic
```

Usage

Create a Tool Router session and use it as a native tool with Vercel AI SDK:

  • Set the COMPOSIO_API_KEY environment variable to your API key from Settings.
  • Set the ANTHROPIC_API_KEY environment variable to your Anthropic API key (for example via a .env file, as shown below).
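
With `dotenv`, both keys can live in a `.env` file at the project root:

```
COMPOSIO_API_KEY=your-composio-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
```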
import "dotenv/config";
import { const anthropic: AnthropicProvider
Default Anthropic provider instance.
anthropic
} from "@ai-sdk/anthropic";
import { class Composio<TProvider extends BaseComposioProvider<unknown, unknown, unknown> = OpenAIProvider>
This is the core class for Composio. It is used to initialize the Composio SDK and provide a global configuration.
Composio
} from "@composio/core";
import { class VercelProviderVercelProvider } from "@composio/vercel"; import { function stepCountIs(stepCount: number): StopCondition<any>stepCountIs,
function streamText<TOOLS extends ToolSet, OUTPUT extends Output<any, any, any> = Output<string, string, never>>({ model, tools, toolChoice, system, prompt, messages, maxRetries, abortSignal, timeout, headers, stopWhen, experimental_output, output, experimental_telemetry: telemetry, prepareStep, providerOptions, experimental_activeTools, activeTools, experimental_repairToolCall: repairToolCall, experimental_transform: transform, experimental_download: download, includeRawChunks, onChunk, onError, onFinish, onAbort, onStepFinish, experimental_context, _internal: { now, generateId }, ...settings }: CallSettings & Prompt & {
    model: LanguageModel;
    tools?: TOOLS;
    toolChoice?: ToolChoice<TOOLS>;
    stopWhen?: StopCondition<NoInfer<TOOLS>> | Array<StopCondition<NoInfer<TOOLS>>>;
    experimental_telemetry?: TelemetrySettings;
    providerOptions?: ProviderOptions;
    experimental_activeTools?: Array<keyof NoInfer<TOOLS>>;
    activeTools?: Array<keyof NoInfer<TOOLS>>;
    output?: OUTPUT;
    experimental_output?: OUTPUT;
    prepareStep?: PrepareStepFunction<NoInfer<TOOLS>>;
    experimental_repairToolCall?: ToolCallRepairFunction<TOOLS>;
    experimental_transform?: StreamTextTransform<TOOLS> | Array<StreamTextTransform<TOOLS>>;
    experimental_download?: DownloadFunction | undefined;
    includeRawChunks?: boolean;
    onChunk?: StreamTextOnChunkCallback<TOOLS>;
    onError?: StreamTextOnErrorCallback;
    onFinish?: StreamTextOnFinishCallback<TOOLS>;
    onAbort?: StreamTextOnAbortCallback<...>;
    onStepFinish?: StreamTextOnStepFinishCallback<TOOLS>;
    experimental_context?: unknown;
    _internal?: {
        now?: () => number;
        generateId?: IdGenerator;
    };
}): StreamTextResult<TOOLS, OUTPUT>
Generate a text and call tools for a given prompt using a language model. This function streams the output. If you do not want to stream the output, use `generateText` instead.
@parammodel - The language model to use.@paramtools - Tools that are accessible to and can be called by the model. The model needs to support calling tools.@paramsystem - A system message that will be part of the prompt.@paramprompt - A simple text prompt. You can either use `prompt` or `messages` but not both.@parammessages - A list of messages. You can either use `prompt` or `messages` but not both.@parammaxOutputTokens - Maximum number of tokens to generate.@paramtemperature - Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopP - Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopK - Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.@parampresencePenalty - Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.@paramfrequencyPenalty - Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.@paramstopSequences - Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.@paramseed - The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.@parammaxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.@paramabortSignal - An optional abort signal that can be used to cancel the call.@paramtimeout - An optional timeout in milliseconds. The call will be aborted if it takes longer than the specified timeout.@paramheaders - Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.@paramonChunk - Callback that is called for each chunk of the stream. The stream processing will pause until the callback promise is resolved.@paramonError - Callback that is called when an error occurs during streaming. You can use it to log errors.@paramonStepFinish - Callback that is called when each step (LLM call) is finished, including intermediate steps.@paramonFinish - Callback that is called when all steps are finished and the response is complete.@returnA result object for accessing different stream types and additional information.
streamText
} from "ai";
// Initialize Composio with Vercel provider (API key from env var COMPOSIO_API_KEY) const const composio: Composio<VercelProvider>composio = new new Composio<VercelProvider>(config?: ComposioConfig<VercelProvider> | undefined): Composio<VercelProvider>
Creates a new instance of the Composio SDK. The constructor initializes the SDK with the provided configuration options, sets up the API client, and initializes all core models (tools, toolkits, etc.).
@paramconfig - Configuration options for the Composio SDK@paramconfig.apiKey - The API key for authenticating with the Composio API@paramconfig.baseURL - The base URL for the Composio API (defaults to production URL)@paramconfig.allowTracking - Whether to allow anonymous usage analytics@paramconfig.provider - The provider to use for this Composio instance (defaults to OpenAIProvider)@example```typescript // Initialize with default configuration const composio = new Composio(); // Initialize with custom API key and base URL const composio = new Composio({ apiKey: 'your-api-key', baseURL: 'https://api.composio.dev' }); // Initialize with custom provider const composio = new Composio({ apiKey: 'your-api-key', provider: new CustomProvider() }); ```
Composio
({ provider?: VercelProvider | undefined
The tool provider to use for this Composio instance.
@examplenew OpenAIProvider()
provider
: new
new VercelProvider({ strict }?: {
    strict?: boolean;
}): VercelProvider
Creates a new instance of the VercelProvider. This provider enables integration with the Vercel AI SDK, allowing Composio tools to be used with Vercel AI applications.
@example```typescript // Initialize the Vercel provider const provider = new VercelProvider(); // Use with Composio const composio = new Composio({ apiKey: 'your-api-key', provider: new VercelProvider() }); // Use the provider to wrap tools for Vercel AI SDK const vercelTools = provider.wrapTools(composioTools, composio.tools.execute); ```
VercelProvider
() });
// Unique identifier of the user const const userId: "user_123"userId = "user_123"; // Create a session and get native tools for the user const const session: ToolRouterSession<unknown, unknown, VercelProvider>session = await const composio: Composio<VercelProvider>composio.Composio<VercelProvider>.create: (userId: string, routerConfig?: ToolRouterCreateSessionConfig) => Promise<ToolRouterSession<unknown, unknown, VercelProvider>>
Creates a new tool router session for a user.
@paramuserId The user id to create the session for@paramconfig The config for the tool router session@returnsThe tool router session@example```typescript import { Composio } from '@composio/core'; const composio = new Composio(); const userId = 'user_123'; const session = await composio.create(userId, { manageConnections: true, }); console.log(session.sessionId); console.log(session.url); console.log(session.tools()); ```
create
(const userId: "user_123"userId);
const const tools: ToolSettools = await const session: ToolRouterSession<unknown, unknown, VercelProvider>session.ToolRouterSession<unknown, unknown, VercelProvider>.tools: (modifiers?: SessionMetaToolOptions) => Promise<ToolSet>
Get the tools available in the session, formatted for your AI framework. Requires a provider to be configured in the Composio constructor.
tools
();
var console: Console
The `console` module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers. The module exports two specific components: * A `Console` class with methods such as `console.log()`, `console.error()` and `console.warn()` that can be used to write to any Node.js stream. * A global `console` instance configured to write to [`process.stdout`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstdout) and [`process.stderr`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstderr). The global `console` can be used without importing the `node:console` module. _**Warning**_: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like all other Node.js streams. See the [`note on process I/O`](https://nodejs.org/docs/latest-v24.x/api/process.html#a-note-on-process-io) for more information. Example using the global `console`: ```js console.log('hello world'); // Prints: hello world, to stdout console.log('hello %s', 'world'); // Prints: hello world, to stdout console.error(new Error('Whoops, something bad happened')); // Prints error message and stack trace to stderr: // Error: Whoops, something bad happened // at [eval]:5:15 // at Script.runInThisContext (node:vm:132:18) // at Object.runInThisContext (node:vm:309:38) // at node:internal/process/execution:77:19 // at [eval]-wrapper:6:22 // at evalScript (node:internal/process/execution:76:60) // at node:internal/main/eval_string:23:3 const name = 'Will Robinson'; console.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to stderr ``` Example using the `Console` class: ```js const out = getStreamSomehow(); const err = getStreamSomehow(); const myConsole = new console.Console(out, err); myConsole.log('hello world'); // Prints: hello world, to out myConsole.log('hello %s', 'world'); // Prints: hello world, to out myConsole.error(new Error('Whoops, something bad happened')); // Prints: [Error: Whoops, something bad happened], to err const name = 'Will Robinson'; myConsole.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to err ```
@see[source](https://github.com/nodejs/node/blob/v24.x/lib/console.js)
console
.Console.log(message?: any, ...optionalParams: any[]): void (+1 overload)
Prints to `stdout` with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to [`printf(3)`](http://man7.org/linux/man-pages/man3/printf.3.html) (the arguments are all passed to [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args)). ```js const count = 5; console.log('count: %d', count); // Prints: count: 5, to stdout console.log('count:', count); // Prints: count: 5, to stdout ``` See [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args) for more information.
@sincev0.1.100
log
("Fetching GitHub issues from the Composio repository...");
// Stream the response with tool calling const const stream: StreamTextResult<ToolSet, Output<string, string, never>>stream = await
streamText<ToolSet, Output<string, string, never>>({ model, tools, toolChoice, system, prompt, messages, maxRetries, abortSignal, timeout, headers, stopWhen, experimental_output, output, experimental_telemetry: telemetry, prepareStep, providerOptions, experimental_activeTools, activeTools, experimental_repairToolCall: repairToolCall, experimental_transform: transform, experimental_download: download, includeRawChunks, onChunk, onError, onFinish, onAbort, onStepFinish, experimental_context, _internal: { now, generateId }, ...settings }: CallSettings & (Prompt & {
    model: LanguageModel;
    tools?: ToolSet | undefined;
    toolChoice?: ToolChoice<ToolSet> | undefined;
    stopWhen?: StopCondition<NoInfer<ToolSet>> | StopCondition<NoInfer<ToolSet>>[] | undefined;
    experimental_telemetry?: TelemetrySettings;
    providerOptions?: ProviderOptions;
    ... 15 more ...;
    _internal?: {
        now?: () => number;
        generateId?: IdGenerator;
    };
})): StreamTextResult<...>
Generate a text and call tools for a given prompt using a language model. This function streams the output. If you do not want to stream the output, use `generateText` instead.
@parammodel - The language model to use.@paramtools - Tools that are accessible to and can be called by the model. The model needs to support calling tools.@paramsystem - A system message that will be part of the prompt.@paramprompt - A simple text prompt. You can either use `prompt` or `messages` but not both.@parammessages - A list of messages. You can either use `prompt` or `messages` but not both.@parammaxOutputTokens - Maximum number of tokens to generate.@paramtemperature - Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopP - Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopK - Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.@parampresencePenalty - Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.@paramfrequencyPenalty - Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.@paramstopSequences - Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.@paramseed - The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.@parammaxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.@paramabortSignal - An optional abort signal that can be used to cancel the call.@paramtimeout - An optional timeout in milliseconds. The call will be aborted if it takes longer than the specified timeout.@paramheaders - Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.@paramonChunk - Callback that is called for each chunk of the stream. The stream processing will pause until the callback promise is resolved.@paramonError - Callback that is called when an error occurs during streaming. You can use it to log errors.@paramonStepFinish - Callback that is called when each step (LLM call) is finished, including intermediate steps.@paramonFinish - Callback that is called when all steps are finished and the response is complete.@returnA result object for accessing different stream types and additional information.
streamText
({
system?: string | SystemModelMessage | SystemModelMessage[] | undefined
System message to include in the prompt. Can be used with `prompt` or `messages`.
system
: "You are a helpful personal assistant. Use Composio tools to take action.",
model: LanguageModel
The language model to use.
model
: function anthropic(modelId: AnthropicMessagesModelId): LanguageModelV3
Creates a model for text generation.
anthropic
("claude-sonnet-4-5"),
prompt: string | ModelMessage[]
A prompt. It can be either a text prompt or a list of messages. You can either use `prompt` or `messages` but not both.
prompt
: "Fetch all the open GitHub issues on the composio repository and group them by bugs/features/docs.",
stopWhen?: StopCondition<NoInfer<ToolSet>> | StopCondition<NoInfer<ToolSet>>[] | undefined
Condition for stopping the generation when there are tool results in the last step. When the condition is an array, any of the conditions can be met to stop the generation.
@defaultstepCountIs(1)
stopWhen
: function stepCountIs(stepCount: number): StopCondition<any>stepCountIs(10),
onStepFinish?: StreamTextOnStepFinishCallback<ToolSet> | undefined
Callback that is called when each step (LLM call) is finished, including intermediate steps.
onStepFinish
: (step: StepResult<ToolSet>step) => {
for (const const toolCall: TypedToolCall<ToolSet>toolCall of step: StepResult<ToolSet>step.toolCalls: TypedToolCall<ToolSet>[]
The tool calls that were made during the generation.
toolCalls
) {
var console: Console
The `console` module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers. The module exports two specific components: * A `Console` class with methods such as `console.log()`, `console.error()` and `console.warn()` that can be used to write to any Node.js stream. * A global `console` instance configured to write to [`process.stdout`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstdout) and [`process.stderr`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstderr). The global `console` can be used without importing the `node:console` module. _**Warning**_: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like all other Node.js streams. See the [`note on process I/O`](https://nodejs.org/docs/latest-v24.x/api/process.html#a-note-on-process-io) for more information. Example using the global `console`: ```js console.log('hello world'); // Prints: hello world, to stdout console.log('hello %s', 'world'); // Prints: hello world, to stdout console.error(new Error('Whoops, something bad happened')); // Prints error message and stack trace to stderr: // Error: Whoops, something bad happened // at [eval]:5:15 // at Script.runInThisContext (node:vm:132:18) // at Object.runInThisContext (node:vm:309:38) // at node:internal/process/execution:77:19 // at [eval]-wrapper:6:22 // at evalScript (node:internal/process/execution:76:60) // at node:internal/main/eval_string:23:3 const name = 'Will Robinson'; console.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to stderr ``` Example using the `Console` class: ```js const out = getStreamSomehow(); const err = getStreamSomehow(); const myConsole = new console.Console(out, err); myConsole.log('hello world'); // Prints: hello world, to out myConsole.log('hello %s', 'world'); // Prints: hello world, to out myConsole.error(new Error('Whoops, something bad happened')); // Prints: [Error: Whoops, something bad happened], to err const name = 'Will Robinson'; myConsole.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to err ```
@see[source](https://github.com/nodejs/node/blob/v24.x/lib/console.js)
console
.Console.log(message?: any, ...optionalParams: any[]): void (+1 overload)
Prints to `stdout` with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to [`printf(3)`](http://man7.org/linux/man-pages/man3/printf.3.html) (the arguments are all passed to [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args)). ```js const count = 5; console.log('count: %d', count); // Prints: count: 5, to stdout console.log('count:', count); // Prints: count: 5, to stdout ``` See [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args) for more information.
@sincev0.1.100
log
(`[Using tool: ${const toolCall: TypedToolCall<ToolSet>toolCall.toolName: stringtoolName}]`);
} }, tools?: ToolSet | undefined
The tools that the model can call. The model needs to support calling tools.
tools
,
}); for await (const const textPart: stringtextPart of const stream: StreamTextResult<ToolSet, Output<string, string, never>>stream.StreamTextResult<ToolSet, Output<string, string, never>>.textStream: AsyncIterableStream<string>
A text stream that returns only the generated text deltas. You can use it as either an AsyncIterable or a ReadableStream. When an error occurs, the stream will throw the error.
textStream
) {
var process: NodeJS.Processprocess.
NodeJS.Process.stdout: NodeJS.WriteStream & {
    fd: 1;
}
The `process.stdout` property returns a stream connected to`stdout` (fd `1`). It is a `net.Socket` (which is a `Duplex` stream) unless fd `1` refers to a file, in which case it is a `Writable` stream. For example, to copy `process.stdin` to `process.stdout`: ```js import { stdin, stdout } from 'node:process'; stdin.pipe(stdout); ``` `process.stdout` differs from other Node.js streams in important ways. See `note on process I/O` for more information.
stdout
.Socket.write(buffer: Uint8Array | string, cb?: (err?: Error | null) => void): boolean (+1 overload)
Sends data on the socket. The second parameter specifies the encoding in the case of a string. It defaults to UTF8 encoding. Returns `true` if the entire data was flushed successfully to the kernel buffer. Returns `false` if all or part of the data was queued in user memory.`'drain'` will be emitted when the buffer is again free. The optional `callback` parameter will be executed when the data is finally written out, which may not be immediately. See `Writable` stream `write()` method for more information.
@sincev0.1.90@paramencoding Only used when data is `string`.
write
(const textPart: stringtextPart);
} var console: Console
The `console` module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers. The module exports two specific components: * A `Console` class with methods such as `console.log()`, `console.error()` and `console.warn()` that can be used to write to any Node.js stream. * A global `console` instance configured to write to [`process.stdout`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstdout) and [`process.stderr`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstderr). The global `console` can be used without importing the `node:console` module. _**Warning**_: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like all other Node.js streams. See the [`note on process I/O`](https://nodejs.org/docs/latest-v24.x/api/process.html#a-note-on-process-io) for more information. Example using the global `console`: ```js console.log('hello world'); // Prints: hello world, to stdout console.log('hello %s', 'world'); // Prints: hello world, to stdout console.error(new Error('Whoops, something bad happened')); // Prints error message and stack trace to stderr: // Error: Whoops, something bad happened // at [eval]:5:15 // at Script.runInThisContext (node:vm:132:18) // at Object.runInThisContext (node:vm:309:38) // at node:internal/process/execution:77:19 // at [eval]-wrapper:6:22 // at evalScript (node:internal/process/execution:76:60) // at node:internal/main/eval_string:23:3 const name = 'Will Robinson'; console.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to stderr ``` Example using the `Console` class: ```js const out = getStreamSomehow(); const err = getStreamSomehow(); const myConsole = new console.Console(out, err); myConsole.log('hello world'); // Prints: hello world, to out myConsole.log('hello %s', 'world'); // Prints: hello world, to out myConsole.error(new Error('Whoops, something bad happened')); // Prints: [Error: Whoops, something bad happened], to err const name = 'Will Robinson'; myConsole.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to err ```
@see[source](https://github.com/nodejs/node/blob/v24.x/lib/console.js)
console
.Console.log(message?: any, ...optionalParams: any[]): void (+1 overload)
Prints to `stdout` with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to [`printf(3)`](http://man7.org/linux/man-pages/man3/printf.3.html) (the arguments are all passed to [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args)). ```js const count = 5; console.log('count: %d', count); // Prints: count: 5, to stdout console.log('count:', count); // Prints: count: 5, to stdout ``` See [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args) for more information.
@sincev0.1.100
log
("\n\n---");
var console: Console
The `console` module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers. The module exports two specific components: * A `Console` class with methods such as `console.log()`, `console.error()` and `console.warn()` that can be used to write to any Node.js stream. * A global `console` instance configured to write to [`process.stdout`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstdout) and [`process.stderr`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstderr). The global `console` can be used without importing the `node:console` module. _**Warning**_: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like all other Node.js streams. See the [`note on process I/O`](https://nodejs.org/docs/latest-v24.x/api/process.html#a-note-on-process-io) for more information. Example using the global `console`: ```js console.log('hello world'); // Prints: hello world, to stdout console.log('hello %s', 'world'); // Prints: hello world, to stdout console.error(new Error('Whoops, something bad happened')); // Prints error message and stack trace to stderr: // Error: Whoops, something bad happened // at [eval]:5:15 // at Script.runInThisContext (node:vm:132:18) // at Object.runInThisContext (node:vm:309:38) // at node:internal/process/execution:77:19 // at [eval]-wrapper:6:22 // at evalScript (node:internal/process/execution:76:60) // at node:internal/main/eval_string:23:3 const name = 'Will Robinson'; console.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to stderr ``` Example using the `Console` class: ```js const out = getStreamSomehow(); const err = getStreamSomehow(); const myConsole = new console.Console(out, err); myConsole.log('hello world'); // Prints: hello world, to out myConsole.log('hello %s', 'world'); // Prints: hello world, to out myConsole.error(new Error('Whoops, something bad happened')); // Prints: [Error: Whoops, something bad happened], to err const name = 'Will Robinson'; myConsole.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to err ```
@see[source](https://github.com/nodejs/node/blob/v24.x/lib/console.js)
console
.Console.log(message?: any, ...optionalParams: any[]): void (+1 overload)
Prints to `stdout` with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to [`printf(3)`](http://man7.org/linux/man-pages/man3/printf.3.html) (the arguments are all passed to [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args)). ```js const count = 5; console.log('count: %d', count); // Prints: count: 5, to stdout console.log('count:', count); // Prints: count: 5, to stdout ``` See [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args) for more information.
@sincev0.1.100
log
("Tip: If prompted to authenticate, complete the auth flow and run again.");

Usage with MCP

Use Composio Tool Router with the Vercel AI SDK for a fully managed MCP experience.

Installation

```bash
npm install dotenv @composio/core ai @ai-sdk/anthropic @ai-sdk/mcp
```

Usage

Use Tool Router with the Vercel AI SDK's streamText for streaming completions:

  • Set the COMPOSIO_API_KEY environment variable to your API key from Settings.
  • Set the ANTHROPIC_API_KEY environment variable to your Anthropic API key.
import "dotenv/config";
import { const anthropic: AnthropicProvider
Default Anthropic provider instance.
anthropic
} from "@ai-sdk/anthropic";
import {
function experimental_createMCPClient(config: MCPClientConfig): Promise<MCPClient>
export experimental_createMCPClient
experimental_createMCPClient
as function createMCPClient(config: MCPClientConfig): Promise<MCPClient>createMCPClient } from "@ai-sdk/mcp";
import { class Composio<TProvider extends BaseComposioProvider<unknown, unknown, unknown> = OpenAIProvider>
This is the core class for Composio. It is used to initialize the Composio SDK and provide a global configuration.
Composio
} from "@composio/core";
import { function stepCountIs(stepCount: number): StopCondition<any>stepCountIs,
function streamText<TOOLS extends ToolSet, OUTPUT extends Output<any, any, any> = Output<string, string, never>>({ model, tools, toolChoice, system, prompt, messages, maxRetries, abortSignal, timeout, headers, stopWhen, experimental_output, output, experimental_telemetry: telemetry, prepareStep, providerOptions, experimental_activeTools, activeTools, experimental_repairToolCall: repairToolCall, experimental_transform: transform, experimental_download: download, includeRawChunks, onChunk, onError, onFinish, onAbort, onStepFinish, experimental_context, _internal: { now, generateId }, ...settings }: CallSettings & Prompt & {
    model: LanguageModel;
    tools?: TOOLS;
    toolChoice?: ToolChoice<TOOLS>;
    stopWhen?: StopCondition<NoInfer<TOOLS>> | Array<StopCondition<NoInfer<TOOLS>>>;
    experimental_telemetry?: TelemetrySettings;
    providerOptions?: ProviderOptions;
    experimental_activeTools?: Array<keyof NoInfer<TOOLS>>;
    activeTools?: Array<keyof NoInfer<TOOLS>>;
    output?: OUTPUT;
    experimental_output?: OUTPUT;
    prepareStep?: PrepareStepFunction<NoInfer<TOOLS>>;
    experimental_repairToolCall?: ToolCallRepairFunction<TOOLS>;
    experimental_transform?: StreamTextTransform<TOOLS> | Array<StreamTextTransform<TOOLS>>;
    experimental_download?: DownloadFunction | undefined;
    includeRawChunks?: boolean;
    onChunk?: StreamTextOnChunkCallback<TOOLS>;
    onError?: StreamTextOnErrorCallback;
    onFinish?: StreamTextOnFinishCallback<TOOLS>;
    onAbort?: StreamTextOnAbortCallback<...>;
    onStepFinish?: StreamTextOnStepFinishCallback<TOOLS>;
    experimental_context?: unknown;
    _internal?: {
        now?: () => number;
        generateId?: IdGenerator;
    };
}): StreamTextResult<TOOLS, OUTPUT>
Generate a text and call tools for a given prompt using a language model. This function streams the output. If you do not want to stream the output, use `generateText` instead.
@parammodel - The language model to use.@paramtools - Tools that are accessible to and can be called by the model. The model needs to support calling tools.@paramsystem - A system message that will be part of the prompt.@paramprompt - A simple text prompt. You can either use `prompt` or `messages` but not both.@parammessages - A list of messages. You can either use `prompt` or `messages` but not both.@parammaxOutputTokens - Maximum number of tokens to generate.@paramtemperature - Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopP - Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopK - Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.@parampresencePenalty - Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.@paramfrequencyPenalty - Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.@paramstopSequences - Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.@paramseed - The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.@parammaxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.@paramabortSignal - An optional abort signal that can be used to cancel the call.@paramtimeout - An optional timeout in milliseconds. The call will be aborted if it takes longer than the specified timeout.@paramheaders - Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.@paramonChunk - Callback that is called for each chunk of the stream. The stream processing will pause until the callback promise is resolved.@paramonError - Callback that is called when an error occurs during streaming. You can use it to log errors.@paramonStepFinish - Callback that is called when each step (LLM call) is finished, including intermediate steps.@paramonFinish - Callback that is called when all steps are finished and the response is complete.@returnA result object for accessing different stream types and additional information.
streamText
} from "ai";
// Initialize Composio (API key from env var COMPOSIO_API_KEY or pass explicitly: { apiKey: "your-key" }) const const composio: Composio<OpenAIProvider>composio = new new Composio<OpenAIProvider>(config?: ComposioConfig<OpenAIProvider> | undefined): Composio<OpenAIProvider>
Creates a new instance of the Composio SDK. The constructor initializes the SDK with the provided configuration options, sets up the API client, and initializes all core models (tools, toolkits, etc.).
@paramconfig - Configuration options for the Composio SDK@paramconfig.apiKey - The API key for authenticating with the Composio API@paramconfig.baseURL - The base URL for the Composio API (defaults to production URL)@paramconfig.allowTracking - Whether to allow anonymous usage analytics@paramconfig.provider - The provider to use for this Composio instance (defaults to OpenAIProvider)@example```typescript // Initialize with default configuration const composio = new Composio(); // Initialize with custom API key and base URL const composio = new Composio({ apiKey: 'your-api-key', baseURL: 'https://api.composio.dev' }); // Initialize with custom provider const composio = new Composio({ apiKey: 'your-api-key', provider: new CustomProvider() }); ```
Composio
();
// Unique identifier of the user const const userId: "user_123"userId = "user_123"; // Create a tool router session for the user const {
const mcp: {
    type: "http" | "sse";
    url: string;
    headers?: Record<string, string> | undefined;
}
The MCP server config of the tool router session. Contains the URL, type ('http' or 'sse'), and headers for authentication.
mcp
} = await const composio: Composio<OpenAIProvider>composio.Composio<OpenAIProvider>.create: (userId: string, routerConfig?: ToolRouterCreateSessionConfig) => Promise<ToolRouterSession<unknown, unknown, OpenAIProvider>>
Creates a new tool router session for a user.
@paramuserId The user id to create the session for@paramconfig The config for the tool router session@returnsThe tool router session@example```typescript import { Composio } from '@composio/core'; const composio = new Composio(); const userId = 'user_123'; const session = await composio.create(userId, { manageConnections: true, }); console.log(session.sessionId); console.log(session.url); console.log(session.tools()); ```
create
(const userId: "user_123"userId);
// Create an MCP client to connect to the Composio tool router const const client: MCPClientclient = await function createMCPClient(config: MCPClientConfig): Promise<MCPClient>createMCPClient({ MCPClientConfig.transport: MCPTransportConfig | MCPTransport
Transport configuration for connecting to the MCP server
transport
: {
type: "http" | "sse"type: "http", url: string
The URL of the MCP server.
url
:
const mcp: {
    type: "http" | "sse";
    url: string;
    headers?: Record<string, string> | undefined;
}
The MCP server config of the tool router session. Contains the URL, type ('http' or 'sse'), and headers for authentication.
mcp
.url: stringurl,
headers?: Record<string, string> | undefined
Additional HTTP headers to be sent with requests.
headers
:
const mcp: {
    type: "http" | "sse";
    url: string;
    headers?: Record<string, string> | undefined;
}
The MCP server config of the tool router session. Contains the URL, type ('http' or 'sse'), and headers for authentication.
mcp
.headers?: Record<string, string> | undefinedheaders, // Authentication headers for the Composio MCP server
}, }); const
const tools: Record<string, ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
})>
tools
= await const client: MCPClientclient.
MCPClient.tools<"automatic">(options?: {
    schemas?: "automatic" | undefined;
} | undefined): Promise<Record<string, ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
})>>
tools
();
var console: Console
The `console` module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers. The module exports two specific components: * A `Console` class with methods such as `console.log()`, `console.error()` and `console.warn()` that can be used to write to any Node.js stream. * A global `console` instance configured to write to [`process.stdout`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstdout) and [`process.stderr`](https://nodejs.org/docs/latest-v24.x/api/process.html#processstderr). The global `console` can be used without importing the `node:console` module. _**Warning**_: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like all other Node.js streams. See the [`note on process I/O`](https://nodejs.org/docs/latest-v24.x/api/process.html#a-note-on-process-io) for more information. Example using the global `console`: ```js console.log('hello world'); // Prints: hello world, to stdout console.log('hello %s', 'world'); // Prints: hello world, to stdout console.error(new Error('Whoops, something bad happened')); // Prints error message and stack trace to stderr: // Error: Whoops, something bad happened // at [eval]:5:15 // at Script.runInThisContext (node:vm:132:18) // at Object.runInThisContext (node:vm:309:38) // at node:internal/process/execution:77:19 // at [eval]-wrapper:6:22 // at evalScript (node:internal/process/execution:76:60) // at node:internal/main/eval_string:23:3 const name = 'Will Robinson'; console.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to stderr ``` Example using the `Console` class: ```js const out = getStreamSomehow(); const err = getStreamSomehow(); const myConsole = new console.Console(out, err); myConsole.log('hello world'); // Prints: hello world, to out myConsole.log('hello %s', 'world'); // Prints: hello world, to out myConsole.error(new Error('Whoops, something bad happened')); // Prints: [Error: Whoops, something bad happened], to err const name = 'Will Robinson'; myConsole.warn(`Danger ${name}! Danger!`); // Prints: Danger Will Robinson! Danger!, to err ```
@see[source](https://github.com/nodejs/node/blob/v24.x/lib/console.js)
console
.Console.log(message?: any, ...optionalParams: any[]): void (+1 overload)
Prints to `stdout` with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to [`printf(3)`](http://man7.org/linux/man-pages/man3/printf.3.html) (the arguments are all passed to [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args)). ```js const count = 5; console.log('count: %d', count); // Prints: count: 5, to stdout console.log('count:', count); // Prints: count: 5, to stdout ``` See [`util.format()`](https://nodejs.org/docs/latest-v24.x/api/util.html#utilformatformat-args) for more information.
@sincev0.1.100
log
("Summarizing your emails from today");
const
const stream: StreamTextResult<Record<string, ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
})>, Output<...>>
stream
= await
streamText<Record<string, ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
}) | ({
    description?: string;
    title?: string;
    providerOptions?: ProviderOptions;
    inputSchema: FlexibleSchema<unknown>;
    inputExamples?: {
        input: unknown;
    }[] | undefined;
    needsApproval?: boolean | ToolNeedsApprovalFunction<unknown> | undefined;
    strict?: boolean;
    onInputStart?: (options: ToolExecutionOptions) => void | PromiseLike<void>;
    onInputDelta?: (options: {
        inputTextDelta: string;
    } & ToolExecutionOptions) => void | PromiseLike<void>;
    onInputAvailable?: ((options: {
        ...;
    } & ToolExecutionOptions) => void | PromiseLike<void>) | undefined;
} & ... 4 more ... & {
    ...;
})>, Output<...>>({ model, tools, toolChoice, system, prompt, messages, maxRetries, abortSignal, timeout, headers, stopWhen, experimental_output, output, experimental_telemetry: telemetry, prepareStep, providerOptions, experimental_activeTools, activeTools, experimental_repairToolCall: repairToolCall, experimental_transform: transform, experimental_download: download, includeRawChunks, onChunk, onError, onFinish, onAbort, onStepFinish, experimental_context, _internal: { now, generateId }, ...settings }: CallSettings & (Prompt & {
    ...;
})): StreamTextResult<...>
Generate a text and call tools for a given prompt using a language model. This function streams the output. If you do not want to stream the output, use `generateText` instead.
@parammodel - The language model to use.@paramtools - Tools that are accessible to and can be called by the model. The model needs to support calling tools.@paramsystem - A system message that will be part of the prompt.@paramprompt - A simple text prompt. You can either use `prompt` or `messages` but not both.@parammessages - A list of messages. You can either use `prompt` or `messages` but not both.@parammaxOutputTokens - Maximum number of tokens to generate.@paramtemperature - Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopP - Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.@paramtopK - Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.@parampresencePenalty - Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.@paramfrequencyPenalty - Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.@paramstopSequences - Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.@paramseed - The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.@parammaxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.@paramabortSignal - An optional abort signal that can be used to cancel the call.@paramtimeout - An optional timeout in milliseconds. The call will be aborted if it takes longer than the specified timeout.@paramheaders - Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.@paramonChunk - Callback that is called for each chunk of the stream. The stream processing will pause until the callback promise is resolved.@paramonError - Callback that is called when an error occurs during streaming. You can use it to log errors.@paramonStepFinish - Callback that is called when each step (LLM call) is finished, including intermediate steps.@paramonFinish - Callback that is called when all steps are finished and the response is complete.@returnA result object for accessing different stream types and additional information.
streamText
({
system?: string | SystemModelMessage | SystemModelMessage[] | undefined
System message to include in the prompt. Can be used with `prompt` or `messages`.
system
: "You are a helpful personal assistant. Use Composio tools to take action.",
model: LanguageModel
  model: anthropic("claude-sonnet-4-5"),
  prompt: "Summarize my emails from today",
  stopWhen: stepCountIs(10),
  onStepFinish: (step) => {
    for (const toolCall of step.toolCalls) {
      console.log(`[Using tool: ${toolCall.toolName}]`);
    }
  },
  tools,
});

for await (const textPart of stream.textStream) {
  process.stdout.write(textPart);
}

console.log("\n\n---");
console.log("Tip: If prompted to authenticate, complete the auth flow and run again.");

Usage with direct tools

Setup

The Vercel AI SDK and this provider are available only in Composio's TypeScript SDK.

npm install @composio/vercel

Import the provider and pass it to the Composio constructor:

import { Composio } from "@composio/core";
import { VercelProvider } from "@composio/vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const composio = new Composio({
  provider: new VercelProvider(),
});
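
Per its TypeScript signature, the `VercelProvider` constructor also accepts an optional `strict` boolean. Its exact semantics aren't covered here, so treat this as a sketch of the option rather than a recommendation:

const strictComposio = new Composio({
  // `strict` is typed as an optional boolean on the provider constructor
  provider: new VercelProvider({ strict: true }),
});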

Usage

// create an auth config for gmail
// then create a connected account with an external user id that identifies the user
const externalUserId = "your-external-user-id";
const tools = await composio.tools.get(externalUserId, "GMAIL_SEND_EMAIL");

// env: OPENAI_API_KEY
const { text } = await generateText({
  model: openai("gpt-5"),
  messages: [
    {
      role: "user",
      content: `Send an email to soham.g@composio.dev with the subject 'Hello from composio' and the body 'Congratulations on sending your first email using AI Agents and Composio!'`,
    },
  ],
  tools,
});

console.log("Email sent successfully!", { text });