Building an AI Coding Agent with Vercel AI SDK and Ollama

We don't need no subscriptions

Here we’ll build a CLI-based AI coding agent that can execute bash commands to help you with development tasks.

Prerequisites

- Bun (the script uses a #!/usr/bin/env bun shebang)
- Ollama running locally with a tool-capable model pulled (the default endpoint is http://localhost:11434)
- The ai, @ai-sdk/openai, and zod packages installed
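A minimal setup sketch, assuming a fresh Bun project (the model tag matches the one hard-coded in the script below; swap in whatever model you have pulled):

```shell
# Install the dependencies the script imports
bun add ai @ai-sdk/openai zod

# Pull the model referenced in the script
ollama pull qwen3.5:35b-a3b-coding-nvfp4
```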

The Code

Here’s the complete implementation in index.ts:

#!/usr/bin/env bun

import { createOpenAI } from "@ai-sdk/openai";
import { generateText, stepCountIs, tool, zodSchema } from "ai";
import type { ModelMessage } from "ai";
import { z } from "zod";
import { spawnSync } from "child_process";
import * as readline from "readline";

/** CONSTANTS */
const WORKDIR = process.cwd();
const MODEL = "qwen3.5:35b-a3b-coding-nvfp4";
const BLOCKED_COMMANDS = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"];

/** API */
const ollama = createOpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama",
});

/** TOOLS */
const runBash = (command: string): string => {
  // Naive substring blocklist -- a guardrail, not a sandbox.
  if (BLOCKED_COMMANDS.some((c) => command.includes(c))) {
    return "Error: Danger Will Robinson!!!";
  }
  try {
    const result = spawnSync("sh", ["-c", command], {
      cwd: WORKDIR,
      encoding: "utf8",
      timeout: 120000, // kill runaway commands after two minutes
    });
    // stdout/stderr can be null if the spawn itself failed; cap output size
    // so a huge command result doesn't blow out the model's context.
    return ((result.stdout ?? "") + (result.stderr ?? "")).trim().slice(0, 50000) || "";
  } catch (e) {
    return `Error: ${e}`;
  }
};

const TOOLS = {
  bash: tool({
    description: "Run a shell command",
    inputSchema: zodSchema(z.object({ command: z.string() })),
    execute: async ({ command }: { command: string; }) => {
      const output = runBash(command);
      return output;
    }
  })
};


/** AGENT LOOP */

const agentLoop = async (messages: ModelMessage[]): Promise<string> => {
  const { text } = await generateText({
    model: ollama.chat(MODEL),
    system: `You are a coding agent at ${WORKDIR}. Use bash to solve tasks. Act, don't explain.`,
    messages,
    tools: TOOLS,
    stopWhen: stepCountIs(20), // keep cycling through tool calls, up to 20 steps
  });
  return text;
};

/** INTERFACE */

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});
const history: ModelMessage[] = [];

const prompt = (): void => {
  rl.question(" input >> ", async (query) => {
    history.push({ role: "user", content: query });
    const reply = await agentLoop(history);
    if (reply) {
      // Only record non-empty replies; an empty assistant turn adds nothing.
      history.push({ role: "assistant", content: reply });
      console.log(reply);
    }
    console.log();
    prompt();
  });
};

prompt();

How It Works

1. API Configuration

The code creates an OpenAI-compatible client pointing to your local Ollama instance:

const ollama = createOpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama",
});
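Before wiring up the agent, it can help to confirm the endpoint is reachable. A small sketch (the checkOllama helper is hypothetical, not part of the agent's code; it assumes Ollama's OpenAI-compatible GET /v1/models route):

```typescript
// Hypothetical smoke test: resolves true if a local Ollama server answers
// on its OpenAI-compatible models endpoint, false otherwise.
const checkOllama = async (baseURL = "http://localhost:11434"): Promise<boolean> => {
  try {
    const res = await fetch(`${baseURL}/v1/models`);
    return res.ok;
  } catch {
    return false; // server not running or unreachable
  }
};

checkOllama().then((ok) =>
  console.log(ok ? "Ollama is up" : "Ollama is not reachable")
);
```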

2. Tool Definition

We define a bash tool that the model can call to execute shell commands. The tool uses Zod schema validation to ensure proper input:

const TOOLS = {
  bash: tool({
    description: "Run a shell command",
    inputSchema: zodSchema(z.object({ command: z.string() })),
    execute: async ({ command }) => runBash(command)
  })
};
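Under the hood, the SDK parses the model's tool call, validates its input against the schema, and invokes execute. A conceptual sketch of that dispatch step (hypothetical shapes, not the SDK's real internals):

```typescript
// Conceptual sketch of the dispatch the SDK performs for a tool call.
type ToolCall = { name: string; input: { command: string } };

const tools: Record<string, (input: { command: string }) => Promise<string>> = {
  bash: async ({ command }) => `ran: ${command}`, // stand-in for runBash
};

const dispatch = async (call: ToolCall): Promise<string> => {
  const exec = tools[call.name];
  if (!exec) return `Error: unknown tool ${call.name}`;
  return exec(call.input); // the result is fed back to the model as a tool message
};

dispatch({ name: "bash", input: { command: "ls" } }).then(console.log); // "ran: ls"
```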

3. Security Measures

A blocklist prevents dangerous commands from executing:

const BLOCKED_COMMANDS = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"];
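The check is a plain substring match, so it catches the obvious cases but is easy to sidestep (quoting, indirection, aliases); treat it as a speed bump, not a sandbox. Isolated, the check looks like this:

```typescript
const BLOCKED_COMMANDS = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"];

// True if the command contains any blocklisted substring.
const isBlocked = (command: string): boolean =>
  BLOCKED_COMMANDS.some((c) => command.includes(c));

console.log(isBlocked("sudo apt-get install jq")); // true
console.log(isBlocked("ls -la"));                  // false
console.log(isBlocked("echo hi > /dev/null"));     // true -- "> /dev/" matches
```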

4. Agent Loop

The agentLoop function uses generateText with stopWhen: stepCountIs(20) to handle tool-execution cycles automatically: after each tool result the SDK calls the model again, and the loop ends when the model replies without calling a tool or the step limit is reached.

const agentLoop = async (messages: ModelMessage[]): Promise<string> => {
  const { text } = await generateText({
    model: ollama.chat(MODEL),
    system: `You are a coding agent at ${WORKDIR}. Use bash to solve tasks. Act, don't explain.`,
    messages,
    tools: TOOLS,
    stopWhen: stepCountIs(20),
  });
  return text;
};
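Conceptually, a stop condition is just a predicate over the steps the loop has taken so far. A hand-rolled sketch of the idea (not the AI SDK's actual types):

```typescript
// A step records how many tool calls the model made in that round.
type Step = { toolCalls: number };
type StopCondition = (steps: Step[]) => boolean;

// Stop once n steps have run -- what the SDK's stepCountIs helper expresses.
const stepCountIs = (n: number): StopCondition => (steps) => steps.length >= n;

// Stop when the last step made no tool calls (the model answered in plain text).
const noMoreToolCalls: StopCondition = (steps) =>
  steps.length > 0 && steps[steps.length - 1].toolCalls === 0;

const steps: Step[] = [{ toolCalls: 2 }, { toolCalls: 1 }, { toolCalls: 0 }];
console.log(stepCountIs(3)(steps));  // true -- three steps have run
console.log(noMoreToolCalls(steps)); // true -- the last step was plain text
```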

5. Interactive CLI

The readline interface provides a simple interactive prompt that maintains conversation history:

const prompt = (): void => {
  rl.question(" input >> ", async (query) => {
    history.push({ role: "user", content: query });
    const reply = await agentLoop(history);
    // ...
  });
};

Running the Agent

  1. Ensure Ollama is running with your chosen model
  2. Run the script:
bun run index.ts
  3. Enter your task at the prompt. For example:
input >> Create a new file called hello.txt with the content "Hello World"

The agent will execute the necessary bash commands to complete your task.

Key Concepts

- Tool calling: the model requests shell commands through a Zod-validated bash tool
- Agent loop: stopWhen lets generateText run multiple tool-call steps automatically
- Local inference: Ollama's OpenAI-compatible endpoint means no API subscriptions
- Safety: a command blocklist and output truncation act as lightweight guardrails

This implementation provides a foundation for building more sophisticated AI coding assistants!

