reset readme

This commit is contained in:
Dax Raad 2025-06-13 16:26:34 -04:00
parent f35ce180e2
commit 18ab8faa1d

# opencode
![OpenCode Terminal UI](screenshot.png)
> **⚠️ Notice:** A complete overhaul is in progress on the `dontlook` branch and should be released in mid-June. The README below covers the current version.
A powerful AI assistant for developers, providing intelligent coding assistance directly in your terminal.
## Overview
OpenCode is a Go-based CLI application that brings AI assistance to your terminal. It provides a TUI (Terminal User Interface) for interacting with various AI models to help with coding tasks, debugging, and more.
## Features
- **Interactive TUI**: Built with [Bubble Tea](https://github.com/charmbracelet/bubbletea) for a smooth terminal experience
- **Multiple AI Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, and OpenRouter
- **Session Management**: Save and manage multiple conversation sessions
- **Tool Integration**: AI can execute commands, search files, and modify code
- **Vim-like Editor**: Integrated editor with text input capabilities
- **Persistent Storage**: SQLite database for storing conversations and sessions
- **LSP Integration**: Language Server Protocol support for code intelligence
- **File Change Tracking**: Track and visualize file changes during sessions
- **External Editor Support**: Open your preferred editor for composing messages
- **Named Arguments for Custom Commands**: Create powerful custom commands with multiple named placeholders
## Installation
### Using the Install Script
```bash
# Install the latest version
curl -fsSL https://opencode.ai/install | bash
# Install a specific version
curl -fsSL https://opencode.ai/install | VERSION=0.1.0 bash
```
### Using Homebrew (macOS and Linux)
```bash
brew install sst/tap/opencode
```
### Using AUR (Arch Linux)
```bash
# Using yay
yay -S opencode-bin
# Using paru
paru -S opencode-bin
```
### Using Go
```bash
go install github.com/sst/opencode@latest
```
## Configuration
OpenCode looks for configuration in the following locations:
- `$HOME/.opencode.json`
- `$XDG_CONFIG_HOME/opencode/.opencode.json`
- `./.opencode.json` (local directory)
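For example, a minimal project-local config that enables a single provider can be created directly from the shell (a sketch only; the API key is a placeholder and the full structure is documented below):
```bash
# Create a minimal project-local config enabling only the Anthropic provider.
# The API key is a placeholder; see the full configuration structure below.
cat > ./.opencode.json <<'EOF'
{
  "providers": {
    "anthropic": {
      "apiKey": "your-api-key",
      "disabled": false
    }
  }
}
EOF
```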
### Environment Variables
You can configure OpenCode using environment variables:
| Environment Variable | Purpose |
| -------------------------- | ------------------------------------------------------ |
| `ANTHROPIC_API_KEY` | For Claude models |
| `OPENAI_API_KEY` | For OpenAI models |
| `GEMINI_API_KEY` | For Google Gemini models |
| `VERTEXAI_PROJECT` | For Google Cloud VertexAI (Gemini) |
| `VERTEXAI_LOCATION` | For Google Cloud VertexAI (Gemini) |
| `GROQ_API_KEY` | For Groq models |
| `AWS_ACCESS_KEY_ID` | For AWS Bedrock (Claude) |
| `AWS_SECRET_ACCESS_KEY` | For AWS Bedrock (Claude) |
| `AWS_REGION` | For AWS Bedrock (Claude) |
| `AZURE_OPENAI_ENDPOINT` | For Azure OpenAI models |
| `AZURE_OPENAI_API_KEY` | For Azure OpenAI models (optional when using Entra ID) |
| `AZURE_OPENAI_API_VERSION` | For Azure OpenAI models |
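For example, you can export the relevant keys for your shell session before launching OpenCode (values are placeholders):
```bash
# Placeholder values; set only the variables for the providers you use
export ANTHROPIC_API_KEY="your-api-key"
export OPENAI_API_KEY="your-api-key"
opencode
```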
### Configuration File Structure
```json
{
"data": {
"directory": ".opencode"
},
"providers": {
"openai": {
"apiKey": "your-api-key",
"disabled": false
},
"anthropic": {
"apiKey": "your-api-key",
"disabled": false
},
"groq": {
"apiKey": "your-api-key",
"disabled": false
},
"openrouter": {
"apiKey": "your-api-key",
"disabled": false
}
},
"agents": {
"primary": {
"model": "claude-3.7-sonnet",
"maxTokens": 5000
},
"task": {
"model": "claude-3.7-sonnet",
"maxTokens": 5000
},
"title": {
"model": "claude-3.7-sonnet",
"maxTokens": 80
}
},
"mcpServers": {
"example": {
"type": "stdio",
"command": "path/to/mcp-server",
"env": [],
"args": []
}
},
"lsp": {
"go": {
"disabled": false,
"command": "gopls"
}
},
"shell": {
"path": "/bin/zsh",
"args": ["-l"]
},
"debug": false,
"debugLSP": false
}
```
## Supported AI Models
OpenCode supports a variety of AI models from different providers:
### OpenAI
- GPT-4.1 family (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano)
- GPT-4.5 Preview
- GPT-4o family (gpt-4o, gpt-4o-mini)
- O1 family (o1, o1-pro, o1-mini)
- O3 family (o3, o3-mini)
- O4 Mini
### Anthropic
- Claude 3.5 Sonnet
- Claude 3.5 Haiku
- Claude 3.7 Sonnet
- Claude 3 Haiku
- Claude 3 Opus
### Google
- Gemini 2.5
- Gemini 2.5 Flash
- Gemini 2.0 Flash
- Gemini 2.0 Flash Lite
### AWS Bedrock
- Claude 3.7 Sonnet
### Groq
- Llama 4 Maverick (17b-128e-instruct)
- Llama 4 Scout (17b-16e-instruct)
- QWEN QWQ-32b
- Deepseek R1 distill Llama 70b
- Llama 3.3 70b Versatile
### Azure OpenAI
- GPT-4.1 family (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano)
- GPT-4.5 Preview
- GPT-4o family (gpt-4o, gpt-4o-mini)
- O1 family (o1, o1-mini)
- O3 family (o3, o3-mini)
- O4 Mini
### Google Cloud VertexAI
- Gemini 2.5
- Gemini 2.5 Flash
## Using Bedrock Models
To use Bedrock models with OpenCode you need three things:
1. Valid AWS credentials (the env vars: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION`)
2. Access to the corresponding model in AWS Bedrock in your region.
   a. You can request access in the AWS console on the Bedrock -> "Model access" page.
3. A correct configuration file. You don't need the `providers` key; instead, prefix each agent's model with `bedrock.` followed by a valid model name. For now only Claude 3.7 Sonnet is supported.
```json
{
"agents": {
"primary": {
"model": "bedrock.claude-3.7-sonnet",
"maxTokens": 5000,
"reasoningEffort": ""
},
"task": {
"model": "bedrock.claude-3.7-sonnet",
"maxTokens": 5000,
"reasoningEffort": ""
},
"title": {
"model": "bedrock.claude-3.7-sonnet",
"maxTokens": 80,
"reasoningEffort": ""
}
}
}
```
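Putting the pieces together, a typical Bedrock session exports the credentials listed in the table above and then starts OpenCode (all values below are placeholders; pick a region where you have model access):
```bash
# Placeholder AWS credentials; use a region where Bedrock model access is granted
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
opencode
```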
## Interactive Mode Usage
```bash
# Start OpenCode
opencode
# Start with debug logging
opencode -d
# Start with a specific working directory
opencode -c /path/to/project
```
## Non-interactive Prompt Mode
You can run OpenCode in non-interactive mode by passing a prompt directly as a command-line argument or by piping text into the command. This is useful for scripting, automation, or when you want a quick answer without launching the full TUI.
```bash
# Run a single prompt and print the AI's response to the terminal
opencode -p "Explain the use of context in Go"
# Pipe input to OpenCode (equivalent to using -p flag)
echo "Explain the use of context in Go" | opencode
# Get response in JSON format
opencode -p "Explain the use of context in Go" -f json
# Or with piped input
echo "Explain the use of context in Go" | opencode -f json
# Run without showing the spinner
opencode -p "Explain the use of context in Go" -q
# Or with piped input
echo "Explain the use of context in Go" | opencode -q
# Enable verbose logging to stderr
opencode -p "Explain the use of context in Go" --verbose
# Or with piped input
echo "Explain the use of context in Go" | opencode --verbose
# Restrict the agent to only use specific tools
opencode -p "Explain the use of context in Go" --allowedTools=view,ls,glob
# Or with piped input
echo "Explain the use of context in Go" | opencode --allowedTools=view,ls,glob
# Prevent the agent from using specific tools
opencode -p "Explain the use of context in Go" --excludedTools=bash,edit
# Or with piped input
echo "Explain the use of context in Go" | opencode --excludedTools=bash,edit
```
In this mode, OpenCode will process your prompt, print the result to standard output, and then exit. All permissions are auto-approved for the session.
### Tool Restrictions
You can control which tools the AI assistant has access to in non-interactive mode:
- `--allowedTools`: Comma-separated list of tools that the agent is allowed to use. Only these tools will be available.
- `--excludedTools`: Comma-separated list of tools that the agent is not allowed to use. All other tools will be available.
These flags are mutually exclusive - you can use either `--allowedTools` or `--excludedTools`, but not both at the same time.
### Output Formats
OpenCode supports the following output formats in non-interactive mode:
| Format | Description |
| ------ | ------------------------------- |
| `text` | Plain text output (default) |
| `json` | Output wrapped in a JSON object |
The output format is implemented as a strongly-typed `OutputFormat` in the codebase, ensuring type safety and validation when processing outputs.
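Because the `json` format wraps the response in a single object, it is convenient for scripting. As a sketch, you can pipe the output through `jq` to pretty-print it (the exact field names may vary between versions, so this example only formats the object rather than extracting specific keys):
```bash
# Pretty-print the JSON-wrapped response; field names may differ across versions
opencode -p "Explain the use of context in Go" -f json -q | jq .
```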
## Command-line Flags
| Flag | Short | Description |
| ----------------- | ----- | --------------------------------------------------- |
| `--help` | `-h` | Display help information |
| `--debug` | `-d` | Enable debug mode |
| `--cwd` | `-c` | Set current working directory |
| `--prompt` | `-p` | Run a single prompt in non-interactive mode |
| `--output-format` | `-f` | Output format for non-interactive mode (text, json) |
| `--quiet` | `-q` | Hide spinner in non-interactive mode |
| `--verbose` | | Display logs to stderr in non-interactive mode |
| `--allowedTools` | | Restrict the agent to only use specified tools |
| `--excludedTools` | | Prevent the agent from using specified tools |
## Keyboard Shortcuts
### Global Shortcuts
| Shortcut | Action |
| -------- | ------------------------------------------------------- |
| `Ctrl+C` | Quit application |
| `Ctrl+?` | Toggle help dialog |
| `?` | Toggle help dialog (when not in editing mode) |
| `Ctrl+L` | View logs |
| `Ctrl+A` | Switch session |
| `Ctrl+K` | Command dialog |
| `Ctrl+O` | Toggle model selection dialog |
| `Esc` | Close current overlay/dialog or return to previous mode |
### Chat Page Shortcuts
| Shortcut | Action |
| -------- | --------------------------------------- |
| `Ctrl+N` | Create new session |
| `Ctrl+X` | Cancel current operation/generation |
| `i` | Focus editor (when not in writing mode) |
| `Esc` | Exit writing mode and focus messages |
### Editor Shortcuts
| Shortcut | Action |
| ------------------- | ----------------------------------------- |
| `Ctrl+S` | Send message (when editor is focused) |
| `Enter` or `Ctrl+S` | Send message (when editor is not focused) |
| `Ctrl+E` | Open external editor |
| `Esc` | Blur editor and focus messages |
### Session Dialog Shortcuts
| Shortcut | Action |
| ---------- | ---------------- |
| `↑` or `k` | Previous session |
| `↓` or `j` | Next session |
| `Enter` | Select session |
| `Esc` | Close dialog |
### Model Dialog Shortcuts
| Shortcut | Action |
| ---------- | ----------------- |
| `↑` or `k` | Move up |
| `↓` or `j` | Move down |
| `←` or `h` | Previous provider |
| `→` or `l` | Next provider |
| `Esc` | Close dialog |
### Permission Dialog Shortcuts
| Shortcut | Action |
| ----------------------- | ---------------------------- |
| `←` or `left` | Switch options left |
| `→` or `right` or `tab` | Switch options right |
| `Enter` or `space` | Confirm selection |
| `a` | Allow permission |
| `A` | Allow permission for session |
| `d` | Deny permission |
### Logs Page Shortcuts
| Shortcut | Action |
| ------------------ | ------------------- |
| `Backspace` or `q` | Return to chat page |
## AI Assistant Tools
OpenCode's AI assistant has access to various tools to help with coding tasks:
### File and Code Tools
| Tool | Description | Parameters |
| ------------- | --------------------------- | ---------------------------------------------------------------------------------------- |
| `glob` | Find files by pattern | `pattern` (required), `path` (optional) |
| `grep` | Search file contents | `pattern` (required), `path` (optional), `include` (optional), `literal_text` (optional) |
| `ls` | List directory contents | `path` (optional), `ignore` (optional array of patterns) |
| `view` | View file contents | `file_path` (required), `offset` (optional), `limit` (optional) |
| `write` | Write to files | `file_path` (required), `content` (required) |
| `edit` | Edit files | Various parameters for file editing |
| `patch` | Apply patches to files | `file_path` (required), `diff` (required) |
| `diagnostics` | Get diagnostics information | `file_path` (optional) |
### Other Tools
| Tool | Description | Parameters |
| ------- | ------------------------------- | ----------------------------------------------------------- |
| `bash` | Execute shell commands | `command` (required), `timeout` (optional) |
| `fetch` | Fetch data from URLs | `url` (required), `format` (required), `timeout` (optional) |
| `agent` | Run sub-tasks with the AI agent | `prompt` (required) |
### Shell Configuration
OpenCode allows you to configure the shell used by the `bash` tool. By default, it uses:
1. The shell specified in the config file (if provided)
2. The shell from the `$SHELL` environment variable (if available)
3. `/bin/bash` as a fallback if neither of the above is available
To configure a custom shell, add a `shell` section to your `.opencode.json` configuration file:
```json
{
"shell": {
"path": "/bin/zsh",
"args": ["-l"]
}
}
```
You can specify any shell executable and custom arguments:
```json
{
"shell": {
"path": "/usr/bin/fish",
"args": []
}
}
```
## Architecture
OpenCode is built with a modular architecture:
- **cmd**: Command-line interface using Cobra
- **internal/app**: Core application services
- **internal/config**: Configuration management
- **internal/db**: Database operations and migrations
- **internal/llm**: LLM providers and tools integration
- **internal/tui**: Terminal UI components and layouts
- **internal/logging**: Logging infrastructure
- **internal/message**: Message handling
- **internal/session**: Session management
- **internal/lsp**: Language Server Protocol integration
## Custom Commands
OpenCode supports custom commands that can be created by users to quickly send predefined prompts to the AI assistant.
### Creating Custom Commands
Custom commands are predefined prompts stored as Markdown files in one of three locations:
1. **User Commands** (prefixed with `user:`):
```
$XDG_CONFIG_HOME/opencode/commands/
```
(typically `~/.config/opencode/commands/` on Linux/macOS)
or
```
$HOME/.opencode/commands/
```
2. **Project Commands** (prefixed with `project:`):
```
<PROJECT DIR>/.opencode/commands/
```
Each `.md` file in these directories becomes a custom command. The file name (without extension) becomes the command ID.
For example, creating a file at `~/.config/opencode/commands/prime-context.md` with content:
```markdown
RUN git ls-files
READ README.md
```
This creates a command called `user:prime-context`.
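To create this command from the shell, you can write the file directly (using the user command directory shown above):
```bash
# Create the user commands directory and the prime-context command file
mkdir -p ~/.config/opencode/commands
cat > ~/.config/opencode/commands/prime-context.md <<'EOF'
RUN git ls-files
READ README.md
EOF
```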
### Command Arguments
OpenCode supports named arguments in custom commands using placeholders in the format `$NAME` (where NAME consists of uppercase letters, numbers, and underscores, and must start with a letter).
For example:
```markdown
# Fetch Context for Issue $ISSUE_NUMBER
RUN gh issue view $ISSUE_NUMBER --json title,body,comments
RUN git grep --author="$AUTHOR_NAME" -n .
RUN grep -R "$SEARCH_PATTERN" $DIRECTORY
```
When you run a command with arguments, OpenCode will prompt you to enter values for each unique placeholder. Named arguments provide several benefits:
- Clear identification of what each argument represents
- Ability to use the same argument multiple times
- Better organization for commands with multiple inputs
### Organizing Commands
You can organize commands in subdirectories:
```
~/.config/opencode/commands/git/commit.md
```
This creates a command with ID `user:git:commit`.
### Using Custom Commands
1. Press `Ctrl+K` to open the command dialog
2. Select your custom command (prefixed with either `user:` or `project:`)
3. Press Enter to execute the command
The content of the command file will be sent as a message to the AI assistant.
## MCP (Model Context Protocol)
OpenCode implements the Model Context Protocol (MCP) to extend its capabilities through external tools. MCP provides a standardized way for the AI assistant to interact with external services and tools.
### MCP Features
- **External Tool Integration**: Connect to external tools and services via a standardized protocol
- **Tool Discovery**: Automatically discover available tools from MCP servers
- **Multiple Connection Types**:
- **Stdio**: Communicate with tools via standard input/output
- **SSE**: Communicate with tools via Server-Sent Events
- **Security**: Permission system for controlling access to MCP tools
### Configuring MCP Servers
MCP servers are defined in the configuration file under the `mcpServers` section:
```json
{
"mcpServers": {
"example": {
"type": "stdio",
"command": "path/to/mcp-server",
"env": [],
"args": []
},
"web-example": {
"type": "sse",
"url": "https://example.com/mcp",
"headers": {
"Authorization": "Bearer token"
}
}
}
}
```
### MCP Tool Usage
Once configured, MCP tools are automatically available to the AI assistant alongside built-in tools. They follow the same permission model as other tools, requiring user approval before execution.
## LSP (Language Server Protocol)
OpenCode integrates with Language Server Protocol to provide code intelligence features across multiple programming languages.
### LSP Features
- **Multi-language Support**: Connect to language servers for different programming languages
- **Diagnostics**: Receive error checking and linting information
- **File Watching**: Automatically notify language servers of file changes
### Configuring LSP
Language servers are configured in the configuration file under the `lsp` section:
```json
{
"lsp": {
"go": {
"disabled": false,
"command": "gopls"
},
"typescript": {
"disabled": false,
"command": "typescript-language-server",
"args": ["--stdio"]
}
}
}
```
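The configured `command` must be available on your `PATH`. For the two servers above, typical installation commands look like this (assuming the Go and Node.js toolchains are already installed):
```bash
# Install gopls (Go) and typescript-language-server (TypeScript/JavaScript)
go install golang.org/x/tools/gopls@latest
npm install -g typescript-language-server typescript
```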
### LSP Integration with AI
The AI assistant can access LSP features through the `diagnostics` tool, allowing it to:
- Check for errors in your code
- Suggest fixes based on diagnostics
While the LSP client implementation supports the full LSP protocol (including completions, hover, definition, etc.), currently only diagnostics are exposed to the AI assistant.
## Development
### Prerequisites
- Go 1.24.0 or higher
### Building from Source
```bash
# Clone the repository
git clone https://github.com/sst/opencode.git
cd opencode
# Build
go build -o opencode
# Run
./opencode
```
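If you want to run the test suite while developing (a standard Go workflow; assumes the repository's packages follow the usual `go test` layout):
```bash
# Run all tests in the repository
go test ./...
```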
## Acknowledgments
OpenCode gratefully acknowledges the contributions and support from these key individuals:
- [@isaacphi](https://github.com/isaacphi) - For the [mcp-language-server](https://github.com/isaacphi/mcp-language-server) project which provided the foundation for our LSP client implementation
- [@adamdottv](https://github.com/adamdottv) - For the design direction and UI/UX architecture
Special thanks to the broader open source community whose tools and libraries have made this project possible.
## License
OpenCode is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contributing
Contributions are welcome! Here's how you can contribute:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Please make sure to update tests as appropriate and follow the existing code style.