
Anthropic Computer Use API: Desktop Automation Guide

Automate desktop tasks with Claude Computer Use: current beta headers and tool versions, screenshot token costs, coordinate mapping, and VM safety guidance.

Digital Applied Team
October 15, 2025 • Updated April 30, 2026
10 min read

Key Takeaways

Revolutionary Desktop Control: Claude can autonomously interact with computers like humans do—viewing screens, moving cursors, clicking buttons, and typing text to complete multi-step tasks.
Three Core Tools: Computer tool for mouse/keyboard input, Text Editor for file operations, and Bash tool for system commands work together for comprehensive automation.
Coordinate-Aware Control: Claude returns screen coordinates that your application must map carefully to the active display, especially after screenshot resizing.
Safety-First Deployment: Run Computer Use in virtual machines or containers with minimal privileges to mitigate security risks like jailbreaking and prompt injection.
Experimental Beta Status: Currently cumbersome and error-prone but improving rapidly. Start with low-risk tasks and provide feedback to shape future development.

What Is the Computer Use API?

Computer Use represents a paradigm shift in AI capabilities. Rather than building specialized tools for individual tasks, Anthropic is teaching Claude general computer skills—enabling it to use the same interfaces, applications, and workflows that humans use every day.

Released in public beta on October 22, 2024, Computer Use made Claude one of the first frontier AI systems to offer autonomous desktop control through an API tool. As of April 30, 2026, Anthropic's current Computer Use path uses the computer-use-2025-11-24 beta header with supported Claude 4.x models, while older model branches use earlier beta headers.

What Makes Computer Use Different?

Traditional AI tools require custom integrations for each application. Computer Use eliminates this bottleneck by teaching Claude to interact with any software interface—web browsers, desktop applications, command-line tools—just like a human user would.

This means Claude can automate complex workflows across multiple applications without needing API access or custom integrations for each tool.

Key Capabilities

  • Visual Understanding: Analyze screenshots to understand UI elements, content, and context
  • Coordinate-Aware Mouse Control: Move cursor and click using coordinates your application maps to the active display
  • Keyboard Input: Type text, use keyboard shortcuts, and navigate interfaces
  • Multi-Step Workflows: Chain actions together to complete complex tasks
  • Error Recovery: Adapt to unexpected UI changes and error conditions

Current API Version

As of April 30, 2026, the latest Computer Use beta uses anthropic-beta: computer-use-2025-11-24 with current Claude 4.x models such as claude-opus-4-7 and claude-sonnet-4-6. The older computer-use-2025-01-24 header remains relevant for older compatible models.

How Computer Use Works

Computer Use operates through a continuous feedback loop where Claude analyzes the current screen state, decides on actions, and observes the results—similar to how a human user interacts with a computer.

The Execution Cycle

Step 1: Screenshot Analysis

Claude captures and analyzes a screenshot of the current desktop state. Using its vision capabilities, it identifies UI elements, reads text, recognizes buttons, and understands the application context.

This visual understanding enables Claude to work with any application, even those without accessibility features or APIs.

Step 2: Action Planning

Based on the screenshot and task objective, Claude determines the next action. This might be moving the mouse to specific coordinates, clicking a button, typing text, or executing a keyboard shortcut.

The planning process considers UI patterns, common workflows, and task requirements to select optimal actions.

Step 3: Coordinate Mapping

Claude returns coordinates for the screenshot it sees, and your application executes those actions in the desktop environment. If you resize screenshots for reliability or cost, map returned coordinates back to the native display before moving the cursor.

This coordinate mapping step is especially important for high-resolution displays, remote desktops, and environments where browser zoom or display scaling can shift UI targets.
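A minimal sketch of that mapping, assuming the screenshot was downscaled before being sent to the model (map_to_native and its argument names are illustrative):

def map_to_native(x: int, y: int,
                  sent_size: tuple[int, int],
                  native_size: tuple[int, int]) -> tuple[int, int]:
    """Map coordinates Claude returns (relative to the screenshot it saw)
    back to the native display resolution before moving the cursor."""
    scale_x = native_size[0] / sent_size[0]
    scale_y = native_size[1] / sent_size[1]
    return round(x * scale_x), round(y * scale_y)

# Example: a 2560x1600 display captured and downscaled to 1280x800
native_x, native_y = map_to_native(640, 400, (1280, 800), (2560, 1600))
# -> (1280, 800): the center of the screenshot maps to the center of the display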

Step 4: Action Execution

Claude executes the planned action using the Computer tool's mouse and keyboard functions. The action modifies the desktop state—opening applications, filling forms, navigating menus, etc.

After execution, Claude captures a new screenshot and evaluates whether the action succeeded or requires adjustment.

Step 5: Goal Evaluation

Claude compares the new screen state against the task objective. If the goal is achieved, the workflow completes. If not, Claude plans the next action and continues the cycle.

This iterative approach enables Claude to handle unexpected UI changes, error dialogs, and multi-step workflows dynamically.

Why Pixel Counting Matters

When you ask Claude to click a button, it needs to translate "the blue submit button in the lower right" into exact pixel coordinates like (1245, 867). Traditional computer vision approaches struggle with this translation across different screen sizes and layouts.

Anthropic's solution was to train Claude to count pixels from reference points (screen edges, known UI elements) to target locations. This skill enables reliable cursor positioning regardless of screen resolution, DPI scaling, or application layout.

API Setup & Configuration

Setting up Computer Use requires the Anthropic SDK and proper configuration for desktop automation. The fastest way to get started is using Anthropic's official Docker container with a preconfigured environment.

Quick Start with Docker

Anthropic's current quickstart image spins up a desktop, VNC access, web UI, and example agent loop for Computer Use:

export ANTHROPIC_API_KEY=%your_api_key%

docker run \
  -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  -v $HOME/.anthropic:/home/computeruse/.anthropic \
  -p 5900:5900 -p 8501:8501 -p 6080:6080 -p 8080:8080 \
  -it ghcr.io/anthropics/anthropic-quickstarts:computer-use-demo-latest

Use http://localhost:8080 for the combined UI, http://localhost:6080/vnc.html for desktop-only access, or vnc://localhost:5900 for direct VNC. The container includes:

  • Ubuntu 22.04 with XFCE desktop environment
  • Firefox browser for web automation
  • Reference tool implementations for mouse/keyboard control
  • Python environment with Anthropic SDK

Python SDK Installation

For custom implementations, install the required libraries:

pip install anthropic pyautogui pillow

# For screenshot capture
pip install mss

# For image processing
pip install opencv-python numpy

Basic API Configuration

import anthropic
import pyautogui
from PIL import Image
import io
import base64

# Initialize Anthropic client
client = anthropic.Anthropic(
    api_key="your_api_key_here"
)

# Configure Computer Use beta header for current Claude 4.x models
COMPUTER_USE_BETA = "computer-use-2025-11-24"

# Define available tools
tools = [
    {
        "type": "computer_20251124",
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
        "display_number": 1,
        "enable_zoom": True
    },
    {
        "type": "text_editor_20250728",
        "name": "str_replace_based_edit_tool"
    },
    {
        "type": "bash_20250124",
        "name": "bash"
    }
]

API Pricing & Limits

Computer Use follows standard Claude API pricing for the selected model plus tool-use and image-token overhead. Additional considerations:

  • System Prompt: 466-499 added tokens for automated tool selection
  • Tool Definitions: 735 input tokens for the Claude 4.x computer tool definition
  • Screenshot Images: approximately (width × height) / 750 tokens before provider resizing, so resolution choices materially affect cost (see the estimate sketch after this list)
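
A quick way to see the impact of resolution, using that approximate formula (the function name is illustrative):

def estimate_screenshot_tokens(width: int, height: int) -> int:
    """Rough image-token estimate per the ~(width x height) / 750 rule."""
    return (width * height) // 750

print(estimate_screenshot_tokens(1920, 1080))  # ~2764 tokens per screenshot
print(estimate_screenshot_tokens(1024, 768))   # ~1048 tokens per screenshot

Over a 20-step workflow that captures one screenshot per turn, the smaller capture saves roughly 34,000 input tokens.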

Screenshot Analysis

Screenshot analysis is the foundation of Computer Use. Claude needs to understand what's currently visible on screen to decide which actions to take. Let's explore how to capture, encode, and send screenshots to the API.

Capturing Screenshots

import mss
import base64
from PIL import Image
from io import BytesIO

def capture_screenshot():
    """Capture screenshot and encode as base64"""
    with mss.mss() as sct:
        # Capture primary monitor
        monitor = sct.monitors[1]
        screenshot = sct.grab(monitor)

        # Convert to PIL Image
        img = Image.frombytes(
            'RGB',
            screenshot.size,
            screenshot.rgb
        )

        # Prefer XGA/WXGA-style captures for reliability and cost
        # If you downscale, map returned coordinates back to native resolution.
        max_width = 1024
        if img.width > max_width:
            ratio = max_width / img.width
            new_size = (max_width, int(img.height * ratio))
            img = img.resize(new_size, Image.Resampling.LANCZOS)

        # Convert to base64
        buffer = BytesIO()
        img.save(buffer, format='PNG', optimize=True)
        img_str = base64.b64encode(buffer.getvalue()).decode()

        return {
            'type': 'image',
            'source': {
                'type': 'base64',
                'media_type': 'image/png',
                'data': img_str
            }
        }

# Example usage
screenshot = capture_screenshot()

Sending Screenshots to Claude

def analyze_screen(task_description: str):
    """Send screenshot to Claude for analysis"""
    screenshot = capture_screenshot()

    response = client.beta.messages.create(
        model="claude-opus-4-7",
        max_tokens=1024,
        tools=tools,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"Task: {task_description}\n\nAnalyze this screenshot and determine the next action."
                    },
                    screenshot
                ]
            }
        ],
        betas=[COMPUTER_USE_BETA]
    )

    return response

# Example: Analyze a login screen
result = analyze_screen("Fill out the login form with username 'demo' and click submit")

Understanding Claude's Analysis

Claude's vision model analyzes screenshots to identify:

  • UI Elements: Buttons, text fields, dropdowns, menus, checkboxes
  • Text Content: Labels, instructions, error messages, form fields
  • Visual Context: Application state, active windows, loaded pages
  • Spatial Layout: Element positions, sizes, relationships

Performance Optimization

Screenshots consume significant tokens. Optimize by:

  • Starting with 1024x768 or WXGA captures for reliability, then mapping coordinates back to the native display
  • Using PNG compression with optimize=True
  • Capturing only relevant screen regions when possible (see the region-capture sketch after this list)
  • Reducing screenshot frequency in repeated workflows
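
For region capture, a hedged variant of capture_screenshot; mss accepts a bounding box, and the capture_region name and example coordinates are illustrative:

import mss
import base64
from io import BytesIO
from PIL import Image

def capture_region(left: int, top: int, width: int, height: int) -> dict:
    """Capture a sub-region of the screen instead of the full desktop."""
    with mss.mss() as sct:
        bbox = {"left": left, "top": top, "width": width, "height": height}
        shot = sct.grab(bbox)
        img = Image.frombytes('RGB', shot.size, shot.rgb)

        buffer = BytesIO()
        img.save(buffer, format='PNG', optimize=True)
        return {
            'type': 'image',
            'source': {
                'type': 'base64',
                'media_type': 'image/png',
                'data': base64.b64encode(buffer.getvalue()).decode()
            }
        }

# Example: capture an 800x600 area around a form in the top-left corner
form_screenshot = capture_region(0, 0, 800, 600)

Remember that coordinates Claude returns for a region capture are relative to that region, so offset them by (left, top) before acting on the full display.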

Mouse & Keyboard Control

The Computer tool provides mouse and keyboard functions that Claude invokes based on screenshot analysis. Let's explore how to implement and handle these control mechanisms.

Mouse Operations

Claude uses tool calls to control the mouse. Here's how to implement the handlers:

import pyautogui
import time

def handle_mouse_move(x: int, y: int):
    """Move cursor to specific coordinates"""
    pyautogui.moveTo(x, y, duration=0.2)
    time.sleep(0.1)
    return {"success": True, "action": f"moved to ({x}, {y})"}

def handle_left_click():
    """Perform left mouse click"""
    pyautogui.click()
    time.sleep(0.2)
    return {"success": True, "action": "left click"}

def handle_right_click():
    """Perform right mouse click"""
    pyautogui.rightClick()
    time.sleep(0.2)
    return {"success": True, "action": "right click"}

def handle_double_click():
    """Perform double click"""
    pyautogui.doubleClick()
    time.sleep(0.2)
    return {"success": True, "action": "double click"}

def handle_mouse_drag(start_x: int, start_y: int, end_x: int, end_y: int):
    """Drag from start to end coordinates"""
    pyautogui.moveTo(start_x, start_y)
    time.sleep(0.1)
    pyautogui.dragTo(end_x, end_y, duration=0.5)
    time.sleep(0.2)
    return {"success": True, "action": f"dragged from ({start_x}, {start_y}) to ({end_x}, {end_y})"}

Keyboard Operations

def handle_type_text(text: str):
    """Type text with natural typing speed"""
    pyautogui.write(text, interval=0.05)
    time.sleep(0.2)
    return {"success": True, "action": f"typed text: {text[:50]}..."}

def handle_key_press(key: str):
    """Press a single key, or a combination like 'ctrl+s'"""
    if '+' in key:
        pyautogui.hotkey(*key.split('+'))
    else:
        pyautogui.press(key)
    time.sleep(0.1)
    return {"success": True, "action": f"pressed key: {key}"}

def handle_hotkey(*keys):
    """Press key combination (e.g., Ctrl+C)"""
    pyautogui.hotkey(*keys)
    time.sleep(0.2)
    return {"success": True, "action": f"hotkey: {'+'.join(keys)}"}

# Example keyboard shortcuts
def common_shortcuts():
    return {
        "copy": lambda: handle_hotkey('ctrl', 'c'),
        "paste": lambda: handle_hotkey('ctrl', 'v'),
        "save": lambda: handle_hotkey('ctrl', 's'),
        "undo": lambda: handle_hotkey('ctrl', 'z'),
        "select_all": lambda: handle_hotkey('ctrl', 'a'),
        "tab": lambda: handle_key_press('tab'),
        "enter": lambda: handle_key_press('enter'),
        "escape": lambda: handle_key_press('escape')
    }

Tool Call Handler

Process Claude's tool calls to execute the requested actions:

def process_tool_call(tool_use):
    """Execute tool calls from Claude's response"""
    tool_name = tool_use.name
    tool_input = tool_use.input

    # Computer tool actions
    if tool_name == "computer":
        action = tool_input.get("action")

        if action == "mouse_move":
            return handle_mouse_move(
                tool_input["coordinate"][0],
                tool_input["coordinate"][1]
            )
        elif action == "left_click":
            return handle_left_click()
        elif action == "right_click":
            return handle_right_click()
        elif action == "double_click":
            return handle_double_click()
        elif action == "type":
            return handle_type_text(tool_input["text"])
        elif action == "key":
            return handle_key_press(tool_input["text"])
        elif action == "screenshot":
            return capture_screenshot()

    # Text editor tool
    elif tool_name == "str_replace_based_edit_tool":
        command = tool_input.get("command")
        if command == "view":
            with open(tool_input["path"], 'r') as f:
                return {"content": f.read()}
        elif command == "str_replace":
            # Implement file editing logic
            pass

    # Bash tool
    elif tool_name == "bash":
        import subprocess
        result = subprocess.run(
            tool_input["command"],
            shell=True,
            capture_output=True,
            text=True
        )
        return {
            "stdout": result.stdout,
            "stderr": result.stderr,
            "exit_code": result.returncode
        }

    return {"error": "Unknown tool or action"}

Challenges with UI Elements

Some UI elements are trickier for Claude to manipulate using mouse movements:

  • Dropdowns: May require multiple clicks or hovering
  • Scrollbars: Dragging can be imprecise
  • Sliders: Fine-tuning values is difficult

Solution: Prompt Claude to use keyboard shortcuts when available (Tab, Arrow keys, Enter) for more reliable interactions.
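
One way to apply this is through the system prompt. The wording below is illustrative, reusing the client, tools, and messages from earlier:

KEYBOARD_HINT = (
    "When interacting with dropdowns, sliders, or scrollable areas, "
    "prefer keyboard navigation (Tab, Arrow keys, Enter, Page Down) "
    "over fine-grained mouse dragging whenever the UI supports it."
)

response = client.beta.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    system=KEYBOARD_HINT,
    tools=tools,
    messages=messages,
    betas=[COMPUTER_USE_BETA]
)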

Workflow Automation Examples

Let's explore real-world automation workflows that demonstrate Computer Use capabilities and best practices.

Example 1: Form Automation

Automatically fill out web forms with data from a structured source:

def automate_form_filling(form_data: dict):
    """Fill web form with provided data"""

    # Initial prompt with form data
    task = f"""
    Fill out the registration form with this data:
    - First Name: {form_data['first_name']}
    - Last Name: {form_data['last_name']}
    - Email: {form_data['email']}
    - Phone: {form_data['phone']}

    Then click the Submit button.
    """

    messages = [{
        "role": "user",
        "content": [
            {"type": "text", "text": task},
            capture_screenshot()
        ]
    }]

    # Automation loop
    max_iterations = 20
    for i in range(max_iterations):
        response = client.beta.messages.create(
            model="claude-opus-4-7",
            max_tokens=2048,
            tools=tools,
            messages=messages,
            betas=[COMPUTER_USE_BETA]
        )

        # Process tool calls
        if response.stop_reason == "tool_use":
            tool_results = []
            for content in response.content:
                if content.type == "tool_use":
                    result = process_tool_call(content)
                    tool_results.append({
                        "type": "tool_result",
                        "tool_use_id": content.id,
                        "content": str(result)
                    })

            # Add assistant response and tool results
            messages.append({"role": "assistant", "content": response.content})
            messages.append({"role": "user", "content": tool_results})

        # Check completion
        elif response.stop_reason == "end_turn":
            print("Form submission complete")
            break

    return True
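
Examples 2-4 below call an automate_task helper that was left undefined. A minimal sketch, following the same loop as Example 1 and returning the success/notes dict the later examples expect:

def automate_task(task: str, max_iterations: int = 20) -> dict:
    """Run the screenshot/act loop for one task and summarize the outcome."""
    messages = [{
        "role": "user",
        "content": [{"type": "text", "text": task}, capture_screenshot()]
    }]

    for _ in range(max_iterations):
        response = client.beta.messages.create(
            model="claude-opus-4-7",
            max_tokens=2048,
            tools=tools,
            messages=messages,
            betas=[COMPUTER_USE_BETA]
        )

        # No more tool calls means Claude considers the task finished
        if response.stop_reason != "tool_use":
            notes = "".join(c.text for c in response.content if c.type == "text")
            return {"success": True, "notes": notes}

        tool_results = []
        for content in response.content:
            if content.type == "tool_use":
                result = process_tool_call(content)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": content.id,
                    "content": str(result)
                })

        messages.append({"role": "assistant", "content": response.content})
        messages.append({"role": "user", "content": tool_results})

    return {"success": False, "notes": "Hit iteration limit"}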

Example 2: Data Entry from Spreadsheets

Transfer data from Excel/CSV into web applications:

import pandas as pd

def bulk_data_entry(csv_path: str, app_url: str):
    """Enter data from CSV into web application"""

    # Load data
    df = pd.read_csv(csv_path)

    # Open application
    task = f"Open web browser and navigate to {app_url}"
    automate_task(task)

    # Process each row
    for index, row in df.iterrows():
        print(f"Processing row {index + 1}/{len(df)}")

        task = f"""
        Enter the following data into the form:
        - Product: {row['product_name']}
        - Quantity: {row['quantity']}
        - Price: {row['price']}
        - Category: {row['category']}

        Click Save and wait for confirmation.
        Then click 'Add Another' to continue.
        """

        automate_task(task)

        # Brief pause between entries
        time.sleep(2)

    print("Data entry complete")

Example 3: Automated Testing

Test user interfaces by simulating user interactions:

def test_checkout_flow():
    """Test e-commerce checkout process"""

    test_steps = [
        "Navigate to the product page",
        "Click 'Add to Cart' button",
        "Click the shopping cart icon",
        "Verify product appears in cart",
        "Click 'Proceed to Checkout'",
        "Fill shipping address with test data",
        "Select shipping method: Standard",
        "Enter payment info (test card)",
        "Click 'Place Order'",
        "Verify order confirmation appears"
    ]

    results = []

    for step in test_steps:
        print(f"Testing: {step}")

        task = f"""
        {step}

        After completing this action:
        1. Take a screenshot
        2. Verify the action succeeded
        3. Report any errors or unexpected behavior
        """

        result = automate_task(task)
        results.append({
            "step": step,
            "success": result.get("success", False),
            "notes": result.get("notes", "")
        })

        time.sleep(1)

    # Generate test report
    return generate_test_report(results)

Example 4: Document Processing

Extract information from documents and enter into systems:

def process_invoices(pdf_folder: str):
    """Extract data from invoices and enter into accounting system"""

    import os

    pdf_files = [f for f in os.listdir(pdf_folder) if f.endswith('.pdf')]
    os.makedirs(os.path.join(pdf_folder, 'processed'), exist_ok=True)

    for pdf_file in pdf_files:
        print(f"Processing {pdf_file}")

        # Open PDF and extract data
        task = f"""
        1. Open the PDF file at {os.path.join(pdf_folder, pdf_file)}
        2. Extract the following information:
           - Invoice number
           - Date
           - Vendor name
           - Total amount
           - Line items with descriptions and amounts
        3. Take note of all extracted data
        """

        extracted_data = automate_task(task)

        # Enter into accounting system
        entry_task = f"""
        1. Open the accounting software
        2. Click 'New Invoice Entry'
        3. Fill in the extracted data:
           {extracted_data}
        4. Attach the PDF file
        5. Click 'Save'
        6. Verify the entry was saved successfully
        """

        automate_task(entry_task)

        # Move processed file
        os.rename(
            os.path.join(pdf_folder, pdf_file),
            os.path.join(pdf_folder, 'processed', pdf_file)
        )

Safety Guidelines & Best Practices

Computer Use introduces new security considerations. Following Anthropic's safety guidelines is essential for responsible deployment.

Critical Safety Requirements

  • Run in Isolated Environments: Always use virtual machines or containers, never on your main system
  • Minimal Privileges: Grant only necessary permissions and filesystem access
  • No Production Credentials: Never expose production API keys or passwords
  • Network Isolation: Restrict network access to only required services

Security Vulnerabilities

Anthropic acknowledges that Computer Use is susceptible to:

  • Jailbreaking: Attempts to bypass safety guidelines through adversarial prompts
  • Prompt Injection: Claude may follow commands found in on-screen content, potentially conflicting with user instructions
  • Unintended Actions: Model errors could trigger destructive operations

Development Best Practices

1. Start with Low-Risk Tasks

Begin exploration with non-critical workflows:

  • Data entry into test environments
  • Form filling with dummy data
  • UI testing without production access

2. Implement Human-in-the-Loop

Require confirmation for sensitive operations:

def require_confirmation(action: str) -> bool:
    """Request human confirmation for sensitive actions"""
    print(f"\nClaude wants to perform: {action}")
    response = input("Allow this action? (yes/no): ")
    return response.lower() == "yes"

# In tool handler
if action_is_sensitive(action):
    if not require_confirmation(action):
        return {"error": "Action denied by user"}

3. Monitor and Log All Actions

Maintain audit trail of all Computer Use actions:

import json
from datetime import datetime

def log_action(action_type: str, details: dict):
    """Log all Computer Use actions"""
    log_entry = {
        "timestamp": datetime.now().isoformat(),
        "action_type": action_type,
        "details": details
    }

    with open("computer_use_audit.jsonl", "a") as f:
        f.write(json.dumps(log_entry) + "\n")

# Log every tool call
log_action("mouse_click", {"x": 100, "y": 200})
log_action("type_text", {"text": "username"})

4. Set Timeouts and Iteration Limits

Prevent runaway automation loops:

def automate_with_limits(task: str, max_steps: int = 50, timeout_seconds: int = 300):
    """Run automation with safety limits"""
    start_time = time.time()

    for step in range(max_steps):
        # Check timeout
        if time.time() - start_time > timeout_seconds:
            raise TimeoutError("Automation exceeded time limit")

        # Execute one iteration of the agent loop (execute_step is your
        # single-step wrapper around the Claude call and tool dispatch)
        result = execute_step(task)

        if result.get("complete"):
            return result

    raise RuntimeError("Exceeded maximum step count")

Anthropic's Safety Classifiers

Anthropic has developed new classifiers that identify when Computer Use is being employed and whether harmful actions are occurring. These classifiers help detect:

  • Spam generation attempts
  • Misinformation creation
  • Fraud or malicious automation

Current Limitations

Computer Use is in public beta and has notable limitations. Understanding these constraints helps set realistic expectations and plan appropriate use cases.

Performance Challenges

  • Slow Execution: Significantly slower than human operation due to screenshot analysis and planning overhead
  • Action Errors: Mistakes are common, requiring error recovery and retries
  • UI Navigation Issues: Complex interfaces with many elements can confuse the model

Difficult Actions

Anthropic notes that some actions people perform effortlessly present challenges for Claude:

  • Scrolling: Both page scrolling and precise scrollbar manipulation
  • Dragging: Click-and-drag operations, especially over long distances
  • Zooming: Adjusting zoom levels or map navigation

Workaround: Use keyboard alternatives when available (Page Down, Arrow keys, keyboard shortcuts).

API and Model Constraints

  • Model Selection: Latest beta support uses Claude Opus 4.7, Opus 4.6, Sonnet 4.6, and Opus 4.5; older compatible models use the 2025-01-24 header
  • Beta Header Required: API changes may occur as the feature evolves
  • High Token Usage: Screenshots and tool definitions consume significant context

When NOT to Use Computer Use

Computer Use is not optimal for:

  • Tasks with available APIs (use API integration instead)
  • Real-time or time-sensitive operations
  • Production environments without supervision
  • Tasks requiring high precision or zero error tolerance
  • Systems with sensitive data or credentials

Future Improvements

Anthropic expects Computer Use capabilities to improve rapidly over time:

  • Better Accuracy: Reduced errors through improved training
  • Faster Execution: Optimized screenshot analysis and action planning
  • Advanced Actions: Better handling of complex UI interactions
  • Additional Models: Continued expansion across future model tiers and branches

Your feedback during this beta period directly shapes these improvements.

Conclusion

Anthropic's Computer Use API represents a breakthrough in desktop automation, enabling AI to interact with computers the way humans do. While still in beta with notable limitations, it opens unprecedented possibilities for workflow automation across any application interface.

Start with low-risk tasks in isolated environments, implement proper safety measures, and provide feedback to help shape this emerging technology. As Computer Use matures, it will transform how we automate complex, multi-application workflows.

Need Help Implementing AI Automation?

Digital Applied specializes in AI integration and workflow automation. Our team can help you evaluate Computer Use API, design safe automation architectures, and implement production-ready solutions with proper security measures and error handling.

