
Anchorbrowser

Anchor is the platform for AI Agentic browser automation, which solves the challenge of automating workflows for web applications that lack APIs or have limited API coverage. It simplifies the creation, deployment, and management of browser-based automations, transforming complex web interactions into simple API endpoints.

This notebook provides a quick overview for getting started with Anchorbrowser tools. For more information on Anchorbrowser, visit Anchorbrowser.io or the Anchorbrowser Docs.

Overview

Integration details

The Anchor Browser package for LangChain is langchain-anchorbrowser; the latest version is available on PyPI.

Tool features

| Tool Name | Package | Description | Parameters |
| --- | --- | --- | --- |
| AnchorContentTool | langchain-anchorbrowser | Extract text content from web pages | url, format |
| AnchorScreenshotTool | langchain-anchorbrowser | Take screenshots of web pages | url, width, height, image_quality, wait, scroll_all_content, capture_full_height, s3_target_address |
| AnchorWebTaskToolKit | langchain-anchorbrowser | Perform intelligent web tasks using AI (Simple, Standard, Advanced modes) | see below |

Parameter default values match those in the corresponding Anchorbrowser API reference pages: Get Webpage Content, Screenshot Webpage, and Perform Web Task.

Info: Anchor supports both browser_use and openai-cua agents when using the StandardAnchorWebTaskTool and AdvancedAnchorWebTaskTool tools.

AnchorWebTaskToolKit Tools

The tools in this toolkit differ only in their pydantic input schemas.

| Tool Name | Package | Parameters |
| --- | --- | --- |
| SimpleAnchorWebTaskTool | langchain-anchorbrowser | prompt, url |
| StandardAnchorWebTaskTool | langchain-anchorbrowser | prompt, url, agent, provider, model |
| AdvancedAnchorWebTaskTool | langchain-anchorbrowser | prompt, url, agent, provider, model, highlight_elements, output_schema |
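The `output_schema` parameter of `AdvancedAnchorWebTaskTool` lets you describe the structured result you want back. As a hedged sketch (the exact schema dialect the tool accepts is defined in the Anchorbrowser API reference; the field names here are hypothetical), a JSON-schema-style description for extracting a news item might look like this:

```python
import json

# Hypothetical output_schema describing a structured web-task result:
# a news headline plus the URL of the article it came from.
news_schema = {
    "type": "object",
    "properties": {
        "headline": {"type": "string", "description": "Title of the news item"},
        "link": {"type": "string", "description": "URL of the article"},
    },
    "required": ["headline", "link"],
}

# Round-tripping through json confirms the schema is plain, serializable data
# before it is passed to the tool.
assert json.loads(json.dumps(news_schema)) == news_schema
```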

Setup

The integration lives in the langchain-anchorbrowser package.

%pip install --quiet -U langchain-anchorbrowser

Later in this notebook there is a chaining example that uses OpenAI; to run that part, the langchain-openai package is needed:

%pip install -qU langchain langchain-openai

Credentials

Use your Anchorbrowser credentials. You can create an API key on the Anchorbrowser API Keys page.

import getpass
import os

if not os.environ.get("ANCHORBROWSER_API_KEY"):
    os.environ["ANCHORBROWSER_API_KEY"] = getpass.getpass("ANCHORBROWSER API key:\n")

For the OpenAI chaining example, OpenAI credentials are needed too. It's also helpful (but not required) to set up LangSmith for best-in-class observability:

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OPENAI API key:\n")

Instantiation

Instantiate the Anchorbrowser tools:

from langchain_anchorbrowser import (
    AnchorContentTool,
    AnchorScreenshotTool,
    SimpleAnchorWebTaskTool,
)

anchor_content_tool = AnchorContentTool()
anchor_screenshot_tool = AnchorScreenshotTool()
anchor_simple_web_task_tool = SimpleAnchorWebTaskTool()

Invocation

Invoke directly with args

The full list of available arguments appears in the tool features table above.

# Get Markdown Content for https://www.anchorbrowser.io
anchor_content_tool.invoke(
    {"url": "https://www.anchorbrowser.io", "format": "markdown"}
)

# Get a Screenshot for https://docs.anchorbrowser.io
anchor_screenshot_tool.invoke(
    {"url": "https://docs.anchorbrowser.io", "width": 1280, "height": 720}
)

# Run a simple web task on https://nasa.gov
anchor_simple_web_task_tool.invoke(
    {
        "prompt": "View the NASA website and then get me one of the latest space news",
        "url": "https://nasa.gov",
    }
)

Invoke with ToolCall

We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:

# This is usually generated by a model, but we'll create a tool call directly for demo purposes.
model_generated_tool_call = {
    "args": {"url": "https://www.anchorbrowser.io", "format": "markdown"},
    "id": "1",
    "name": anchor_content_tool.name,
    "type": "tool_call",
}
anchor_content_tool.invoke(model_generated_tool_call)
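The dictionary above follows LangChain's standard `ToolCall` shape: `name`, `args`, `id`, and `type`. A small helper (hypothetical, not part of the langchain-anchorbrowser package) that assembles such a dict makes the required keys explicit:

```python
def make_tool_call(name: str, args: dict, call_id: str) -> dict:
    """Build a dict in LangChain's ToolCall shape.

    A LangChain tool's .invoke() accepts this form and returns a
    ToolMessage whose tool_call_id matches `call_id`.
    """
    return {"name": name, "args": args, "id": call_id, "type": "tool_call"}


call = make_tool_call(
    # Placeholder name for illustration; in practice pass anchor_content_tool.name.
    "anchor_content_tool",
    {"url": "https://www.anchorbrowser.io", "format": "markdown"},
    "1",
)
```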

Chaining

We can use our tool in a chain by first binding it to a tool-calling model and then calling it:

Use within an agent

from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain

llm = init_chat_model(model="gpt-4o", model_provider="openai")

prompt = ChatPromptTemplate(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ]
)

# Specifying tool_choice forces the model to call this tool.
llm_with_tools = llm.bind_tools(
    [anchor_content_tool], tool_choice=anchor_content_tool.name
)

llm_chain = prompt | llm_with_tools


@chain
def tool_chain(user_input: str, config: RunnableConfig):
    input_ = {"user_input": user_input}
    ai_msg = llm_chain.invoke(input_, config=config)
    tool_msgs = anchor_content_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke({**input_, "messages": [ai_msg, *tool_msgs]}, config=config)


# Example prompt; any user input works.
tool_chain.invoke("Summarize the content of https://www.anchorbrowser.io")
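Stripped of LangChain specifics, the chain above performs a two-pass loop: the model emits tool calls, each call is executed, and the tool results are appended to the message list for a second model pass. A stdlib-only sketch of that control flow, using stand-in model and tool functions (none of these names come from LangChain or the Anchorbrowser package):

```python
# Stand-in "model": requests a tool call on the first pass, answers on the second.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "tool_calls": [
            {"name": "get_content", "args": {"url": "https://example.com"}, "id": "1"}
        ]}
    tool_output = next(m["content"] for m in messages if m["role"] == "tool")
    return {"role": "ai", "content": f"Summary of: {tool_output}", "tool_calls": []}


# Stand-in "tool": pretends to fetch page content.
def fake_tool(args):
    return f"<markdown content of {args['url']}>"


def tool_chain(user_input):
    messages = [{"role": "human", "content": user_input}]
    ai = fake_model(messages)                 # first pass: model emits tool calls
    messages.append(ai)
    for call in ai["tool_calls"]:             # execute each requested tool call
        messages.append({"role": "tool", "content": fake_tool(call["args"]),
                         "tool_call_id": call["id"]})
    return fake_model(messages)               # second pass: model sees tool results


result = tool_chain("Summarize https://example.com")
```

In the real chain, `llm_with_tools` plays the role of `fake_model` and `anchor_content_tool.batch` executes the tool calls.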

API reference