CRM & Automation

n8n 70+ AI Nodes: LangChain Agent Workflows Open-Source

n8n now offers 70+ AI nodes for building LangChain agent workflows in open-source. Setup guide for autonomous AI automation pipelines without vendor lock-in.

Digital Applied Team
March 13, 2026
10 min read
70+ Dedicated AI Nodes
400+ Total Integrations
45k+ GitHub Stars
Fair Source License

Key Takeaways

70+ AI nodes make n8n the most complete open-source AI automation platform: n8n now ships with over 70 dedicated AI nodes covering LLMs, vector databases, embeddings, memory, chains, agents, and output parsers. No other open-source workflow automation platform offers comparable AI node breadth, giving teams a full LangChain-compatible stack without writing Python or managing separate infrastructure.
LangChain agent workflows run natively inside the visual canvas: n8n's LangChain integration exposes agents, chains, tools, and retrievers as visual nodes that wire together with drag-and-drop. Developers familiar with LangChain concepts can build production-grade agent workflows without leaving the n8n interface, combining LLM reasoning with the 400+ integrations n8n already supports.
Self-hosted deployment eliminates vendor lock-in and data privacy concerns: Unlike Zapier, Make, and other SaaS automation platforms, n8n can be self-hosted on any infrastructure including your own servers, Kubernetes cluster, or private cloud. All workflow data, credentials, and LLM API calls stay within your environment — a critical requirement for regulated industries and privacy-conscious teams.
Human-in-the-loop nodes prevent autonomous agent errors at critical decision points: n8n's Wait node and webhook-based approval patterns let workflows pause for human review before executing irreversible actions. Combined with the AI agent nodes, this creates a practical pattern for deploying AI automation in production without sacrificing oversight on high-stakes decisions.

Workflow automation platforms have been racing to add AI capabilities since the LLM wave began in 2023. Most have bolted on basic AI actions — call OpenAI, summarize text, classify input — without fundamentally rethinking what it means to build AI-native workflows. n8n has taken a different approach: integrating LangChain's full agent and chain architecture directly into its visual canvas, making it the only open-source automation platform where you can build production-grade agentic AI pipelines without writing application code.

With over 70 dedicated AI nodes now available, n8n supports the complete LangChain stack: LLMs, chains, agents, tools, memory, vector databases, embeddings, and output parsers — all wirable together visually with the 400+ existing integrations. This guide covers the architecture, setup patterns, and real-world use cases that make n8n a serious choice for teams building autonomous AI automation pipelines. For comparison with SaaS alternatives, our guide on Zapier AI actions and natural language workflow creation covers the leading closed-source option.

n8n 70+ AI Nodes: What Changed

The original n8n AI capabilities were limited to individual LLM call nodes — essentially HTTP requests to OpenAI wrapped in a convenient interface. The current 70+ AI node suite is architecturally different. Rather than wrapping API calls, n8n now exposes LangChain's abstractions as first-class visual components. An AI Agent node is not just a prompt template — it is a full ReAct or OpenAI Functions agent with configurable tool bindings, memory, and iterative reasoning loops.

Agent Nodes

ReAct agents, OpenAI Functions agents, and Conversational agents with configurable tool bindings, system prompts, and iteration limits. Full reasoning loop visibility in the execution log.

Vector Store Nodes

Pinecone, Qdrant, Weaviate, Supabase pgvector, Redis, and in-memory vector stores. Insert documents, search by similarity, and manage collections directly from workflows.

Chain Nodes

Basic LLM chains, retrieval-augmented generation chains, summarization chains, and question-answer chains. Wire an LLM node and a retriever to build RAG in two connections.

The 70+ count includes nodes across six categories: Language Models (14 provider-specific and generic nodes), Chains (8 chain types), Agents (5 agent architectures), Memory (9 memory backends), Vector Stores (12 database integrations), and Utilities (embeddings, text splitters, output parsers, document loaders). Each category maps directly to LangChain's component taxonomy, so developers who know LangChain will immediately understand the n8n AI node structure.

LangChain Integration Architecture

n8n's LangChain integration uses a specialized connection type system to distinguish AI component connections from regular data flow. Where standard n8n nodes pass JSON data along main connections, AI nodes use typed sub-connections: AI Language Model, AI Memory, AI Tool, AI Retriever, AI Text Splitter, AI Embedding, and AI Document. These connection types enforce valid component composition — you cannot accidentally connect a vector store where a language model is expected.
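As a rough illustration of how typed connections prevent invalid wiring (this is a Python sketch of the concept, not n8n's actual TypeScript implementation — the class names here are hypothetical), you can think of each sub-connection type as a type-annotated port that a static checker enforces:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical stand-ins for n8n's AI sub-connection types.
class AILanguageModel: ...
class AIMemory: ...
class AITool: ...

class OpenAIChatModel(AILanguageModel): ...
class PineconeVectorStore: ...   # a vector store is NOT a language model

@dataclass
class AgentNode:
    # Each field accepts exactly one connection type, mirroring typed ports.
    model: AILanguageModel
    memory: Optional[AIMemory] = None
    tools: List[AITool] = field(default_factory=list)

agent = AgentNode(model=OpenAIChatModel())   # valid wiring
# AgentNode(model=PineconeVectorStore())     # invalid: flagged by a type checker
```

On the canvas, the equivalent guarantee is that a vector store node simply has no connector that fits an Agent node's AI Language Model port.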

Typed Connections

Sub-connection types (AI Language Model, AI Memory, AI Tool) enforce valid wiring and make the data flow between LangChain components explicit and readable on the visual canvas.

Custom Tool Nodes

Any n8n workflow can be wrapped as a LangChain tool using the Workflow Tool node. Agents gain access to the full 400+ integration library as callable tools — HTTP requests, database queries, email sends — without custom code.

Model Agnostic

Language Model nodes abstract over the provider API. Switch from OpenAI to Anthropic or a local Ollama instance by swapping one node — no chain or agent redesign required.

Execution History

Every agent reasoning step, LLM call, token count, and tool invocation is logged in n8n's execution history. Debug complex agent chains by replaying executions step by step.

The architecture's most powerful aspect is the Workflow Tool node. It allows any n8n workflow to be exposed as a callable tool to LangChain agents. An agent that needs to query a CRM, send a Slack message, or retrieve data from a custom API can call these as tools without any additional code — the tool is simply another n8n workflow. This gives n8n agents access to the broadest tool library of any agent framework, visual or code-based.
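Conceptually, the Workflow Tool pattern amounts to pairing a callable workflow with a name and a natural-language description the agent uses for tool selection. A minimal Python sketch (the `WorkflowTool` class and CRM workflow here are illustrative, not n8n APIs):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WorkflowTool:
    """Wraps an entire workflow as a callable tool for an agent (illustrative)."""
    name: str
    description: str           # the agent chooses tools based on this text
    run: Callable[[str], str]  # the wrapped workflow, invoked with a query string

def crm_lookup_workflow(query: str) -> str:
    # Stand-in for an n8n sub-workflow that queries a CRM integration.
    return f"CRM record for {query!r}"

crm_tool = WorkflowTool(
    name="crm_lookup",
    description="Look up a customer record in the CRM by name or email.",
    run=crm_lookup_workflow,
)
result = crm_tool.run("ada@example.com")
```

In n8n the `run` side is an actual sub-workflow, which is why every existing integration becomes a potential agent tool for free.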

Building Your First Agent Workflow

The minimum viable n8n AI agent workflow requires four components: a trigger, a Language Model node, an Agent node, and an output node. Start with a Webhook trigger for on-demand execution or a Schedule trigger for automated runs. Add an OpenAI or Anthropic Language Model node with your API credentials and connect it to the Agent node via the AI Language Model connection type. The Agent node handles the reasoning loop. Connect the output to a Respond to Webhook node or a downstream integration.

Adding tools to the agent requires wiring additional nodes to the Agent's AI Tool connection point. Built-in tools include Wikipedia, Calculator, Code Execution, and SerpAPI search. For custom tool functionality, add a Workflow Tool node that points to another n8n workflow. The agent automatically learns the tool's purpose from its description — write clear, specific tool descriptions to get consistent tool selection behavior.

ReAct Agent

Best for tasks requiring multi-step reasoning with tool use. The agent reasons about which tool to call, executes it, observes the result, and decides the next step — visible in the execution log as discrete reasoning steps.

OpenAI Functions Agent

More reliable tool selection for agents with many tools. Uses structured function calling rather than text parsing for tool invocation. Requires a model with function calling support (GPT-4, Claude 3+).

Conversational Agent

Optimized for multi-turn chat with persistent memory. Connects to a Memory node that maintains conversation history across sessions. Use for chatbots and support automation requiring context retention.

Plan and Execute Agent

Creates a full execution plan before taking any action — better for long-horizon tasks where upfront planning reduces errors. Higher latency than ReAct but more structured for complex multi-step workflows.
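The ReAct loop described above — reason, act, observe, repeat — can be sketched as a plain loop. This is a conceptual sketch, not n8n internals: the LLM is replaced by a scripted stub, and `eval` stands in for a real calculator tool for illustration only:

```python
def react_agent(question, tools, llm, max_iterations=5):
    """Minimal ReAct loop: think -> act -> observe, until a final answer."""
    transcript = []
    for _ in range(max_iterations):
        step = llm(question, transcript)        # model decides the next action
        if step["action"] == "finish":
            return step["answer"], transcript
        observation = tools[step["action"]](step["input"])
        transcript.append((step["action"], step["input"], observation))
    return None, transcript                      # iteration limit reached

# Scripted stand-in for the model: call the calculator once, then finish.
def scripted_llm(question, transcript):
    if not transcript:
        return {"action": "calculator", "input": "6*7"}
    return {"action": "finish", "answer": transcript[-1][2]}

tools = {"calculator": lambda expr: str(eval(expr))}  # illustration only
answer, trace = react_agent("What is 6*7?", tools, scripted_llm)
print(answer)  # "42"
```

The `max_iterations` guard corresponds to the iteration limit you configure on n8n's Agent node, and `transcript` is what you see as discrete reasoning steps in the execution log.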

Multi-Agent Orchestration Patterns

n8n's Workflow Tool node enables multi-agent orchestration within the visual canvas. A supervisor agent can delegate subtasks to specialized sub-agents by calling them as tools. Each sub-agent runs as an independent n8n workflow with its own LLM, tools, and memory — the supervisor receives the sub-agent's output and decides next steps. This hierarchical pattern scales to complex workflows without coupling all logic into a single massive agent.

A practical content marketing multi-agent workflow might use a supervisor agent to coordinate: a research sub-agent that searches the web and retrieves documents, a writing sub-agent that drafts content, a quality-check sub-agent that reviews the draft, and a publishing sub-agent that posts to the CMS. Each sub-agent has only the tools it needs, reducing token usage and improving reliability. For context on how similar multi-agent patterns apply to Make-based automation, our guide on Make AI scenarios and prompt engineering for marketing automation covers related orchestration concepts.
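The supervisor pattern reduces to calling sub-workflows and passing outputs along. A simplified sketch with a fixed delegation order (in practice the supervisor's LLM chooses which sub-agent tool to call; the sub-agent functions here are hypothetical placeholders):

```python
# Each sub-agent is just a callable workflow; the supervisor chains them.
def research_agent(topic):  return f"notes on {topic}"
def writing_agent(notes):   return f"draft based on {notes}"
def review_agent(draft):    return {"draft": draft, "approved": True}

SUB_AGENTS = {"research": research_agent, "write": writing_agent, "review": review_agent}

def supervisor(topic):
    """Fixed delegation order; a real supervisor lets an LLM pick the next tool."""
    notes = SUB_AGENTS["research"](topic)
    draft = SUB_AGENTS["write"](notes)
    verdict = SUB_AGENTS["review"](draft)
    return verdict["draft"] if verdict["approved"] else None

final = supervisor("vector databases")
```

Because each sub-agent is an independent workflow, it can be tested, versioned, and rate-limited on its own — the main operational argument for this pattern over one monolithic agent.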

Vector Database and Memory Nodes

n8n's vector store nodes cover the full document ingestion and retrieval pipeline needed for production RAG workflows. Document loader nodes handle PDF, HTML, CSV, JSON, and plain text inputs. Text splitter nodes chunk documents with configurable size and overlap. Embedding nodes convert chunks to vectors using any supported embedding model. Vector store nodes insert and query the resulting embeddings — all wired together visually without Python or custom ETL code.
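The splitter step is the easiest to reason about concretely. A minimal character-based version with configurable size and overlap — a simplification of what n8n's text splitter nodes do, shown here only to make the chunk/overlap mechanics explicit:

```python
def split_text(text, chunk_size=200, overlap=50):
    """Character-based splitter: fixed-size chunks that overlap by `overlap` chars."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    step = chunk_size - overlap      # each chunk starts `step` chars after the last
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

doc = "x" * 450
chunks = split_text(doc, chunk_size=200, overlap=50)
# 450 chars with step 150 -> chunks of length 200, 200, 150
```

Overlap matters for retrieval quality: a sentence that straddles a chunk boundary still appears whole in at least one chunk, so it remains findable by similarity search.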

Pinecone

Managed vector database with low-latency similarity search. Best for production deployments requiring high query throughput and multi-tenant namespace isolation.

Supabase pgvector

PostgreSQL-native vector search via pgvector extension. Combines relational filtering with vector similarity — ideal when you need SQL joins with semantic search results.

Qdrant

Open-source vector database with self-hosted deployment option. Strong filtering capabilities and Rust-based performance. Good match for n8n self-hosted deployments requiring full data sovereignty.
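For intuition about what any of these vector stores do at query time, here is a toy in-memory store with cosine-similarity search — conceptually what n8n's in-memory vector store node provides for prototyping, minus persistence and indexing:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

class InMemoryVectorStore:
    """Tiny illustrative stand-in: linear scan, no index."""
    def __init__(self):
        self.items = []                      # (vector, document) pairs
    def insert(self, vector, document):
        self.items.append((vector, document))
    def search(self, query, k=1):
        ranked = sorted(self.items, key=lambda it: cosine(query, it[0]), reverse=True)
        return [doc for _, doc in ranked[:k]]

store = InMemoryVectorStore()
store.insert([1.0, 0.0], "billing docs")
store.insert([0.0, 1.0], "api docs")
top = store.search([0.9, 0.1])  # ['billing docs']
```

Production stores like Pinecone or Qdrant replace the linear scan with approximate nearest-neighbor indexes, but the insert/search contract the n8n nodes expose is the same shape.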

Memory nodes serve a different purpose than vector stores: they maintain conversation context between agent turns rather than storing document knowledge. n8n offers nine memory backends including Buffer Memory (in-session), Redis Chat Memory (persistent across sessions), Postgres Chat Memory, and Motorhead Memory for long-term summarized memory. Choose Buffer Memory for short-lived chatbot sessions and Redis or Postgres memory for any workflow where conversation history needs to persist across multiple executions or users.
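The distinction is easy to see in code. A windowed buffer memory keeps only the last N exchanges in RAM — a rough sketch of Buffer Memory behavior, assuming a simple last-N-turns window (persistent backends like Redis store the same structure durably and keyed by session):

```python
from collections import deque

class WindowBufferMemory:
    """In-session memory that retains only the most recent `window` exchanges."""
    def __init__(self, window=3):
        self.turns = deque(maxlen=window)   # old turns drop off automatically
    def add(self, user, assistant):
        self.turns.append((user, assistant))
    def context(self):
        """What gets prepended to the next LLM call."""
        return list(self.turns)

mem = WindowBufferMemory(window=2)
for i in range(4):
    mem.add(f"q{i}", f"a{i}")
recent = mem.context()  # only the two most recent turns survive
```

A vector store, by contrast, would embed and persist every document regardless of recency — which is why the two node families are not interchangeable.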

Human-in-the-Loop Approvals

Fully autonomous AI agents are not appropriate for every workflow. Actions like sending customer emails, posting to social media, making payments, or modifying production databases require human review before execution. n8n's Wait node pauses workflow execution and resumes it when a webhook receives an approval signal. Combined with AI agent nodes, this creates a practical pattern for high-stakes automation with oversight.

Approval Workflow Pattern

Agent generates output → Wait node pauses → Slack or email notification with approve/reject buttons → Webhook receives decision → workflow resumes or terminates based on approval status. No code required.

Conditional Autonomy

Use IF nodes to route workflows based on confidence scores, output classification, or value thresholds. Low-confidence outputs route to human review; high-confidence outputs execute automatically. Reduces manual review volume while maintaining safety for edge cases.

The practical pattern for content automation teams is a three-stage pipeline: generation (fully autonomous), review (human approval via Slack message with action buttons), and publishing (fully autonomous after approval). The Wait node holds execution indefinitely until the webhook fires, even across server restarts when using persistent queue mode. Teams that implement this pattern report significantly higher confidence deploying AI automation to production compared to fully autonomous approaches.
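The conditional-autonomy routing logic reduces to a threshold check plus an approval callback. A sketch of the decision function (the threshold value and `approver` callback are illustrative; in n8n the callback is the Wait node's resume webhook, not Python):

```python
def route(output, confidence, threshold=0.85, approver=None):
    """Auto-execute high-confidence outputs; gate low-confidence ones on review."""
    if confidence >= threshold:
        return ("auto", output)                 # IF node: high-confidence branch
    # Low-confidence branch: pause until a human decision arrives.
    decision = approver(output) if approver else False
    return ("approved", output) if decision else ("rejected", None)

auto = route("post draft", 0.92)                              # ('auto', 'post draft')
gated = route("risky email", 0.40, approver=lambda o: False)  # ('rejected', None)
```

Tuning `threshold` is the operational lever: raising it sends more output to human review, lowering it trades review load for autonomy.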

Self-Hosted vs Cloud Deployment

n8n offers three deployment options: self-hosted Community edition (free, open-source), n8n Cloud (managed SaaS with generous free tier), and self-hosted Enterprise (paid, adds SSO, RBAC, and audit logging). The right choice depends on your data privacy requirements, operational capacity, and workflow volume.

n8n Cloud

Managed hosting with zero ops overhead. Automatic updates, backups, and scaling. Best for teams without dedicated infrastructure resources. Starts free, scales on executions and active workflows.

Self-Hosted Community

Full data sovereignty, no per-execution costs, and unlimited workflows. Requires Docker or npm setup, a PostgreSQL database for production, and Redis for queue mode. Best for high-volume and privacy-sensitive workloads.

Self-Hosted Enterprise

Adds SSO/SAML, advanced RBAC with environment isolation, audit logs, external secret storage, and dedicated support. Required for regulated industries and large engineering organizations with compliance requirements.

For AI-heavy workflows, self-hosting has a meaningful cost advantage. Cloud platforms charge per execution or per active workflow. A workflow that runs 10,000 times per day on n8n Cloud generates significant execution costs. The same workflow on a self-hosted instance on a $50/month VPS has near-zero marginal cost per execution. The break-even point depends on team size and workflow volume, but most teams running more than a few hundred daily executions find self-hosting cost-effective within months.
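The break-even math is simple to sketch. With hypothetical numbers — $0.002 per cloud execution and a $50/month VPS, neither of which is a real n8n price — the comparison looks like this:

```python
def breakeven_days(per_execution_cost, executions_per_day, vps_monthly=50.0):
    """Days of cloud per-execution spend needed to equal one month of VPS cost.

    All prices are hypothetical inputs, not actual n8n Cloud pricing.
    """
    daily_cloud_spend = per_execution_cost * executions_per_day
    return vps_monthly / daily_cloud_spend

# Hypothetical: $0.002/execution at 10,000 executions/day = $20/day,
# so a $50/month VPS is matched in 2.5 days of cloud spend.
days = breakeven_days(0.002, 10_000)
```

Plugging in your own execution volume and actual plan pricing turns the vague "break-even point" into a concrete number for budgeting.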

Real-World Automation Use Cases

The 70+ AI nodes unlock workflow automation patterns that were impractical with earlier n8n versions. The following use cases represent real production deployments from the n8n community, each built entirely with the visual canvas and no custom application code. Our CRM and automation services team builds and maintains workflows like these for clients across industries.

Lead Qualification Agent

New CRM lead triggers workflow. Agent researches the company via web search, scores lead quality against ideal customer profile criteria, drafts personalized outreach, and routes high-score leads to sales Slack — with human approval before sending.

Knowledge Base RAG Bot

Documentation webhook ingests new docs, chunks and embeds them into Pinecone. Customer support bot queries the vector store on each ticket, retrieves relevant docs, and generates answers grounded in current documentation rather than model training data.

Content Pipeline

Scheduled trigger pulls trending topics from search APIs. Research agent gathers supporting sources. Writing agent drafts social posts and blog outlines. Human approval gate. Publishing agent posts approved content to CMS and social platforms simultaneously.

Competitive Intelligence

Daily workflow monitors competitor pricing pages, press releases, and social mentions. Analysis agent summarizes key changes against a vector store of previous reports. Sends structured briefing to Slack only when significant changes are detected.

Conclusion

n8n's expansion to 70+ AI nodes marks a qualitative shift for the platform. It is no longer simply a no-code automation tool with AI actions bolted on — it is a complete visual environment for building LangChain-compatible AI pipelines that connect to any integration in its 400+ node library. For teams building autonomous AI automation without writing application code, n8n is the most capable open-source option available in 2026.

The combination of self-hosting flexibility, full LangChain agent support, vector database integrations, human-in-the-loop patterns, and zero-cost-per-execution economics makes n8n particularly compelling for teams with data privacy requirements or high workflow volumes. The platform is not the easiest starting point for non-technical users compared to Zapier, but for development teams and technical operators, the depth of its AI capabilities now clearly justifies the learning curve.

Ready to Automate with AI Agents?

Building production-grade AI automation pipelines requires the right architecture and expertise. Our team designs and implements n8n-based AI workflows that integrate with your existing systems and scale with your business.

Free consultation
Expert guidance
Tailored solutions
