
Create smarter, context-aware autonomous agents with long-term recall and tool access
LangGraph gives structure. Vector memory gives context. In this tutorial, you'll learn how to combine both to build an agent that not only uses external tools but also remembers past interactions, learns from documents, and adapts based on retrieved knowledge. Perfect for search agents, assistants, and knowledge workers.
🧠 Why Combine LangGraph + Vector Memory?
Tool use allows agents to interact with the world via APIs, browsers, and databases. But without memory, your agent is limited to the current prompt. It can't learn from previous queries, access external knowledge, or improve over time.
By combining:
- LangGraph's DAG-based agent flows
- With vector memory retrieval (e.g., FAISS or Chroma)

You create agents that can:
- ✅ Search APIs
- ✅ Store results for future context
- ✅ Recall previous interactions or documents
- ✅ Summarize findings and take action
Use Case Examples:
- RAG-enabled research agents
- AI assistants that learn user preferences
- Agents that reuse previous tool outputs to improve answers
🔧 Tools You'll Use
| Tool | Purpose |
|---|---|
| LangGraph | Build stateful agent workflows (DAGs) |
| FAISS or Chroma | Store vector embeddings for memory |
| LangChain | LLM interface, vector store APIs |
| OpenAI or Hugging Face | Power your LLM agent logic |
🚀 What You'll Build
A Knowledge Assistant Agent that:
- Accepts a research query from the user
- Uses a tool to fetch results (e.g., web search or dummy API)
- Stores those results in vector memory
- Retrieves previous results related to new queries
- Summarizes the final response
Workflow:
[Input] → [Search] → [Store in Memory] → [Retrieve Similar Docs] → [Summarize]
📘 Step-by-Step Guide
✅ 1. Install Dependencies
```bash
pip install langgraph langchain openai faiss-cpu tiktoken
```
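Both the chat model and the embeddings in the next step authenticate against the OpenAI API. A minimal sketch for making the key available; the placeholder value is yours to supply:

```python
import os

# The OpenAI client libraries read this variable automatically; replace
# the placeholder with your own key, or export it in your shell instead.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")
```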
✅ 2. Setup LLM & Vector Store
```python
import faiss

from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

llm = ChatOpenAI(model="gpt-3.5-turbo")
embeddings = OpenAIEmbeddings()

# FAISS can't be built from an embedding function alone; start from an
# empty index sized for OpenAI's 1536-dimensional embeddings.
index = faiss.IndexFlatL2(1536)
vectorstore = FAISS(
    embedding_function=embeddings.embed_query,
    index=index,
    docstore=InMemoryDocstore({}),
    index_to_docstore_id={},
)
```
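Before wiring the store into a graph, it can help to confirm that writes and lookups round-trip. A quick check with throwaway content (the text here is just an example):

```python
# Add one throwaway document, then make sure similarity search finds it.
vectorstore.add_documents([Document(page_content="hello vector memory", metadata={"source": "test"})])
print(vectorstore.similarity_search("hello", k=1))
```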
✅ 3. Define State Object
```python
from langgraph.graph import StateGraph

# A plain dict subclass keeps the example simple; nodes read and write keys freely.
class AgentState(dict):
    pass
```
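A plain dict keeps the example short, but LangGraph also accepts a typed state schema, which makes each node's reads and writes explicit. A sketch of the same state as a `TypedDict`; the field names simply mirror the keys used in the nodes below:

```python
from typing import List, TypedDict

from langchain.docstore.document import Document

# Same keys the nodes read and write, with declared types.
class TypedAgentState(TypedDict, total=False):
    query: str
    search_result: str
    retrieved_docs: List[Document]
    summary: str
```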
✅ 4. Define Nodes (Tools + Memory Steps)
```python
# Dummy tool that returns a pretend result
def search_api(state):
    query = state["query"]
    result = f"Research data about {query}"
    state["search_result"] = result
    return state

# Store the result in memory
def store_memory(state):
    content = state["search_result"]
    doc = Document(page_content=content, metadata={"source": "search"})
    vectorstore.add_documents([doc])
    return state

# Retrieve similar documents
def retrieve_memory(state):
    query = state["query"]
    docs = vectorstore.similarity_search(query, k=2)
    state["retrieved_docs"] = docs
    return state

# Summarize based on both retrieved and current context
def summarize(state):
    context = "\n".join([doc.page_content for doc in state["retrieved_docs"]])
    prompt = f"Use this info to answer: {state['query']}\n\nContext:\n{context}"
    state["summary"] = llm.predict(prompt)
    return state
```
✅ 5. Build the LangGraph
```python
graph = StateGraph(AgentState)
graph.add_node("Search", search_api)
graph.add_node("StoreMemory", store_memory)
graph.add_node("Retrieve", retrieve_memory)
graph.add_node("Summarize", summarize)

graph.set_entry_point("Search")
graph.add_edge("Search", "StoreMemory")
graph.add_edge("StoreMemory", "Retrieve")
graph.add_edge("Retrieve", "Summarize")
graph.set_finish_point("Summarize")

executor = graph.compile()
```
✅ 6. Run Your Tool-Using, Memory-Rich Agent
```python
state = AgentState()
state["query"] = "LangGraph tutorials"
final = executor.invoke(state)
print("SUMMARY:", final["summary"])
```
What Makes This Powerful?
- Tool use: The agent can take action and fetch new data
- Long-term recall: It reuses past data in every run
- Graph logic: You can add loops, retries, or async flows easily (see the sketch after this list)
- Embeddings: Retrieved memory is semantic, not just keyword-based
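For instance, a retry can be expressed as a conditional edge that loops back to the search node when no result came through. A minimal sketch using LangGraph's `add_conditional_edges`; the routing function and its rule are hypothetical, and this would replace the plain `Search → StoreMemory` edge from step 5:

```python
# Route back to Search when nothing came back, otherwise continue onward.
def route_after_search(state):
    return "retry" if not state.get("search_result") else "continue"

graph.add_conditional_edges(
    "Search",
    route_after_search,
    {"retry": "Search", "continue": "StoreMemory"},
)
```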
Try replacing the dummy tool with:
- Tavily Search API
- Serper.dev (see the sketch after this list)
- Hugging Face inference APIs
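As one example, the dummy `search_api` node could be swapped for LangChain's Serper wrapper. A minimal sketch, assuming you have a `SERPER_API_KEY` from serper.dev set in your environment:

```python
from langchain.utilities import GoogleSerperAPIWrapper

search = GoogleSerperAPIWrapper()  # reads SERPER_API_KEY from the environment

# Drop-in replacement for the dummy node: same state keys, real results.
def search_api(state):
    state["search_result"] = search.run(state["query"])
    return state
```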
🧪 Where to Go Next
- Add multi-turn chat memory (LangChain `ConversationBufferMemory`; see the sketch after this list)
- Let the agent decide when to search vs. retrieve
- Build a UI using Streamlit or LangChain's chatbot wrapper
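For the first item, chat history can live outside the graph and be folded into the summarization prompt. A sketch with LangChain's `ConversationBufferMemory`, where `summarize_with_history` is a hypothetical variant of the `summarize` node above:

```python
from langchain.memory import ConversationBufferMemory

chat_memory = ConversationBufferMemory()

# Variant of summarize() that prepends prior turns and records new ones.
def summarize_with_history(state):
    history = chat_memory.load_memory_variables({})["history"]
    context = "\n".join([doc.page_content for doc in state["retrieved_docs"]])
    prompt = (
        f"Conversation so far:\n{history}\n\n"
        f"Use this info to answer: {state['query']}\n\nContext:\n{context}"
    )
    state["summary"] = llm.predict(prompt)
    chat_memory.save_context({"input": state["query"]}, {"output": state["summary"]})
    return state
```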
📺 Video Series for Deeper LangGraph Learning
- LangGraph Video Intro
- LangGraph: Agent Executor – Using OpenAI
- LangGraph: Chat Agent Executor
- LangGraph: Human-in-the-Loop
- LangGraph: Dynamically Returning a Tool Output Directly
- LangGraph: Respond in a Specific Format
- LangGraph: Managing Agent Steps
- LangGraph: Force-Calling a Tool
- LangGraph: Multi-Agent Workflows
- LangGraph: Persistence


