Building a Tool-Using Agent with LangGraph and Vector Memory

January 26, 2025


Create smarter, context-aware autonomous agents with long-term recall and tool access

LangGraph gives structure. Vector memory gives context. In this tutorial, you'll learn how to combine both to build an agent that not only uses external tools but also remembers past interactions, learns from documents, and adapts based on retrieved knowledge. Perfect for search agents, assistants, and knowledge workers.


🧠 Why Combine LangGraph + Vector Memory?

Tool use allows agents to interact with the world via APIs, browsers, and databases. But without memory, your agent is limited to the current prompt. It can't learn from previous queries, access external knowledge, or improve over time.

By combining:

  • LangGraph's DAG-based agent flows
  • Vector memory retrieval (e.g., FAISS or Chroma)

you create agents that can:

  • ✅ Search APIs
  • ✅ Store results for future context
  • ✅ Recall previous interactions or documents
  • ✅ Summarize findings and take action

Use Case Examples:

  • RAG-enabled research agents
  • AI assistants that learn user preferences
  • Agents that reuse previous tool outputs to improve answers
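The "semantic recall" half of this combination can be sketched without any libraries. Below is a toy stand-in for what FAISS plus an embedding model do: `embed` here is just a bag-of-words counter (a placeholder for real OpenAI embeddings), and `similarity_search` ranks stored documents by cosine similarity. All names are illustrative, not part of any API.

```python
import math
from collections import Counter

# Toy stand-in for an embedding model: a bag-of-words vector.
# A real setup would use OpenAIEmbeddings or similar instead.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "Vector store": a list of (vector, document) pairs.
memory = []
for doc in ["LangGraph builds stateful agent workflows",
            "FAISS stores vector embeddings for retrieval",
            "Bananas are rich in potassium"]:
    memory.append((embed(doc), doc))

def similarity_search(query, k=2):
    q = embed(query)
    ranked = sorted(memory, key=lambda p: cosine(q, p[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

print(similarity_search("vector retrieval with FAISS", k=1))
# → ['FAISS stores vector embeddings for retrieval']
```

Note that the query matches by meaning-bearing overlap, not exact phrasing; real embeddings generalize this far beyond shared words.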

🔧 Tools You'll Use

| Tool | Purpose |
| --- | --- |
| LangGraph | Build stateful agent workflows (DAGs) |
| FAISS or Chroma | Store vector embeddings for memory |
| LangChain | LLM interface, vector store APIs |
| OpenAI or Hugging Face | Power your LLM agent logic |

🚀 What You'll Build

A Knowledge Assistant Agent that:

  1. Accepts a research query from the user
  2. Uses a tool to fetch results (e.g., web search or dummy API)
  3. Stores those results in vector memory
  4. Retrieves previous results related to new queries
  5. Summarizes the final response

Workflow:
[Input] → [Search] → [Store in Memory] → [Retrieve Similar Docs] → [Summarize]
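Before reaching for LangGraph, this linear workflow can be sketched as plain function composition: every step takes the state dict and returns it. The step bodies below are placeholders for the real nodes built in the guide, not actual tool calls.

```python
def run_pipeline(state, steps):
    # Each step is a function: state dict in, state dict out.
    for step in steps:
        state = step(state)
    return state

# Placeholder steps mirroring the workflow diagram.
def search(state):
    state["result"] = f"data about {state['query']}"
    return state

def store(state):
    state["stored"] = True
    return state

def retrieve(state):
    state["docs"] = [state["result"]]
    return state

def summarize(state):
    state["summary"] = f"Summary of {len(state['docs'])} doc(s)"
    return state

final = run_pipeline({"query": "LangGraph"}, [search, store, retrieve, summarize])
print(final["summary"])  # → Summary of 1 doc(s)
```

LangGraph's value over this loop is exactly what a plain list can't express: branching, retries, and persistent state between runs.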


🛠 Step-by-Step Guide

✅ 1. Install Dependencies

```bash
pip install langgraph langchain openai faiss-cpu tiktoken
```

✅ 2. Set Up the LLM & Vector Store

```python
import faiss

from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.docstore import InMemoryDocstore
from langchain.docstore.document import Document

llm = ChatOpenAI(model="gpt-3.5-turbo")
embeddings = OpenAIEmbeddings()

# FAISS can't be built from an embedding function alone; it also
# needs an index, a docstore, and an index-to-docstore-id mapping.
index = faiss.IndexFlatL2(1536)  # 1536 = OpenAI embedding dimension
vectorstore = FAISS(
    embedding_function=embeddings,
    index=index,
    docstore=InMemoryDocstore({}),
    index_to_docstore_id={},
)
```

โœ… 3. Define State Object

```python
from typing import List, TypedDict

from langgraph.graph import StateGraph
from langchain.docstore.document import Document

# LangGraph expects a typed state schema, not a bare dict subclass.
class AgentState(TypedDict, total=False):
    query: str
    search_result: str
    retrieved_docs: List[Document]
    summary: str
```

✅ 4. Define Nodes (Tools + Memory Steps)

```python
# Dummy tool that returns a pretend result
def search_api(state):
    query = state["query"]
    result = f"Research data about {query}"
    state["search_result"] = result
    return state

# Store result in memory
def store_memory(state):
    content = state["search_result"]
    doc = Document(page_content=content, metadata={"source": "search"})
    vectorstore.add_documents([doc])
    return state

# Retrieve similar documents
def retrieve_memory(state):
    query = state["query"]
    docs = vectorstore.similarity_search(query, k=2)
    state["retrieved_docs"] = docs
    return state

# Summarize based on both retrieved + current
def summarize(state):
    context = "\n".join([doc.page_content for doc in state["retrieved_docs"]])
    prompt = f"Use this info to answer: {state['query']}\n\nContext:\n{context}"
    state["summary"] = llm.predict(prompt)
    return state
```

✅ 5. Build the LangGraph

```python
graph = StateGraph(AgentState)
graph.add_node("Search", search_api)
graph.add_node("StoreMemory", store_memory)
graph.add_node("Retrieve", retrieve_memory)
graph.add_node("Summarize", summarize)

graph.set_entry_point("Search")
graph.add_edge("Search", "StoreMemory")
graph.add_edge("StoreMemory", "Retrieve")
graph.add_edge("Retrieve", "Summarize")
graph.set_finish_point("Summarize")

executor = graph.compile()
```

✅ 6. Run Your Tool-Using, Memory-Rich Agent

```python
state = {"query": "LangGraph tutorials"}

final = executor.invoke(state)
print("SUMMARY:", final["summary"])
```

What Makes This Powerful?

  • Tool use: The agent can take action and fetch new data
  • Long-term recall: It reuses past data in every run
  • Graph logic: You can add loops, retries, or async flows easily
  • Embeddings: Retrieved memory is semantic, not just keyword-based
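The "loops and retries" point is worth seeing concretely. Here is a minimal sketch of conditional routing written without the library so it runs anywhere; in LangGraph itself this routing function would be registered via `graph.add_conditional_edges`. The node names and the succeed-on-second-attempt tool are illustrative.

```python
def search(state):
    state["attempts"] = state.get("attempts", 0) + 1
    # Pretend the tool only succeeds on the second attempt.
    state["ok"] = state["attempts"] >= 2
    return state

def summarize(state):
    state["summary"] = f"done after {state['attempts']} attempt(s)"
    return state

def route_after_search(state):
    # Conditional edge: retry Search until it succeeds, then Summarize.
    return "Summarize" if state["ok"] else "Search"

# Tiny hand-rolled graph runner standing in for the compiled graph.
nodes = {"Search": search, "Summarize": summarize}
state, current = {}, "Search"
while True:
    state = nodes[current](state)
    if current == "Summarize":
        break
    current = route_after_search(state)

print(state["summary"])  # → done after 2 attempt(s)
```

The loop in the graph (Search → Search) is what a plain pipeline can't express, and it's the main reason to encode the workflow as a graph at all.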

Try replacing the dummy tool with a real one: a web search API, a database query, or a document loader.

🧪 Where to Go Next

  • Add multi-turn chat memory (LangChain ConversationBufferMemory)
  • Let the agent decide when to search vs retrieve
  • Build a UI using Streamlit or LangChain's chatbot wrapper
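The first of those next steps can be prototyped without LangChain at all. The `ChatBuffer` class below mimics what `ConversationBufferMemory` does: append (human, AI) turns and replay them as prompt context. It is an illustrative stand-in, not the library class.

```python
# Minimal stand-in for LangChain's ConversationBufferMemory:
# store (human, ai) turns and replay them as prompt context.
class ChatBuffer:
    def __init__(self):
        self.turns = []

    def save_context(self, human, ai):
        self.turns.append((human, ai))

    def as_prompt_context(self):
        lines = []
        for human, ai in self.turns:
            lines.append(f"Human: {human}")
            lines.append(f"AI: {ai}")
        return "\n".join(lines)

memory = ChatBuffer()
memory.save_context("What is LangGraph?", "A library for stateful agent graphs.")
memory.save_context("Does it support memory?", "Yes, via state and external stores.")
print(memory.as_prompt_context())
```

Prepending this context to the `summarize` prompt is all it takes to make the agent multi-turn aware.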

📚 Video Series for Deeper LangGraph Learning
