January 26, 2025
Create smarter, context-aware autonomous agents with long-term recall and tool access
LangGraph gives structure. Vector memory gives context. In this tutorial, you'll learn how to combine both to build an agent that not only uses external tools but also remembers past interactions, learns from documents, and adapts based on retrieved knowledge. Perfect for search agents, assistants, and knowledge workers.

Tool use allows agents to interact with the world through APIs, browsers, and databases. But without memory, your agent is limited to the current prompt. It can't learn from previous queries, access external knowledge, or improve over time.
By combining structured workflows (LangGraph), long-term vector memory (FAISS or Chroma), and tool access, you get an agent that can act, remember, and improve over time.
Tech stack:
| Tool | Purpose |
|---|---|
| LangGraph | Build stateful agent workflows (DAGs) |
| FAISS or Chroma | Store vector embeddings for memory |
| LangChain | LLM interface, vector store APIs |
| OpenAI or Hugging Face | Power your LLM agent logic |
A Knowledge Assistant Agent that:

- searches for information on a topic (via a tool)
- stores what it finds in vector memory
- retrieves similar documents from earlier runs
- summarizes an answer using both fresh and remembered context
Workflow:
[Input] → [Search] → [Store in Memory] → [Retrieve Similar Docs] → [Summarize]
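Before running anything below, note that both the LLM and the embeddings in this stack call OpenAI, so an API key must be available in your environment (Unix shell shown; the key value is a placeholder):

```bash
export OPENAI_API_KEY="sk-..."  # placeholder: use your own key
```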
```bash
pip install langgraph langchain openai faiss-cpu tiktoken
```

```python
import faiss
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.docstore import InMemoryDocstore
from langchain.docstore.document import Document

llm = ChatOpenAI(model="gpt-3.5-turbo")
embeddings = OpenAIEmbeddings()

# FAISS can't be built from an embedding function alone: it also needs an
# index (1536 = size of OpenAI's ada-002 embeddings) and a docstore
index = faiss.IndexFlatL2(1536)
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
```
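Before wiring the graph, it helps to confirm the memory layer works on its own. This quick check is not part of the agent itself, and it calls the OpenAI embeddings API:

```python
# Illustrative check: store one document, then retrieve it by similarity
vectorstore.add_documents([Document(page_content="LangGraph builds stateful agent workflows")])
print(vectorstore.similarity_search("agent workflows", k=1)[0].page_content)
```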
```python
from typing import List, TypedDict

from langgraph.graph import StateGraph

# Shared state that flows through every node in the graph
class AgentState(TypedDict, total=False):
    query: str
    search_result: str
    retrieved_docs: List[Document]
    summary: str
```

```python
# Dummy tool that returns a pretend result
def search_api(state):
    query = state["query"]
    result = f"Research data about {query}"
    state["search_result"] = result
    return state

# Store the search result in vector memory
def store_memory(state):
    content = state["search_result"]
    doc = Document(page_content=content, metadata={"source": "search"})
    vectorstore.add_documents([doc])
    return state

# Retrieve documents similar to the current query
def retrieve_memory(state):
    query = state["query"]
    docs = vectorstore.similarity_search(query, k=2)
    state["retrieved_docs"] = docs
    return state

# Summarize based on both retrieved and current context
def summarize(state):
    context = "\n".join(doc.page_content for doc in state["retrieved_docs"])
    prompt = f"Use this info to answer: {state['query']}\n\nContext:\n{context}"
    state["summary"] = llm.predict(prompt)
    return state
```

```python
graph = StateGraph(AgentState)
graph.add_node("Search", search_api)
graph.add_node("StoreMemory", store_memory)
graph.add_node("Retrieve", retrieve_memory)
graph.add_node("Summarize", summarize)
graph.set_entry_point("Search")
graph.add_edge("Search", "StoreMemory")
graph.add_edge("StoreMemory", "Retrieve")
graph.add_edge("Retrieve", "Summarize")
graph.set_finish_point("Summarize")

executor = graph.compile()
```

```python
state = AgentState(query="LangGraph tutorials")
final = executor.invoke(state)
print("SUMMARY:", final["summary"])
```

Try replacing the dummy tool with a real search API, and consider pairing the long-term vector store with short-term chat history (for example, LangChain's ConversationBufferMemory).
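As one minimal sketch of that swap (the tool choice here is an example, not from the original tutorial, and assumes the duckduckgo-search package is installed), LangChain's DuckDuckGoSearchRun can stand in for the dummy node with the same signature:

```python
from langchain.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()

# Drop-in replacement for the dummy search_api node: same state in, state out
def search_api(state):
    state["search_result"] = search_tool.run(state["query"])
    return state
```

Because store_memory writes every result into the vector store, a second executor.invoke() with a related query will retrieve what the first run saved, which is exactly the long-term recall this tutorial is after.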