January 26, 2025
Create smarter, context-aware autonomous agents with long-term recall and tool access
LangGraph gives structure. Vector memory gives context. In this tutorial, you’ll learn how to combine both to build an agent that not only uses external tools but also remembers past interactions, learns from documents, and adapts based on retrieved knowledge. Perfect for search agents, assistants, and knowledge workers.
Tool use allows agents to interact with the world—via APIs, browsers, databases. But without memory, your agent is limited to the current prompt. It can’t learn from previous queries, access external knowledge, or improve over time.
By combining LangGraph's stateful workflows with a vector store for long-term memory, your agent can recall what it has seen before and ground new answers in retrieved context.

Use case examples include search agents that accumulate findings across runs, assistants that remember prior interactions, and research tools for knowledge workers.
| Tool | Purpose |
|---|---|
| LangGraph | Build stateful agent workflows (DAGs) |
| FAISS or Chroma | Store vector embeddings for memory |
| LangChain | LLM interface and vector store APIs |
| OpenAI or Hugging Face | Power your LLM agent logic |
A Knowledge Assistant Agent that searches for information, stores each result in vector memory, retrieves similar documents from past runs, and summarizes an answer from the combined context.
Workflow:
[Input] → [Search] → [Store in Memory] → [Retrieve Similar Docs] → [Summarize]
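Before wiring this up with LangGraph, the workflow above can be sketched as plain Python: each step is a function that takes and returns a state dict, which is exactly how LangGraph nodes behave. Everything here (the list-based memory, the word-overlap "similarity") is an illustrative stand-in, not the real LangGraph or FAISS API.

```python
memory = []  # stand-in for the vector store

def search(state):
    # pretend external tool call
    state["search_result"] = f"Research data about {state['query']}"
    return state

def store(state):
    memory.append(state["search_result"])
    return state

def retrieve(state):
    # naive "similarity": keep stored docs sharing a word with the query
    words = set(state["query"].lower().split())
    state["retrieved_docs"] = [d for d in memory if words & set(d.lower().split())]
    return state

def summarize(state):
    state["summary"] = f"Answer to {state['query']!r} using {len(state['retrieved_docs'])} doc(s)"
    return state

state = {"query": "LangGraph tutorials"}
for step in (search, store, retrieve, summarize):
    state = step(state)
```

The real version below swaps the list for a FAISS vector store and the final f-string for an LLM call, but the state-in/state-out shape stays the same.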
```bash
pip install langgraph langchain openai faiss-cpu tiktoken
```
```python
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.docstore.document import Document

llm = ChatOpenAI(model="gpt-3.5-turbo")
embeddings = OpenAIEmbeddings()

# FAISS can't be constructed from an embedding function alone; seed it
# with an initial text so the index is created with the right dimensions.
vectorstore = FAISS.from_texts(["agent memory initialized"], embeddings)
```
```python
from typing import List, TypedDict

from langgraph.graph import StateGraph

# State schema shared by all nodes; total=False since keys fill in as
# the graph runs.
class AgentState(TypedDict, total=False):
    query: str
    search_result: str
    retrieved_docs: List[Document]
    summary: str
```
```python
# Dummy tool that returns a pretend result
def search_api(state):
    query = state["query"]
    result = f"Research data about {query}"
    state["search_result"] = result
    return state

# Store result in memory
def store_memory(state):
    content = state["search_result"]
    doc = Document(page_content=content, metadata={"source": "search"})
    vectorstore.add_documents([doc])
    return state

# Retrieve similar documents
def retrieve_memory(state):
    query = state["query"]
    docs = vectorstore.similarity_search(query, k=2)
    state["retrieved_docs"] = docs
    return state

# Summarize based on both retrieved + current context
def summarize(state):
    context = "\n".join(doc.page_content for doc in state["retrieved_docs"])
    prompt = f"Use this info to answer: {state['query']}\n\nContext:\n{context}"
    state["summary"] = llm.predict(prompt)
    return state
```
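It helps to see what `similarity_search` is doing conceptually: embed the query, then rank stored vectors by cosine similarity. Here is a toy version with hand-made vectors and a linear scan (no OpenAI calls, and no FAISS index, which is what makes the real thing fast); all vectors and document names are made up for illustration.

```python
import math

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# toy "vector store": text -> embedding
store = {
    "doc about agents": [1.0, 0.0, 0.2],
    "doc about cooking": [0.0, 1.0, 0.1],
}

def toy_similarity_search(query_vec, k=1):
    # rank all stored docs by similarity to the query vector
    ranked = sorted(store.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

top = toy_similarity_search([0.9, 0.1, 0.1], k=1)
```

A query vector close to the "agents" embedding retrieves that document first, which is exactly how the agent's past search results resurface for related queries.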
```python
graph = StateGraph(AgentState)

graph.add_node("Search", search_api)
graph.add_node("StoreMemory", store_memory)
graph.add_node("Retrieve", retrieve_memory)
graph.add_node("Summarize", summarize)

graph.set_entry_point("Search")
graph.add_edge("Search", "StoreMemory")
graph.add_edge("StoreMemory", "Retrieve")
graph.add_edge("Retrieve", "Summarize")
graph.set_finish_point("Summarize")

executor = graph.compile()
```
```python
final = executor.invoke({"query": "LangGraph tutorials"})
print("SUMMARY:", final["summary"])
```
Try replacing the dummy `search_api` tool with a real search backend, or layering short-term conversational memory (e.g., `ConversationBufferMemory`) alongside the vector store so the agent tracks the current dialogue as well as long-term knowledge.
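One low-friction way to make the tool swap is to inject the search backend into the node, so the graph wiring never changes. This is a sketch, not a specific provider integration: `search_fn` stands in for any client (SerpAPI, Tavily, an internal API) exposing a `query -> text` call, and the fake backend below exists only for demonstration.

```python
from typing import Callable

def make_search_node(search_fn: Callable[[str], str]):
    """Build a LangGraph-style node around an injected search function."""
    def search_api(state):
        state["search_result"] = search_fn(state["query"])
        return state
    return search_api

# Wire up with a fake backend; swap the lambda for a real client call.
node = make_search_node(lambda q: f"top hit for {q!r}")
out = node({"query": "LangGraph"})
```

Because the node only depends on the `search_fn` interface, you can unit-test the graph with a stub and point it at a live API in production.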
DoggyDish.com is where agentic AI meets real-world deployment. We go beyond theory to showcase how intelligent agents are built, scaled, and optimized—from initial idea to full-scale production. Whether you're training LLMs, deploying inferencing at the edge, or building out AI-ready infrastructure, we provide actionable insights to help you move from lab to launch with the hardware to match.