
Rapid UI for Intelligent Workflows
Your agent might be smart, but it needs a user interface to be useful. This guide walks you through combining Streamlit for the frontend and FastAPI for the backend to host your LangChain- or CrewAI-powered agent. Build, test, and deploy intelligent workflows with a fast, reactive UI and a clean API layer.
Why Streamlit + FastAPI?
Building agentic applications requires:
- Frontend: A way for users to input tasks, view responses, and interact in real time
- Backend: A secure, scalable layer to handle agent logic, prompt execution, and tool integration
Streamlit gives you a fast, Pythonic UI layer perfect for internal tools and dashboards.
FastAPI provides an async-ready, production-grade backend for your agent orchestration.
When combined, they allow you to:
- Take user input
- Call agents with tools and memory
- Return results in real time
- Serve via REST endpoints (or host locally)
What You'll Build
A Research Assistant App where:
- The user enters a query (e.g., “What’s new with LangGraph?”)
- The app runs an agent to research the topic using DuckDuckGo
- The backend (FastAPI) handles all LLM calls
- The frontend (Streamlit) provides the interactive UI
Tools You'll Use
| Tool | Role |
|---|---|
| Streamlit | UI layer for interaction |
| FastAPI | Backend agent logic + API |
| CrewAI or LangChain | Agent orchestration |
| OpenAI / Hugging Face | LLMs powering reasoning |
Step-by-Step Guide

Step 1: Install Dependencies

```bash
pip install streamlit fastapi uvicorn crewai langchain openai duckduckgo-search
```

Step 2: Create Your Backend with FastAPI
agent_backend.py
```python
from fastapi import FastAPI
from pydantic import BaseModel
from crewai import Agent, Task, Crew
from langchain.chat_models import ChatOpenAI  # newer LangChain: langchain_openai
from langchain.tools import DuckDuckGoSearchRun  # newer LangChain: langchain_community.tools

app = FastAPI()

class QueryRequest(BaseModel):
    question: str

# Note: crew.kickoff() is blocking; with a sync (def) endpoint, FastAPI would
# run it in a threadpool instead of blocking the event loop.
@app.post("/run-agent")
async def run_agent(data: QueryRequest):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    search = DuckDuckGoSearchRun()

    researcher = Agent(
        role="Researcher",
        goal=f"Find detailed info about: {data.question}",
        backstory="A diligent analyst who tracks down current, reliable sources.",
        tools=[search],
        llm=llm,
    )
    writer = Agent(
        role="Writer",
        goal="Write a clear and helpful summary",
        backstory="A technical writer who turns raw research into plain language.",
        llm=llm,
    )

    # Recent CrewAI versions require description and expected_output on tasks.
    task1 = Task(
        description="Do the research",
        expected_output="Bullet-point findings with sources",
        agent=researcher,
    )
    task2 = Task(
        description="Write summary from findings",
        expected_output="A concise, readable summary",
        agent=writer,
    )

    crew = Crew(agents=[researcher, writer], tasks=[task1, task2])
    output = crew.kickoff()
    return {"response": str(output)}  # str() keeps the result JSON-serializable
```

Step 3: Run Your FastAPI Backend
```bash
uvicorn agent_backend:app --reload
```
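Before wiring up the UI, it's worth smoke-testing the endpoint directly. A quick check with curl, matching the `/run-agent` route and `question` field defined above (the example question is arbitrary):

```bash
curl -X POST http://localhost:8000/run-agent \
  -H "Content-Type: application/json" \
  -d '{"question": "What is LangGraph?"}'
```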
Step 4: Create the Frontend with Streamlit

app_frontend.py
```python
import streamlit as st
import requests

st.set_page_config(page_title="Agent Assistant", layout="centered")
st.title("AI Research Assistant")
st.write("Ask a question, and let the agent research and summarize for you.")

user_input = st.text_input("Enter your query:")

if st.button("Run Agent") and user_input:  # ignore clicks on an empty query
    with st.spinner("Thinking..."):
        response = requests.post(
            "http://localhost:8000/run-agent",
            json={"question": user_input},
        )
        result = response.json()
    st.success("Done!")
    st.markdown("### Agent Response")
    st.write(result["response"])
```
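If the backend errors out, `response.json()` fails with an opaque message. A small drop-in helper makes failures explicit; the function name and the 120-second timeout here are illustrative choices, not part of the original code:

```python
import requests

def ask_agent(question: str, url: str = "http://localhost:8000/run-agent") -> str:
    """Call the agent backend and return its answer, raising on HTTP errors."""
    response = requests.post(url, json={"question": question}, timeout=120)
    response.raise_for_status()  # surface 4xx/5xx instead of failing inside .json()
    return response.json()["response"]
```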
Step 5: Launch the App

- Start the backend:

```bash
uvicorn agent_backend:app --reload
```

- Run the frontend:

```bash
streamlit run app_frontend.py
```
You now have a working agentic app with a frontend and a backend!
Optional Enhancements
| Feature | How-To |
|---|---|
| Real-time streaming output | Use FastAPI + LangChain's streaming support |
| Memory | Add FAISS or Chroma for vector memory |
| API keys/secrets | Use .env or Streamlit's secrets manager (see the sketch below) |
| File uploads | Let users upload PDFs and retrieve insights |
| Deployment | Use Streamlit Cloud, Hugging Face Spaces, or Render.com |
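For the secrets row, here is a minimal sketch of keeping the OpenAI key out of your source, assuming python-dotenv as an extra dependency (`pip install python-dotenv`); on the Streamlit side, the same value can come from `st.secrets` instead:

```python
import os

from dotenv import load_dotenv  # assumed extra dependency: pip install python-dotenv

load_dotenv()  # reads key=value pairs from a local .env file into the process env
openai_api_key = os.environ["OPENAI_API_KEY"]  # fail loudly if the key is missing

# In the Streamlit frontend you could read st.secrets["OPENAI_API_KEY"] instead,
# backed by a .streamlit/secrets.toml file.
```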
Additional Resources
- Uvicorn Deployment Guide
- FastAPI Docs
- Streamlit Docs
- LangChain Tools
- CrewAI Docs


