Build stateful AI agents using Mem0

Farhan Ahmad

You may have interacted with (or built) many AI bots over the last few years. Most of them serve a basic purpose (like helping with customer support), but many lack the ability to remember the user's preferences and conversations from previous sessions (in other words, the agents are stateless), so every new session with the AI agent feels like starting from a blank slate.

This is where the concept of stateful agents comes in.
An agent that actually remembers your user's preferences and past behavior (a stateful agent), and then uses that info to guide the user, is extremely helpful. It helps users reach their goals faster, and the conversation flows more smoothly, which ultimately keeps your users happy with your product or service.

Let's see how Mem0 makes it possible to build stateful agents in a few easy steps.

What's Mem0?


To put it simply, Mem0 is a framework that helps your agent remember the user's past behaviors and preferences by storing the conversation and helping you retrieve it as needed, using natural language. It does the heavy lifting for us; all we have to do is add a couple of lines of code to our app and we're good to go.

With Mem0 you can (all four operations are sketched in code right after this list):

  1. Store conversations
  2. Retrieve older conversations
  3. Search conversations using natural language
  4. Update stored conversations as needed
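
Here's a rough sketch of what those operations look like with the mem0ai TypeScript SDK. The add and search calls match what we'll use later in this tutorial; getAll and the update/delete note come from Mem0's platform docs, so treat the exact signatures as assumptions and double-check against the current SDK:

Code:
import MemoryClient from "mem0ai"

async function demo() {
  const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! })

  // 1. Store a conversation (or a slice of one) for a given user
  await mem0.add(
    [{ role: "user", content: "My favorite color is blue" }],
    { user_id: "demo-user" }
  )

  // 2. Retrieve everything stored for that user (per Mem0's platform docs)
  const all = await mem0.getAll({ user_id: "demo-user" })

  // 3. Search memories using plain natural language
  const hits = await mem0.search("what does this user like?", { user_id: "demo-user" })
  console.log({ all, hits })

  // 4. Updating (and deleting) individual memories is also supported;
  //    check Mem0's docs for the exact signatures.
}

void demo()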

In this tutorial we'll build a very simple Next.js app that showcases the power of this framework. We'll keep it super easy to follow and beginner-friendly :)

Overview of what we're building today


We're going to be building a very basic but powerful stateful agent. We'll be using the below stack for this:

  1. TypeScript framework - Next.js
  2. LLM provider - OpenAI
  3. Memory layer - Mem0

Mem0 helps your agent remember

Our goal is to demonstrate the concept of stateful agents using Mem0, which is why we'll keep this tutorial simple and beginner-friendly.
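
Here's the request flow we'll have in place by the end of the tutorial (a conceptual sketch that mirrors the code we'll write below):

Code:
// Conceptual request flow:
//
// browser (page.tsx)
//   -> POST /api/chat (route.ts)
//        -> mem0.search(...)                     // recall relevant user memories
//        -> openai.chat.completions.create(...)  // generate a reply
//        -> mem0.add(...)                        // store the new turns as memories
//   <- { reply } rendered in the chat UI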

Getting Started (Next.js project initialization)


Alright, we'll start by initializing a Next.js app using create-next-app (I know, I'm a lazy guy).

npx ships with npm (version 5.2 and up), so you most likely already have it. If the command is missing, update Node.js/npm, or install it explicitly with:
npm install -g npx

Open a terminal (your editor's integrated one works great) and type the below command to initialize a new project:
npx create-next-app@latest

It's going to ask you a bunch of questions, starting with the project name, linter choice, etc. If you're new, you can just follow my lead according to the screenshot below:

Initializing new project using create-next-app

As you can see, the project name I specified was "smart-agent". After this step, a new project gets initialized with the name "smart-agent" (or whatever name you chose to specify), which shows up on the left-hand side in VS Code (see screenshot below).

Project structure for newly initialized nextjs project

A good way to get familiar with Next.js is to expand/collapse each folder in this project to get a feel for how things are structured and how different components inside the app interact with each other.

Try running the app to make sure all's good


Use the command below to switch to the project directory inside the CLI (remember, in my case the name of the agent is "smart-agent"):
cd smart-agent

Now run the below command to start the dev server:
npm run dev

(Screenshot for reference)

CLI command to run Nextjs app

Now when you open your browser and navigate to http://localhost:3000, you should see something like this:

A running Nextjs app

Cool! That means our app is officially set up. Let's add some features now.

Creating a simple chat feature


For the next step we'll create a simple chat interface that lets us talk to the LLM through the OpenAI API. To keep things simple, we're not going to worry about adding the "remembering" feature just yet.

The very first thing we'll need is an OpenAI API key. Let's go grab it from OpenAI's website: head to https://platform.openai.com/api-keys, where you can either use an existing API key or create a new one.

Once you have copied the API key, come back to VS Code and add it to our project's environment variables. To do this, we'll create a new file called ".env.local", signifying that this file is for the local environment. When we run npm run dev and start the server, it will pick up all the values specified here. You can refer to the screenshot below.

Adding OpenAI API key inside Next.js project
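
For reference, .env.local should contain a single line like this, with your real key in place of the placeholder value:

Code:
OPENAI_API_KEY=sk-...your-key-here...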

Cool, now that we have added the API key, let's go ahead and install the official openai package to interact with the LLM API.

Run the below command to install the "openai" package:
npm i openai

You can see below that the openai package was added to the package.json file:

installed package

Alright, let's now build the UI and logic for our chat app


Since our app was initialized using create-next-app, the page.tsx file is the easiest place to change our app's UI. To keep this tutorial super easy, we'll only update the page.tsx file (and one server file) to build out our app.

Go ahead and copy the below code and paste it in your page.tsx file (replacing the older code):


Code:
"use client"

import { useEffect, useRef, useState } from "react"

type Role = "system" | "user" | "assistant"

export type ChatMessage = {
  role: Role
  content: string
}

export default function Page() {
  return <Chat />
}

function Chat() {
  const [messages, setMessages] = useState<ChatMessage[]>([
    {
      role: "system",
      content: "You are a concise, friendly assistant. Keep answers under ~120 words unless asked.",
    },
  ])

  const [input, setInput] = useState("")
  const [pending, setPending] = useState(false)
  const [error, setError] = useState<string | null>(null)
  const bottomRef = useRef<HTMLDivElement>(null)

  useEffect(() => {
    bottomRef.current?.scrollIntoView({ behavior: "smooth" })
  }, [messages, pending])

  async function send() {
    const text = input.trim()
    if (!text || pending) return

    // Add the user's message locally first
    const next = [...messages, { role: "user" as const, content: text }]
    setMessages(next)
    setInput("")
    setPending(true)
    setError(null)

    try {
      const res = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: next }),
      })

      if (!res.ok) throw new Error(`HTTP ${res.status}`)

      const data: { reply?: string } = await res.json()
      const reply = data.reply ?? "(No response)"

      setMessages((cur) => [...cur, { role: "assistant", content: reply }])

      /* eslint-disable @typescript-eslint/no-explicit-any */
    } catch (e: any) {
      console.error(e)
      setError(e?.message || "Something went wrong.")
      setMessages((cur) => [...cur, { role: "assistant", content: "Sorry, something went wrong. Try again." }])
    } finally {
      setPending(false)
    }
  }

  function onSubmit(e: React.FormEvent) {
    e.preventDefault()
    void send()
  }

  return (
    <div className="flex flex-col h-[100svh] max-w-2xl mx-auto p-4 gap-3">
      <header className="flex items-center justify-between">
        <h1 className="text-xl font-semibold">Simple Chat</h1>
      </header>

      <main className="flex-1 overflow-y-auto rounded-2xl border bg-white p-3 space-y-3 text-black">
        {/* Hide system message from the transcript */}
        {messages
          .filter((m) => m.role !== "system")
          .map((m, i) => (
            <Bubble key={i} role={m.role} text={m.content} />
          ))}
        {pending && <Bubble role="assistant" text="…thinking" muted />}
        <div ref={bottomRef} />
      </main>

      <form onSubmit={onSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type a message and press Enter…"
          className="flex-1 px-4 py-2 rounded-xl border outline-none"
          aria-label="Message"
        />
        <button type="submit" disabled={!input.trim() || pending} className="px-4 py-2 rounded-xl border shadow disabled:opacity-50">
          Send
        </button>
      </form>

      {error && (
        <p className="text-xs text-red-600" role="alert">
          {error}
        </p>
      )}
    </div>
  )
}

function Bubble({ role, text, muted }: { role: Role; text: string; muted?: boolean }) {
  const isUser = role === "user"
  return (
    <div
      className={
        "max-w-[85%] whitespace-pre-wrap px-4 py-2 rounded-2xl shadow " +
        (isUser ? "bg-gray-200 ml-auto" : "bg-white border") +
        (muted ? " text-gray-500" : "")
      }
    >
      {text}
    </div>
  )
}

Once you're done updating your page.tsx file, open your browser, visit http://localhost:3000, and you'll see something like this:

Chat app's updated UI

Our app is taking shape for sure! But if you try sending a message it'll throw an error, because we have only implemented the UI, and we still need to add the logic to communicate with OpenAI's LLM.

Adding a new route to handle requests to the OpenAI API


Whenever the user sends a message, we want to forward it to OpenAI's LLM and get a response back. Finally, we want to add the LLM's response to the chat UI. The age-old game of state updates and API requests. Let's get it going.
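
Concretely, here's the little contract between page.tsx and the route we're about to write (the type names are just for illustration; the actual code works with plain objects):

Code:
// What page.tsx sends to POST /api/chat:
type ChatRequest = {
  messages: { role: "system" | "user" | "assistant"; content: string }[]
}

// What the route sends back:
type ChatResponse = {
  reply?: string // present on success
  error?: string // present on failure
}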

Create a new folder inside the app folder named api, and within the api folder create another folder named chat. Inside the chat folder, create a new file called route.ts as shown in the screenshot:

New project structure after creating a new route
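
If you prefer the terminal, the same structure can be created in one go (macOS/Linux; the path assumes the default App Router layout):
mkdir -p app/api/chat && touch app/api/chat/route.ts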

Our new route.ts file is empty so far. Let's paste the below code into the file:


Code:
import { NextRequest, NextResponse } from "next/server"
import OpenAI from "openai"

export const runtime = "nodejs"
export const dynamic = "force-dynamic" // avoid caching

export async function POST(req: NextRequest) {
  if (!process.env.OPENAI_API_KEY) {
    return NextResponse.json({ error: "Missing OPENAI_API_KEY on the server." }, { status: 500 })
  }

  let body: unknown
  try {
    body = await req.json()
  } catch {
    return NextResponse.json({ error: "Invalid JSON body." }, { status: 400 })
  }
  /* eslint-disable @typescript-eslint/no-explicit-any */
  const { messages } = (body as any) || {}

  if (!Array.isArray(messages) || messages.length === 0) {
    return NextResponse.json({ error: "'messages' must be a non-empty array." }, { status: 400 })
  }

  try {
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

    const completion = await openai.chat.completions.create({
      model: "gpt-4.1-nano", // an affordable model here
      messages,
    })

    const reply = completion.choices?.[0]?.message?.content?.trim() ?? ""

    return NextResponse.json({ reply })
  } catch (err: any) {
    console.error("/api/chat error:", err?.response?.data || err?.message || err)
    return NextResponse.json({ error: "Something went wrong generating a reply." }, { status: 500 })
  }
}

Great, now start the server with npm run dev if it isn't already running, and you should be able to talk to the AI as shown below:

AI chat feature works!
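
You can also sanity-check the endpoint straight from the terminal (assuming the dev server is running on port 3000):

Code:
curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello in five words."}]}'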

But... why do we want our agent to remember?


Before we get into implementing the memory feature, let's quickly demonstrate why we need it.

As shown in the GIF below, try telling the agent something about your likes/preferences (for example, "My favorite travel destination is Japan"). Then go ahead and refresh the page. Once the page is refreshed, ask the agent about the preference you mentioned earlier ("What's my favorite travel destination?").

You'll quickly notice that the agent doesn't remember anything about you:
GIF

And this is why we want to add memory to the agent: it will help the agent remember the user's preferences across different sessions, as you'll see soon.

Adding the ability to remember using Mem0


Okay, now that our simple chat feature is ready, let's give our app the ability to remember the user's preferences from past conversations. We'll use Mem0 to implement this feature. If you're someone who prefers reading docs first and wants to dive deeper, check out Mem0's Quick-start guide.

Two ways to use Mem0


Broadly speaking, there are two ways to use Mem0. You can either:

  1. use Mem0's cloud platform to manage your conversations with a MEM0_API_KEY (easiest),
  2. or host Mem0's open-source version on your own infra (hard + time-consuming).

In this tutorial, for the sake of keeping things simple, we'll use Mem0's platform for building out our agent in an instant.

Setting up Mem0 in our project


Let's start by installing the npm package mem0ai using the below command:
npm install mem0ai

After running the command our package.json should be updated as shown below:

installed mem0 package

Now that we have the package installed, there's one more step required before we can start using Mem0: we need to grab an API key from Mem0's website to actually make it work.

Head over to this URL https://app.mem0.ai/login and create a new account if you haven't already.

Once you're done, head over to https://app.mem0.ai/dashboard/get-started and copy your API key. At the time of writing, that page looks something like this:

Copy API key from Mem0's dashboard

Once you've copied the API key, come back to your code editor, open the .env.local file, and add the key you just copied as MEM0_API_KEY. See the screenshot below for reference.

updated .env.local file
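
Your .env.local should now contain both keys (placeholder values shown):

Code:
OPENAI_API_KEY=sk-...your-openai-key...
MEM0_API_KEY=...your-mem0-key...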

Nice! Let's start implementing the memory feature using the mem0ai library.

Implementing Mem0 in our route.ts file


We know that Mem0 allows us to store conversations and search older ones using natural language. Let's now see it in action by implementing it in our app.

Go into the route.ts file that we created earlier inside the app/api/chat folder. Next, replace the code inside the route.ts file with the code below:


Code:
import { NextRequest, NextResponse } from "next/server"
import OpenAI from "openai"
import MemoryClient from "mem0ai"
import { ChatCompletionMessageParam } from "openai/resources"

export const runtime = "nodejs"
export const dynamic = "force-dynamic" // avoid caching

export async function POST(req: NextRequest) {
  if (!process.env.OPENAI_API_KEY) {
    return NextResponse.json({ error: "Missing OPENAI_API_KEY on the server." }, { status: 500 })
  }
  if (!process.env.MEM0_API_KEY) {
    return NextResponse.json({ error: "Missing MEM0_API_KEY on the server." }, { status: 500 })
  }

  // Initialize the Mem0 client with the key we added to .env.local
  const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY })

  let body: unknown
  try {
    body = await req.json()
  } catch {
    return NextResponse.json({ error: "Invalid JSON body." }, { status: 400 })
  }
  /* eslint-disable @typescript-eslint/no-explicit-any */
  const { messages } = (body as any) || {}

  if (!Array.isArray(messages) || messages.length === 0) {
    return NextResponse.json({ error: "'messages' must be a non-empty array." }, { status: 400 })
  }

  const userId: string = (body as any)?.userId || "demo-user"
  const lastUserContent = [...messages].reverse().find((m: any) => m.role === "user")?.content || ""

  let memoryContext = ""
  try {
    console.log("searching from memory...")
    const hits = await mem0.search(lastUserContent || "recent preferences", { user_id: userId })
    // Each hit can be an object; we try common fields (text/memory/value)
    const lines = (Array.isArray(hits) ? hits : [])
      .map((m: any) => m?.text ?? m?.memory ?? m?.value ?? "")
      .filter(Boolean)
      .slice(0, 6)
    if (lines.length) {
      memoryContext = "Known about user (from memory): \n" + lines.join(" - ")
    }
  } catch (e) {
    console.log("mem0.search error", e)
  }

  // Prepend memory context so the model can personalize
  const messagesWithMemory: ChatCompletionMessageParam[] = memoryContext
    ? [
        { role: "system", content: "Use the following user memory if helpful. Don’t repeat it verbatim." },
        { role: "system", content: memoryContext },
        ...messages,
      ]
    : messages

  try {
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

    const completion = await openai.chat.completions.create({
      model: "gpt-4.1-nano", // an affordable model here
      messages: messagesWithMemory,
    })
    const reply = completion.choices?.[0]?.message?.content?.trim() ?? ""

    try {
      // Store only the latest user turn (plus the assistant reply below) in Mem0
      console.log("adding to memory...")
      const lastUserMessage = [...messages].reverse().find((m: any) => m?.role === "user")
      const messagesToStore = [
        lastUserMessage ? { role: "user", content: lastUserMessage.content } : messages[messages.length - 1],
        { role: "assistant", content: reply },
      ]
      await mem0.add(messagesToStore as any, { user_id: userId, metadata: { category: "chat" } })
    } catch (e) {
      console.log("mem0.add failed:", e)
    }

    return NextResponse.json({ reply })
  } catch (err: any) {
    console.error("/api/chat error:", err?.response?.data || err?.message || err)
    return NextResponse.json({ error: "Something went wrong generating a reply." }, { status: 500 })
  }
}

Can you spot the changes we made? Let's go through them one by one.

First, we initialized a new Mem0 client (after a quick guard that returns a 500 error if the key is missing):
const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY })

Notice that we're passing in the MEM0_API_KEY environment variable we added earlier to the .env.local file.

Next, we added this block of code to search Mem0 for relevant memories using the user's most recent message:


Code:
const userId: string = (body as any)?.userId || "demo-user"
const lastUserContent = [...messages].reverse().find((m: any) => m.role === "user")?.content || ""

let memoryContext = ""
try {
  console.log("searching from memory...")
  const hits = await mem0.search(lastUserContent || "recent preferences", { user_id: userId })
  // Each hit can be an object; we try common fields (text/memory/value)
  const lines = (Array.isArray(hits) ? hits : [])
    .map((m: any) => m?.text ?? m?.memory ?? m?.value ?? "")
    .filter(Boolean)
    .slice(0, 6)
  if (lines.length) {
    memoryContext = "Known about user (from memory): \n" + lines.join(" - ")
  }
} catch (e) {
  console.log("mem0.search error", e)
}

As you can see, every conversation that's stored needs some kind of identifier. Since we haven't implemented authentication in our app, we'll simply mock the user_id value as "demo-user".
Then, we use mem0.search() to look up any memories belonging to that user_id that could tell us about the user's preferences. Whatever we find gets prepended to the message list as extra system messages (the messagesWithMemory variable), so the model can personalize its reply.
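
For example, if Mem0 had previously extracted the memories "Favorite color is blue" and "Favorite travel destination is Japan" for this user, the memoryContext string we build would come out like this (the exact phrasing of stored memories is up to Mem0):

Code:
Known about user (from memory): 
Favorite color is blue - Favorite travel destination is Japan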

And finally, we've also added logic to store the conversation in Mem0, as shown below:


Code:
try {
  // Store only the latest user turn (plus the assistant reply) in Mem0
  console.log("adding to memory...")
  const lastUserMessage = [...messages].reverse().find((m: any) => m?.role === "user")
  const messagesToStore = [
    lastUserMessage ? { role: "user", content: lastUserMessage.content } : messages[messages.length - 1],
    { role: "assistant", content: reply },
  ]
  await mem0.add(messagesToStore as any, { user_id: userId, metadata: { category: "chat" } })
} catch (e) {
  console.log("mem0.add failed:", e)
}

The above code adds just the last user message and the assistant's reply to memory. We do this so that only the new messages get stored in Mem0, not the entire conversation history. This saves cost and prevents duplicate memories from being saved in Mem0.

And that's pretty much all we need! Our agent now has the ability to learn the user's preferences and remember them across sessions. Let's try it out.

Trying out our Mem0 Agent


Okay, let's take it for a run now. We'll tell the agent about our favorite color, refresh the page, and ask it about our favorite color again to see if it remembers anything.

Below is a GIF of me doing it:

Mem0 helps your agent remember

Voila! Your agent will now remember anything you talk about, even across sessions.

How can we see the stored memories in Mem0?


Yes, you can see the memories stored in Mem0 for every user_id. Go to your Mem0 dashboard here: https://app.mem0.ai/dashboard/memories and you'll see something like this:

Mem0 memories for every user_id in dashboard

Notice that every memory is associated with a user_id. The memory "favorite color is blue" is associated with "demo-user".
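
If you'd rather inspect memories from code instead of the dashboard, here's a small sketch (getAll is taken from Mem0's platform docs; I haven't pinned down the exact result shape, so just log it and take a look):

Code:
import MemoryClient from "mem0ai"

async function listMemories() {
  const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! })
  // Fetch every memory associated with our mock user id
  const memories = await mem0.getAll({ user_id: "demo-user" })
  console.log(JSON.stringify(memories, null, 2))
}

void listMemories()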

Conclusion


There's no doubt that stateful agents will become the standard in the near future. We'll stop seeing generic customer chatbots; instead, chatbots that remember everything about you from your past interactions will become the norm.
This is still just the beginning, and the reason I like solutions like Mem0 so much is how easy they make it for beginners to build complex stateful agents in no time.

As always, if you found this article useful, feel free to follow me here or on LinkedIn (https://www.linkedin.com/in/farhans-profile/).

Cheers!
