My Journey into Agentic AI Development: AI Newsroom

Last time, I demonstrated a simple LangGraph application that directed an LLM to write on any given topic, incorporating real-time web searches and generating tailored images to accompany articles, each with a distinct writing style. Now, let’s switch things up.

Debate in the newsroom

Picture a lively morning meeting at a bustling news agency, 8:00 AM sharp. The research team has just delivered a fresh list of trending topics. Around a sleek conference table, five editors — Andrew, Edward, Bob, Ruby, and Susan — engage in a spirited debate over which stories to tackle for the next issue. At the head of the table, the Chief Editor, a commanding presence, presides with a no-nonsense gaze, ready to steer the discussion.

Andrew (leaning forward, voice firm): Look, we have to cover the escalating conflict between Russia and Ukraine. It’s a powder keg — geopolitical maneuvering, military standoffs, and the ripple effects on global stability. This isn’t just news; it’s a matter of international justice and security. The world needs to know what’s at stake.

Edward (adjusting his glasses, calm but sharp): Andrew, I get it, but let’s not ignore the markets. The recent volatility in the Nasdaq, driven by AI chip shortages, is shaking investor confidence. I’m pitching a deep dive into how supply chain disruptions are reshaping tech valuations. Numbers don’t lie — people need to understand the economic fallout.

Bob (grinning, tapping his smartwatch): Yo, Edward, markets are cool, but have you seen the new neural interface prototypes from Neuralink? Mind-blowing! I want to write about how brain-computer interfaces are about to revolutionize gaming and productivity. This is the future, people — let’s get readers hyped!

Ruby (sipping herbal tea, serene but passionate): I hear you all, but we’re missing the bigger picture. The latest UN climate report just dropped, and it’s a wake-up call. I’m proposing a piece on sustainable urban planning — how cities can cut emissions while prioritizing mental health through green spaces. It’s about living in harmony with the planet.

Susan (leaning in, eyes sparkling): Guys, you’re all so serious! People are buzzing about Taylor Swift’s surprise album drop and its cultural impact. I want to explore how it’s sparking conversations about identity and empowerment. Pop culture is culture — it’s what connects us, and readers will eat it up.

Andrew (scoffing): Susan, with all due respect, a celebrity album isn’t news. It’s fluff. The public deserves hard-hitting truth, like the corruption scandal breaking in Eastern Europe. Graft, power grabs — this is what shapes societies, not some pop star’s latest single.

Edward (raising an eyebrow): Andrew, your scandals can wait a day. The Federal Reserve’s hint at rate hikes is sending shockwaves through global markets. If I don’t cover this, readers will miss why their portfolios are tanking. Data over drama, my friend.

Bob (laughing): Oh, come on, Edward! Nobody’s checking their stocks when they’re jacked into VR with the new Oculus killer. I’m telling you, immersive tech is where it’s at. Let’s not bore readers with spreadsheets — give them something to geek out over!

Ruby (gently but firmly): Bob, tech’s exciting, but it’s also part of the problem. Rare earth metals for your gadgets are wrecking ecosystems. My piece would show how we can innovate and protect the planet. Mindfulness in design — that’s the future.

Susan (crossing her arms): Ruby, I love your vibe, but not everyone’s meditating in a forest. People want stories that hit their hearts. My piece isn’t just about music — it’s about how art shapes social movements. That’s real impact.

Chief Editor (slamming a hand on the table, voice commanding): Enough! You’re wasting time bickering, and we’ve got a deadline. Each of you, pick one topic you want to write on — now. Make it quick, make it yours, and start writing instantly. If you don’t choose, I’ll assign one for you, and I don’t want to hear any complaints. Get to it!

Actually, I’ve never worked in a news agency, but the scenario I’ve envisioned captures the essence of what I aim to build: an AI Newsroom, a dynamic agentic AI system. Its key features include:
  • A Web Search Agent that scours the internet to identify the latest trending topics across various categories, such as politics, technology, entertainment, finance, and more.
  • A Chief Editor Agent that assigns topics to editors and oversees the allocation process to ensure fairness and efficiency.
  • Multiple Editor Agents, each with distinct personas and expertise, who review the list of trending topics and select the one that aligns with their preferences and specialization.
  • The Chief Editor ensures that each editor is assigned exactly one topic, resolving conflicts if multiple editors choose the same topic.
  • Each Editor Agent then crafts an article in their unique writing style, drawing on the research provided by the Web Search Agent at the start of the process.

Based on these requirements, the workflow for the AI Newsroom can be represented as a LangGraph structure, which the code below builds step by step.

The AI Newsroom will feature an intuitive and visually appealing user interface built with Streamlit, now refined with Bootstrap for a polished, responsive design.

Let’s dig into the code!

Defining the workflow

The first piece of code to tackle is, naturally, the shared state object. For this AI Newsroom project, I’ve created a new NewsroomState to manage the workflow.

from typing import List, Dict, Annotated

from langgraph.graph import add_messages
from typing_extensions import TypedDict


class NewsroomState(TypedDict):
    trending_topics: List[Dict[str, str]]  # List of trending topics from the researcher
    editors: List[str]  # List of editor names
    topic_claims: Dict[str, List[str]]  # Topic to list of editors claiming it
    assignments: Dict[str, str]  # Editor to assigned topic
    articles: Dict[str, Dict]  # Editor to written article and its metadata
    messages: Annotated[list, add_messages]

Besides the core messages key, NewsroomState includes other key properties that get processed during the LangGraph workflow. By the end, the articles property holds the final articles crafted by each editor. Now, let’s dive into the main method for building the workflow.

def build_workflow(use_mock_topics: bool = False):
    # Create agents
    advanced_research_llm = AdvancedResearcherLlm(prompt_path='prompts/advanced_researcher.txt')
    search_tools = get_search_tools()
    # Use the new bind_tools method for cleaner syntax
    advanced_research_llm.bind_tools(search_tools)
    # Create standardized agent nodes with explicit data flow
    researcher_node = advanced_research_llm.create_node(
        output_field='trending_topics'
    )
    search_tool_node = ToolNode(search_tools)

    graph_builder = StateGraph(NewsroomState)

    graph_builder.add_node(ADVANCED_RESEARCH_NODE, researcher_node)
    graph_builder.add_node(ADVANCED_WEB_SEARCH_NODE, search_tool_node)
    graph_builder.add_node(CHIEF_EDITOR_NODE, chief_editor_node)
    graph_builder.add_node(EDITOR_WRITE_NODE, editor_write_node)

    if not use_mock_topics:
        graph_builder.add_edge(START, ADVANCED_RESEARCH_NODE)
    else:
        graph_builder.add_edge(START, CHIEF_EDITOR_NODE)

    def research_route(state: NewsroomState) -> str:
        """Route after research based on tool calls"""
        try:
            last_message = state["messages"][-1] if state["messages"] else None
            logger.info(f"ROUTE CHECK: Last message type: {type(last_message)}")

            # Pretty print the message content if it exists
            if hasattr(last_message, 'content') and last_message.content:
                logger.info(f"ROUTE CHECK: Message content: {last_message.content}")

            logger.info(f"ROUTE CHECK: Trending topics so far: {json.dumps(state.get('trending_topics'), indent=2, default=str)}")

            if hasattr(last_message, "tool_calls") and last_message.tool_calls:
                logger.info(f"Tool calls detected, routing to web search")
                logger.info(f"Tool Calls:n{prettify_tool_calls(last_message.tool_calls)}")
                return ADVANCED_WEB_SEARCH_NODE
            return CHIEF_EDITOR_NODE
        except (IndexError, AttributeError) as e:
            logger.warning(f"Routing error: {str(e)}, defaulting to chief editor node")
            return CHIEF_EDITOR_NODE

    # Add conditional routing from researcher
    graph_builder.add_conditional_edges(
        ADVANCED_RESEARCH_NODE,
        research_route,
        {
            ADVANCED_WEB_SEARCH_NODE: ADVANCED_WEB_SEARCH_NODE,
            CHIEF_EDITOR_NODE: CHIEF_EDITOR_NODE
        }
    )
    # Add edge from web search back to researcher (creates the loop)
    graph_builder.add_edge(ADVANCED_WEB_SEARCH_NODE, ADVANCED_RESEARCH_NODE)
    graph_builder.add_edge(CHIEF_EDITOR_NODE, EDITOR_WRITE_NODE)

    # Add edge from editor_write to end
    graph_builder.add_edge(EDITOR_WRITE_NODE, END)

    # Compile the graph before returning
    return graph_builder.compile()
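
Before moving on, here’s a minimal sketch of how the compiled workflow could be driven end to end. The mock topic and the final printout are my own illustrative assumptions, not code from the repo:

# Hypothetical driver sketch -- the mock topic and printout are illustrative
# assumptions, not code taken from the repo.
workflow = build_workflow(use_mock_topics=True)

initial_state: NewsroomState = {
    "trending_topics": [{
        "title": "AI chip shortages rattle the Nasdaq",
        "description": "Supply chain disruptions are reshaping tech valuations",
        "category": "finance",
    }],
    "editors": ["Andrew", "Edward", "Bob", "Ruby", "Susan"],
    "topic_claims": {},
    "assignments": {},
    "articles": {},
    "messages": [],
}

final_state = workflow.invoke(initial_state)
for editor, article in final_state["articles"].items():
    print(f"{editor} -> {article['topic']}")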

The web searching component builds on my previous work, where a researcher agent was integrated with a TavilySearch tool (see my AI Blogger article for details).

This time, the web searching agent is even more powerful, tailored to fetch all trending topics across categories like politics, technology, entertainment, and finance.

from langchain_tavily import TavilySearch

from tools.tool_secrets import PROD_TAVILY_API_KEY


def get_search_tools():
    """Get optimized search tools for different use cases"""
    # Default tool for general viral content discovery
    tavily_search_tool = _get_tavily_search_tool()

    # Specialized tools for different trending content types
    viral_social_tool = _get_viral_search_tool("social")
    trending_news_tool = _get_viral_search_tool("news")
    tech_trends_tool = _get_viral_search_tool("tech")
    finance_trends_tool = _get_viral_search_tool("finance")
    celebrities_trends_tool = _get_viral_search_tool("celebrities")
    entertainment_trends_tool = _get_viral_search_tool("entertainment")

    tools = [tavily_search_tool, viral_social_tool, trending_news_tool, tech_trends_tool, finance_trends_tool,
             celebrities_trends_tool, entertainment_trends_tool]
    return tools


def _get_tavily_search_tool():
    """Optimized default search tool for viral and trending content"""
    return TavilySearch(
        max_results=10,  # Reduced from 20 for efficiency
        tavily_api_key=PROD_TAVILY_API_KEY,
        search_depth="advanced",  # Changed from basic for better context
        description="Search the web for viral and trending topics. Optimized for real-time discovery of trending content, viral stories, and emerging topics with time-sensitive filtering.",
        include_images=True,  # Visual content crucial for viral trends
        include_answer=True,  # Get direct answers for trending topics
        time_range="day",  # Focus on most recent viral content
        topic="general"  # Best for discovering diverse trending topics
    )


def _get_viral_search_tool(
        trend_focus: str = "general",
        max_results: int = 10,
        time_sensitivity: str = "day"
):
    """
    Specialized Tavily search tool for different types of viral content

    Args:
        trend_focus: "general", "news", "social", "tech", "finance"
        max_results: Number of results (5-15 recommended for efficiency)
        time_sensitivity: "day", "week", "month"
    """

    # Base configuration optimized for viral content
    config = {
        "max_results": max_results,
        "tavily_api_key": PROD_TAVILY_API_KEY,
        "search_depth": "advanced",
        "include_images": True,
        "include_answer": True,
        "time_range": time_sensitivity,
    }

    # Trend-specific optimizations
    trend_configs = {
        "social": {
            "include_domains": ["tiktok.com", "instagram.com", "youtube.com", "twitter.com", "reddit.com"],
            "description": "Discover viral social media trends, trending hashtags, and viral social content from major platforms"
        },
        "news": {
            "topic": "news",
            "time_range": "day",
            "description": "Find breaking news, viral news stories, and trending current events"
        },
        "tech": {
            "include_domains": ["techcrunch.com", "theverge.com", "reddit.com", "hackernews.com"],
            "description": "Track viral tech trends, product launches, and trending technology discussions"
        },
        "finance": {
            "topic": "finance",
            "include_domains": ["bloomberg.com", "reuters.com", "reddit.com"],
            "description": "Find trending financial news, viral market movements, and trending investment topics"
        },
        "celebrities": {
            "include_domains": ["tmz.com", "people.com", "eonline.com", "usmagazine.com", "extratv.com",
                                "instagram.com", "twitter.com", "reddit.com"],
            "time_range": "day",
            "description": "Track celebrity news, viral celebrity moments, trending celebrity gossip, and celebrity social media activity"
        },
        "entertainment": {
            "include_domains": ["variety.com", "hollywoodreporter.com", "deadline.com", "ew.com", "imdb.com",
                                "youtube.com", "netflix.com", "reddit.com"],
            "time_range": "day",
            "description": "Discover trending entertainment news, viral movie/TV content, music trends, streaming hits, and entertainment industry buzz"
        },
        "general": {
            "topic": "general",
            "description": "Search for viral content and trending topics across all categories"
        }
    }

    config.update(trend_configs.get(trend_focus, trend_configs["general"]))

    return TavilySearch(**config)

The _get_viral_search_tool() method offers flexible options for searching various topics, making it easy to extend for broader coverage. The tools list includes seven Tavily-based web search tools, all accessible to the AdvancedResearcherLlm.
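
Because each specialized tool is essentially just a configuration entry, extending coverage is cheap. As a rough illustration, a hypothetical sports-focused tool could follow the same pattern (the domains and description here are my own assumptions, not part of the project):

from langchain_tavily import TavilySearch

from tools.tool_secrets import PROD_TAVILY_API_KEY


# Hypothetical extension sketch -- domains and description are illustrative
# assumptions, not part of the original configuration.
def get_sports_trends_tool() -> TavilySearch:
    """One more specialized tool, built the same way as the others."""
    return TavilySearch(
        max_results=10,
        tavily_api_key=PROD_TAVILY_API_KEY,
        search_depth="advanced",
        include_images=True,
        include_answer=True,
        time_range="day",
        include_domains=["espn.com", "bleacherreport.com", "reddit.com"],
        description="Track trending sports news, viral game moments, and major results",
    )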

The node setup is straightforward: the flow goes from the researcher to the web search tool, looping back to the researcher until research is complete, then to the chief editor, and finally to the editors for writing. To simplify testing, I’ve added mock data support that lets me bypass Tavily searches, since I’m on a basic subscription plan during development.

graph_builder = StateGraph(NewsroomState)

graph_builder.add_node(ADVANCED_RESEARCH_NODE, researcher_node)
graph_builder.add_node(ADVANCED_WEB_SEARCH_NODE, search_tool_node)
graph_builder.add_node(CHIEF_EDITOR_NODE, chief_editor_node)
graph_builder.add_node(EDITOR_WRITE_NODE, editor_write_node)

if not use_mock_topics:
    graph_builder.add_edge(START, ADVANCED_RESEARCH_NODE)
else:
    graph_builder.add_edge(START, CHIEF_EDITOR_NODE)
# ... (conditional edges skipped; same as shown earlier)
graph_builder.add_edge(ADVANCED_WEB_SEARCH_NODE, ADVANCED_RESEARCH_NODE)
graph_builder.add_edge(CHIEF_EDITOR_NODE, EDITOR_WRITE_NODE)

# Add edge from editor_write to end
graph_builder.add_edge(EDITOR_WRITE_NODE, END)

Welcome the editing team

Before we dive deeper into the code walkthrough, let me introduce our team of AI editors, the heart of our AI Newsroom:

Andrew is a resolute and principled 40-year-old man with an unwavering commitment to justice, righteousness, and truth. He stands firm on issues of democracy, equality, and distinguishing right from wrong, never compromising his values. His expertise lies in political developments, with a deep understanding of international conflicts, power struggles between nations, and domestic affairs such as crime fighting, social welfare, and economic policies.

Bob is a vibrant and witty young man under 30 with a passion for all things tech. He is endlessly curious, always diving headfirst into the latest tech trends, from AI breakthroughs to the newest smart devices. He believes gadgets amplify human capabilities, giving him a sense of empowerment he finds unmatched.

Edward is a sophisticated and analytical 35-year-old business professional with deep expertise in finance, economics, and corporate strategy. He has a sharp analytical mind and a passion for understanding complex market dynamics, investment trends, and business innovations. His writing style is precise, data-driven, and insightful, making complex financial concepts accessible to both experts and lay readers.

Ruby is a warm and insightful 25-year-old woman with a deep passion for ESG (Environmental, Social, Governance), yoga, and mindfulness. She is dedicated to fostering harmony between humans and nature, with a focus on sustainability and living intentionally. Her expertise spans health, dieting, meditation, and minimalistic lifestyles, and she approaches these topics with a nurturing, empathetic perspective.

Susan is a vibrant and culturally aware 28-year-old woman with a deep passion for entertainment, pop culture, and human stories. She has an innate ability to connect with audiences through compelling narratives about celebrities, artists, and cultural phenomena. Her writing style is engaging, empathetic, and insightful, making complex human stories accessible and relatable to readers of all backgrounds.

Each editor in our AI Newsroom has distinct topic preferences and writing styles, detailed in their respective prompt documents.

Editors in action

Let’s return to the main LangGraph workflow. We’ve covered the researcher and web search nodes, so now let’s explore how the chief_editor_node and editor_write_node work.

Originally, I planned for the chief editor to be powered by an LLM with a specific prompt to guide its actions. However, a key goal of this AI Newsroom mini-project is to let editors pick topics based on their personas. This makes an LLM for the chief editor unnecessary, as the rules are simple: ensure each editor gets one unique topic. This logic can be handled cleanly with conventional programming.

The trick is that the chief_editor_node must invoke each editor’s LLM to get their “topic claim” response, then verify and resolve conflicts to guarantee every editor has a unique topic.

LangGraph fully supports creating agent nodes without an LLM, making this straightforward.

def chief_editor_node(state: NewsroomState) -> NewsroomState:
    """
    Enhanced chief editor node that processes the state and handles topic distribution.
    This node analyzes the current state, creates editor LLMs, and lets them claim topics.
    """
    logger.info("👔 CHIEF EDITOR NODE: Processing state and distributing topics")

    try:
        # Extract data from state
        trending_topics = state.get("trending_topics", [])
        editors = state.get("editors", [])

        # Check if we have trending topics
        if not trending_topics:
            logger.warning("❌ No trending topics found - workflow incomplete")
            return state

        # Check if we have editors
        if not editors:
            logger.warning("❌ No editors found - workflow incomplete")
            return state

        # If we have topics and no tool calls, we're ready for distribution
        logger.info("✅ State ready for topic distribution")

        # Prepare topic titles for distribution
        topic_titles = [topic.get("title", "") for topic in trending_topics if topic.get("title")]

        # Initialize topic claims tracking
        topic_claims = {title: [] for title in topic_titles}

        # Loop through all editors and let them claim topics
        logger.info("🤖 Starting topic distribution to editors...")

        llm = init_chat_model(model_provider="azure_openai", model=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"])

        for editor_name in editors:
            try:
                logger.info(f"🤖 Creating agent for editor: {editor_name}")

                # Create dynamic prompt for topic selection
                selection_prompt = get_editor_prompt(
                    editor_name=editor_name.lower(),
                    task_type='topic_selection',
                    trending_topics=topic_titles
                )

                # Get editor's response
                response = llm.invoke([HumanMessage(content=selection_prompt)])

                if response.content.startswith("CLAIM|"):
                    parts = response.content.split("|")
                    if len(parts) >= 3:
                        claimed_topic = parts[1].strip()
                        claiming_editor = parts[2].strip()

                        if claimed_topic in topic_claims:
                            topic_claims[claimed_topic].append(claiming_editor)
                            logger.info(f"✅ Editor {claiming_editor} claimed topic: {claimed_topic}")
                        else:
                            logger.warning(f"❌ Editor {claiming_editor} claimed invalid topic: {claimed_topic}")
                    else:
                        logger.warning(f"❌ Invalid CLAIM format from {editor_name}: {response.content}")
                else:
                    logger.warning(f"❌ {editor_name} did not use CLAIM format: {response.content}")

            except Exception as e:
                logger.error(f"❌ Failed to get response from editor {editor_name}: {str(e)}")

Notice the following part:

llm = init_chat_model(model_provider="azure_openai", model=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"])

for editor_name in editors:
    try:
        logger.info(f"🤖 Creating agent for editor: {editor_name}")

        # Create dynamic prompt for topic selection
        selection_prompt = get_editor_prompt(
            editor_name=editor_name.lower(),
            task_type='topic_selection',
            trending_topics=topic_titles
        )

        # Get editor's response
        response = llm.invoke([HumanMessage(content=selection_prompt)])

Since the number of editors is dynamic and their LLMs aren’t directly embedded in the LangGraph flow, we provide prompts as HumanMessage objects, invoking each editor’s LLM one by one to process their responses.

Each editor handles two tasks: picking a topic and writing an article on the assigned topic, both guided by their persona’s character and writing style. To manage prompt files efficiently and ensure consistent behavior, the final persona prompt is split into a shared part (core character traits) and a custom part (specific to topic_selection or article_writing). For topic selection, each editor’s LLM reviews all available topics and claims their preferred one by returning a statement in the format CLAIM|&lt;topic title&gt;|&lt;editor name&gt;. The chief editor registers these claims and updates the state. Note that multiple editors might claim the same topic at this stage, requiring conflict resolution.

if response.content.startswith("CLAIM|"):
    parts = response.content.split("|")
    if len(parts) >= 3:
        claimed_topic = parts[1].strip()
        claiming_editor = parts[2].strip()

        if claimed_topic in topic_claims:
            topic_claims[claimed_topic].append(claiming_editor)
            logger.info(f"✅ Editor {claiming_editor} claimed topic: {claimed_topic}")
        else:
            logger.warning(f"❌ Editor {claiming_editor} claimed invalid topic: {claimed_topic}")
    else:
        logger.warning(f"❌ Invalid CLAIM format from {editor_name}: {response.content}")
else:
    logger.warning(f"❌ {editor_name} did not use CLAIM format: {response.content}")

The chief editor performs additional verification for topic assignments:

  • If only one editor claims a topic, it’s assigned to them.
  • If multiple editors claim the same topic, the first claimer gets it.
  • For any editor who doesn’t claim a topic, the chief editor assigns one from the pool of unclaimed topics.

The three passes below implement these rules:

# First pass: assign single claims
for topic, claiming_editors in topic_claims.items():
    if len(claiming_editors) == 1:
        editor = claiming_editors[0]
        assignments[editor] = topic
        unassigned_editors.discard(editor)
        logger.info(f"✅ Assigned {topic} to {editor}")

# Second pass: handle conflicts by keeping the first editor who claimed
for topic, claiming_editors in topic_claims.items():
    if len(claiming_editors) > 1:
        # Multiple editors claimed the same topic - assign to the first one
        first_editor = claiming_editors[0]
        if first_editor not in assignments:  # Only if not already assigned
            assignments[first_editor] = topic
            unassigned_editors.discard(first_editor)
            logger.info(f"✅ Resolved conflict: assigned {topic} to {first_editor} (first to claim)")
            # Remove this topic from other editors' claims
            for other_editor in claiming_editors[1:]:
                if other_editor in unassigned_editors:
                    logger.info(f"📝 {other_editor} will be assigned a different topic")

# Third pass: assign remaining editors to unclaimed topics
unclaimed_topics = [topic for topic in topic_titles if topic not in assignments.values()]

logger.info(f"📊 Assignment Status:")
logger.info(f"   - Assigned editors: {len(assignments)}")
logger.info(f"   - Unassigned editors: {len(unassigned_editors)}")
logger.info(f"   - Unclaimed topics: {len(unclaimed_topics)}")

# Ensure every editor gets at least one topic
for editor in unassigned_editors:
    if unclaimed_topics:
        # Assign to first available unclaimed topic
        topic = unclaimed_topics.pop(0)
        assignments[editor] = topic
        logger.info(f"✅ Assigned unclaimed topic {topic} to {editor}")
    else:
        # No unclaimed topics left - assign from already assigned topics
        # Find an editor who has multiple topics or redistribute
        logger.warning(f"⚠️ No unclaimed topics left for {editor} - will redistribute")
        # For now, we'll assign a random topic (in a real scenario, you might want more sophisticated logic)
        if topic_titles:
            topic = topic_titles[0]  # Assign the first topic as fallback
            assignments[editor] = topic
            logger.info(f"🔄 Fallback assignment: {topic} to {editor}")

# Final verification: ensure every editor has a topic
missing_editors = set(editors) - set(assignments.keys())
if missing_editors:
    logger.error(f"❌ CRITICAL: Editors without topics: {missing_editors}")
    # Emergency assignment - assign any available topic
    for editor in missing_editors:
        if topic_titles:
            topic = topic_titles[0]
            assignments[editor] = topic
            logger.info(f"🚨 Emergency assignment: {topic} to {editor}")

logger.info(f"✅ Final assignment count: {len(assignments)} editors assigned")
logger.info(f"✅ Unclaimed topics remaining: {len([t for t in topic_titles if t not in assignments.values()])}")

# Update state with analysis results and assignments
state["topic_claims"] = topic_claims
state["assignments"] = assignments

logger.info(f"✅ Topic distribution completed. Final assignments: {assignments}")

return state
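
To make the three passes concrete, here’s a tiny worked example with hypothetical claims (topic titles borrowed from the morning-meeting scene):

# Hypothetical input: one conflict, one single claim, one unclaimed topic.
topic_claims = {
    "Fed hints at rate hikes": ["Edward", "Andrew"],  # conflict -> Edward, first claimer, wins
    "Neuralink interface prototypes": ["Bob"],        # single claim -> assigned to Bob
    "UN climate report": [],                          # unclaimed -> pool for leftover editors
}

# After the three passes above:
#   assignments == {"Bob": "Neuralink interface prototypes",   # first pass
#                   "Edward": "Fed hints at rate hikes",       # second pass
#                   "Andrew": "UN climate report"}             # third pass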

In the editor_write_node method, before an editor begins writing, the agent triggers an in-depth Tavily search to gather detailed information on the assigned topic. This “in-depth researcher” prompt collects comprehensive data from multiple sources, offering richer insights than the initial topic search performed by the advanced researcher.

def editor_write_node(state: NewsroomState) -> NewsroomState:
    # ... (setup omitted; see the source repo for the full function)

  # 🔍 PHASE 1: IN-DEPTH RESEARCH
  # Conduct comprehensive research on the assigned topic using specialized tools
  logger.info(f"🔍 Conducting in-depth research for topic: {assigned_topic}")

  # Get specialized in-depth search tools
  # These tools are optimized for comprehensive research from authoritative sources
  in_depth_tools = get_in_depth_tools()

  # Load and format the comprehensive research prompt template
  # This prompt contains detailed instructions for thorough research methodology
  research_prompt_template = load_prompt('prompts/in_depth_researcher.txt')
  formatted_research_prompt = research_prompt_template.format(
      topic=assigned_topic,
      description=topic_details.get('description', 'N/A'),
      category=topic_details.get('category', 'N/A'),
      keywords=', '.join(topic_details.get('keywords', [])) if topic_details.get('keywords') else 'N/A'
  )

  # Create the agent prompt template with required LangChain structure
  # agent_scratchpad is required by LangChain for tool execution and conversation memory
  agent_prompt = ChatPromptTemplate.from_messages([
      ("system", formatted_research_prompt),  # Research instructions and methodology
      ("human", "{input}"),                  # User input (topic to research)
      MessagesPlaceholder(variable_name="agent_scratchpad"),  # Required for tool execution
  ])

  # Create the research agent with automatic tool execution capabilities
  # This agent can automatically decide which search tools to use and execute them
  # Using the same LLM instance for consistency and efficiency
  agent = create_openai_tools_agent(llm, in_depth_tools, agent_prompt)
  agent_executor = AgentExecutor(agent=agent, tools=in_depth_tools, verbose=True)

  # Execute the automated research process
  # The agent will automatically use the search tools to gather comprehensive information
  try:
      logger.info(f"🔍 Starting automated in-depth research for: {assigned_topic}")

      # Invoke the research agent with the topic information
      # The agent will automatically use tools and synthesize the results
      result = agent_executor.invoke({
          "input": f"Research topic: {assigned_topic}. Description: {topic_details.get('description', 'N/A')}"
      })

      # Extract the research results from the agent's output
      in_depth_research = result["output"]
      logger.info(f"✅ Automated in-depth research completed successfully")

  except Exception as e:
      logger.error(f"❌ Automated research failed: {str(e)}")
      in_depth_research = f"Research error: {str(e)}"

  logger.info(f"✅ In-depth research completed for {assigned_topic}")

The tool execution in the editor_write_node differs from the advanced researcher’s approach, which uses a tool node and conditional edge.

Here, we create an “agent” using create_openai_tools_agent() to combine the LLM with tools, and execute it via an AgentExecutor. The agent decides when to trigger the in-depth Tavily search, eliminating the need for a conditional edge to route the flow. The in-depth analysis results are used for the subsequent article writing. When loading the editor’s writing prompt, I use an article_writing switch to slightly adjust the prompt, preparing the LLM for article generation.

# 🔍 PHASE 2: ARTICLE GENERATION
  # Generate personalized article using the research results and editor persona

  # Create dynamic article prompt combining:
  # - Editor's persona and writing style
  # - Topic details and context
  # - Comprehensive research results
  article_prompt = get_editor_prompt(
      editor_name=editor_name.lower(),
      task_type='article_writing',
      assigned_topic=assigned_topic,
      topic_details=topic_details,
      in_depth_research=in_depth_research
  )

  # Generate the article using the main LLM
  # This creates the final article with the editor's unique style and voice
  response = llm.invoke([HumanMessage(content=article_prompt)])

  # Process successful article generation
  if response and response.content:
      # Store article with comprehensive metadata
      articles[editor_name] = {
          "topic": assigned_topic,                    # The assigned topic
          "content": response.content,                # The generated article text
          "word_count": len(response.content.split()), # Article length
          "status": "completed",                      # Success status
          "research_used": bool(in_depth_research)    # Whether research was used
      }
      logger.info(f"✅ Article completed for {editor_name}: {len(response.content.split())} words")
  else:
      # Handle article generation failure
      articles[editor_name] = {
          "topic": assigned_topic,
          "content": "Article generation failed",
          "word_count": 0,
          "status": "failed",
          "research_used": False
      }
      logger.error(f"❌ Article generation failed for {editor_name}")


With the articles completed and the NewsroomState fully populated, the LangGraph workflow reaches its end.

I won’t dive into the Streamlit UI details, as it’s very similar to the previous version, just optimized to display multiple articles from different editors. For more details, please refer to my source code.

Lessons learned and things to be enhanced

This small project builds on my previous attempt to automate article writing using LLMs and LangGraph, with tweaks to LLM agent execution. I’ve learned a lot and had great fun! Of course, there are areas I hope to improve in future versions.

  1. Are the trending topics returned by web search really trending?
    The trending-topics search can be made more sophisticated by integrating services like Google Trends, Glimpse, Semrush, or Ahrefs. High-level trending topic searches should be based on first-tier keyword results from these platforms to enhance accuracy and relevance.

  2. Topic claiming is sometimes surprising.
    The topic preferences in those editor prompt files are already clearly specified, but the results aren’t always consistent. For example, in a test run, I expected Susan — who prefers celebrity news — to select a topic about Taylor Swift, but instead she chose one about SpaceX. I think introducing more keywords into the prompt and improving the categorization of the trending topics list would enhance the selection process.

  3. The chief editor’s resolution could be more sophisticated.
    For simplicity, the current logic assigns a contested topic to the first claimer; there is clear room for smarter conflict resolution here.

  4. Tool calling can be hard.
    This is probably the most important lesson I learned this round.
    Originally, the chief editor was designed as an LLM-backed agent node that would call a topic-distribution tool through a conditional edge. In general, the best practice is to hide the technical details of tools from the LLM and rely on an argument schema with detailed descriptions inside the tool node, so the LLM can generate the call arguments on its own. It turned out to be very challenging. I’m not sure whether the problem was on my side, but correct parameters never seemed guaranteed: sometimes one was simply missing, sometimes several got tangled together. It was frustrating. In the end, I turned the chief editor into a plain non-LLM node, and inside the node function I manually invoke each editor’s LLM to claim topics. Similarly, for the actual article writing I bound the in-depth search tool to the LLM as an agent and manually move the search results into the state object. The result feels much more reliable and robust.

In fact, if you want full control over tool execution, you can read the tool_calls attribute from the last message directly, then perform the tool invocation manually based on the tool name and arguments the LLM provided. It is cumbersome, but it gives me more control.

# (sample code for manual tool execution)

from typing import Annotated, List
from typing_extensions import TypedDict

from langchain_core.tools import tool
from langgraph.graph import add_messages


# Minimal state for this standalone sample (the full project uses NewsroomState)
class GraphState(TypedDict):
    messages: Annotated[list, add_messages]
    tool_results: List[dict]


@tool
def web_search(query: str) -> dict:
    """Perform a web search for the given query."""
    # Placeholder: Replace with actual web search logic (e.g., SerpAPI)
    return {"results": f"Web search results for '{query}'"}

@tool
def generate_article(topic: str, style: str) -> dict:
    """Generate an article for the given topic in the specified style."""
    # Placeholder: Replace with LLM-based article generation
    return {"article": f"Generated article on '{topic}' in {style} style"}

# List of available tools
tools = [web_search, generate_article]
tool_map = {tool.name: tool for tool in tools}

# Node to check for tool calls and invoke tools
def process_tool_calls(state: GraphState) -> GraphState:
    # Get the last message
    last_message = state["messages"][-1]
    tool_results = state.get("tool_results", [])

    # Check if the last message has tool_calls
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        for tool_call in last_message.tool_calls:
            tool_name = tool_call["name"]
            tool_args = tool_call["args"]

            # Check if the tool exists in the tool_map
            if tool_name in tool_map:
                try:
                    # Invoke the tool with the provided arguments
                    # (LangChain tools take a single dict of args, not **kwargs)
                    result = tool_map[tool_name].invoke(tool_args)
                    tool_results.append({
                        "tool_call_id": tool_call["id"],
                        "tool_name": tool_name,
                        "result": result
                    })
                except Exception as e:
                    tool_results.append({
                        "tool_call_id": tool_call["id"],
                        "tool_name": tool_name,
                        "error": str(e)
                    })
            else:
                tool_results.append({
                    "tool_call_id": tool_call["id"],
                    "tool_name": tool_name,
                    "error": f"Tool '{tool_name}' not found"
                })

    # Update the state with tool results
    state["tool_results"] = tool_results
    return state
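
And a quick usage sketch: simulate an assistant message that carries a tool call and push it through the node (the query and ID are illustrative values):

# Usage sketch -- the message content and IDs are illustrative values.
from langchain_core.messages import AIMessage

msg = AIMessage(content="", tool_calls=[
    {"name": "web_search", "args": {"query": "LangGraph tool calling"}, "id": "call_1"},
])
state: GraphState = {"messages": [msg], "tool_results": []}
state = process_tool_calls(state)
print(state["tool_results"])
# -> [{'tool_call_id': 'call_1', 'tool_name': 'web_search',
#      'result': {'results': "Web search results for 'LangGraph tool calling'"}}]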

As a developer in the AI era

I’m a software developer with decades in the IT industry, and hearing that AI might replace jobs like mine? Yeah, it makes me nervous. Coding is my thing, but they say it’s one of the first jobs AI will take over. Still, working on my AI Newsroom side project, I’m feeling… well, mixed.

For this project, I use PyCharm, but I also lean hard on AI tools. Cursor helps me write code, while Claude, Grok, and Perplexity dig up research or fix bugs. These tools are awesome sometimes — Cursor can spit out code faster than I can drink my tea. But oh boy, it can drive me crazy! I’ll get three rounds of totally useless code, stuff I can see is wrong in a second. When I complain, Cursor just goes, “You’re absolutely right!” like a robot puppy nodding at everything I say. It’s like arguing with a cheerful chatbot that’s stuck on repeat.

Then there’s Claude and Grok. When I asked about tool binding and checking tool_calls in LangGraph, they gave me opposite answers. One says, “Do this,” the other says, “No, do that.” I spent hours figuring it out, and guess what? Both were a bit wrong! It felt like solving a puzzle with half the pieces missing.

So, how can someone who’s not a coder handle this “vibe coding” stuff?

Maybe one day AI will push coders like me out, but not today. I’m still here, fighting with my AI helpers and loving it. Happy coding!

(Source code repository: https://github.com/jimmyhott/AI-NEWSROOM)
