A good conversational agent should be built with "wide context awareness": leveraging the potential of LLMs to enable digital customer service that truly understands your clients and provides effective solutions.
It’s well known that people often end up closing a chatbot session and turning back to a traditional interface. Sometimes this happens because they can’t even get the issue they want to resolve across; other times, it’s out of sheer boredom (we’ve all been there). This is a classic problem, often addressed through the well-known escalation process: handing the matter over to a human who can analyze the situation more broadly and personally. But that is exactly where businesses lose speed and scale. Even “award-winning” chatbots with limited problem-solving abilities miss numerous opportunities, such as upselling, because they fail to fully grasp their clients’ situational context.
This isn’t a new topic on this humble blog. In a previous article, I analyzed how combining multimodal channels could make client relationships more personal, creating more relatable and natural conversations, and in several posts I’ve explored techniques for personalizing conversational agents, such as embedding-based methods. In today’s post, however, I take a focused, detailed look at the technical options for consuming contextual data about your client (the user), delivering a personalized experience grounded in their situational context and in what they stand to gain from the session.
Situational Contexts
Consider relationship chatbots. A client may approach with any kind of post-sales request, from issues with a purchased product or service to payment concerns or even a plan upgrade; in other words, a wide range of possibilities. From the outset, the client has to navigate an entry menu to be routed to the right service for their specific issue, often losing patience in this initial “entry” phase. If they end up being “escalated,” they may already feel they are receiving subpar service.
Conversational agents powered by language models can bring a historical context to the session while also factoring in real-world events, allowing them to infer the reason for the interaction. This not only adds naturalness to the conversation but also improves accuracy right from the start. For instance, if a client’s last two interactions involved questions about limitations in their contracted plan, and they were presented with information about a premium plan that could meet their specific needs but chose not to purchase, it’s possible that this new interaction is to finally make the upgrade. In this case, the welcome message could be adapted to reflect that suggestion.
From a technical perspective, this could be implemented with a Redis (vector) database holding an always-updated summary of past interactions, combined with prompt engineering that guides the agent in using this historical knowledge base effectively, i.e., to your business’s advantage.
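The pattern above can be sketched in a few lines. This is a minimal, illustrative sketch: an in-memory dict stands in for the Redis store, and the prompt text, client IDs, and summaries are all invented for the example. In production, the summaries would live in a Redis (vector) index keyed by client ID and be refreshed after every session.

```python
# In-memory stand-in for a Redis store of per-client interaction summaries.
INTERACTION_SUMMARIES = {
    "client-42": (
        "Last two sessions: asked about limits of the Basic plan; "
        "was shown the Premium plan but did not purchase it."
    ),
}

# Base system prompt instructing the agent to exploit the history.
BASE_PROMPT = (
    "You are a customer-service agent. Use the client's interaction "
    "history below to adapt your greeting and suggestions.\n"
    "History: {history}"
)

def build_system_prompt(client_id: str) -> str:
    """Assemble the system prompt from the client's stored summary."""
    history = INTERACTION_SUMMARIES.get(client_id, "No previous interactions.")
    return BASE_PROMPT.format(history=history)
```

With the example data above, `build_system_prompt("client-42")` yields a prompt that lets the agent open the session by revisiting the Premium-plan suggestion.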
Another way to surprise your client during a crisis could be recognizing that their location is one where your service is temporarily unavailable. The welcome message could then include an apology for the situation and inform them that the issue is being addressed. While traditional chatbots can deliver this feature, they do so in a much more rigid, manual, and less personalized way. Another advantage here is that building this with a framework like LangChain requires minimal coding (or none at all): you only need an API that reports service disruptions by region and an instruction teaching the agent to query it at the start of each session.
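A minimal sketch of that session-start check, with a hypothetical in-memory outage table standing in for the status API (the region names and messages are invented). In a real deployment, the lookup would be an HTTP call to your service-status endpoint, registered as a LangChain tool the agent invokes on session start.

```python
# Hypothetical outage feed: region -> human-readable outage description.
OUTAGES_BY_REGION = {
    "sao-paulo": "fiber maintenance expected to finish at 18:00",
}

def welcome_message(region: str) -> str:
    """Greet the client, apologizing first if their region has an outage."""
    outage = OUTAGES_BY_REGION.get(region)
    if outage:
        return (
            "Hi! We're sorry: we are aware of a service issue in your area "
            f"({outage}) and our team is already working on it. "
            "How can I help in the meantime?"
        )
    return "Hi! How can I help you today?"
```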
Similarly, in a crisis, the “tone” of the conversation could be adjusted for specific sessions by monitoring your social media channels (e.g., through services like Stilingue). For instance, you could detect an ongoing event on your platforms (like backlash from a promotion) and identify if the client belongs to the affected group. In this case, the conversation’s tone could be adjusted so the client knows upfront that your company is aware of the issue and ready to assist. This real-time recognition builds empathy and strengthens the consumer’s connection to your brand.
Conversely, when the social media climate is positive, you can surprise clients with assertive offers. For example, if a product is seeing high engagement among a specific customer profile, you can tailor a personalized offer to clients in that segment, linking it to previous interactions. Sales up!
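The tone-setting logic in the last two paragraphs could be wired up as a small dispatch step before the prompt is assembled. This sketch assumes a social-listening feed (such as Stilingue) that yields a sentiment label per topic plus the set of affected client IDs; the labels, IDs, and directive texts are all illustrative assumptions.

```python
# Hypothetical set of clients affected by an ongoing negative event.
AFFECTED_CLIENTS = {"client-7", "client-9"}

def tone_directive(sentiment: str, client_id: str) -> str:
    """Pick a tone instruction to prepend to the agent's system prompt."""
    if sentiment == "negative" and client_id in AFFECTED_CLIENTS:
        return ("Open by acknowledging the ongoing issue, apologize, "
                "and prioritize resolving it.")
    if sentiment == "positive":
        return ("Be upbeat; if relevant, mention the trending offer "
                "tied to the client's segment.")
    return "Use a neutral, helpful tone."
```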
Genuine Intelligence About Your Data
This isn’t a new topic here either. I’ve discussed ways to derive broader insights from your data in a manner different from traditional dashboards. With LLMs, you can ask the agent/model to structure personalized reports dynamically (based on changing prompts), addressing the classic issue of dashboards becoming obsolete as hypotheses evolve.
A more efficient way to structure this information for language model consumption involves using semantic vector databases, which store fragments of text (e.g., sentences or paragraphs) and create an index for efficient retrieval. However, this method can be ineffective and costly when seeking associations that account for global relationships across various entities.
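To make the retrieval idea concrete, here is a toy sketch of similarity search over text fragments. It uses bag-of-words counts as "embeddings" purely for illustration; a real setup would use a sentence-embedding model and a proper vector index (FAISS, Redis, pgvector, etc.), and the fragments below are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

FRAGMENTS = [
    "Premium plan includes unlimited international calls",
    "Invoices are issued on the 5th of each month",
    "Basic plan is limited to 10 GB of data",
]

def retrieve(query: str, k: int = 1) -> list:
    """Return the k fragments most similar to the query."""
    q = embed(query)
    return sorted(FRAGMENTS, key=lambda f: cosine(q, embed(f)), reverse=True)[:k]
```

Note how the query only ever matches locally similar fragments; there is no way to answer a question that requires relating entities across many fragments at once, which is exactly the gap the next section addresses.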
This is where GraphRAG comes in. With this method, you map the relationships between all the entities in your business as a graph: for example, customers of Profile X located in City Y. This approach is far more efficient for generating insights such as sales trends, bottlenecks, and other patterns. By considering all the variables that make up your knowledge base, a graph offers a global view of your business, unlike a vector database, which returns more narrowly scoped answers. For more details, I recommend this article by the only company providing connectors to transform corporate relational databases into graphs, or this piece by Microsoft’s team explaining the differences between the two formats.
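The "customers of Profile X located in City Y" example can be expressed as a simple graph intersection. This is a toy sketch using (subject, relation, object) triples with invented names; a GraphRAG setup would build the graph from your relational data and have the LLM generate traversal queries over it.

```python
# Toy entity graph as (subject, relation, object) triples.
EDGES = [
    ("alice", "HAS_PROFILE", "profile-x"),
    ("bob",   "HAS_PROFILE", "profile-x"),
    ("carol", "HAS_PROFILE", "profile-z"),
    ("alice", "LIVES_IN",    "city-y"),
    ("bob",   "LIVES_IN",    "city-w"),
]

def subjects_of(relation: str, obj: str) -> set:
    """All subjects linked to `obj` by `relation`."""
    return {s for s, r, o in EDGES if r == relation and o == obj}

def customers_with_profile_in_city(profile: str, city: str) -> set:
    """'Customers of Profile X located in City Y' as a graph intersection."""
    return subjects_of("HAS_PROFILE", profile) & subjects_of("LIVES_IN", city)
```

Because every entity participates in the same graph, the same traversal machinery answers global questions (trends across cities, profiles, products) that a fragment-level vector lookup cannot.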
So, colleagues, these technologies unlock countless possibilities for transforming customer relationships and intelligent corporate database consumption, generating actionable insights. While GenAI applications remain underutilized compared to traditional methods, discussions are increasing, and experimentation is underway in R&D-driven companies. In my view, the benefits of these technologies and their scalable impact should inspire greater resource allocation. But let’s not forget—the tech industry is cyclical, so let’s prepare for the next wave of transformations! That’s the goal here. 😊 Until next time. 😎