Agentic AI: AI agents are coming to your business software
Three years after the spectacular rise of generative AI, agentic AI, with its unprecedented possibilities for autonomy and intelligent automation, is taking centre stage. We explain what it is and look at the areas where AI agents are already well established, if not indispensable.
Everyone is talking about agentic AI… Some are enthusiastic, others are concerned, and still others downplay the importance of its “advent” by arguing, quite rightly, that the notion of an “agent” has been a constant topic of discussion since the early days of artificial intelligence. So what is new about this concept, and why does it provoke such strong reactions? Perhaps simply this: the possibility for a software system to perform actions autonomously in order to achieve a defined objective. Before large language models (LLMs) and their generative applications, this autonomy was out of reach: every step needed to obtain a result in a given domain had to be explicitly coded.
What is an AI agent?
Broadly speaking, agentic AI refers to AI systems that can operate with a certain degree of independence, make decisions and take actions to achieve specific goals. The AI agent is, in a way, the basic building block of this type of system and can be defined as the coupling of an LLM with one or more tools that the model can invoke autonomously. Where an LLM on its own merely generates text on a probabilistic basis, without any ability to act, an AI agent adds a functional layer by giving the model the ability to activate external tools such as a search engine or pre-existing software features.
To illustrate what this means, imagine a virtual assistant capable of understanding your request, “What is the status of mission ZT312?” Faced with this textual query, the AI agent automatically identifies that it needs to consult your management system, search for the information there, and return it to you in a clear manner. All this without you having to navigate through menus, click on buttons, or even know exactly where this information is located in your software.
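To make this coupling of model and tools concrete, here is a minimal Python sketch of how such an agent could be wired together. It is purely illustrative: the `get_mission_status` tool, the stubbed decision step and the mission data are hypothetical stand-ins for a real model's function-calling interface and your management system's API.

```python
# Minimal sketch of an AI agent: an LLM coupled with a tool it can invoke.
# The "LLM" is stubbed; a real agent would rely on a model's function-calling API.

def get_mission_status(mission_id: str) -> dict:
    """Hypothetical tool: look up a mission in the management system."""
    return {"mission_id": mission_id, "status": "in progress", "eta": "16:30"}

TOOLS = {"get_mission_status": get_mission_status}

def llm_decide(user_message: str) -> dict:
    """Stand-in for the model's tool-selection step (normally an LLM call)."""
    if "mission" in user_message.lower():
        mission_id = user_message.rstrip("?").split()[-1]
        return {"tool": "get_mission_status", "arguments": {"mission_id": mission_id}}
    return {"tool": None, "answer": "I can only report on mission status."}

def agent(user_message: str) -> str:
    decision = llm_decide(user_message)
    if decision["tool"]:
        result = TOOLS[decision["tool"]](**decision["arguments"])
        # A real agent would hand `result` back to the LLM to phrase the final reply.
        return f"Mission {result['mission_id']} is {result['status']} (ETA {result['eta']})."
    return decision["answer"]

print(agent("What is the status of mission ZT312?"))
```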
There is nothing “magical” about this: it is humans who define the purpose of the AI agent, the tools at its disposal, the type and structure of the expected responses, and the level of control over the results.
AI agents on the front line of customer service
In December 2024, the start-up Artisan launched a campaign across the Atlantic proclaiming “Stop hiring humans. The era of AI employees is here”. The dystopian tone of the message sparked an outcry among the professions that feel most threatened by advances in AI (developers, creatives, lawyers, etc.). In reality, it is in customer service and customer relations roles that AI has made the most inroads, and AI agents are already demonstrating their potential there through their integration into contact centre software and ticketing solutions. Far from the fantasy of AI “replacing” employees, these agents are proving particularly effective at automating repetitive tasks and improving the customer experience.
Here are five scenarios where agentic AI is already demonstrating its added value.
Automatic recognition and contextualisation – A customer calls customer service. Previously, an automated system based on a rigid decision tree would ask them to “Press 1 for orders, 2 for after-sales service, 3 for billing…”. With an AI agent placed at the start of the journey, the system automatically orchestrates several tools. As soon as the call connects, a first tool identifies the customer from their telephone number and retrieves their profile from the CRM. A second tool consults the history of their recent interactions (orders, support tickets, previous exchanges). When the customer formulates their request in natural language, the LLM analyses the intent and activates a third tool to search the transactional database. In a matter of seconds, the AI agent understands that this is a complaint about a late delivery, identifies the order in question, and can either provide an immediate response or intelligently route the customer to the right department with all the relevant context already gathered. The customer only has to explain their problem, without navigating a single menu.
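A simplified sketch of this call-opening orchestration might look like the following. All the tool functions, field names and sample data are illustrative assumptions, not an actual CRM or telephony integration.

```python
# Sketch of the call-opening orchestration: identify the caller, gather context,
# classify the intent, then route with everything already attached.

def identify_customer(phone_number: str) -> dict:
    """Tool 1: look up the caller's profile in the CRM by phone number (stubbed)."""
    return {"customer_id": "C-1042", "name": "A. Martin", "segment": "business"}

def fetch_recent_interactions(customer_id: str) -> list:
    """Tool 2: retrieve recent orders, tickets and exchanges (stubbed)."""
    return [{"type": "order", "ref": "O-778", "status": "delayed"}]

def classify_intent(utterance: str) -> str:
    """Tool 3, normally the LLM: turn free text into an intent label."""
    return "late_delivery_complaint" if "delivery" in utterance.lower() else "other"

def handle_incoming_call(phone_number: str, utterance: str) -> dict:
    profile = identify_customer(phone_number)
    history = fetch_recent_interactions(profile["customer_id"])
    intent = classify_intent(utterance)
    # Route to the right department with the full context, instead of a keypad menu.
    return {"intent": intent, "customer": profile, "context": history}

print(handle_incoming_call("+33123456789", "My delivery still hasn't arrived"))
```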
Autonomous processing of common requests – Via email or the customer service chatbot, a customer requests a refund for a defective product. The AI agent activates several tools in succession: an order data retrieval tool that extracts all the purchase details (date, amount, product), then a contract terms verification tool that automatically consults the company’s returns policy and confirms that the deadline has been met. Next, a compliance check tool verifies the status of the customer’s account (no previous abuse, payment made). Finally, if all conditions are met, the AI agent triggers a fourth tool that initiates the refund in the accounting system and sends a confirmation email to the customer. What previously required an advisor consulting four different systems (taking 8 to 10 minutes on average) is now resolved autonomously in a matter of seconds. If, on the other hand, the conditions are not met, an escalation tool redirects the request to an advisor, who assesses the situation and informs the customer of the final decision.
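The decision chain described above could be sketched roughly as follows; the tool functions, the 30-day window and the stubbed data are assumptions made for the example.

```python
# Sketch of the refund chain: retrieve the order, check the returns policy,
# check the account, then refund or escalate to a human advisor.
from datetime import date, timedelta

def get_order(order_ref: str) -> dict:
    """Tool 1: retrieve purchase details (stubbed)."""
    return {"ref": order_ref, "amount": 49.90,
            "purchased": date.today() - timedelta(days=5)}

def within_return_window(purchased: date, days: int = 30) -> bool:
    """Tool 2: check the returns-policy deadline."""
    return (date.today() - purchased).days <= days

def account_in_good_standing(customer_id: str) -> bool:
    """Tool 3: compliance check (no prior abuse, payment settled), stubbed."""
    return True

def issue_refund(order: dict) -> str:
    """Tool 4: trigger the refund in the accounting system and confirm."""
    return f"Refund of {order['amount']} EUR initiated for order {order['ref']}."

def escalate(order_ref: str, reason: str) -> str:
    """Escalation tool: hand the case over to a human advisor."""
    return f"Order {order_ref} escalated to an advisor ({reason})."

def handle_refund_request(customer_id: str, order_ref: str) -> str:
    order = get_order(order_ref)
    if not within_return_window(order["purchased"]):
        return escalate(order_ref, "return window exceeded")
    if not account_in_good_standing(customer_id):
        return escalate(order_ref, "account check failed")
    return issue_refund(order)

print(handle_refund_request("C-1042", "O-778"))
```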
Intelligent prioritisation and optimal routing – A request arrives by email. Unlike traditional ticketing systems that apply fixed rules based on keywords, the AI agent uses three tools in a coordinated manner. First, a semantic analysis tool evaluates the actual content of the message to understand the exact nature of the problem (not just detecting the word “urgent” but determining whether the situation really is urgent). Next, a scoring tool cross-references this analysis with the customer profile (account value, satisfaction history) and the business context (contract with service commitments, impact on the customer’s activity). Finally, a matching tool identifies similar past resolutions in the knowledge base and determines which expert has the best resolution rate for this type of problem. The ticket is then automatically assigned the correct priority level and routed to the best contact person. The major benefit here? Expert resources are used optimally, handling the right requests at the right time, which reduces resolution times by 40% on average.
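As a rough illustration, the scoring and routing step might combine the three tools along these lines; the weights, fields and expert data are invented for the example.

```python
# Sketch of priority scoring and routing: semantic urgency x customer weight,
# then assignment to the expert with the best track record on the topic.

def semantic_urgency(message: str) -> float:
    """Stand-in for the LLM's semantic analysis: 0 (routine) to 1 (critical)."""
    return 0.9 if "production is down" in message.lower() else 0.3

def customer_weight(profile: dict) -> float:
    """Weight derived from account value and service commitments."""
    return 1.0 if profile.get("sla") else 0.5

def best_expert(topic: str, experts: list) -> dict:
    """Pick the expert with the best resolution rate for this type of problem."""
    return max(experts, key=lambda e: e["resolution_rate"].get(topic, 0.0))

def triage(message: str, profile: dict, topic: str, experts: list) -> dict:
    priority = semantic_urgency(message) * customer_weight(profile)
    return {"priority": round(priority, 2),
            "assignee": best_expert(topic, experts)["name"]}

experts = [
    {"name": "Lea", "resolution_rate": {"billing": 0.92, "network": 0.40}},
    {"name": "Sam", "resolution_rate": {"billing": 0.55, "network": 0.88}},
]
print(triage("Our production is down since this morning",
             {"sla": True}, "network", experts))
```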
Formulating personalised responses – A customer asks a technical question about a product. The AI agent combines four tools to generate an appropriate response. A document search tool scans your knowledge base and identifies relevant information. A customer profile analysis tool determines the requester’s level of expertise (novice or advanced user) and communication preferences (concise or detailed). A contextual retrieval tool extracts previous interactions to ensure consistency. Finally, the LLM synthesises these elements to write a response that matches the customer’s tone, adapts the level of technical detail and refers to the specific context of their situation. Unlike the standardised responses of automatic FAQs, this approach results in very high satisfaction rates because each response is unique and perceived as such by the person to whom it is addressed.
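A simplified sketch of this response-formulation pipeline, with the knowledge-base search, profile analysis, context retrieval and LLM synthesis all stubbed out, could look like this; every function and value is a hypothetical placeholder.

```python
# Sketch of personalised response formulation: gather facts, profile and context,
# then let the (stubbed) LLM phrase an answer adapted to the reader.

def search_knowledge_base(question: str) -> str:
    """Tool 1: retrieve the relevant fact from the knowledge base (stubbed)."""
    return "Reset the device by holding the power button for 10 seconds."

def reader_profile(customer_id: str) -> dict:
    """Tool 2: expertise level and communication preferences (stubbed)."""
    return {"expertise": "novice", "style": "concise"}

def previous_context(customer_id: str) -> str:
    """Tool 3: previous interactions, to keep the reply consistent (stubbed)."""
    return "The customer already tried a firmware update yesterday."

def draft_reply(question: str, facts: str, profile: dict, context: str) -> str:
    """Stand-in for the LLM synthesis step."""
    detail = "step by step" if profile["expertise"] == "novice" else "briefly"
    return (f"Since the firmware update did not help, here is what to do "
            f"{detail}: {facts}")

question = "My terminal keeps freezing, what should I do?"
print(draft_reply(question,
                  search_knowledge_base(question),
                  reader_profile("C-1042"),
                  previous_context("C-1042")))
```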
24/7 availability and peak/crisis management – The most obvious benefit of agentic AI is undoubtedly service continuity. An AI agent never sleeps, never takes holidays, and can handle thousands of requests simultaneously. Imagine, for example, a peak in requests at 3 a.m. at an internet service provider following a technical incident affecting its business services. Faced with this influx, the AI agent activates an anomaly detection tool that detects that the same problem is recurring across incoming requests. An incident database consultation tool recognises the problem as already documented and extracts the resolution procedure. The AI agent then automatically generates personalised responses for each affected customer, tailoring the explanations to their situation (some customers need an immediate workaround, others an estimated recovery time). At the same time, a notification tool alerts the technical teams, providing them with a summary of the situation and a reminder of the steps needed to resolve the problem.
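In outline, the spike-handling logic described here could be sketched as follows; the detection threshold, incident records and message templates are illustrative assumptions.

```python
# Sketch of the 3 a.m. incident scenario: detect a spike of similar requests,
# match it to a documented incident, answer each customer, alert the ops team.
from collections import Counter

KNOWN_INCIDENTS = {"vpn_outage": {"workaround": "switch to the backup gateway",
                                  "eta": "06:00"}}

def detect_recurrence(requests, threshold=3):
    """Anomaly detection tool: return the dominant issue if it recurs enough."""
    topic, count = Counter(r["topic"] for r in requests).most_common(1)[0]
    return topic if count >= threshold else None

def respond(request, incident):
    """Personalised reply: workaround for some customers, recovery ETA for others."""
    if request["needs"] == "workaround":
        return f"{request['customer']}: immediate workaround, {incident['workaround']}."
    return f"{request['customer']}: service expected back around {incident['eta']}."

def handle_spike(requests):
    topic = detect_recurrence(requests)
    if topic and topic in KNOWN_INCIDENTS:
        incident = KNOWN_INCIDENTS[topic]
        replies = [respond(r, incident) for r in requests]
        # Notification tool: alert the technical teams with a summary.
        replies.append(f"ALERT ops team: {len(requests)} reports of {topic}.")
        return replies
    return ["No known incident matched; escalating requests individually."]

requests = [{"customer": "Acme", "topic": "vpn_outage", "needs": "workaround"},
            {"customer": "Globex", "topic": "vpn_outage", "needs": "eta"},
            {"customer": "Initech", "topic": "vpn_outage", "needs": "eta"}]
print("\n".join(handle_spike(requests)))
```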
>> The fundamental difference from previous systems? Whereas traditional chatbots followed predefined scripts and failed as soon as a conversation deviated from the expected framework, AI agents understand the context, adapt to new situations and intelligently orchestrate the resources at their disposal. These AI agents do not replace humans in complex cases requiring empathy and judgement, but they do free teams from repetitive tasks so that they can focus on the parts of the customer relationship where they add real value.
>> Rapid advances in agentic AI in customer service roles foreshadow what will happen over the next two years in other fields of application, with two key areas of focus: improving the user experience and intelligently automating chains of actions that previously required the intervention of one or more human operators.
Nomadia and agentic AI
At Nomadia, our enthusiasm for agentic AI can only be understood in the light of two fundamental requirements: digital sobriety and technological sovereignty. In the agentic applications we are working on, the question of ecological impact is central. An AI agent consumes ten times more energy than an LLM alone: each interaction feeds the entire context back into the model, multiplying token consumption and therefore the energy footprint. For this reason, we always favour the most economical approach, which means being highly selective about use cases and making deliberate architectural choices.
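To illustrate the mechanism (with arbitrary figures, not measurements), a quick back-of-the-envelope calculation shows how re-sending the full context at every step inflates the number of tokens the model has to process:

```python
# Illustration only: if every step of an agent run re-sends the whole conversation,
# the tokens processed grow roughly quadratically with the number of steps.
# The 500-token step size and 8-step run are arbitrary assumptions.

def tokens_processed(steps: int, tokens_per_step: int = 500) -> int:
    """Total tokens read when step i re-includes all previous steps' context."""
    return sum(tokens_per_step * i for i in range(1, steps + 1))

single_call = tokens_processed(1)   # one standalone LLM call: 500 tokens
agent_run = tokens_processed(8)     # an 8-step agent run: 18,000 tokens
print(agent_run // single_call)     # 36x more tokens read than a single call
```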
Local hosting and the choice of responsible partners are pillars of our approach to artificial intelligence. Contrary to what may be claimed, we do not host large language models (LLMs) internally in production (we do so only in experimental settings); the only models we run in-house are machine learning and deep learning models dedicated to image recognition and processing. This allows us to avoid multimodal LLMs, which have a significantly higher environmental impact.
For language model-based features, we have chosen to work with Mistral AI, a European provider whose models are less resource-intensive and suited to our use cases. Mistral AI strives to provide greater transparency on the environmental impact of its models: its first published study on the carbon footprint and water consumption of its Mistral Large 2 model was carried out with independent external partners (Ademe, Carbone 4) and according to a recognised methodology (Frugal AI, compliant with international standards*).
This positioning meets two objectives: reducing our overall environmental impact by avoiding heavier LLM architectures, and fully controlling the production, deployment and evolution of our artificial intelligence solutions, while relying on a European player that meets our requirements for sovereignty and transparency in a sector where complete transparency is still lacking.
We also guarantee that our customers’ data remains strictly confidential, meeting GDPR requirements without compromise. Our partnership with Mistral, a French AI player, is part of this approach. Beyond digital sovereignty, Mistral offers models recognised as the most efficient on the market, with – for our specific use cases – a quality of results equivalent to that of the American giants’ LLMs.
For us, agentic AI is neither a marketing gimmick nor a miracle solution. It is a powerful tool that, when used wisely, can truly simplify our users’ daily lives and improve the efficiency of their operations. But this tool must be deployed methodically, soberly and with technical expertise. This is what we are committed to: enabling you to benefit from technological advances and supporting you in their integration, while helping you to enhance your expertise and minimise the environmental impact of your activities.