This article is based on the FORWARD 2026 conference, featuring Amélie Richardson, Chief People Officer at Mirakl, and Olivier Ruton, Head of Digital Learning at Servier.
Neither Mirakl nor Servier presents AI as a technical skill that requires certification. The point is not "knowing how to use ChatGPT." It is knowing how to exercise managerial judgment in an environment where productivity is rising. That shift is fundamental.
At Mirakl, AI is widely used. Managers themselves use it to structure feedback or clarify sensitive messages. But a clear decision has been made: managers are not evaluated on their use of AI. They are evaluated on the quality of their judgment and on what they actually accomplish.
In other words: AI is a tool, not an end in itself. Amélie Richardson makes this clear: the framework is based on the principle of individual accountability. There is no centralized monitoring of usage. There is no reporting on the number of prompts. What matters is sound judgment and what is accomplished with the tool.
A concrete example: internal agents
Mirakl has developed internal agents connected to company documents. One of the use cases mentioned involves creating or updating job descriptions from existing templates. The agent can generate an initial structured draft. However, the manager is responsible for verifying that it meets actual needs, adjusting it to the team's context, and challenging the relevance of the wording. AI provides the structure, but it is the manager who validates, corrects, and makes the final decision.
If a manager passively accepts the output produced, quality declines. If they challenge it, quality improves. Competence, therefore, is not the ability to produce a text. It is the ability to evaluate that text.
At Servier, the rollout began with a common e-learning foundation. But very quickly, Olivier Ruton reframed the issue: AI is not primarily a tool. It is a managerial skill that needs to be integrated. And this skill is explicitly named: critical thinking. One very clear point was emphasized: AI is generative, not creative.
This means that quality depends directly on what the manager puts in and on how critically they evaluate what comes out.
A concrete example: preparing for a performance review
When a manager prepares for a performance review, AI can help them organize the facts, structure their arguments, identify areas for improvement, and spot potential biases in their analysis.
But Olivier insists: AI can also reinforce confirmation bias if the manager doesn’t step back and take a broader view. A manager who is convinced that an employee “isn’t up to par” may unconsciously frame their prompt in a way that confirms that intuition. The AI will then generate a coherent but biased set of arguments.
It is up to the manager to stay vigilant. The tool amplifies performance, but it does not correct inherent human bias.
Feedback from Mirakl and Servier points to the same conclusion: training people to use the tool is not enough. They must also be trained to exercise good judgment. In practice, that means learning to challenge AI output rather than accept it passively.
The risk isn’t that managers will use AI. The risk is that they will stop exercising their judgment. And in both organizations, the line is clear: AI is a catalyst, but sound judgment must remain fundamentally human.
To learn more, read the full comments from Amélie Richardson, Chief People Officer at Mirakl, and Olivier Ruton, Head of Digital Learning at Servier, from FORWARD 2026.
Discover all our courses and workshops to address the most critical management and leadership challenges.