Tools that use artificial intelligence, such as ChatGPT, are advancing at an extraordinary pace, and much enthusiasm has been generated around applications that streamline corporate decision-making, such as demand planning or supplier selection. However, alongside the many opportunities these tools offer for improving such processes, it is essential to keep their limitations and risks in mind.
As Leonardo Julianelli explains in this text, humans, even in business environments, resort to decision heuristics – simplifying rules – to cope with complexity and with the time and cost constraints imposed on them. These heuristics are often contaminated by a series of cognitive biases, such as the availability bias or survivorship bias, impairing the quality of the analysis and therefore producing suboptimal decisions.
Likewise, AI can fall prey to similar biases if its algorithms are trained on historical data that contain them. In demand planning, for example, the availability heuristic can lead to excessive inventory levels after a successful promotion: because the promotion is more "available" in memory, and overrepresented in the data, a model may treat a repeat of those high sales as more likely than it actually is.
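A minimal sketch of this effect, with hypothetical demand numbers: a naive forecast that simply averages recent history overreacts when a one-off promotion spike is not flagged or modeled separately.

```python
# Illustrative sketch (hypothetical numbers): a naive forecast that averages
# recent weeks overreacts when a one-off promotion spike is left unflagged.

baseline_weeks = [100, 98, 104, 102]  # normal weekly demand
promo_week = 250                      # one-off promotion spike

def naive_forecast(history):
    """Forecast next week as the plain mean of recent history."""
    return sum(history) / len(history)

# Forecast that treats the promotion as ordinary demand:
biased = naive_forecast(baseline_weeks + [promo_week])

# Forecast after excluding (or separately modeling) the promotion:
adjusted = naive_forecast(baseline_weeks)

print(biased)    # 130.8 — inflated by the spike
print(adjusted)  # 101.0 — close to the true baseline
```

In practice the fix is not to delete the promotion from the data, but to model it as an explicit feature (promotion flag, price, channel) so the model learns when such uplifts apply and when they do not.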
If, within a supplier selection process, an AI model is trained on past data that reflect a preference for specific suppliers driven by human biases, the model may continue to favor those suppliers even when others are more efficient or cost-effective. This can create a self-reinforcing cycle in which the algorithmic bias, once incorporated, becomes increasingly difficult to identify and correct.
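The feedback loop can be illustrated with a toy simulation (all scores and weights here are hypothetical): a model that weighs historical selection frequency keeps choosing the familiar supplier, and each choice it feeds back into its own history deepens the bias.

```python
# Illustrative sketch (hypothetical scores): a model that blends true quality
# with familiarity learned from past selections reinforces its own choices.

# True quality of two suppliers — B is actually the better supplier.
quality = {"A": 0.70, "B": 0.80}

# Biased history: human planners chose A far more often in the past.
selections = {"A": 9, "B": 1}

def score(supplier):
    """Blend true quality with familiarity derived from past selections."""
    total = sum(selections.values())
    familiarity = selections[supplier] / total
    return 0.5 * quality[supplier] + 0.5 * familiarity

for _ in range(20):
    # Pick the higher-scoring supplier, then feed the choice back as new data.
    chosen = max(quality, key=score)
    selections[chosen] += 1

print(selections)  # → {'A': 29, 'B': 1}: A keeps winning despite B's quality
```

Because the initial history is skewed, supplier A's familiarity term outweighs B's genuine quality advantage on every round, and the gap only widens. Detecting this kind of loop typically requires auditing outcomes against a counterfactual (what would have happened with other suppliers), not just inspecting the training data.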
To mitigate these problems in the decision-making process, it is crucial to employ a combination of approaches: AI acts as a support that accelerates the process, while humans observe its behavior and identify opportunities to improve its training, paving the way for increasingly autonomous application.