In the 18th century, the Reverend Thomas Bayes formulated a statistical theorem that has become a core component of decision-making. His approach allows individuals to continuously adjust their predictions in uncertain environments as new evidence emerges.

Although the statistical formula may look daunting, the underlying logic is easy to comprehend. Consider the following example. Based on prior, subjective beliefs, an individual assumes that the probability that Company X will be subject to an activist campaign is 10%, like any listed company. He further believes that 40% of all companies are vulnerable and that 80% of companies confronted by an activist were vulnerable in the first place, as summarized by the following table:

|             | Vulnerable | Not vulnerable | Total |
|-------------|-----------:|---------------:|------:|
| Campaign    |         8% |             2% |   10% |
| No campaign |        32% |            58% |   90% |
| Total       |        40% |            60% |  100% |

After missing expectations over a few quarters, Company X becomes *‘vulnerable.’* With this new information, the individual must update the probability of a campaign by dividing the joint probability of being both vulnerable and targeted (80% × 10% = 8%) by the probability of being vulnerable (40%), which yields 20%, double the initial likelihood. It is simple; it is Bayesian inference.
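The arithmetic above can be checked in a few lines of Python (a minimal sketch; the function name and argument names are mine):

```python
def posterior(p_campaign, p_vulnerable, p_vulnerable_given_campaign):
    """Bayes' theorem: P(campaign | vulnerable)."""
    joint = p_vulnerable_given_campaign * p_campaign  # 80% x 10% = 8%
    return joint / p_vulnerable                       # 8% / 40% = 20%

p = posterior(0.10, 0.40, 0.80)
print(round(p, 2))  # prints 0.2
```

The posterior doubles the prior, exactly as in the worked example.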

The Bayesian theorem has countless applications, starting with the functioning of the human brain. The mind assesses reality by assigning probabilities to hypotheses based on sensory data (e.g., vision) and by continually updating these hypotheses as it receives incremental data. The brain’s objective is to minimize the discrepancies between perception and reality.

Quite logically, Bayesian statistics are suitable for artificial intelligence (AI). Through so-called Bayesian networks, a machine can, for example, calculate the probability of having a particular disease *given* specific symptoms, the likelihood of certain weather patterns *given* atmospheric conditions, etc.

In his book *The Signal and the Noise* (2012), Nate Silver, an American statistician, encouraged individuals to build a mental model based on Bayesian statistics to reduce cognitive biases and make more reliable predictions upon which better decisions can be made. Using a poker analogy of his: ‘*Successful [decision-makers] think of the future as speckles of probability, flickering upward and downward like a stock market ticker to every new jolt of information*.’

Consider responding to a request for tender. Few suppliers would explicitly express a view on the probability of winning it before deciding whether to respond to the open invitation. And fewer still would update this probability in light of further evidence, be it the identity of competitors or new specific information about the mandate. And yet, such an approach would undoubtedly lead to an improved allocation of resources over time. The same applies to any decision across all corporate functions, including buy-side M&A (before and after due diligence).
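Such sequential updating is easy to operationalise in odds form. In the sketch below, the prior and the likelihood ratios attached to each piece of evidence are invented for illustration; in practice they would come from the supplier's own track record:

```python
# Hedged sketch: sequentially updating a tender-win probability.
# Multiplying the odds by a likelihood ratio is equivalent to Bayes' theorem.
def update(prior, likelihood_ratio):
    """Return the posterior probability after one piece of evidence."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.25               # assumed prior belief of winning the mandate
p = update(p, 0.5)     # evidence against: a strong incumbent also bids
p = update(p, 3.0)     # evidence in favour: the spec suits our offering
print(round(p, 3))
```

Each new jolt of information, favourable or not, moves the estimate, and the running probability can feed a simple go/no-go threshold on whether the bid is worth the resources.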

Given the recent acceleration of AI technology, Nate Silver’s decade-old call to enhance decision-making by being more ‘*Bayesian*’ is worth reiterating. As machines build judgment capacity and become smarter, humans must work to get smarter, too.