what will jerome powell say during his press conference?

December 25, 2025

Recently, I saw a Kalshi market trading on which words the Fed chair, Jerome Powell, would say in his next press conference in January. Intrigued by the thought of retiring my parents young, I began thinking of ways to calculate the probabilities of specific words being said.

Dirichlet Distribution

At first, I thought of fitting a Dirichlet distribution (commonly referred to as the "generalization" of the Beta distribution) over the individual words that appear throughout the press conference transcripts. Modelling each word's appearance through its underlying probability parameter lets us calculate the probability that a specific word appears at least once over the length of a press conference (~6898 words).

So, starting with a uniform prior across all the distinct words in the historical Fed transcripts, the underlying probability of each word was updated accordingly (the words that Powell says are assumed to be drawn from a Multinomial distribution). With the updated posterior probabilities, the probability that a word appears at least once follows from the complement:

$$
\begin{aligned}
A &:= \text{the event that the word "inflation" appears in 6898 words} \\
p &:= \text{the posterior probability of the word "inflation"} \\
P(A) &= 1 - (1 - p)^{6898}
\end{aligned}
$$
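To make this concrete, here is a minimal sketch of the computation; the transcript file path and the uniform prior strength `alpha = 1` are illustrative assumptions rather than the exact setup behind the numbers below:

```python
from collections import Counter

# Hypothetical path to the concatenated historical press conference transcripts.
words = open("fed_transcripts.txt").read().lower().split()


def appearance_probability(corpus_words, target_word, n_words=6898, alpha=1.0):
    """Posterior mean of the target word's probability under a symmetric
    Dirichlet(alpha) prior with a Multinomial likelihood, then the complement
    rule for at least one appearance in n_words draws."""
    counts = Counter(corpus_words)
    total = sum(counts.values())
    vocab_size = len(counts)

    # Posterior mean probability that any single word slot is the target word.
    p = (counts[target_word] + alpha) / (total + alpha * vocab_size)

    # P(appears at least once) = 1 - P(never appears).
    return 1.0 - (1.0 - p) ** n_words


print(appearance_probability(words, "inflation"))
```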

Results

Here are the results based on the Dirichlet posterior probabilities, ordered from the largest to the smallest absolute difference between the computed probabilities and the market probabilities.

| Phrase | Market Probability | Computed Probability | Absolute Difference |
| --- | --- | --- | --- |
| Good Afternoon | 0.98 | 0.0016 | 0.98 |
| Balance of Risk | 0.95 | 0.00013 | 0.95 |
| Balance Sheet | 0.85 | 0.0043 | 0.85 |
| Layoff | 0.83 | 0.063 | 0.77 |
| Artificial Intelligence | 0.67 | 3.9×10⁻⁷ | 0.67 |
| Goods Inflation | 0.61 | 0.016 | 0.59 |
| Median | 0.43 | 0.963 | 0.57 |
| Unchanged | 0.92 | 0.378 | 0.54 |
| Shut Down | 0.49 | 0.00025 | 0.49 |
| Recession | 0.24 | 0.667 | 0.49 |

Obviously, these results are not fantastic. Our model ignores important factors such as the dependencies between word appearances and real-world context. Moreover, the posterior probabilities were dominated by common words like "the", "a", and "and", which made the computed probability of generating a target phrase extremely small.

Dependence Between Words

One shortcoming of using a Dirichlet distribution over individual words is that word generation is a highly dependent process. For example, the word that completes the sentence prefix "With the increase in inflation, the attitude towards the economy was ______" is most likely an adjective, and one with a negative connotation (something like "pessimistic"). In other words, given the context seen so far, some words have a much higher probability of appearing than others.

Having taken an NLP course this semester, I was inspired to consider using an LLM. At an LLM's core is a conditional probability distribution over the next token given the context, i.e. $P(\cdot \mid h_{<t})$, where $h_{<t}$ is the context seen so far. Armed with this, my strategy was as follows (code sketches for the model queries and the search follow the list):

  1. First, take an open-source model (the Qwen 30B Instruct model). This is our prior, akin to taking a random English speaker from the planet.
  2. Finetune the open-source model on Fed press conference transcripts. This takes our random English speaker and makes them sound like Jerome Powell.
  3. Run a BFS algorithm to find the probability that the model would say a specific phrase across the length of a transcript.
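The key primitive for the search is querying $P(\cdot \mid h_{<t})$ from the fine-tuned model. Here is a minimal sketch using Hugging Face transformers; the checkpoint path is a placeholder for whatever the finetuning step produces, and loading a 30B model would realistically need quantization or sharding across devices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path for the checkpoint produced by the finetuning step.
MODEL_PATH = "qwen-30b-finetuned-fed"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)
model.eval()


def next_token_distribution(context: str) -> torch.Tensor:
    """Return P(. | h_<t): the model's distribution over the next token."""
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits at the last position
    return torch.softmax(logits, dim=-1)


# Example: the five most likely next tokens after a sentence prefix.
probs = next_token_distribution(
    "With the increase in inflation, the attitude towards the economy was"
)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.4f}")
```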

The tree search expands the model's token tree breadth-first: each node tracks the text generated so far and its cumulative probability, and any branch in which the target phrase appears contributes its probability mass to the answer.
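Here is a rough sketch of that expansion, reusing `tokenizer` and `next_token_distribution` from the snippet above. The branching factor, search depth, and pruning threshold are illustrative placeholders rather than the exact settings behind the results, and because low-probability branches are dropped the output is an approximation (effectively a lower bound) of the true probability:

```python
def phrase_probability(start_context: str, phrase: str,
                       max_depth: int = 30, top_k: int = 5,
                       prune_below: float = 1e-6) -> float:
    """Breadth-first expansion of the model's token tree.

    Each frontier entry is (generated token ids, cumulative path probability).
    A branch whose decoded text contains the target phrase contributes its
    probability mass and is not expanded further; branches whose probability
    falls below the pruning threshold are dropped to keep the frontier small.
    """
    prob_phrase_appears = 0.0
    frontier = [([], 1.0)]  # (token ids generated so far, path probability)

    for _ in range(max_depth):
        next_frontier = []
        for ids, path_prob in frontier:
            context = start_context + tokenizer.decode(ids)
            top = torch.topk(next_token_distribution(context), k=top_k)
            for p, idx in zip(top.values, top.indices):
                new_prob = path_prob * p.item()
                if new_prob < prune_below:
                    continue  # prune negligible branches
                new_ids = ids + [idx.item()]
                if phrase in tokenizer.decode(new_ids).lower():
                    prob_phrase_appears += new_prob  # phrase said on this branch
                else:
                    next_frontier.append((new_ids, new_prob))
        frontier = next_frontier

    return prob_phrase_appears


print(phrase_probability("Good afternoon. ", "inflation"))
```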

Beta Distribution

View on GitHub