Data-Based Economics, ESCP, 2024-2025
2025-03-25
Taking the Fed at its Word: A New Approach to Estimating Central Bank Objectives using Text Analysis by Adam H. Shapiro and Daniel J. Wilson
I had several conversations at Jackson Hole with Wall Street economists and journalists, and they said, quite frankly, that they really do not believe that our effective inflation target is 1 to 2 percent. They believe we have morphed into 1½ to 2½ percent, and no one thought that we were really going to do anything over time to bring it down to 1 to 2.
Sep 2006 St. Louis Federal Reserve President William Poole
Like most of you, I am not at all alarmist about inflation. I think the worst that is likely to happen would be 20 or 30 basis points over the next year. But even that amount is a little disconcerting for me. I think it is very important for us to maintain our credibility on inflation and it would be somewhat expensive to bring that additional inflation back down.
March 2006 Chairman Ben Bernanke
With inflation remaining at such rates, we could begin to lose credibility if markets mistakenly inferred that our comfort zone had drifted higher. When we stop raising rates, we ought to be reasonably confident that policy is restrictive enough to bring inflation back toward the center of our comfort zone, which I believe is 1½ percent…So for today, we should move forward with an increase of 25 basis points…
Jan 2006 Chicago Federal Reserve President Michael Moskow
We are determined to ensure that inflation returns to our two per cent medium-term target in a timely manner. Based on our current assessment, we consider that the key ECB interest rates are at levels that, maintained for a sufficiently long duration, will make a substantial contribution to this goal. Our future decisions will ensure that our policy rates will be set at sufficiently restrictive levels for as long as necessary.
Mar 2024 ECB President Christine Lagarde, press conference
The traditional branches of text analysis are:
Natural Language Processing (NLP) is a subfield of machine learning.
Before getting started with text analysis, one needs to get hold of the text in the first place.
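In practice this usually means scraping a web page or querying an API. A minimal sketch, assuming the `requests` and `beautifulsoup4` packages and a hypothetical URL (any real source needs its own parsing logic):

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical URL, for illustration only; substitute a real speech or press release
url = "https://www.example.org/central-bank/press-conference.html"
html = requests.get(url, timeout=10).text

# Strip the HTML markup and keep only the visible text
text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
print(text[:200])
```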
Before any statistical analysis can be applied, the text needs to be preprocessed.
We will review the following processing steps:
Tokenization: split the input into atomic elements (sentences or words).
```python
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt")  # Punkt sentence-tokenizer models, needed once

txt = """Animal Farm is a short novel by George Orwell. It was
written during World War II and published in 1945. It is about
a group of farm animals who rebel against their farmer. They
hope to create a place where the animals can be equal, free,
and happy."""

# Split the text into sentences
sentences = sent_tokenize(txt)
print(sentences)
```
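Sentences can in turn be split into word tokens; a quick continuation of the snippet above, using nltk's `word_tokenize`:

```python
from nltk.tokenize import word_tokenize

# Tokenize the first sentence into words (punctuation becomes its own token)
words = word_tokenize(sentences[0])
print(words)
# ['Animal', 'Farm', 'is', 'a', 'short', 'novel', 'by', 'George', 'Orwell', '.']
```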
Problems? Think of abbreviations ("Dr.", "U.S.") or decimal numbers ("2.5 percent"): a naive split on periods would break the text in the wrong places.
\[ \text{the central bank forecasts increased }\underbrace{\text{inflation}}_{?}\]
Idea: we would like the weights to be endogenously determined \[ \underbrace{\text{the}}_{x_1} \underbrace{\text{ central}}_{x_2} \underbrace{\text{ bank}}_{x_3} \underbrace{\text{ forecasts}}_{x_4} \underbrace{\text{ increased} }_{x_5} \underbrace{\text{ inflation}}_{x_6}\]
| | the | central | bank | forecasts | increased | inflation | economy | exchange rate | crisis | sentiment |
|---|---|---|---|---|---|---|---|---|---|---|
| text1 | 1 | 1 | 2 | 1 | 1 | 2 | | | | -1 |
| text2 | 3 | 1 | 1 | 2 | | | | | | +1 |
| text3 | 4 | 1 | 1 | 1 | 1 | | | | | -1 |
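A minimal sketch of the endogenous-weights idea, assuming scikit-learn (the corpus and labels below are invented for illustration): build the document-term matrix with `CountVectorizer`, then let a linear classifier estimate the word weights \(x_1, \dots, x_n\) from labelled sentiment.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus in the spirit of the table above; labels are invented
docs = [
    "the central bank forecasts increased inflation",
    "the economy and the exchange rate recovered",
    "the central bank fears increased inflation and a crisis",
]
labels = [-1, +1, -1]  # sentiment of each document

# Document-term matrix of raw word counts (bag of words)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# The word weights are estimated endogenously as regression coefficients
clf = LogisticRegression().fit(X, labels)
for word, weight in zip(vectorizer.get_feature_names_out(), clf.coef_[0]):
    print(f"{word}: {weight:+.2f}")
```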
The bag-of-words approach with raw word counts has a few issues: frequent but uninformative words (such as "the") dominate, and raw counts are not comparable across documents of different lengths.
Improvement: TF-IDF (Term Frequency × Inverse Document Frequency), which downweights words that appear in many documents.
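One common variant of the weighting (implementations differ in normalization and smoothing) is

\[ \text{tf-idf}(t, d) = \underbrace{f_{t,d}}_{\text{term frequency}} \times \underbrace{\log \frac{N}{n_t}}_{\text{inverse document frequency}} \]

where \(f_{t,d}\) is the count of term \(t\) in document \(d\), \(N\) the number of documents, and \(n_t\) the number of documents containing \(t\). With scikit-learn again, this is a one-liner (note that `TfidfVectorizer` uses a smoothed idf by default):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the central bank forecasts increased inflation",
    "the economy and the exchange rate recovered",
]
X = TfidfVectorizer().fit_transform(docs)  # rows: documents, columns: vocabulary
```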
Recent trends for text analysis:
Next week
Introduction to Large Language Models