Following the July employment report, which showed little employment growth over the past quarter, President Trump fired Erika McEntarfer, Commissioner of the Bureau of Labor Statistics (BLS). Initially, the president accused her of "rigging" the numbers to make him look bad. More recently, members of his administration have sought to soften the criticism from "rigging" to concerns about substantial revisions (one representative case is Casey Mulligan's tweet here).
Setting aside the true motivation for shifting to the less inflammatory claim (unreliable jobs numbers), the shift raises a question: what would a successful improvement to a statistical program look like?
The mere existence of revisions is not damning. Statistics always have revisions. Statistical reports are built on a variety of assumptions. Ultimately, you collect a sample, apply assumptions and stylized facts to it, and draw conclusions about the overall population. Ideally, you would observe the entire population, but that is prohibitively costly in both money and time. So we use an (ideally) representative sample of the population. If the assumptions and stylized facts change or no longer hold, the model needs to be revised. Revisions change the claims the sample can support. In that sense, revised data is a sign of an improving model. Without revisions, the model would become less useful over time.
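The sample-then-revise logic can be sketched with a toy simulation. Everything here is hypothetical — the population, sample sizes, and employment rate are illustrative numbers, not actual BLS figures or methods:

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people: 1 = employed, 0 = not employed.
# The true employment rate is 60% (illustrative only).
population = [1] * 60_000 + [0] * 40_000

# Surveying everyone is too costly, so estimate from a random sample.
sample = random.sample(population, 1_000)
initial_estimate = sum(sample) / len(sample)

# Later, more responses arrive; the revised estimate uses the larger sample.
late_responses = random.sample(population, 4_000)
revised_sample = sample + late_responses
revised_estimate = sum(revised_sample) / len(revised_sample)

print(initial_estimate, revised_estimate)
```

The initial estimate is the best available answer at publication time; the revision is not an admission of failure, just the same estimator applied to more complete data.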
What about the size of revisions? That is, of course, a concern. If revisions frequently swing by huge amounts, that would cast doubt on the model. However, University of Central Arkansas economist Jeremy Horpedahl shows that BLS data revisions have shrunk over time (see also this post by University of Louisiana economist Gary Wagner). So there is not much room for improvement there.
The size and frequency of revisions depend on the sample size and the response rate of the survey. A major problem with the BLS data lately is that response rates are falling. A lower response rate means that greater imputations must be made from less data. Not ideal. Improved response rates would be a sign of better-quality data.
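The link between response rates and data quality is easy to see in a small simulation. The survey frame and employment-change numbers below are made up for illustration; this is a sketch of the general statistical point, not of BLS methodology:

```python
import random
import statistics

random.seed(1)

# Hypothetical frame of 10,000 establishments; each value is that month's
# employment change at one establishment (illustrative numbers only).
frame = [random.gauss(5, 20) for _ in range(10_000)]

def survey_estimate(response_rate: float) -> float:
    """Mean employment change among establishments that happen to respond."""
    responses = [x for x in frame if random.random() < response_rate]
    return statistics.fmean(responses)

# Re-run the survey many times at high and low response rates and compare
# how much the published estimate would bounce around.
high = [survey_estimate(0.8) for _ in range(200)]
low = [survey_estimate(0.2) for _ in range(200)]

print(statistics.stdev(high), statistics.stdev(low))
```

With fewer responses, each month's estimate is noisier, so larger subsequent revisions become more likely even when the underlying model is unchanged.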
One can also check how well BLS data correspond with other sources. ADP, a payroll company, publishes its own monthly jobs report. It is not identical to the BLS report (see the FAQ at the bottom for differences in methodology), but it is a useful comparison tool. In fact, revisions to BLS data (and ADP's own revisions) tend to bring the two datasets closer together. Over time, private employment in the BLS data and private employment in the ADP data differ, with the ADP report averaging about 1,000 fewer jobs than the BLS report. Such a discrepancy is not bad at all considering that we are talking about gains/losses of tens of thousands of jobs each month, if not hundreds of thousands.[1] A smaller discrepancy between the two datasets is a sign of improvement.
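The cross-check itself is simple arithmetic: line up the two series by month and average the gap. The employment levels below are invented placeholders, not actual BLS or ADP figures:

```python
# Hypothetical monthly private-employment levels, in thousands of jobs
# (illustrative numbers only, not actual BLS or ADP data).
bls = {"Jan": 134_210, "Feb": 134_305, "Mar": 134_280, "Apr": 134_390}
adp = {"Jan": 134_205, "Feb": 134_301, "Mar": 134_282, "Apr": 134_388}

# Average gap between the two series across the common months.
gaps = [bls[month] - adp[month] for month in bls]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap)  # average BLS-minus-ADP gap, in thousands
```

If revisions to either series shrink this average gap over time, that is evidence the revisions are pulling both datasets toward the same underlying reality.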
Improving economic data is a good thing. But any improvement is a process, and there are many ways to assess whether a given change is actually an improvement.
–
[1] Note: All data here use seasonally unadjusted (NSA) numbers. NSA data provide the best apples-to-apples comparison, since seasonal adjustment is a function of the model each institution chooses. The conclusion does not change when using seasonally adjusted numbers, though the discrepancy rises to 5,000 employees a month.
