Revenue projections vary in accuracy with the method used, the forecast horizon, the quality of underlying data, and the scenario framing analysts adopt. Research across forecasting disciplines and public finance consistently shows that shorter horizons, transparent models, and explicit scenario assumptions improve trustworthiness, while systemic shocks and political incentives increase error and bias.
What drives projection accuracy?
The forecast horizon is a primary factor: short-term revenue estimates tied to current tax law and recent collections are typically more reliable than long-term forecasts, which must account for economic cycles, demographic change, and policy shifts. Evidence from the M-series forecasting competitions led by Spyros Makridakis (University of Nicosia) indicates that accuracy deteriorates with horizon and that simple, robust methods often match or outperform highly complex algorithms when uncertainty is large. Human judgment can add value in scenario-building: work from the Good Judgment Project, led by Philip E. Tetlock and Barbara Mellers at the University of Pennsylvania, shows that trained, aggregated forecasters reduce error relative to untrained experts, particularly when probabilistic scenarios are used.
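As a rough illustration of both effects, the sketch below uses synthetic data (not from any cited study): it simulates revenue changes as a random walk with drift to show error growing with horizon, then averages several noisy forecasters to show why aggregation helps.

```python
# Illustrative sketch with invented numbers: (1) forecast error grows with
# horizon, (2) averaging several noisy forecasters beats the typical one.
import numpy as np

rng = np.random.default_rng(42)

# (1) Horizon effect: revenue changes follow a random walk with drift, so
# the best point forecast is the drift alone, and its error grows with h.
drift, sigma, n_sims, max_h = 2.0, 5.0, 2000, 10
errors = np.zeros(max_h)
for _ in range(n_sims):
    shocks = rng.normal(0, sigma, max_h)
    path = np.cumsum(drift + shocks)             # realized cumulative change
    forecast = drift * np.arange(1, max_h + 1)   # deterministic drift forecast
    errors += np.abs(path - forecast)
print("MAE by horizon:", np.round(errors / n_sims, 1))  # grows roughly ~ sqrt(h)

# (2) Aggregation: each "forecaster" sees the truth plus independent noise;
# the pooled mean has much lower error than the average individual.
truth = 100.0
individuals = truth + rng.normal(0, 8.0, size=(n_sims, 12))
indiv_mae = np.abs(individuals - truth).mean()
pooled_mae = np.abs(individuals.mean(axis=1) - truth).mean()
print(f"individual MAE {indiv_mae:.2f} vs pooled MAE {pooled_mae:.2f}")
```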
Model risk and data limitations also matter. Models fitted to historical relationships assume structural stability; when those relationships break (because of technological change, environmental shocks, or shifts in tax base composition), errors grow. The International Monetary Fund's Fiscal Affairs Department and the Organisation for Economic Co-operation and Development highlight that commodity-dependent and informality-heavy economies experience larger forecast volatility because receipts swing with prices and collection capacity. Political and institutional context creates incentives for optimistic bias: research by Bent Flyvbjerg (University of Oxford) on infrastructure and public projects documents systematic optimism of a kind that plausibly affects fiscal projections as well.
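A minimal sketch of the structural-break problem, again with hypothetical numbers: a model fitted while the effective tax rate was stable keeps over-predicting once the tax base shifts.

```python
# Illustrative sketch with invented numbers: a revenue model fitted to a
# stable historical relationship breaks down when the tax base shifts.
import numpy as np

rng = np.random.default_rng(7)
gdp = np.linspace(100, 200, 40)
# Pre-break, revenue runs at ~20% of GDP; post-break the effective rate
# drops (e.g., base erosion), so the old fitted line over-predicts.
rate = np.where(np.arange(40) < 25, 0.20, 0.15)
revenue = rate * gdp + rng.normal(0, 0.5, 40)

# Fit a line on the first 25 observations only (the "stable" regime),
# then apply it to the whole sample.
coef = np.polyfit(gdp[:25], revenue[:25], 1)
pred = np.polyval(coef, gdp)

in_sample = np.abs(pred[:25] - revenue[:25]).mean()
out_sample = np.abs(pred[25:] - revenue[25:]).mean()
print(f"in-sample MAE {in_sample:.2f}, post-break MAE {out_sample:.2f}")
```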
Assessing scenarios and consequences
Scenario-based projections—baseline, optimistic, and pessimistic—are useful for stress-testing budgets, but their accuracy depends on how scenarios are constructed and communicated. A well-documented baseline that ties assumptions to observable inputs (growth rates, unemployment, commodity prices) and quantifies uncertainty with probability ranges is more actionable than narrative scenarios without explicit linkages. Tetlock’s work at the University of Pennsylvania emphasizes calibration: probabilistic forecasts should be validated against outcomes to improve future performance.
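To make the construction concrete, here is a hypothetical three-scenario projection in which every path follows from an explicit growth assumption and a stated probability, followed by a Brier score, one standard calibration check. All figures and probabilities are invented for illustration.

```python
# Illustrative sketch with invented figures: scenario paths tied to explicit
# growth assumptions, a probability-weighted estimate, and a Brier score.
import numpy as np

base_revenue = 1_000.0  # current-year receipts (hypothetical units)
horizon = 3
scenarios = {            # (growth assumption, subjective probability)
    "pessimistic": (-0.01, 0.25),
    "baseline":    ( 0.02, 0.50),
    "optimistic":  ( 0.04, 0.25),
}

for name, (g, p) in scenarios.items():
    path = base_revenue * (1 + g) ** np.arange(1, horizon + 1)
    print(f"{name:11s} p={p:.2f}:", np.round(path, 1))

expected = sum(p * base_revenue * (1 + g) ** horizon
               for g, p in scenarios.values())
print(f"probability-weighted year-{horizon} revenue: {expected:.1f}")

# Calibration: mean squared gap between stated probabilities and outcomes
# (1 if the event happened, 0 if not); lower is better.
def brier(probs, outcomes):
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    return np.mean((probs - outcomes) ** 2)

print("Brier score:", brier([0.8, 0.6, 0.3], [1, 1, 0]))
```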
Consequences of inaccurate projections are tangible. Overly optimistic revenue estimates can create budget shortfalls, force mid-year cuts, or erode public trust; pessimistic bias can lead to unnecessary austerity with social and territorial impacts, especially in regions dependent on central transfers or single industries. Environmental shocks such as droughts or floods shift tax and fee bases and impose redistribution pressures on subnational governments, amplifying errors in territorial planning. Cultural norms around transparency and the institutional capacity to update projections affect whether governments can adjust policy before errors materialize.
In practice, accuracy improves when agencies combine forecasts from historical models with scenario analysis and expert judgment; publish their assumptions and error bands; and maintain independent review. Evidence from forecasting science and public finance indicates that no single method guarantees accuracy in every scenario, but disciplined, transparent processes and regular calibration measurably reduce forecast error and the policy risks that follow.
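A final sketch of the combine-and-publish idea, with an invented track record standing in for an agency's history of past forecast errors: the point forecast averages a model and a judgmental estimate, and the published band comes from empirical quantiles of those past errors.

```python
# Illustrative sketch with invented numbers: combine a model forecast with
# an expert judgment forecast, and derive an error band from the empirical
# distribution of past percentage forecast errors.
import numpy as np

rng = np.random.default_rng(3)

model_forecast, judgment_forecast = 1_050.0, 1_020.0
combined = 0.5 * model_forecast + 0.5 * judgment_forecast  # equal weights

# Hypothetical one-year-ahead percentage errors from 20 past forecasts.
past_errors = rng.normal(0.0, 0.04, 20)
lo, hi = np.quantile(past_errors, [0.1, 0.9])  # central 80% of past errors
print(f"point forecast {combined:.0f}, "
      f"80% band [{combined * (1 + lo):.0f}, {combined * (1 + hi):.0f}]")
```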