16:10 - 18:30
Submission 211
Forecasting Solar PV: What Works, When, and Where?
WISO25-211
Presented by: Thi Ngoc Nguyen
Thi Ngoc Nguyen, Felix Müsgens
BTU Cottbus-Senftenberg, Germany
Reliable forecasting of solar photovoltaic (PV) output is fundamental for the effective integration of renewable energy, ensuring power grid stability and optimizing operational efficiency. To this end, we conducted a comprehensive meta-analysis of deterministic solar forecasting, rigorously reviewing 500 full-text articles from an initial pool of over 2,000 identified through Google Scholar. This yielded a unique database of 5,823 observations, encompassing a broad spectrum of forecast configurations, evaluation metrics, and modelling approaches, allowing for a quantitative assessment of key influencing factors.

By applying multivariate adaptive regression splines (MARS), partial dependence plots, and linear regressions, we quantified the influence of 14 critical variables on forecast accuracy. Our analysis revealed the forecast horizon as the dominant factor, necessitating distinct optimal strategies for intra-hour, intra-day, and day-ahead predictions. Specifically, intra-hour forecasts benefit from historical and spatio-temporal data, intra-day forecasts excel with image-based and hybrid models, and day-ahead forecasts are best supported by numerical weather prediction (NWP) and local meteorological data. Interestingly, ensemble–hybrid models consistently surpassed individual methods across all forecast horizons, while sophisticated time series models can also deliver strong performance given careful data pre-processing and appropriate input selection.
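The modelling step above can be illustrated with a minimal sketch in Python. It assumes a hypothetical observation table ("meta_observations.csv") with purely illustrative column names, uses the py-earth package for MARS alongside an ordinary least-squares benchmark, and computes a partial dependence curve by hand; it is a sketch of the general approach, not the study's actual pipeline.

```python
# Minimal sketch: MARS + linear regression + manual partial dependence.
# File name and column names ("horizon_min", "resolution_min",
# "training_days", "nrmse") are hypothetical placeholders.
import numpy as np
import pandas as pd
from pyearth import Earth  # MARS implementation (py-earth package)
from sklearn.linear_model import LinearRegression

df = pd.read_csv("meta_observations.csv")
features = ["horizon_min", "resolution_min", "training_days"]
X, y = df[features].to_numpy(), df["nrmse"].to_numpy()

# MARS captures non-linear effects of each study characteristic on error.
mars = Earth(max_degree=2)
mars.fit(X, y)

# A plain linear regression serves as a simpler, interpretable benchmark.
ols = LinearRegression().fit(X, y)

def partial_dependence(model, X, feature_idx, grid):
    """Sweep one feature over a grid, hold the others at their observed
    values, and average the model predictions at each grid point."""
    values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v
        values.append(model.predict(X_mod).mean())
    return np.array(values)

horizon_grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 50)
pd_horizon = partial_dependence(mars, X, 0, horizon_grid)
```

Repeating such a sweep for each variable yields partial dependence curves of the kind used to read off, for example, how reported error grows with the forecast horizon.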

Beyond the forecast horizon and model choice, our meta-analysis also explored the influence of data characteristics and evaluation practices. We found that coarser forecast resolutions often correlate with higher accuracy, and training periods around 2,000 days tend to yield the best forecasts. Furthermore, we identified substantial progress in forecast accuracy over time, particularly for shorter horizons, with recent years demonstrating accelerated gains.

Our analysis also revealed potential pitfalls, such as the risk of "cherry-picking" in error reporting, underscoring the need for standardized validation practices. Consistent with this, our results highlight that at least one year of test data is essential for robust model evaluation. Notably, while the skill score enables cross-study comparisons, it does not fully normalise for climate and geographical variations. Significant differences in forecast performance across climate zones suggest caution when extrapolating methodological insights, particularly from mature solar markets (e.g., US, Western Europe) to emerging regions (e.g., Northern Africa, Middle East).
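For reference, the skill score mentioned above is commonly defined in solar forecasting relative to a naive persistence reference, SS = 1 − RMSE_model / RMSE_reference. The short sketch below uses that standard formulation; the function names and numbers are illustrative, not taken from the study.

```python
# Minimal sketch of a standard skill score against a persistence reference.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def skill_score(y_true, y_model, y_reference):
    """1.0 = perfect forecast, 0.0 = no better than the reference
    (e.g. persistence), negative = worse than the reference."""
    return 1.0 - rmse(y_true, y_model) / rmse(y_true, y_reference)

# Illustrative values only: observed PV output, model forecast, persistence.
y_obs = [500.0, 520.0, 480.0, 510.0]
y_model = [505.0, 515.0, 470.0, 512.0]
y_persistence = [500.0, 500.0, 520.0, 480.0]
print(skill_score(y_obs, y_model, y_persistence))  # ~0.77
```

Because the persistence reference is easier to beat in some climates than in others, identical model errors can translate into different skill scores across sites, which is one reason the score does not fully remove climate and geographical effects.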

In conclusion, this work delivers globally applicable guidance on input selection, on model choice tailored to the forecast horizon and application context, and on forecast evaluation standards. These findings are crucial for enabling more reliable solar energy integration and for directing future advancements in forecasting methodologies.