MetaMesh

MetaMesh is WindBorne's multi-model blended forecast product. Rather than relying on any single weather model, MetaMesh combines forecasts from multiple leading models to produce a single, more accurate prediction.

1. Model Metadata

2. Forecasting Regime

3. Input Models

MetaMesh blends forecasts from multiple sources, using each cycle's latest available data to produce a single bias-corrected output.

4. Model Resolution

MetaMesh is available in two modes. Both use the same multi-model blending approach to learn and correct local model biases, but differ in how those biases are learned.

Both modes are served from the same Point Forecast endpoint and can be queried either by station ID or by lat/lon coordinates. When a coordinate query falls within a certain radius of a supported station, MetaMesh snaps the request to that station and returns the Station Forecast instead of the dynamic Point Forecast.
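The snapping behavior described above can be sketched as a nearest-station lookup. The actual snap radius and station metadata format are not public, so the `snap_radius_km` default and the station dicts below are illustrative assumptions only:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def resolve_query(lat, lon, stations, snap_radius_km=10.0):
    """Return ('station', id) if the query is within snap_radius_km of a
    supported station, else ('point', (lat, lon)). The radius is a placeholder."""
    best = min(stations, key=lambda s: haversine_km(lat, lon, s["lat"], s["lon"]))
    if haversine_km(lat, lon, best["lat"], best["lon"]) <= snap_radius_km:
        return ("station", best["id"])
    return ("point", (lat, lon))
```

For example, a query a kilometer from a supported station would resolve to that station's pre-computed forecast, while a query over open ocean would fall through to the dynamic point forecast.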

Static Station Forecast

Trained on METAR observation data at fixed station locations. MetaMesh learns station-specific biases to deliver higher accuracy.

Station forecasts are pre-computed and saved for initialization cycles (00z, 06z, 12z, 18z), so historical init times can be retrieved.

See Available Stations for the full list of supported stations.
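Since station forecasts are saved per initialization cycle, a client typically wants the most recent completed cycle. A minimal sketch of that lookup, assuming a hypothetical availability delay after each init time (the real delay is not stated here):

```python
from datetime import datetime, timedelta, timezone

def latest_init_cycle(now: datetime, delay_hours: int = 3) -> datetime:
    """Most recent 00z/06z/12z/18z cycle, assuming each cycle is available
    `delay_hours` after its init time. The 3-hour delay is an assumption."""
    avail = now - timedelta(hours=delay_hours)
    return avail.replace(hour=(avail.hour // 6) * 6, minute=0, second=0, microsecond=0)
```

At 14:00 UTC with a 3-hour assumed delay, this resolves to the 06z cycle; shortly after midnight it correctly rolls back to the previous day's 18z cycle.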

Point Forecast

Trained on ERA5 reanalysis data, allowing forecasts at any location worldwide. The regression learns global, gridded model biases from ERA5 rather than station observations, making it location-independent.

Point forecasts are computed on demand; only the latest initialization cycle is served, and historical init times are not stored.

5. Outputs & Products

See Point Forecast API for usage — query by coordinates or station ID.

6. Benchmarks

MetaMesh beats every model that goes into it. Below, we evaluate the dynamic blend against ERA5 at 35,350 grid points globally across 4 variables and out to 15 days. MetaMesh outperforms the best individual model on 100% of the points across the entire evaluation period. Blending is especially important at longer horizons. At Day 15, when the spread between individual models doubles, MetaMesh continues to track the best of them.

RMSE vs. forecast lead time for MetaMesh and input models
RMSE vs. forecast lead time for 2m temperature, 2m dewpoint, mean sea level pressure, and wind speed, compared against all input models out to 360 hours. Evaluation period is Jan 1 – Mar 21.
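The headline metric throughout these benchmarks is RMSE against ERA5, aggregated per lead time. A minimal sketch of that aggregation (the data layout here is illustrative, not WindBorne's evaluation code):

```python
import numpy as np

def rmse(forecast, truth):
    """Root-mean-square error over matched forecast/truth values."""
    forecast, truth = np.asarray(forecast, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((forecast - truth) ** 2)))

def rmse_by_lead_time(fcsts, truth):
    """fcsts: {lead_hours: forecasts at all grid points}; truth: same keys.
    Returns one RMSE per lead time, as plotted in the curves above."""
    return {lt: rmse(f, truth[lt]) for lt, f in fcsts.items()}
```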

To understand these results, we asked whether MetaMesh is just a clever blend of public models, or whether our constellation is actually moving the needle. To find out, we re-ran the whole evaluation with WeatherMesh stripped out. The gap between the two lines is what WindBorne's proprietary data adds.

MetaMesh with and without WeatherMesh comparison
MetaMesh with WeatherMesh models included (MM-Static) vs. MetaMesh without them (MM-Static-NoWB). Same variables, lead-time and evaluation period as above.

And it'll only grow from here. The results above reflect ~300 balloons aloft today. The constellation is on track to double every 6 months, and every additional balloon is data no other model has.

The other part of the story is the architecture of MetaMesh itself. No single forecast model is best at everything (yet). Each has its strengths, its blind spots, and a window of lead times where it leads. The case for a multi-model blend is that it recognizes this honestly and combines inputs in a way no single model can replicate. If you color every METAR station by which model wins there, you get a patchwork, and the patchwork shifts as lead time grows.
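That winner-per-station coloring reduces to an argmin over per-station scores. A hedged toy sketch (the score layout is assumed, not the actual evaluation format):

```python
def best_model_per_station(rmse_by_station):
    """Map each station to the model with the lowest RMSE there.
    rmse_by_station: {station_id: {model_name: rmse}}."""
    return {sid: min(scores, key=scores.get)
            for sid, scores in rmse_by_station.items()}
```

Run once per lead time, this produces exactly the shifting patchwork of winners described above.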

METAR stations colored by best-performing model across lead times
Each dot represents a METAR station, colored by whichever model had the lowest RMSE for 2m temperature over the evaluation period of Q1. Maps shown for f024, f096, f168, f240, and f360.

In this complicated patchwork, MetaMesh learns where each input is reliable and weights it accordingly, so the customer gets the best available signal instead of having to pick one model and live with it.

Additionally, whereas most blends apply fixed weights on a fixed schedule ("60% IFS, 40% GFS"), MetaMesh learns weights that vary by lead time, variable, and region, recalibrates them daily, and re-blends as new input models are published. The chart below captures the last part: RMSE for 2m temperature at f168 dropping as each input arrives over a forecast cycle.

RMSE for 2m temperature at f168 over a forecast cycle
RMSE for 2m temperature at f168 dropping as each input model arrives over a forecast cycle.
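As a toy illustration of the lead-time-dependent blending described above (not MetaMesh's actual regression, whose features and training data are not public), one can fit separate least-squares weights per lead time:

```python
import numpy as np

def fit_blend_weights(model_fcsts, truth):
    """Least-squares blend weights for one variable at one lead time.
    model_fcsts: (n_models, n_samples); truth: (n_samples,)."""
    w, *_ = np.linalg.lstsq(model_fcsts.T, truth, rcond=None)
    return w

def fit_per_lead_time(fcsts_by_lead, truth_by_lead):
    """Fit separate weights for each lead time, mirroring the idea that
    the best mix of models changes as the horizon grows."""
    return {lt: fit_blend_weights(f, truth_by_lead[lt])
            for lt, f in fcsts_by_lead.items()}
```

Extending this per-variable and per-region, and refitting daily, gives the general shape of a dynamically recalibrated blend; the real system's feature set and training procedure will differ.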

7. Historical Data

Static station forecasts are archived, so past initialization times can be retrieved. Dynamic point forecasts are generated on demand and are not archived.

Backtests are available upon request. Please contact us for access.