TRB 2016 Blue Ribbon Committee
16th National Transportation Planning Applications Conference

Incorporating Uncertainty in Model Predictions: Empirical Evaluation using Roadway Travel Demand and Transit Boarding-to-Alighting Passenger Flow Applications

Corresponding Author: Mark R. McCord, The Ohio State University

Presented By: Mark R. McCord and Serkan Bicici, The Ohio State University


Traditionally, travel demand models predict volumes (or flows) as point values, which in turn are used to support planning studies and policy decisions. However, predictions will contain errors arising from measurement errors in input variables, estimation errors in model parameters, and errors in model structure. Most approaches to incorporating uncertainty in model predictions ignore model structure errors and the combined effect of the multiple error sources on overall prediction uncertainty.

Positing that it is better to explicitly recognize the overall uncertainty in model predictions than to ignore it or to address only some of its components, we proposed a method that uses differences between previous model point predictions and realizations of the predicted flow variable to incorporate uncertainty into future model predictions. Previously presented results of empirical studies based on Metropolitan Planning Organization travel demand model predictions and bus passenger boarding-to-alighting flow estimates support the promise of this approach.
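The core idea can be sketched as follows. This is a minimal illustration, not the authors' implementation; all values and the choice of a ratio-based error measure and empirical percentiles are hypothetical assumptions made for the example.

```python
import numpy as np

# Hypothetical past point predictions and the flows later realized
# (illustrative values only, not from the study).
past_predictions = np.array([1200.0, 950.0, 1800.0, 640.0, 1500.0])
realized_flows = np.array([1350.0, 900.0, 1650.0, 720.0, 1580.0])

# Ratios of realization to prediction capture the combined effect of all
# error sources, including model structure error, without attributing
# the error to any single source.
error_ratios = realized_flows / past_predictions

# Apply the empirical error distribution to a new point prediction to
# express it as a range rather than a single value.
new_prediction = 1100.0
uncertain_prediction = new_prediction * error_ratios
low, high = np.percentile(uncertain_prediction, [5, 95])
print(f"Point prediction: {new_prediction:.0f}")
print(f"Empirical range: {low:.0f} to {high:.0f}")
```

In practice the empirical error distribution would be built from many more prediction-realization pairs, and the resulting range communicates the overall prediction uncertainty to decision makers.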

This presentation will report on additional empirical studies that further support the validity of the approach and the potential breadth of its applicability. Recently conducted studies using more extensive travel demand and bus passenger boarding-to-alighting data have produced results consistent with those of previous studies. Moreover, the additional data have been used to empirically compare different means of implementing components of the methodology. Results to date indicate that using data to calibrate multiplicative error specifications produces better results than using the data to calibrate additive specifications, although both approaches are equally easy to calibrate in a practical setting. Other results indicate that stratifying the error specifications using readily available information appreciably improves the validity of the prediction uncertainty. Additional data are presently being acquired to reinforce these results and to investigate transferability across model variants and application settings.
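The contrast between the two error specifications can be made concrete with a small sketch. The prediction-realization pairs below are hypothetical, and the calibration shown (taking simple sample statistics of each error form) is an assumed, simplified stand-in for whatever calibration the study actually used.

```python
import numpy as np

# Illustrative prediction-realization pairs (hypothetical values).
predictions = np.array([400.0, 800.0, 1600.0, 3200.0])
realizations = np.array([440.0, 840.0, 1760.0, 3400.0])

# Additive specification: realization = prediction + e,
# where e is an error in flow units.
additive_errors = realizations - predictions

# Multiplicative specification: realization = prediction * r,
# where r is a dimensionless error factor.
multiplicative_errors = realizations / predictions

# Both specifications are equally easy to calibrate from the same data;
# the multiplicative form lets error magnitude scale with flow size.
print("additive errors:", additive_errors)
print("multiplicative errors:", multiplicative_errors)
```

In this toy data the additive errors grow with the size of the predicted flow while the multiplicative factors stay in a narrow band, which illustrates why a multiplicative specification can represent flow errors better when larger flows tend to have larger absolute errors.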
