We evaluate the accuracy of a machine-learning algorithm that uses LiDAR data to optimize ground-based sensor placements for catchment-scale snow measurements. Sampling locations that best represent the catchment physiographic variables are identified with the expectation-maximization algorithm for a Gaussian mixture model. A Gaussian process is then used to model snow depth in a 1 km² area surrounding the network, and additional sensors are placed to minimize the model uncertainty. The aim of the study is to determine the distribution of sensors that minimizes the bias and RMSE of the model. We compare the accuracy of the snow-depth model using the proposed placements against that of an existing sensor network at the Southern Sierra Critical Zone Observatory. Each model is validated against a 1 m² LiDAR-derived snow-depth raster from 14 March 2010. The proposed algorithm achieves higher accuracy with fewer sensors (8 sensors; RMSE 38.3 cm, bias 3.49 cm) than the existing network (23 sensors; RMSE 53.0 cm, bias 15.5 cm) and randomized placements (8 sensors; RMSE 63.7 cm, bias 24.7 cm). We then evaluate the spatial and temporal transferability of the method using 14 LiDAR scenes from two catchments surveyed by the JPL Airborne Snow Observatory. In each region, the optimized sensor placements are determined from the first available snow raster of the year, and the accuracy in the remaining LiDAR surveys is compared to that of 100 randomly selected sensor configurations. We find the error statistics (bias and RMSE) to be more consistent across the additional surveys than those of the average random configuration.
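The two-stage placement strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn, uses synthetic coordinates and snow depths, and the component count, kernel length scale, and sensor totals are placeholder values chosen only to mirror the 8-sensor configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 1000, size=(500, 2))           # grid-cell coordinates (m), synthetic
features = np.c_[X, 0.001 * X[:, 0]]              # stand-in physiographic variables
depth = 1.0 + 0.002 * X[:, 1] + 0.1 * rng.standard_normal(500)  # synthetic snow depth (m)

# Stage 1: fit a Gaussian mixture to the physiographic variables via EM,
# then take the grid cell nearest each component mean as a sensor site.
gmm = GaussianMixture(n_components=5, random_state=0).fit(features)
sites = [int(np.argmin(np.linalg.norm(features - m, axis=1))) for m in gmm.means_]

# Stage 2: greedily add sensors where the Gaussian-process predictive
# uncertainty over the surrounding area is largest.
for _ in range(3):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=200.0))
    gp.fit(X[sites], depth[sites])
    _, std = gp.predict(X, return_std=True)
    std[sites] = 0.0                              # exclude already-selected sites
    sites.append(int(np.argmax(std)))

print(len(sites))                                 # 5 GMM sites + 3 uncertainty-driven = 8
```

The greedy variance-minimization loop refits the GP after each addition, so each new sensor accounts for the information contributed by all previously placed ones.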