This paper analyzes a procedure called Testing-Based Forward Model Selection (TBFMS) in linear regression problems. The procedure inductively adds to a working statistical model covariates that improve predictive power, and then estimates a final regression. The criterion for deciding which covariate to include next, and when to stop including covariates, is derived from a profile of traditional statistical hypothesis tests. This paper proves probabilistic bounds, depending on the quality of the tests, for both the prediction error and the number of selected covariates. As an example, the bounds are then specialized to a setting with heteroscedastic data, with tests constructed using Huber-Eicker-White standard errors. Under the assumed regularity conditions, these tests yield estimation convergence rates matching those of other common high-dimensional estimators, including Lasso.
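As a rough illustration of the kind of procedure the abstract describes, the following sketch implements forward selection driven by heteroscedasticity-robust t-tests: at each step, every remaining candidate covariate is tested when added to the working model, the candidate with the largest robust t-statistic is included if it exceeds a critical value, and selection stops otherwise. This is a minimal sketch, not the paper's exact algorithm; the function names (`tbfms`, `hc0_tstat`), the HC0 variance estimator, and the fixed critical value are illustrative assumptions.

```python
import numpy as np

def hc0_tstat(X, y, j):
    """t-statistic for coefficient j in OLS of y on X, using
    Huber-Eicker-White (HC0) heteroscedasticity-robust standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # Sandwich variance: (X'X)^{-1} [sum_i e_i^2 x_i x_i'] (X'X)^{-1}
    meat = X.T @ (X * resid[:, None] ** 2)
    V = XtX_inv @ meat @ XtX_inv
    return beta[j] / np.sqrt(V[j, j])

def tbfms(X, y, crit=3.0):
    """Testing-based forward selection (illustrative sketch):
    greedily add the candidate with the largest |robust t-stat|,
    stopping when no candidate's test exceeds `crit`."""
    n, p = X.shape
    selected = []
    while len(selected) < p:
        best_j, best_t = None, crit
        for j in range(p):
            if j in selected:
                continue
            Xw = X[:, selected + [j]]        # working model plus candidate
            t = abs(hc0_tstat(Xw, y, len(selected)))
            if t > best_t:
                best_j, best_t = j, t
        if best_j is None:                   # no test rejects: stop
            break
        selected.append(best_j)
    return selected
```

For example, on simulated heteroscedastic-free data with two strong signal covariates, the sketch recovers them, adding the stronger one first.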