The paper explores a model of equilibrium selection in coordination games in which agents from an infinite population stochastically adjust their strategies in response to changes in their local environment. Instead of playing a perturbed best response, agents are assumed to follow a rule of `switching to better strategies with higher probability'. This behavioral rule is related to the bounded-rationality models of Rosenthal (1989) and Schlag (1998). Moreover, agents retain their strategy whenever they successfully coordinate with their local neighbors. Our main results show that both strict Nash equilibria of the coordination game correspond to invariant distributions of the process; hence the evolution of play is not ergodic but depends on initial conditions. Nevertheless, coordination on the risk-dominant equilibrium occurs with probability one whenever infinitely many agents initially play the risk-dominant strategy, regardless of the spatial distribution of these agents.
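As a toy illustration (not part of the paper, which concerns an infinite population), the behavioral rule can be simulated on a finite ring. The stag-hunt payoffs, the two-neighbor ring topology, and the linear switching probability below are all illustrative assumptions: agents who successfully coordinate with both neighbors keep their strategy, while the others switch to the better strategy with a probability increasing in the payoff gain.

```python
import random

# Hypothetical stag-hunt payoffs (not from the paper); 'H' (hare) is
# risk dominant, since 3 - 0 > 4 - 3.
PAYOFF = {('S', 'S'): 4, ('S', 'H'): 0, ('H', 'S'): 3, ('H', 'H'): 3}

def step(state, rng):
    """One synchronous revision round on a ring of agents."""
    n = len(state)
    new = list(state)
    for i in range(n):
        left, right = state[(i - 1) % n], state[(i + 1) % n]
        s = state[i]
        if s == left and s == right:
            continue  # successful coordination: stay with current strategy
        other = 'H' if s == 'S' else 'S'
        cur = PAYOFF[(s, left)] + PAYOFF[(s, right)]
        alt = PAYOFF[(other, left)] + PAYOFF[(other, right)]
        # Switch to the better strategy with probability increasing in the
        # payoff difference (an assumed linear probability rule).
        if alt > cur and rng.random() < (alt - cur) / 8:
            new[i] = other
    return new

rng = random.Random(0)
state = [rng.choice('SH') for _ in range(200)]
for _ in range(2000):
    state = step(state, rng)
print(state.count('H'))  # most agents end up on the risk-dominant 'H'
```

In this sketch, blocks of two or more agents playing the risk-dominant strategy never shrink (their boundary members already coordinate with one neighbor and would lose by switching), while blocks of the other strategy erode from their boundaries, which mirrors the selection result stated above on a finite grid.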