In this paper, I am concerned with what automation—widely considered to be the “future of work”—holds for the artificially intelligent agents we aim to employ. My guiding question is whether it is normatively problematic to employ artificially intelligent agents, such as autonomous robots, as workers. The answer I propose is the following. There is nothing inherently normatively problematic about employing autonomous robots as workers. Still, if we want to avoid blame, we must not put them to just any work. This might not sound like much of a limitation. Interestingly, however, we can argue for this claim on metaphysically and normatively parsimonious grounds. Namely, all I rely on in arguing for my claim is that the robots we aim to employ exhibit a kind of autonomy.