BACKGROUND: Multiple databases provide ratings of drug-drug interactions. These ratings are often based on differing criteria and lack background information on the underlying decision-making process. User acceptance of rating systems could be improved by providing a transparent decision path for each category. METHODS: We rated 200 randomly selected potential drug-drug interactions using a transparent decision model developed by our team. The cases were generated from ward-round observations and from physicians' queries in an outpatient setting. To validate the model, we compared our ratings to those assigned by a senior clinical pharmacologist and by a standard interaction database. RESULTS: The decision model's ratings were consistent with those of the standard database and the pharmacologist in 94 and 156 of the 200 cases, respectively. In two cases the model's decision required correction. After removal of systematic differences arising from model construction, the decision model was fully consistent with the other rating systems. CONCLUSION: The decision model rates interactions reproducibly and elucidates systematic differences between rating systems. We propose supplying validated decision paths alongside interaction ratings to improve comprehensibility and to enable physicians to interpret the ratings in a clinical context.