Algorithms are involved in decisions ranging from trivial to significant, yet people often express distrust towards them. Research suggests that educational efforts to explain how algorithms work may help mitigate this distrust. In a study of 1,921 participants from 20 countries, we examined differences in algorithmic trust for low-stakes and high-stakes decisions. Our results suggest that statistical literacy is negatively associated with trust in algorithms in high-stakes situations, but positively associated with trust in low-stakes scenarios where familiarity with the algorithm is high. Explainability, however, did not appear to influence trust in algorithms. We conclude that statistical literacy enables individuals to critically evaluate decisions made by algorithms, data, and AI, and to weigh them alongside other factors before making significant life decisions. This guards against relying solely on algorithms that may not fully capture the complexity and nuance of human behavior and decision-making. Policymakers should therefore consider promoting statistical and AI literacy to address some of the complexities associated with trust in algorithms. This work paves the way for further research, including triangulating survey data with direct observations of user interactions with algorithms, or with physiological measures, to assess trust more accurately.