Artificial intelligence (AI) is increasingly used to make decisions for humans. Unlike traditional AI that is programmed to follow human-made rules, machine-learning AI generates its own rules from data. These machine-generated rules are often unintelligible to humans. Will users feel more uncertain about decisions governed by such rules? To what extent does rule transparency reduce uncertainty and increase users' trust? In a 2 × 3 × 2 between-subjects online experiment, 491 participants interacted with a website purported to be a decision-making AI system. Three factors of the AI system were manipulated: agency locus (human-made rules vs. machine-learned rules), transparency (no vs. placebic vs. real explanations), and task (detecting fake news vs. assessing personality). Results show that machine-learning AI triggered less social presence, which increased uncertainty and lowered trust. Transparency reduced uncertainty and enhanced trust, but the mechanisms of this effect differed between the two types of AI.

Lay Summary

Machine-learning AI systems are governed by rules that the system generates from its analysis of large databases. These rules are not predetermined by humans, and they can be difficult for humans to interpret. In this research, I ask whether users trust the judgments of systems driven by such machine-made rules. The results show that, compared with a traditional system programmed to follow human-made rules, machine-learning AI was perceived as less humanlike. This perception led users to feel more uncertain about the decisions produced by the machine-learning AI system, which in turn decreased their trust in the system and their intention to use it. Transparency about the rationales for its decisions alleviated users' uncertainty and enhanced their trust, provided that the rationales were meaningful and informative.