Whereas experts rated ChatGPT and official institutional websites comparably in terms of scientific adequacy, public comprehension, and overall satisfaction, patients preferred the AI-generated texts for understandability, trust, reassurance, and satisfaction. Notably, patients were unable to distinguish ChatGPT responses from human-written ones. These findings suggest that, although the radiation protection information provided by institutional websites is generally well received by experts, it falls short of patients' needs. ChatGPT, by contrast, appears better suited to providing radiation protection information to the public and may be a valuable tool for assisting health professionals with patient communication. These results should nonetheless be weighed against other studies reporting patients' low trust in healthcare systems' ability to use AI responsibly and to protect them from potential AI-related harms.(9)