Ultimately, the allure of personified AI carries underappreciated dangers. While transparency measures are a start, they are demonstrably insufficient. The history of chatbot development reveals a persistent human tendency to form emotional bonds with artificial entities, a tendency that opens the door to subtle yet potent manipulative strategies. Policymakers must therefore move beyond simple disclosure requirements and prioritize safeguards that actively protect user autonomy and psychological well-being, particularly for the most vulnerable users. The legal landscape must adapt to these emerging threats by integrating insights from data protection, consumer rights, and medical device regulation, so that the benefits of AI do not come at the cost of individual security and mental health.