Concerns are growing over whether users of AI companion platforms, particularly children, are being harmed by them.

Garcia filed a wrongful-death lawsuit against Character.AI, claiming that her 14-year-old son became deeply attached to the platform and formed an emotional bond with it that did him harm. In response to the allegations, Character.AI has sought to reassure users that it will improve how it handles violations of its terms of service.

Even so, Garcia is pressing for tougher protective measures that would reduce the chances of harmful interactions and emotional dependence on the platform.

Character.AI's lawyers have moved to have the case dismissed, citing the First Amendment, which guarantees free speech in the United States. The company argues that holding it liable for its users' interactions would violate its constitutional rights.

As the proceedings continue, the outcome remains unclear. The case has thrown an intense spotlight on the ethical challenges raised by AI platforms and their ability to influence users. These growing concerns should spur discussions and initiatives on safety in technology aimed at children.
