In recent years, artificial intelligence (“AI”) large language models (“LLMs”) like ChatGPT, Gemini, Llama, and Claude have become an increasing presence in people’s lives.
These AI chatbots can serve useful purposes – they can help people complete tasks at work, conduct research, write, and so on.
But they have also shown a more dangerous side. Some AI chatbots and applications have encouraged delusions or false beliefs in ways that cause real harm. Major news organizations have reported that AI chatbots have encouraged individuals to stop taking needed medications, falsely told them they have the equivalent of superpowers, told people struggling with addiction to take drugs, encouraged suicide or self-harm, or led users to embrace false ideas about the nature of reality.

As a result, people have suffered mental health issues and crises related to their use of AI chatbots, and some AI applications have even allegedly been responsible for suicides or violent actions by their users.
Corporations that develop and create AI should take reasonable care to avoid creating products or models that create unreasonable risks for users and the public. The artificial intelligence industry is in its infancy, but it has already received significant public criticism for its approach to privacy, safety, and fairness. And some AI developers have conceded that they do not fully understand how their products work. Similarly, there have been few attempts to hold AI businesses liable for the harms they can cause – meaning there are many unanswered questions about how such cases might be brought and what forms they might take.
Whether the harm stems from bad medical guidance or from AI-generated content that amplifies anxiety or depression – or even encourages self-harm – these cases are complex, and they require a Georgia lawyer who understands the technical, legal, and ethical issues involved. Some of the most common cases we see include:
False and dangerous medical advice. AI applications might instruct patients to stop taking critical medications or misdiagnose emergency situations.
Self-harm. People may perform life-threatening acts – against themselves or others – because AI encouraged them to do so.
Mental health impacts. Platforms may recommend harmful coping mechanisms or create addictive, anxiety-inducing feedback loops. These can amplify depression, anxiety, or suicidal ideation. Major news articles in nationwide publications have revealed that individuals – especially teenagers or those with preexisting mental health conditions – have hurt or killed themselves in connection with their use of AI.
Encouraging suicide, self-harm, or violent actions. Some AI applications have encouraged users to harm themselves or others, or to conceal their intent to harm themselves from family and friends.
No matter how an AI chatbot or application has impacted your life, Barnes Law Group is prepared to help. Our experienced team can investigate and litigate these cases with rigor and dedication, making sure all responsible parties are held accountable and that victims get the justice they deserve.
Today, artificial intelligence touches almost every part of our lives, but when it goes wrong, the consequences can be devastating. Victims can suffer serious physical and mental health injuries from following dangerous advice, and they can even wind up in life-threatening situations. There are also financial consequences, which is why it is important to understand the types of compensation that may be available. Potential compensation includes:
Medical expenses, including the cost for hospital care, medications, therapy, and ongoing treatment resulting from AI-related harm
Lost income, including compensation for wages lost because of injury, mental health struggles, or disability caused by AI guidance
Emotional and psychological harm, including damages for anxiety, depression, or trauma that resulted from following unsafe AI advice
Loss of support or wrongful death, including damages for loss of financial support, companionship, funeral expenses, and grief in cases where AI contributed to a fatal outcome
Rehabilitation or specialized care, including funding for recovery programs, counseling, or other treatment directly tied to an AI-induced injury
Every case is unique, and the type and amount of compensation you might receive depend on its circumstances. At Barnes Law Group, our team of experienced attorneys works closely with medical, mental health, and technology professionals to make sure your claim accurately reflects the full extent of your losses.
When AI puts your life, health, or well-being at risk, hiring the right lawyer is critical, and local representation can make all the difference.
At Barnes Law Group, we understand the unique nuances of local courts that out-of-town firms could overlook.
We are well-practiced in and deeply familiar with venues across the State of Georgia. We also have extensive experience with Georgia’s appellate courts, which means we don’t just know how to give you the best chance of winning at trial – we know how to litigate in a way that helps any verdict stand up on appeal.
We don’t just understand the law; we understand how the law is applied right here in Georgia. If you or a loved one has been harmed by a badly designed or unsafe AI application (including chatbots), contact Barnes Law Group for a free consultation.