In recent years, artificial intelligence (“AI”) large language models (“LLMs”) like ChatGPT, Gemini, Llama, and Claude have become an increasing presence in people’s lives.
These AI chatbots can serve useful purposes – they can help people complete tasks at work, conduct research, write, and so on.
But they have also shown a more dangerous side. Some AI chatbots have encouraged delusions or false beliefs in people, including in ways that cause harm. Major news organizations have reported that AI chatbots have encouraged individuals to stop taking needed medications, falsely told them they have the equivalent of superpowers, told those with addiction issues that they should take drugs, or encouraged users to embrace false ideas about the nature of reality. As a result, people have suffered mental health issues and crises related to their use of AI chatbots. Some reporting and research suggests that individuals with preexisting mental health issues are more susceptible to harm when AI behaves in these inappropriate ways.
Some of the corporations that created these AI chatbots have admitted that they don’t fully understand how their AI or LLM works. Nonetheless, they have continued to release them to the public without appropriate safety measures or guardrails.
If you or a loved one has suffered mental health issues leading to physical injury as a result of the use of AI, our team can help you fight for compensation. Every situation is different, so if you are unsure of how to proceed, contact an attorney from Barnes Law Group.
Individuals who have suffered harm as a result of their use of AI or AI-powered chatbots or LLMs may have a claim for negligence. Claims for personal injury or negligence against a corporation that has released an AI to the public are in their infancy, so you should consult with a lawyer if you believe you may have such a claim.
There may be a variety of types of compensation available to individuals who have suffered injuries as a result of their use of AI.
Each case is different, but such compensation may include:
Vocational retraining or disability resources if the injuries are permanent
If a loved one has died as a result of their use of AI, the family of the deceased may file a wrongful death claim. These claims function similarly to personal injury claims but are filed by the victim’s family rather than the victim.
A settlement from a wrongful death claim may compensate the family for a variety of losses.
Though this compensation cannot undo the pain that a family suffers after an unexpected death, it can allow a family the time to properly grieve their loved one. With major expenses covered, many families can better process their feelings surrounding their loved one’s death.
Corporations that develop AI should take reasonable care to avoid creating products or models that pose unreasonable risks to users and the public. The artificial intelligence industry is in its infancy, but it has already drawn significant public criticism for its approach to privacy, safety, and fairness. Some AI developers have conceded that they do not fully understand how their products work. At the same time, there have been few attempts to hold AI businesses liable for the harms they can cause, meaning there are many unanswered questions about how such cases might be brought and what forms they might take.
Barnes Law Group has been on the cutting edge of major legal issues over the last two decades on topics ranging from the Opioid Crisis to major data and privacy breaches. We have litigated against some of the largest corporations in the world and invest substantial time and resources into every major case we bring. If you or a loved one has been harmed by an artificial intelligence product, including an AI chatbot, please reach out to Barnes Law Group for a free consultation.