AI in Healthcare: Risks for Low-Income Patients Grow

In Southern California, a private company called Akido Labs is deploying artificial intelligence (AI) in clinics that serve unhoused and low-income patients. Medical assistants use the AI to listen to patient conversations and generate potential diagnoses and treatment plans, which a physician then reviews. The company’s chief technology officer stated that the objective is to “pull the doctor out of the visit.” This approach raises serious concerns about the quality of care provided to some of the most vulnerable populations.

The integration of AI into healthcare is not an isolated phenomenon. According to a 2025 survey by the American Medical Association, two out of three physicians now use AI tools to assist with patient diagnosis and treatment. One notable startup recently secured $200 million to develop an application described as “ChatGPT for doctors,” and U.S. legislators are considering a bill that would allow AI systems to prescribe medications. While the adoption of AI may streamline healthcare processes, it poses heightened risks for low-income individuals who already face significant barriers to care.

Patients experiencing homelessness and financial hardship should not be subjected to experimental AI-driven healthcare models. Their needs and perspectives must guide the implementation of any technology designed to assist in their care. The rise of AI in healthcare occurs against a backdrop of overcrowded hospitals and overworked clinicians, particularly in economically disadvantaged areas where resources are limited and patients often lack insurance. These communities are disproportionately affected by chronic health issues linked to systemic inequality.

The question arises: is any solution better than no solution at all? The evidence suggests not. Research indicates that AI tools can produce misleading diagnostic results. A 2021 study published in Nature Medicine examined AI algorithms used in medical imaging and found that they frequently under-diagnosed Black and Latinx patients, women, and individuals with Medicaid insurance. Such systematic bias can further entrench health disparities among those already facing obstacles to care.

Another study, from 2024, found that an AI breast cancer screening tool produced a higher rate of false positives for Black patients than for their white counterparts. These findings underscore the dangers of relying on AI, which lacks the capacity for independent judgment and instead depends on pattern recognition that can perpetuate existing biases.

Many patients remain unaware of the extent to which AI is used in their care. One medical assistant told MIT Technology Review that while patients know an AI system is listening to their visit, they are not informed that it generates diagnostic recommendations. This lack of transparency recalls a troubling history of medical racism in which marginalized communities were subjected to experimentation without informed consent.

While AI may assist healthcare providers by expediting information retrieval, the trade-offs can include compromised diagnostic accuracy and deepened health inequities. The advocacy group TechTonic Justice recently released a report estimating that approximately 92 million low-income Americans have critical aspects of their lives determined by AI, affecting everything from Medicaid benefits to Social Security disability eligibility.

A current legal battle illustrates the real-world stakes of AI in healthcare decision-making. In 2023, Medicare Advantage customers in Minnesota sued UnitedHealthcare, alleging that the company’s AI system, nH Predict, wrongly assessed their eligibility for necessary care and that coverage was denied as a result. Some plaintiffs claim the denials led to preventable deaths. In 2025, a judge ruled that the case can proceed.

A similar lawsuit has been filed in Kentucky against Humana, where customers allege that the AI system’s coverage recommendations rely on incomplete medical records. As both cases unfold, they highlight the risks of relying on AI to determine health coverage for low-income individuals.

Access to quality healthcare often correlates with financial resources. For those who are unhoused or living in poverty, AI may obstruct access entirely, exemplifying medical classism. The use of AI in healthcare should not come at the expense of patient care quality. The documented risks and inequities far outweigh the unproven benefits touted by tech companies.

It is essential that individuals facing economic hardship receive human-centered care from providers who listen to their unique health needs. The healthcare system should not prioritize AI-driven solutions over the human judgment and attention that are critical to patient care. An AI system deployed without community input or rigorous evaluation can undermine patients’ autonomy and their authority over decisions about their own healthcare.

As the healthcare landscape evolves, it is vital to ensure that AI serves to enhance, rather than replace, the role of healthcare professionals, particularly for those in vulnerable positions. The focus should remain on delivering patient-centered care that respects the dignity and needs of every individual.