OpenAI CEO debuts new AI healthcare company, inspired largely by people taking ChatGPT's advice to the doctor
Last week, OpenAI CEO Sam Altman drew widespread attention when he co-founded Thrive AI Health (Thrive), an artificial intelligence healthcare company. The company aims to address the chronic disease crisis affecting 127 million people in the United States by changing patient behavior through AI health coaches, and it has received an investment from America's richest man.
Healthcare has long been a pressing social issue in the United States and a key topic in this year's U.S. elections. An inefficient, high-priced healthcare system makes it difficult for many Americans to access effective care. As one of the most prominent figures in today's AI industry, Altman's choice to enter AI healthcare at this moment has naturally raised many eyebrows.
Altman's co-founder at Thrive is Arianna Huffington, founder of the behavioral-change technology company Thrive Global. On July 11, the two gave an interview to The Atlantic, revealing more details about the venture.
They said that Thrive will focus on providing health advice, avoid the medical diagnostic tasks that AI currently handles poorly, and integrate health information into future work scenarios.
However, when pressed by the interviewer, they did not clearly explain what form the product will ultimately take or what specific steps will be taken to keep users' data safe.
Altman also said in the interview that "maybe" communication between people and AI should be protected the way attorney-client communication is, but that this should be left for society to decide.
It's worth noting that the health data Altman's new company will collect is so sensitive that it carries significant economic value. Insurance companies could use such information to adjust the price of specific policies or decide whether to reimburse a certain drug. The U.S. also recently suffered a breach of exactly this kind of data, which led to a massive healthcare-system shutdown.
1. Focusing on health advice rather than medical diagnosis: the model performs "well enough"
Thrive's biggest selling point is a "highly personalized AI health coach" that provides instant, individualized health advice by collecting information about a user's sleep, food, exercise, stress, and social interactions, and combining it with the user's medical records and expertise in behavioral change.
Altman and Huffington believe AI health tools are important for shoring up the leaky U.S. healthcare system. Currently, 90 percent of U.S. health-insurance spending goes to the treatment of chronic illnesses, and Thrive is expected to significantly reduce that cost.
Altman and Huffington compared the technology to Roosevelt's New Deal, stating that "AI will be part of a more efficient healthcare infrastructure and continue to support people's health in their everyday lives."
However, the use of AI in healthcare is nothing new: AI already plays an important role in areas such as CT reconstruction, drug development, and assisted diagnosis.
Current AI healthcare applications are aimed mainly at doctors and R&D staff with specialized knowledge, rather than directly at patients. Most patients lack the medical knowledge to judge the health advice or diagnoses an AI produces, and it is difficult for AI products to guarantee they make no mistakes.
Altman and Huffington addressed questions about product safety in the interview with The Atlantic. They argued that AI models currently perform well enough: as long as Thrive focuses only on "health advice" rather than "medical diagnosis" and trains on peer-reviewed data, the model can provide good enough advice.
However, neither Huffington nor Altman could say clearly what form the product would ultimately take. They said it will launch as an app, but Huffington added that it could be delivered through a variety of possible models, perhaps even integrated into work scenarios the way applications such as Microsoft Teams are.
2. Collecting data is not an issue: Altman says users are willing to share
This hyper-personalized product will need to convince users to voluntarily hand over a great deal of private information so that the AI has enough context to make recommendations. In the interview with The Atlantic, Altman argued that this is not a major challenge.
Altman shared that part of the reason he started the company is that many people have already explored medical issues with ChatGPT; he has heard of many who trusted ChatGPT's advice enough to get the relevant tests and seek treatment. He believes users are in fact willing to share very detailed, personal information with an LLM.
The Atlantic's reporter was alarmed by this practice, since the medical advice ChatGPT returns can contain AI hallucinations and threaten patients' health. Patients who rely on such false information may also come into conflict with their doctors.
The reporter also argued that once medical information is leaked, it could seriously jeopardize users' personal rights. But Altman was not firm in his response to the risk of leaks, arguing that the issue should be left to society.
Communication between doctors and patients, and between lawyers and clients, is protected by law, and communication between people and AI should have similar protections, he said: "maybe society decides whether to establish some form of AI privilege." In other words, the founders may not actively push for such protections, leaving the decision to society.
But the need to protect health data is urgent. As recently as February of this year, Change Healthcare, a subsidiary of the U.S. insurance and health group UnitedHealth, was hit by a massive ransomware attack that shut down large parts of the health-insurance system and put the medical information of nearly one-third of Americans at risk.
OpenAI's own record on data protection is imperfect. In early 2023, OpenAI's internal systems were breached in a cyberattack that exposed employees' discussions about the company's advanced AI systems.
In addition, tech media outlet Engadget reported in early 2023 that ChatGPT had suffered a serious data breach: a glitch on ChatGPT's web page caused some users' conversation titles to appear in other users' chat windows, and some users' identifying information and bank card details were exposed.
Nonetheless, in this interview Altman called on the public to give the company "trust," the exact opposite of his plea at the 2023 Bloomberg Technology Summit that people should not trust him or OpenAI.
Altman believes there is a shared hope that AI can improve human health, and that this is one of the few application areas where AI can truly change the world. He later added that realizing this vision "requires a certain amount of faith," meaning people have to believe the new company can do it responsibly.
In the Time article, Altman and Huffington spell out what this "faith" entails. They argue that achieving "AI-driven behavioral change" and reversing the growing spread of chronic disease requires trust in three main areas.
First, policymakers must believe in creating a "regulatory environment that promotes AI innovation." Second, healthcare practitioners need to trust AI tools and integrate them into their practice. Finally, individuals need to trust AI to handle their private data responsibly. That is a lot to ask of a business that has no product and has promised no concrete security measures.
Conclusion: handing your health over to AI may be premature, and AI shouldn't be a game of faith
On how AI health products would actually work, Altman and Huffington sketched the following scenario in Time: "An AI health coach would provide very accurate advice for everyone: replace your third soda of the afternoon with water and lemon; take a 10-minute walk with your kids at 3 p.m. after school; start your relaxation routine at 10 p.m." The AI health coach would eventually break stubborn bad habits, ultimately improving overall human health and extending lifespans.
However, are the various "unhealthy" behaviors in people's lives a matter of personal habit, or a broader societal problem? Should the chronic disease crisis be left to individuals and AI solutions, or be systematically prevented through research and intervention by governments and healthcare organizations? These are questions worth considering before the so-called AI healthcare infrastructure becomes reality.
In the interview, Altman said that realizing the AI health vision will require a certain amount of faith from people. But in fields as consequential as AI and healthcare, what we really need may not be a game of faith, but verifiable, explainable technology.