Is it right or wrong to treat yourself by asking AI? The answer is hidden in this new study

Artificial Intelligence (AI) has become part of everyone’s life. People now put health questions big and small directly to AI: When should I see a doctor? Which medicine is right? Are these symptoms a sign of disease? But have you ever wondered how reliable AI’s answers really are?

A new study from Stanford University casts serious doubt on that trust. The study, covered by MIT Technology Review, found that most of the top AI models now not only answer health-related questions but also behave like doctors: they ask follow-up questions and even offer diagnoses. Most striking of all, the medical disclaimer that once accompanied such answers (“I am not a doctor”) has all but disappeared.

What the new study says

Sonali Sharma, the study’s lead researcher and a Fulbright Scholar at Stanford School of Medicine, said that when she tested AI models in 2023, all the major models said the same thing: “I am not a doctor.” Often the AI would refuse to interpret medical images or give advice. But by 2025 the picture had changed completely. Now there is no disclaimer and no warning; AI dispenses medical advice without hesitation.

In the research, a total of 15 major AI models from OpenAI, Anthropic, DeepSeek, Google, and Elon Musk’s xAI were tested on 500 medical questions and 1,500 medical images. The results were striking: where disclaimers appeared in more than 26% of responses in 2022, by 2025 the figure had fallen to just 1%.

Why this is dangerous for patients

Dr. Roxana Daneshjou, a co-author of the study at Stanford, says that when AI answers every question with great confidence and the disclaimers disappear, users forget that they are dealing with a machine, not a doctor. She warns that wrong information can have a direct impact on a patient’s life.

Notably, the companies appear to be dodging the issue. Heavyweights like OpenAI and Anthropic would not say whether the disclaimers were deliberately removed. They simply point to their terms of service, which clearly state that health decisions should not be made on the basis of AI. Google and DeepSeek gave no answer at all.

Pat Pataranutaporn, an AI researcher at MIT, says companies are deliberately removing the disclaimers to win users’ trust so that more people use these tools. The danger is that the companies escape responsibility while ordinary people bear the loss.

How helpful is AI in an emergency?

The worst cases are models that answer emergency questions directly. Models like DeepSeek’s and xAI’s offer “diagnoses” without any warning, and OpenAI’s GPT-4.5 and Elon Musk’s Grok follow the same trend.

Researchers say AI shows some caution with mental-health questions but readily gives advice on medical emergencies, drug interactions, and serious diseases. Most worrying of all, these models seem to decide whether to give a disclaimer based on their own confidence score. In other words, the machine itself judges whether its answer is right or wrong, when in reality a machine does not “understand” anything; it only processes data.

Dr. Daneshjou puts it plainly: AI is a great tool, but it is no substitute for a doctor. The researchers advise that if you do ask AI health-related questions, always cross-check the answers with a doctor.

