People rely on generative AI chatbots such as ChatGPT and Google Gemini to ask questions about physical ailments and learn more about their health. Some even use AI-based applications to learn about diseases. Recently, discussions on the social media platform X have encouraged users to upload their X-ray, MRI, and PET scan results to Grok, the platform's AI chatbot.
Medical data is a specially protected category of information that, under federal law, can generally be shared only with your consent. Uploading medical records to an AI chatbot is not safe: that private data is likely to be used to train AI models, and handing it over can seriously compromise your privacy.
AI models are trained and improved on the data that users upload. However, there is no clear information about what that data is being used for or with whom it is being shared, and data uploaded to an AI service carries no guarantee of protection. People's personal medical records have even been discovered in AI training datasets, where anyone could find them.
Elon Musk, the owner of X, has encouraged users to upload data to Grok, saying that the chatbot is still in its early stages and will soon improve. The data stored in Grok is intended to improve the AI model so that it can interpret medical scans with greater accuracy.
Remember: no data uploaded to the internet is ever completely erased.