People often rely on generative AI chatbots like ChatGPT and Google Gemini to ask about symptoms or better understand their health, and some even use AI-powered applications to check for diseases. Recently, users on the social media platform X have been encouraged to upload their X-rays, MRIs, and PET scan results to an AI chatbot called Grok for interpretation.
Medical data is a highly sensitive category. In the United States, federal law (HIPAA) generally requires your consent before healthcare providers and insurers can share it, but those protections do not extend to information you voluntarily upload to an AI chatbot. Such sensitive data can potentially be used to train AI models, posing significant risks to your privacy.
AI models are often trained and improved using uploaded data, yet it is not always clear how this data is used or with whom it is shared. Data uploaded to an AI platform may also lack any specific legal protections, leaving it vulnerable. There have even been instances of private medical records turning up in AI training datasets, where anyone with access to the dataset could view them.
Elon Musk, the owner of X, has encouraged users to upload their data to Grok, acknowledging that the model is still in its early stages but claiming it will improve quickly. The goal of collecting this user data is to refine the model so it can interpret medical scans more accurately.
Remember, anything uploaded to the internet is never truly deleted.