Precious, uncontrollable medical data
An estimated 150,000+ health applications are available in the app stores, and they have been downloaded billions of times. The health tracker market has been soaring in recent years.
Every single minute, enormous quantities of sensitive health data are collected, shared and stored, with a much lower level of protection than that required from healthcare institutions. Here is our unencrypted interview with a specialist.
THE SPECIALIST
Name: Pierre-Mikael Legris
Computer engineer, entrepreneur. Founder and CEO of Pryv SA
Age: 41
Nationality: Swiss/French. Married, 4 children
Pierre-Mikael Legris, let’s start from the beginning: what is medical data? Is the number of steps or floors climbed also considered as medical data?
Originally, medical data were those recorded by medical staff on medical devices – heart rate, weight, blood sugar level, etc. With the growing number of personal health trackers, we need to give a wider scope to this definition. To me, all data used by a doctor to evaluate a patient or to decide on treatment is medical data. If the number of steps you take is considered by your doctor, then that becomes medical data as well.
It is very broad…
Indeed. We can imagine a doctor asking patients what they watch on television in order to better assess their psychological condition. In such a case, television programmes can be considered medical data! Tracking a person’s location can also qualify if, for instance, it helps a doctor advise a patient on how to get to work while avoiding areas full of allergens they are sensitive to. From my point of view, it is how data is used that determines its medical status, more than its nature.
Doesn’t your definition blur the boundaries between medical data and life data?
Maybe it does, but these boundaries have never really been clear. As far as data protection is concerned, for example, European law (the GDPR) draws an increasingly blurred distinction. Rightly so: there is no reason to protect one better than the other, since life data – such as all the places you went to last month – can be just as sensitive as medical data.
Do you think people realise how valuable medical data can be?
I think they do. On the other hand, it is harder for them to be aware of its level of protection. And when they need to share such data, they don’t necessarily have a choice of means. When a person wants to show something to their doctor, or when two doctors need to share information about a patient, either they meet, or they use the most practical means of communication available: e-mail, WhatsApp, etc. For sure – and Pryv has proven this – if you provide these people with a practical, secure way to send data, they will use it quite spontaneously. However, when they haven’t got one…
Users may also believe that health data recorded by an app is protected, which often isn’t the case. According to a survey conducted in the United States, for example, diabetes apps often share data with third parties, there is no legal protection against the sale or disclosure of such data, and 81% of these apps don’t even have a privacy policy!
We seem to be very worried about theft, but by far the main cause of data leakage is that people use services that don’t guarantee any security at all. Or else they have accepted the general conditions (GC) without realising that they have agreed to share their data!
So, is it the users’ fault?
It would be their fault if we deemed users actually able to understand the general conditions. Yet I think it is humanly impossible to read and understand the GC of the services we are presented with nowadays. They are way too complex, too long, too numerous. These GC are the joke of the century. People need to understand the conditions, otherwise it isn’t fair.
Are there any fairer practices?
They exist, but they are obviously more complex than simply ticking the box “I accept the GC”! Some are pedagogical: users have to fill in a questionnaire that checks how much they know about what is going to be shared, when, and with whom. It is only after they have answered the questions that they can accept the GC, so in such cases we can talk about informed consent. Another possibility – the one we have chosen with Pryv – is to replace general conditions with dynamic conditions.
Meaning?
Rights of access and sharing are not decided upstream and globally, but at the moment the choice actually has to be made. Just like when your smartphone asks: “This app would like to access your camera, do you accept?” This can also become burdensome. For example, a doctor would need to ask for your authorisation every time he wanted to share your data with a peer. We would need to be able to grant access “to doctors only”, but this type of authorisation does not exist yet.
Being able to decide about sharing doesn’t turn us into owners of these data…
No, property rights over data have not yet been legally established. However, data protection law is becoming more precise and is better enforced as far as the rights to access, control and delete are concerned. If it is your personal data, you should be able to consult it, see who has access to it, and even request its deletion.
Or even benefit financially from the use of my personal data!
It is a recent idea, born of the awareness that companies were turning our data into commercial products. The reasoning is quite simple: if companies make a profit by using my data, just as banks profit from investing my money, why wouldn’t I earn interest, just as I do from my bank? I don’t know whether this idea has any chance of being applied one day. Meanwhile, users get nothing in return for their data apart from the right to use the services provided. This is even more true of free apps, which finance themselves by selling data. It is what the well-known saying refers to: “If you are not paying for it, you become the product.”
Data usage does not seem to provoke strong reactions as long as it stays anonymous and is used for average calculations…
…because it doesn’t affect user privacy and its usefulness is easy to understand. It helps device and app designers to understand how their products are used and thus to improve them. It can reveal public health trends, such as mobility by age. It helps insurance companies to establish reliable risk models and thus to propose insurance coverage at the best prices.
On the other hand, there is a very strong sentiment against the idea of using data associated with the person it comes from…
This is obvious, since in that case we are talking about a direct and undeniable invasion of privacy. This is intimate data. The idea that certain companies learn about my blood sugar level or the number of steps I take on a Wednesday morning is annoying. The idea that they use this data to sell me their products via targeted advertising is even more so!
A common concern is that insurers get hold of individual health data and that they use them for applying extra premiums or even refuse coverage…
There is such a risk, and we should trust legislators to ban that type of data use. For the moment, however, it is quite the opposite: insurers have supported the use of health devices and apps not with the aim of knowing and using their clients’ health data, but because it has been shown that people who monitor themselves take better care of their health.
In fact, you seem to believe that the real issue is more about respecting individual choice rather than health data security.
Yes, indeed. There is often confusion between security and privacy, even though they are quite different: security is the thickness of your front door; privacy is about the keys you give your relatives to get in. Today, data security should be at a maximum everywhere. Users should not need to worry about anything but access, and about verifying that everything is done with their consent.
How about opting for maximum privacy every time?
It isn’t that easy! Maximum privacy can also be a problem. Imagine you are the only one with access to your medical records: that seems fine, except if you are unconscious and rushed off for emergency care! We need to find a compromise: too much blocking may render your data useless, while the opposite makes it vulnerable.
As for vulnerability: why choose to steal health data?
Honestly, I don’t see the point of stealing health data in bulk, unlike bank data or classified information. What is its value? What could it be used for? On the other hand, if you are a rich and powerful person – a big business owner or a politician – a hacker could be interested in stealing your health data and threatening to disclose it unless you pay a ransom. Similarly, it has already happened that hackers blocked hospitals’ access to their medical records and demanded a ransom to “release” them.
It is an ever-changing environment where there is still a lot of work to be done to ensure data protection.
Personal health data networking is very recent, as is the widening use of health apps and personal trackers. Information science and technology have been progressing and changing very fast; rules and regulations have followed, but at a slower pace. We are still a bit like in the early days of the automobile, when there were only a few traffic signs and the highway code was just a page long.