This article is part of the series "[Podcast] Attorney Sue Foster On GDPR".
Over two podcasts, attorney Sue Foster dispensed incredibly valuable General Data Protection Regulation (GDPR) wisdom. If you’ve already listened, you know these are the kinds of insights that would otherwise have required a lengthy Google expedition, followed by chatting with your cousin Vinny the lawyer. We don’t recommend that!
In reviewing the transcript below, I think there are three points worth commenting on. One, the GDPR’s breach reporting rule may appear to give organizations some wiggle room. But in fact that’s not the case! The reference to “rights and freedoms of natural persons” refers to explicit privacy and property rights spelled out in the EU Charter. This ain’t vague language.
However, there is some leeway in reporting within the 72-hour time frame. In short: you have to make a good effort, but you can delay if, say, you’re currently investigating and need more time because otherwise you’d compromise the investigation.
Two, the territorial scope requirements in Article 3 are complicated by what it means to target EU citizens in your marketing. The very tricky part is when you’re a multinational company that has both an EU and non-EU presence. If you read closely, Foster is suggesting that EU citizens who happen to find their way to, say, your US website would not be protected by the GDPR.
In other words, if the company’s general marketing doesn’t target EU citizens, then the information collected is not under GDPR protections. But that would not apply to a company’s localized web content for, say, the French or German markets — information submitted through those sites would of course be under the GDPR.
Yes, I will confirm this with Foster. But if this is not the case for multinationals, then it would cause a pretty large mal de tête.
Third, GDPR compliance is based on, as Foster notes, a “show your work” principle, the same as on math tests in high school. It is not like PCI DSS, where you’re going down a checklist: two-factor authentication? Yes. Vulnerability scanning? Yes, etc.
The larger issue is that security technology will change and so what worked well in the past will likely not hold up in the future. With GDPR, you should be able to justify your security plan based on the current state of security technology and document what you’ve done.
- Inside Out Security
- Sue Foster is a partner with Mintz Levin based out of the London office. She works with clients on European data protection compliance and on commercial matters in the fields of clean tech, high tech, mobile media, and life sciences. She’s a graduate of Stanford Law School. Foster is also, and we like this here at Varonis, a Certified Information Privacy Professional.
I’m very excited to be talking to an attorney with a CIPP, and with direct experience on a compliance topic we cover on our blog — the General Data Protection Regulation, or GDPR.
- Sue Foster
- Hi Andy. Thank you very much for inviting me to join you today. There’s a lot going on in Europe around cybersecurity and data protection these days, so it’s a fantastic set of topics.
- Oh terrific. So what are some of the concerns you’re hearing from your clients on GDPR?
- So one of the big concerns is getting to grips with the extra-territorial reach. I work with a number of companies that don’t have any office or other kind of presence in Europe that would qualify them as being established in Europe.
But they are offering goods or services to people in Europe. And for these companies, you know, in the past they’ve had to go through quite a bit of analysis to understand whether the Data Protection Directive applies to them. Under the GDPR, it’s a lot clearer, and there are rules that are easier for people to understand and follow.
So now when I speak to my U.S. clients, if they’re a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they’ll be subject to the GDPR. That’s very clear.
Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.
It’s also really important for U.S. companies to understand that there’s a new ePrivacy Regulation in draft form that would cover any provider, regardless of location, of any form of publicly available electronic communication services to EU users.
Under this ePrivacy Regulation, the notion of what these communication services providers are is expanded from the current rules, and it includes things that are called over-the-top applications – so messaging apps and communications features, even when a communication feature is just something that is embedded in a website.
If it’s available to the public and enables communication, even in a very limited sort of forum, it’s going to be covered. That’s another area where U.S. companies are getting to grips with the fact that European rules will apply to them.
So there’s this new security regulation as well that may apply to companies located outside the EU. All of these things are combining to suddenly force a lot of U.S. companies to get to grips with European law.
- So just to clarify, let’s say a small U.S. social media company that doesn’t market specifically to EU countries and doesn’t have a website in the language of an EU country, they would or would not fall under the GDPR?
- On the basis of their [overall] marketing activity, they wouldn’t. But we would need to understand whether they’re profiling or tracking EU users, perhaps through viral marketing that’s been going on, right? And they are just tracking everybody. And they know that they’re tracking people in the EU. Then they’re going to be caught.
But if they’re not doing that, if they’re not engaging in any kind of tracking, profiling, or monitoring activities, and they’re not affirmatively marketing into the EU, then they’re outside of the scope. Unless, of course, they’re offering some kind of service that falls under one of these other regulations that we were talking about.
- What we’re hearing from our customers is that the 72-hour breach rule for reporting is a concern. Our customers are confused, and after looking at some of the fine print, we are as well! So I’m wondering if you could explain the breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and consumers?
- Sure, absolutely. So first it’s important to look at the specific definition of personal data breach. It means a breach of security leading to the ‘accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data’. So it’s fairly broad.
The requirement to report these incidents has a number of caveats. So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.
Then there’s a set of exceptions. And that is unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. So I can understand why U.S. companies would sort of look at this and say, ‘I don’t really know what that means’. How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?
Because that’s not defined anywhere in this regulation!
It’s important to understand that that little bit of text is EU-speak that really refers to the Charter of Fundamental Rights of the European Union, which is part of EU law.
There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person’s privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?
So you could, for example, say that there’s a breach that isn’t likely to reveal information that I would consider personally compromising from a privacy perspective, but it could lead to fraud, right? So that could affect my property rights. So that would be one of those issues. Basically, most of the time you’re going to have to report the breach.
When you’re going through the process of working out whether you need to report the breach to the DPA, and you’re considering whether or not the breach is likely to result in a risk to the rights and freedoms of natural persons, one of the things that you can look at is whether people are practically protected.
Or whether there’s a minimal risk because of steps you’ve already taken such as encrypting data or pseudonymizing data and you know that the key that would allow re-identification of the subjects hasn’t been compromised.
So these are some of the things that you can think about when determining whether or not you need to report to the Data Protection Authority.
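As a rough illustration of the pseudonymization idea Foster mentions, a sketch of how identifiers can be replaced with keyed tokens, with the key held separately so the data alone can’t be re-linked to a person. Everything here (the function, the key handling, the sample email) is a hypothetical example, not anything from the interview:

```python
import hashlib
import hmac
import secrets

def pseudonymize(identifier: str, key: bytes) -> str:
    """Derive a stable pseudonym from an identifier using a secret key (HMAC-SHA256)."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

# The key is stored separately from the pseudonymized records; if a breach
# exposes only the tokens and the key is not compromised, re-identification
# of the data subjects is not practically possible.
key = secrets.token_bytes(32)
token = pseudonymize("alice@example.com", key)

# Holding the key, the controller can still re-link the token deterministically.
assert token == pseudonymize("alice@example.com", key)
```

The point of the sketch is the separation: the risk assessment Foster describes turns in part on whether the re-identification key was itself caught up in the breach.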
If you decide you have to report, you then need to think about ‘do you need to report the breach to the data subjects’, right?
- And the standard there is that it has to be a “high risk to the rights and freedoms of natural persons”. So a high risk to someone’s privacy rights or rights in their property and things of that sort.
And again, you can look at the steps that you’ve taken to prevent the data, before it even was leaked, from being vulnerable in a format where people could be damaged. Or you could also consider whether you’ve taken steps after the breach that would prevent those kinds of risks from materializing.
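The two-tier reporting logic Foster walks through (report to the DPA unless the breach is unlikely to pose a risk; notify data subjects only on high risk) can be sketched as a simple decision rule. This is an illustration only, not legal advice; the risk levels and the function name are assumptions made for the example:

```python
def notification_duties(risk: str) -> dict:
    """Map an assessed risk to the rights and freedoms of natural persons
    (privacy, communications, property) onto the two GDPR reporting duties.

    risk: "unlikely", "risk", or "high" -- the outcome of the assessment,
    which can take into account protections such as encryption or
    pseudonymization with an uncompromised key.
    """
    return {
        # Report to the Data Protection Authority, where feasible within
        # 72 hours of becoming aware, unless the breach is unlikely to
        # result in a risk.
        "notify_dpa_72h": risk in ("risk", "high"),
        # Notify the affected data subjects only when the risk is high,
        # e.g. exposed card numbers with security codes.
        "notify_data_subjects": risk == "high",
    }
```

As Foster notes, in practice most breaches will clear the first threshold, and the cost of guessing wrong (a fine if the DPA disagrees with a decision not to report) pushes companies toward reporting.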
Now, of course, the problem is the risk of getting it wrong, right?
If you decide that you’re not going to report after you go through this full analysis and the DPA disagrees with you, now you’re running the risk of a fine of up to 2% of the group’s global turnover, or gross revenue around the world.
And that, I think, is going to lead to a lot of companies being cautious and reporting even when they might have been able to take advantage of some of these exceptions, because they won’t feel comfortable relying on them.
- I see. So just to bring it to more practical terms: we can assume that, let’s say, credit card numbers or some other identification number, if that was breached or taken, would have to be reported both to the DPA and the consumer?
- Most likely. I mean if it’s…yeah almost certainly. Particularly if the security code on the back of the card has been compromised, and absolutely you’ve got a pretty urgent situation. You also have a responsibility to basically provide a risk assessment to the individuals, and advise them on steps that they can take to protect themselves such as canceling their card immediately.
- One hypothetical that I wanted to ask you about is the Yahoo breach, which technically happened a few years ago. I think it was over two years ago … Let’s say something like that had happened after the GDPR where a company sort of had known that there was something happening that looked like a breach, but they didn’t know the extent of it.
If they had not reported it, and waited until after the 72-hour rule, what would have happened to let’s say a multinational like Yahoo?
- Well, Yahoo would need to go through the same analysis, and it’s hard to imagine, with a breach on that scale, with the level of access that was provided to the Yahoo users’ accounts as a result of those breaches, and of course the fact that it’s very common for individuals to reuse passwords across different sites, so you have the risk of follow-on problems.
It’s hard to imagine they would be in a situation where they would be off the hook for reporting.
Now the 72-hour rule is not hard and fast.
But the idea is you report as soon as possible. So you can delay for a little while if it’s necessary for say a law enforcement investigation, right? That’s one possibility.
Or if you’re doing your own internal investigation and somehow that would be compromised or taking security measures would be compromised in some way by reporting it to the DPA. But that’ll be pretty rare.
Obviously going along for months and months with not reporting it would be beyond the pale. And I would say a company like Yahoo would potentially be facing a fine of 2% of its worldwide revenue!
- So this is really serious business, especially for multinationals.
This is also a breach reporting related question, and it has to do with ransomware. We’re seeing a lot of ransomware attacks these days. In fact, when we visit customer sites and analyze their systems, we sometimes see these attacks happening in real time. Since a ransomware attack encrypts the file data but most of the time doesn’t actually take the data or the personal data, would that breach have to be reported or not?
- This is a really interesting question! I think the by-the-book answer is, technically, if a ransomware attack doesn’t lead to the accidental or unlawful destruction, loss, or alteration or unauthorized disclosure of or access to the personal data, it doesn’t actually fall under the GDPR’s definition of a personal data breach, right?
So, if a company is subject to an attack that prevents it from accessing its data, but the intruder cannot itself access, change, or destroy the data, you could argue it’s not a personal data breach, and therefore not reportable.
But it sure feels like one, doesn’t it?
- Yes, it does!
- Yeah. I suspect we’re going to find that the new European Data Protection Board will issue guidance that somehow brings ransomware attacks into the fold of what’s reportable. Don’t know that for sure, but it seems likely to me that they’ll find a way to do that.
Now, there are two important caveats.
Even though, technically, a ransomware attack may not be reportable, companies should remember that a ransomware attack could cause them to be in breach of other requirements of the GDPR, like the obligation to ensure data integrity and accessibility of the data.
Because by definition, you know, the ransomware attack has made the data inaccessible and has totally corrupted its integrity. So, there could be liability there under the GDPR.
And also, the company that’s suffering the ransomware attack should consider whether they’re subject to the new Network and Information Security Directive, which is going to be implemented in national laws by May 9th of 2018. So again, May 2018 being a real critical time period. That directive requires service providers to notify the relevant authority when there’s been a breach that has a substantial impact on the services, even if there was no GDPR personal data breach.
And the Network and Information Security Directive applies to a wide range of companies, including those that provide “essential services”. Sort of the fundamentals that drive the modern economy: energy, transportation, financial services.
But also, it applies to digital service providers, and that would include cloud computing service providers.
You know, there could be quite a few companies that are being held up by ransomware attacks who are in the cloud space, and they’ll need to think about their obligations to report even if there’s maybe not a GDPR reporting requirement.
- Right, interesting. Okay. As a security company, we’ve been preaching Privacy by Design principles, data minimization and retention limits, and in the GDPR it’s now actually part of the law.
The GDPR is not very specific about what has to be done to meet these Privacy by Design ideas, so do you have an idea what the regulators might say about PbD as they issue more detailed guidelines?
- They’ll probably tell us more about the process but not give us a lot of insight as to specific requirements, and that’s partly because the GDPR itself is very much a show-your-work regulation.
You might remember back on old, old math tests, right? When you were told, ‘Look, you might not get the right answer, but show all of your work in that calculus problem and you might get some partial credit.’
And it’s a little bit like that. The GDPR is a lot about process!
So, the push for Privacy by Design is not to say that there are specific requirements other than paying attention to whatever the state of the art is at the time. So, really looking at the available privacy solutions at the time and thinking about what you can do. But a lot of it is about just making sure you’ve got internal processes for analyzing privacy risks and thinking about privacy solutions.
And for that reason, I think we’re just going to get guidance that stresses that, develops that idea.
But any guidance that told people specifically what security technologies they needed to apply would probably be good for, you know, 12 or 18 months, and then something new would come along.
Where we might see some help is, eventually, in terms of ISO standards. Maybe there’ll be an opportunity in the future for something that comes along that’s an international standard, that talks about the process that companies go through to design privacy into services and devices, etc. Maybe then we’ll have a little more certainty about it.
But for now, and I think for the foreseeable future, it’s going to be about showing your work, making sure you’ve engaged, and that you’ve documented your engagement, so that if something does go wrong, at least you can show what you did.
- That’s very interesting, and a good thing to know. One last question: we’ve been following some of the security problems related to Internet of Things devices, which are gadgets on the consumer market that can include internet-connected coffee pots, cameras, and children’s toys.
We’ve learned from talking to testing experts that vendors are not really interested in PbD. It’s ship first, maybe fix security bugs later. Any thoughts on how the GDPR will affect IoT vendors?
- It will definitely have an impact. The definition of personal data under the GDPR is very, very broad. So, effectively, anything that I am saying that a device picks up is my personal data, as well as data kind of about me, right?
So, if you think about a device that knows my shopping habits that I can speak to and I can order things, everything that the device hears is effectively my personal data under the European rules.
And Internet of Things vendors do seem to be lagging behind in Privacy by Design. I suspect we’re going to see investigations and fines in this area early on, when the GDPR starts being enforced in May 2018.
Because the stories about the security risks of, say, children’s toys have really caught the attention of the media and the public, and the regulators won’t be far behind.
And now, we have fines for breaches that range from 2% to 4% of a group’s global turnover. It’s an area that is ripe for enforcement activity, and I think it may be a surprise to quite a few companies in this space.
It’s also really important to go back to this important theme that there are other regulations, besides the GDPR itself, to keep track of in Europe. The new ePrivacy Regulation contains some provisions targeted at the Internet of Things, such as the requirement to get consent from consumers for machine-to-machine transfers of communications data, which is going to be very cumbersome.
The [ePrivacy] Regulation says you have to do it; it doesn’t really say how you’re going to get consent (meaningful consent, which is a very high standard in Europe) to these transfers when there’s no real intelligent interface between the device and the person, the consumer who’s using it. Some things have maybe a kind of web dashboard, or some kind of app that you use to communicate with your device, where you could have privacy settings.
There’s other stuff that’s much more behind the scenes with the Internet of Things, where the user doesn’t have a high level of engagement. So, maybe a smart refrigerator that’s relaying information about energy consumption to, you know, the grid. Even there, there’s potentially information where the user is going to have to give consent to the transfer.
And it’s hard to kind of imagine exactly what that interface is going to look like!
I’ll mention one thing about the ePrivacy Regulation: it’s in draft form. It could change, and that’s important to know. It’s not likely to change all that much, and it’s on a fast-track timeline, because the Commission would like to have it in place and ready to go in May 2018, at the same time as the GDPR.
- Sue Foster, I’d like to thank you again for your time.
- You’re very welcome. Thank you very much for inviting me to join you today.