
[Transcript] Attorney Sara Jodka on the GDPR and HR Data


In reviewing the transcript of my interview with Sara Jodka, I realize again how much great information she freely dispensed. Thanks Sara! The employee-employer relationship under the GDPR is a confusing area. It might be helpful to clarify a few points Sara made in our conversation about the legitimate interest exception to consent, and the threshold for Data Protection Impact Assessments (DPIAs).

The core problem is that to process personal data under the GDPR, you need freely given consent. If you can’t get that, you have a few other options, which are covered in the GDPR’s Article 6. For employees, consent cannot be given freely, so employers will most likely need to rely on the “legitimate interest” exception referred to in that article.

There’s a bit of paperwork required to prove that the employer’s interest overrides the employee’s rights. In addition, employers will have to notify employees about what data is being processed. Sara refers to the ICO, the UK’s data protection authority, which has published informal guidance, worth reading, on the legitimate interest process.

Since the data collected by the employer comes from a vulnerable subject (the employee) and contains special categories of sensitive personal data (health, trade union membership, etc.), it meets the threshold set by GDPR regulators — see this guidance — for performing a DPIA. As we know, DPIAs require companies to conduct a formal risk analysis of their data and document it.

Sara reminds us that some US companies, particularly service-oriented firms, may be surprised to learn about the additional work they’ll need to undertake to comply with the GDPR. In short: employees, like consumers, are covered under the new EU law.


Inside Out Security: Sara Jodka is an attorney with Dickinson Wright in Columbus, Ohio. Her practice covers data privacy and cybersecurity issues. Sara has guided businesses through compliance matters involving HIPAA, Gramm-Leach-Bliley, FERPA, and COPPA, and most importantly for this podcast, certification under the US-EU Privacy Shield, which, of course, brings us to the General Data Protection Regulation, or GDPR.

A lot of abbreviations there! Welcome, Sara.

Sara Jodka: Thank you for having me.

IOS: I wanted to get into an article that you had posted on your law firm’s blog. It points out an interesting subcategory of GDPR personal data which doesn’t get a lot of attention, and that is employee HR records. You know, of course it’s going to include ethnic, payroll, 401(k), and other information.

So can you tell us, at a high level, how the GDPR treats employee data held by companies?

Employee Data Covered By the GDPR

SJ: Whenever we look at the GDPR, there are 99 articles, and they’re very broad. There’s not a lot of detail in the GDPR articles themselves. In fact, we only have one that actually carves out employment data, and that’s Article 88 — it stands on its own.

Whenever we’re looking at it, none of the articles say that all of these people have these rights. All these individuals have rights! None of them say, “Well, these don’t apply in an employment situation.” So we don’t have any exclusions!

We’re led to “Yes, they do apply.” And so we’ve been waiting on, and working with, the guidance we’re receiving, you know, from the ICO with respect to the consent obligation, notice obligation, and portability requirements in the employee context. Because it is going to be a different type of relationship than the consumer relationship!

IOS: It’s kind of interesting that people, I think, or businesses, probably are not aware of this … except those who are in the HR business.

So I think there’s an interesting group of US companies that would find themselves under these GDPR rules that probably would not have initially thought they were in this category because they don’t collect consumer data. I’m thinking of law firms, investment banking, engineering, professional companies.

US Professional Service Companies Beware!

SJ: I think that’s a very good point! In fact, that’s where a lot of my work is actually coming from. A lot of the GDPR compliance work is coming from EU firms that specialize in EU privacy. But a lot of U.S. companies didn’t realize that this is going to cover the employment relationships they have with EU employees who are in the EU!

They thought, “Well, because we don’t actually have a physical location in the EU, it doesn’t actually cover us.” That’s not at all true.

The GDPR covers people who are working in the EU, people who reside in the EU. So to the extent a U.S. company has employees who are working in the EU, it is going to cover that type of employee data. And there’s no exception in the GDPR around it. So it’s going to include those employees.

IOS: So I hadn’t even thought about that. So their records would be covered under the GDPR?

SJ: Yeah, the one thing about the definition of a data subject under the GDPR is it doesn’t identify that it has to be an EU resident or it has to be an EU citizen. It’s just someone in the EU.

When you’re there, you have these certain rights that are guaranteed. And that will cover employees that are working for U.S. companies but they’re working in the EU.

IOS: Right. And I’m thinking perhaps of U.S. citizens who come there for some assignment and may be working out of the office; they would be covered under these rules.

SJ: And that’s definitely a possibility, and that’s one thing that we’ve been looking for. We’ve been looking for guidance from the ICO to determine the scope of what this is going to look like, not only in an employment situation, but also when we’re dealing with an immigration situation, somebody on a work visa, and also in the context of schools, as we have, you know, different students coming over to the United States or going abroad. And what protection the GDPR then applies to those kinds of in-transition relationships, those employees or students.

With a lot of my clients, we are trying to err on the side of caution and so do things ahead of time, rather than beg forgiveness if the authorities come knocking at our door.

GDPR’s Legitimate Interest Exception is Tricky

IOS: I agree that’s probably a better policy, and that’s something we recommend in dealing with any of these compliance standards.

In that article, you mentioned that the processing of HR records has additional protections under the GDPR: an employee has to give consent explicitly and freely, and not as part of an employer-employee contract.

GDPR’s Article 6 says there are only six lawful ways to process data. If you don’t obtain freely given consent, then it gets tricky.

Can you explain this? And then, what does an employer have to do to process employee data  especially HR data?

SJ: Well, when we’re looking at the reasons that we’re allowed to process data, we can do it by consent, and we can also do it if we have a lawful basis.

A number of the lawful bases are going to apply in the employer context. One of those is if there is an agreement, you know, in order to comply with the terms of a contract, like a collective bargaining agreement or an employment agreement. So hire/fire and payroll data would be covered under that. There’s also the vital interest of an employee.

There’s speculation that that exception might actually be, or that legitimate basis might be used to obtain vital information regarding, like, emergency contact information of employees.

And one of the other lawful bases is if the employer has a greater, you know, interest in the data that isn’t outweighed by the rights of the data subject, the employee.

The issue, though, is that most of what we talk about is consumer data, and we’re looking a lot at consent and what consent actually looks like in terms of express consent, you know, having them check the box or whatever.

In an employee situation, the [UK’s] ICO has come out with guidance with respect to this. And they have expressly said that in an employee-employer relationship, there is an inherent imbalance of bargaining power, meaning an employee can never really consent to giving up their information because they have no bargaining power. They either turn it over, or they’re not employed. The employer is left to rely only on the other lawful bases to process data, excluding consent — so the contract allowance and some of the others.

But the issue I have with that is, I don’t think that that’s going to cover all the data that we actually collect on an employee, especially employees who are operating outside the scope of a collective bargaining agreement.

In the context of, say, an at-will employee, where that contract exception doesn’t actually apply, I think there will be a lot of collection of data that doesn’t actually fall under that. It may fall under legitimate interest, if the employer has the forethought to actually do what’s required: document the process of weighing the employer’s interest against the interest of the employee, and make sure that is a documented process. [Read the UK’s ICO guidelines on the process of working out legitimate interest.]

When employers claim a legitimate interest exception to getting employee consent, they have more work to do. [Source: UK ICO]

But also what comes with that is the notice requirement, and the notice requirement is not something that can be waived. So employers, if they are doing that — and this is basically going to cover every single employer — are going to have to give their employees notice of the data they are collecting on them, at a minimum.

IOS: At a minimum. I think to summarize what you’re saying is it’s just so tricky or difficult to get what they call freely given consent, that most employers will rely on legitimate interest.

Triggers for Data Protection Impact Assessments (DPIAs)

IOS: In the second part of this interview, we join Sara Jodka as she explains what triggers a data protection impact assessment, or DPIA, when processing employee data.

SJ: I think that’s required when we’re looking at the requirements for sensitive data, and we’re talking about sensitive HR data. A DPIA has to be performed when two of the following exist, and there are nine things on the list that can trigger a DPIA. But you bring up a great point, because the information that an employer is going to have is going to necessarily trigger the DPIA. [See these Working Party 29 guidelines for the nine criteria that Sara refers to.]

The DPIA isn’t triggered by us relying on the legitimate basis and having to document that process. It’s actually triggered because we process sensitive data: you know, their trade union affiliation, their religious data, their ethnicity. We have sensitive information, which is one of the nine things that can trigger it, and all you need is two to require a DPIA.

Another one that employers always get is they process data of a vulnerable data subject. A vulnerable data subject includes employees.
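The two-of-nine test Sara describes can be made concrete with a small sketch. The criterion names below are paraphrased from the Working Party 29 guidelines, and the threshold of two is the rule of thumb those guidelines give; this is an illustration, not legal advice:

```python
# Illustrative sketch of the WP29 "two of nine" DPIA trigger test.
# Criterion names are paraphrased from the Working Party 29 guidelines.

WP29_CRITERIA = [
    "evaluation or scoring",
    "automated decision-making with legal effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "large-scale processing",
    "matching or combining datasets",
    "vulnerable data subjects",
    "innovative use of new technologies",
    "processing that prevents exercising a right or using a service",
]

def dpia_required(criteria_met: set) -> bool:
    """Per the WP29 guidance, meeting two or more criteria generally triggers a DPIA."""
    unknown = criteria_met - set(WP29_CRITERIA)
    if unknown:
        raise ValueError("unrecognized criteria: %s" % unknown)
    return len(criteria_met) >= 2

# Typical HR processing ticks at least two boxes out of the gate:
hr_processing = {"sensitive or highly personal data", "vulnerable data subjects"}
print(dpia_required(hr_processing))  # True
```

As Sara points out, HR data almost always hits both the sensitive-data and vulnerable-subject criteria at once, which is why it is so hard to imagine employee processing that escapes the DPIA requirement.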

IOS: Okay. Right.

SJ:  I can’t imagine a situation where an employer wouldn’t have to do a DPIA. The DPIA is different than the legitimate interest outweighing [employee rights] documentation that has to be done. They’re two different things.


IOS: So, they will have to do the DPIAs? And what would that involve?

SJ: Well, it’s something that’s required for high-risk data processing, and that, as we just discussed, includes the data that an employer has.

Essentially, a DPIA is a process designed to describe what processing the employer does, assess its necessity and proportionality, and help manage the risks to the rights and freedoms of natural persons resulting from the processing of personal data, by assessing and determining the measures to address the data and the protections around it.

It’s a living document, so one thing to keep in mind about DPIAs is that they’re never done. They are going to be your corporation’s living record of the high-risk data you have and what’s happening with it, to help you create tools for accountability and to comply with GDPR requirements including, you know, notice to data subjects, their rights, and then enforcing those rights.

It’s basically a tracking document … of the data, where the data’s going, where the data lives, and what happens with the data and then what happens when somebody asks for their data, wants to erase their data, etc.

GDPR Surprises for US Companies

IOS: Obviously, these are very tricky things and you definitely need an attorney to help you with it. So, can you comment on any other surprises U.S. companies might be facing with GDPR?

SJ: I think one of the most interesting points, whenever I was doing my research to really drill down from my knowledge level, is that you’re allowed to process data so long as it’s required to comply with a law. You know, there’s a legal necessity to do it.

And a lot of employers, U.S. employers specifically, looked at this and thought, “Great, that legal requirement takes the load off of me, because I need, you know, payroll records to comply with the Fair Labor Standards Act and, you know, state wage laws. I need my immigration information to comply with immigration control laws.”

You know, they were like, “We have all these U.S. laws for why we have to retain information and why we have to collect it.” Those laws don’t count, and I think that’s a big shock when I say, well, those laws don’t count.

We can’t rely on U.S. laws to process EU data!

We can only rely on EU laws and that’s one thing that’s brought up and kind of coincides with Article 88, which I think is an interesting thing.

If you look at Article 88 when they’re talking about employee data, what Article 88 does is it actually allows member states to provide more specific rules to ensure that the rights and freedoms around employee data are protected.

These member states may be adding on more laws and more rights than the GDPR already provides! So not only do we have to comply with an EU law, but we’re also going to have to comply with member states’ other, more specific laws that may be narrower than the GDPR.

Employers can’t just look at the GDPR; they’re also going to have to look at where a specific person is, whether it’s Germany or Poland. They’re going to have to look and see what aspects of the GDPR apply there, and then what additional, more specific laws that member state may have put into effect.

IOS: Right!

SJ: So, I think that there are two big legal issues hanging out there for U.S. multinational companies…

IOS: One thing that comes to my mind is that there are fines involved for not complying with this. And that includes, of course, doing these DPIAs.

SJ: The fines are significant. I think the easiest way to put it is that the fines are astronomical, I mean, they’re not fines that we’re used to seeing. There are two levels of fines depending on the violation, and the higher tier can be up to 4% of a company’s annual global turnover or 20 million euros, whichever is greater. If you look at it in U.S. dollar terms, you’re looking at, like, $23 million at this point.
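The higher-tier cap Sara describes is the greater of 4% of annual global turnover or EUR 20 million, so which term dominates depends on company size. A quick sketch with illustrative turnover figures:

```python
# Higher-tier GDPR administrative fine cap (Article 83):
# the greater of 4% of annual global turnover or EUR 20 million.
# Turnover figures below are purely illustrative.

def max_gdpr_fine_eur(annual_global_turnover_eur: float) -> float:
    """Return the higher-tier fine cap in euros for a given annual turnover."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# EUR 100M turnover: 4% is only EUR 4M, so the EUR 20M floor dominates.
print(max_gdpr_fine_eur(100_000_000))
# EUR 1B turnover: 4% is EUR 40M, which exceeds the floor.
print(max_gdpr_fine_eur(1_000_000_000))
```

The point of the "whichever is greater" wording is that small companies don't get a proportionally small exposure: the EUR 20 million floor applies even when 4% of turnover would be far less.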

For some companies, that’s a game changer, that’s a company shut down. Some companies can withstand that, but some can’t. And I think any time you’re facing a $23 million penalty, the cost of compliance is probably going to be worth it weighed against the potential penalty.

Especially because these aren’t necessarily one-time penalties, and there’s nothing that’s going to stop the Data Protection Authority from coming back on you and reviewing again and assessing another penalty if you aren’t in compliance and you’ve already been fined once.

I think the issue is going to be how far the reach is going to be for U.S. companies. I think for U.S. companies that have, you know, brick and mortar operations in a specific member state, enforcement is going to be a lot easier for the DPA.

There’s going to be a greater hurdle to enforcement against, you know, U.S. companies that only operate on U.S. soil.

Now, if they have employees that are located in the EU, I think that enforcement is going to be a little bit easier, but if they don’t and they’re merely just, you know, attracting business via their website or whatever to EU, I think enforcement is gonna be a little bit more difficult, so it’s going to be interesting to see how enforcement actually plays out.

IOS: Yeah, I think you’re referring to the territorial scope aspects of the GDPR. Which, yeah, I agree that’s kind of interesting.

SJ: I guess my parting advice is this isn’t something that’s easy; it’s something where you do need to speak to an attorney. If you think that it may cover you at all, it’s at least worth a conversation. And I’ve had a lot of those conversations that have lasted, you know, a half an hour, and we’ve been very easily able to determine that the GDPR is not going to cover the U.S. entity.

And we don’t have to worry about it. And for some, we’ve been able to identify that the GDPR is going to touch very slightly, and we’re taking a few steps, you know, with the website and with on-site hard copy documents to make sure that proper consent and notice is given in those documents.

So, sometimes it’s not going to be the earth-shattering compliance overhaul of a corporation that you think the GDPR may entail, but it’s worth a call with a GDPR attorney to at least find out so that you can sleep better at night, because this is a significant regulation, it’s a significant piece of law, and it is going to touch a lot of U.S. operations.

IOS: Right. Well, I want to thank you for talking about this somewhat overlooked area of the GDPR.

SJ: Thank you for having me.

[Podcast] Attorney Sara Jodka on the GDPR and HR Data, Part II




In the second part of my interview with Dickinson Wright’s Sara Jodka, we go deeper into some of the consequences of holding internal employee data. Under the GDPR, companies will likely have to take an additional step before they can process this data: employers will have to perform a Data Protection Impact Assessment (DPIA).

As Sara explained in the first podcast, internal employee data is covered by the GDPR — all of the new law’s requirements still apply. This means conducting a DPIA when dealing with certain classes of data, which as we’ll learn in the podcast, includes HR data. DPIAs involve analyzing the data that’s being processed, assessing the risks involved, and putting in place the security measures to protect the data.

Last April, the EU regulators released a guidance on the DPIA, covering more of the details of what triggers this extra work. Legal wonks can review and learn about the nine criteria related to launching a DPIA. Because HR data processing touches on two of the triggers — vulnerable subjects (employees) and sensitive data (HR) — it crosses the threshold set by the regulators.

Listen to Sara explain it all, and if you’re still not satisfied, have your in-house counsel review the regulator’s legalese contained in the EU guidance.


[Podcast] Attorney Sara Jodka on the GDPR and Employee HR Data, Part I




In this first part of my interview with Dickinson Wright attorney Sara Jodka, we start a discussion of how the EU General Data Protection Regulation (GDPR) treats employee data. Surprisingly, this turns out to be a tricky area of the new law. I can sum up my talk with her, which is based heavily on Jodka’s very readable legal article on this overlooked topic, as follows: darnit, employees are people too!

It may come as a surprise to some that the GDPR protects all “natural persons” in the EU. Employees, even non-citizen EU employees, are all completely natural, organic people under the GDPR. Their name, address, payroll, personal contacts, and in particular, sensitive ethnic or health data fall under the GDPR. So IT security groups will need to have all the standard GDPR security policies and procedures in place for employee data files — for example, minimize access to authorized users, set retention limits, and detect breaches.

The tricky part comes in getting “freely given” consent from employees. Listen to the podcast to learn how most EU employers will need to claim “legitimate interest” as a way to process employee data without explicit consent. This will lead to some additional administrative overhead for employers, who will have to prove their interests override the employees’ privacy and notify employees of what’s being done to the data.

As we’ll learn in the second part of the podcast, because employee data often contains sensitive data as well, employers will also have to conduct a Data Protection Impact Assessment (DPIA), which will require even more work.

Bottom line: US service-based companies in the EU — financial, legal, professional services — who thought they escaped from the GDPR’s reach because they didn’t collect consumer data are very much mistaken.

Sara explains it all.


Another GDPR Gotcha: HR and Employee Data


Have I mentioned recently that if you’re following the usual data security standards (NIST, CIS Critical Security Controls, PCI DSS, ISO 27001) or common-sense infosec principles like Privacy by Design (PbD), you shouldn’t have to expend much effort to comply with the General Data Protection Regulation (GDPR)? I still stand by this claim.

Sure there are some GDPR requirements, such as the 72-hour breach notification, which will require special technology sauce.

There’s also plenty of fine print that will keep CPOs, CISOs, and outside legal counsels busy for the next few years.

US Professional Service Companies Beware!

One of those fine points is how the GDPR deals with employee records. I’m talking about human resources’ employee files, which can cover, besides all the usual identifiers (name, address, and photos), personal details such as health, financial information, employee reviews, family contact information, and more.

EU-based companies and US companies that have been doing business in the EU have long had to deal with Europe’s stricter national laws about employee data.

The GDPR holds a surprise for US companies that are not consumer-oriented and thought that the new law’s tighter security and privacy protections didn’t cover them. In fact, they do.

I’m referring particularly to US financial, legal, accounting, engineering, and other companies providing B2B services that are not in the business of collecting consumer data.

Let me just say it: the GDPR considers employee records to be personal data — that’s GDPR-ese for what we in the US call PII. And companies that have personal data of employees (and who doesn’t?) will have to comply with the GDPR even if they don’t have consumer data.

So if a US accounting firm in the EU has a data breach involving the theft of employee records, then it would have to notify the local supervisory authority within the 72-hour window.
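The 72-hour clock starts when the company becomes aware of the breach, so the arithmetic is simple but worth being precise about. A minimal sketch (the date below is illustrative):

```python
from datetime import datetime, timedelta, timezone

# GDPR Article 33: notify the supervisory authority without undue delay,
# and where feasible within 72 hours of becoming aware of the breach.
BREACH_NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware: datetime) -> datetime:
    """Latest time to notify the supervisory authority, per the 72-hour window."""
    return became_aware + BREACH_NOTIFICATION_WINDOW

# Illustrative example: breach discovered at 09:00 UTC on 25 May 2018.
aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2018-05-28 09:00:00+00:00
```

Note that the deadline is measured in clock hours, not business days, which is why breach-response playbooks need to account for weekends and holidays.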

There’s another surprise for US companies. Even if they don’t have a physical presence in the EU but still have employees there — say French or Italian workers are telecommuting — then their employee records would also be covered by the GDPR.

Employees Have Data Privacy Rights

And that also means, with some restrictions, that employees gain privacy rights over their data: they can request, just as consumers do, access to their personnel files, and they have the right to correct errors.

There’s even an employee “right to be forgotten”, but only when the data is no longer necessary for the “purposes for which it was collected”. Obviously, employers have a wide claim to employee data, so it’s easy to see that most employee files are protected from being deleted on demand.

But no doubt there’ll be instances where the “right to be forgotten” rule makes sense. Perhaps a vice president of marketing makes a request to HR to take a one-month leave of absence to study bird life in Costa Rica, has second thoughts, and then asks HR to delete the initial application based on his GDPR rights.

More importantly, employees also have the right to consent to the processing of their data. This particular right is not nearly as straightforward as it is for consumers.

Since employee privacy rights under the GDPR are far from simple, law firms and attorneys are filling the Intertoobz with articles on this subject, especially on the consent to processing loopholes.

As it happens, I came across one written by Sara Jodka, an attorney for Columbus-based Dickinson Wright, that is mercifully clear and understandable by non-attorney life forms.

The DPIA Surprise

The key point that Jodka makes is that since employers have leverage over employees, it’s hard for the consent to processing to be “freely given”, which is what the GDPR requires.

Typically, an employee has given consent to the processing of her data as part of an employment contract. But since the employee likely had no choice but to sign the contract in order to get the job, the GDPR does not consider this freely given.

So how do employers deal with this?

There is an exception that the GDPR makes: if the employer has “legitimate interests”, then consent is not needed. To prove legitimate interest, the company will have to document why its right to the data outweighs the employee’s privacy rights. Essentially, in the one-sided employer-employee relationship, the employer has the burden of proving it needs the data, since consent generally can’t be relied on.
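The ICO frames legitimate interest as a three-part test (purpose, necessity, balancing), and the key compliance artifact is the written record of that test. A minimal sketch of what such a record might capture; the field names and sample values are my own illustration, not anything prescribed by the GDPR or the ICO:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LegitimateInterestAssessment:
    """Illustrative record of the ICO's three-part legitimate interest test.
    Field names are hypothetical, not prescribed by the regulation."""
    processing_purpose: str          # purpose test: why the employer needs the data
    necessity_rationale: str         # necessity test: why no less intrusive way exists
    employee_impact: str             # balancing test: effect on the employee's rights
    interest_outweighs_rights: bool  # conclusion of the balancing test
    safeguards: List[str] = field(default_factory=list)

# Hypothetical example for routine payroll processing:
lia = LegitimateInterestAssessment(
    processing_purpose="Retain payroll records for salary administration",
    necessity_rationale="Payroll cannot be run without this data",
    employee_impact="Low; the processing is expected and access-restricted",
    interest_outweighs_rights=True,
    safeguards=["access limited to HR staff", "defined retention period"],
)
print(lia.interest_outweighs_rights)  # True
```

Whatever form the record takes, the point is the same one Jodka makes: the balancing exercise has to be documented in advance, not reconstructed after a regulator comes asking.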

Though there are some differing legal opinions on the types of employee data covered by legitimate interests, sensitive data involved with monitoring employees’ computer usage, their location, or perhaps even their travel plans will definitely require employers to take an extra step.

They will have to perform a Data Protection Impact Assessment or DPIA.

On the IOS blog, we’ve been writing about DPIAs for quite a while. It’s required for the processing of sensitive data, such as racial, ethnic, or health-related. Employee records that contain this information as well as monitoring data will fall under the DPIA rule, which is spelled out in article 35.

In short: companies using the legitimate interest exception for processing employee records will likely also be conducting data assessments that include analyzing the processing, evaluating the security risks involved, and proposing measures to protect the data.

If you’re finding this a little confusing, you are not alone. However, help is on the way!

I interviewed Sara Jodka earlier this week, and she brilliantly explained the subtleties involved in protecting employee records under the GDPR. She has some great advice for US companies.

Stay tuned. I’m hoping to get the first part of the podcast up next week.

[White Paper] Let Varonis Be Your EU GDPR Guide


Everyone knows that when you travel to a strange new country, you need a guide. Someone to point out the best ways to move around, offer practical tips on local customs, and help you get the most out of your experience.

The EU General Data Protection Regulation (GDPR) is a country with its own quirky rules (and steep fines if you don’t do things just right). So may we suggest using Varonis to help you navigate the data compliance and regulatory landscape of GDPR-istan?

We’ve amassed lots of experience in the last few years, talking to GDPR experts, exploring the GDPR’s legal fine print, and analyzing the latest guidelines from the regulators. But you don’t have to go through the back pages of the IOS blog to find it all.

Instead we’ve conveniently distilled all of our extensive GDPR travel wisdom into our new white paper.

What’s the best route through GDPR? We’ve developed a practical three-step approach. First, we explain how to use Varonis to monitor and identify risks in your file system environment by finding sensitive personal data with overly permissive access policies. Second, we guide you on the preventive actions to protect your data based on the previous analysis by restricting access rights and eliminating stale or unused data. And third, we offer advice on how to sustain and maintain your GDPR compliance through actively detecting security threats and using this feedback to update your IT policies.

Don’t delay! Download our GDPR travel guide today, and bon voyage. 


New SEC Guidance on Reporting Data Security Risk


In our recent post on a 2011 SEC cybersecurity guidance, we briefly sketched out what public companies are supposed to be doing in terms of informing investors about risks related to security threats and actual incidents. As it happens, late last month the SEC issued a further guidance on cybersecurity disclosures, which “reinforces and expands” on the older one. Coincidence?

Of course! But it’s a sign of the times that we’re all thinking about how to take into account data security risks in business planning.

Just to refresh memories, the SEC asked public companies to report data security risks and incidents that have a “material impact,” which reasonable investors would want to know about. The reports can be filed annually in a 10-K, quarterly in a 10-Q, or, if need be, in a current report or 8-K.

Nowhere in the SEC laws and relevant regulations do the words data security or security risk show up. Instead “material risks”, “materiality”, and “material information” are heavily sprinkled throughout —  lawyer-speak for business data and events worth letting investors know about.

Looking for Material

It’s probably best to quote directly from the SEC guidance on the subject of materiality:

The materiality of cybersecurity risks or incidents depends upon their nature, extent, and potential magnitude, particularly as they relate to any compromised information or the business and scope of company operations. The materiality of cybersecurity risks and incidents also depends on the range of harm that such incidents could cause. This includes harm to a company’s reputation, financial performance, and customer and vendor relationships, as well as the possibility of litigation or regulatory investigations …

An important point to make about the SEC language above is that it’s not about any one particular thing — report ransomware, or a DoS attack — but rather, as the lawyers say, you have to do a fact-based inquiry. If you want to get more of a flavor of this kind of analysis, check out this legal perspective.

However, the SEC does provide some insight into evaluating  reportable security risks. The complete list is in the guidance, but here are a few that would be most relevant to IOS readers: the occurrence of prior cybersecurity incidents, the probability and magnitude of a future incident, the adequacy of preventive actions taken to reduce cybersecurity risk, the potential for reputational harm, and litigation, regulatory, and remediation costs.

What about the types of real-world incidents that would have to be disclosed or reported?

I searched and searched, and I did find an example buried in a footnote — wonks can peruse the amazing footnote 33. It’s probably not a great surprise to learn that investors would be interested in knowing when “compromised information might include personally identifiable information, trade secrets or other confidential business information, the materiality of which may depend on the nature of the company’s business, as well as the scope of the compromised information.”

In short, the SEC guidance is just telling us what we in data security already know: the exposure of sensitive PII, such as social security and credit card numbers, passwords to bank accounts, or trade secrets regarding, say, a cryptocurrency application, requires notifying C-levels, affected customers, and regulators. And now investors as well.

Something Noteworthy: Security Policies and Procedures

Sure, the typical regulatory verbiage can quickly put you into REM sleep, but occasionally there’s something new and noteworthy buried in the text.

And this latest SEC guidance does have some carefully worded advice regarding cybersecurity procedures and policies. The SEC “encourages” public companies to have them and to review their compliance. It also asks companies to review their cybersecurity disclosure controls and procedures, and to make sure they are sufficient to notify senior management so that incidents can be properly reported.

It’s worth repeating that SEC rules and regulations cover general business risk, not specifically cybersecurity risk. The guidance recommends that companies evaluate this special risk and inform investors when the risks change, and of course let them know about material cybersecurity incidents.

Musings on Cybersecurity Disclosures for Public Companies

In the US, unlike in the EU, there is no single federal data security and breach notification law that covers private sector companies. Sure, we do have HIPAA and GLBA for healthcare and financial services, but there isn’t an equivalent to the EU’s GDPR or, say, Canada’s PIPEDA. I’m aware that we have state breach notification laws, but for the most part they’re limited in the PII they cover and have a fairly high threshold for reporting incidents.

However, with this last SEC guidance, we have, for the first time, something like a national data security rule of thumb — not an obligation but rather a strong suggestion —  to have data security controls and reporting in place.

The SEC guidance is based on what investors should be made aware of and wouldn’t necessarily cover serious cybersecurity risks and incidents that don’t have a material financial or business impact.

However, it’s certainly an indication that change is afoot, and US public companies should be thinking about upping their data security game before they’re ultimately required to do so through a future law.

Let’s just say they have been warned.

North Carolina Proposes Tougher Breach Notification Rules


If you’ve been reading our amazing blog content and whitepaper on breach notification laws in the US and worldwide, you know there’s often a hidden loophole in the legalese. The big issue — at least for data security nerds — is whether the data security law considers mere unauthorized access of personally identifiable information (PII) to be worthy of a notification.

This was a small legal point until something called ransomware came along.

You have heard of ransomware, right?

It’s that low-tech, but deadly malware that accesses data and encrypts it. To get the data back, the victim has to send a couple of bitcoins to the digital extortionists.

Last year ransomware had more than a few high-profile victims in the US, as well as, of course, across the globe.

But at the US state level, the difference between access alone and access and acquisition — the legal verbiage for copying — in a notification law determines whether the breach is to be reported to local authorities.

Based on my own research, I could only find a few states for which a ransomware attack would have to be reported locally. I should add that even for states whose laws trigger on just unauthorized access of PII, there’s often an additional “harm threshold” to the consumer — financial or credit risk, for example — that would have to be met, which would rule out a pure ransomware attack in which the data wasn’t copied.

After factoring this in, I found only three states for which a ransomware attack ipso facto (I finally get to use that phrase!) would require a notification: New Jersey, Connecticut, and Virginia.

You can look through these charts prepared by some law firms for yourself, and if you come up with other candidates, let me know!

North Carolina: Laboratory of Democracy!

But wait: last month a legislator in the great state of North Carolina, along with the attorney general, proposed a change to the statutory language defining a breach.

This tweak moves NC from a state that considers a breach to be unauthorized access and acquisition — see section 75-61 (14) of its statutes — to unauthorized access or acquisition.

Now NC joins the aforementioned club for which ransomware attacks will by themselves force companies to notify authorities and consumers.

The new law will also change the time window in which a data breach must be reported after discovery. Searching through a huge PDF table of state breach laws, I can say that most, if not all, states ask that a breach be reported “without unreasonable delay.”

Obviously, these words can be subject to interpretation. The proposed NC law instead sets the time limit to just 15 days.

I’m not aware of any other state that has a specific deadline.

The new law also adds consumer-friendly language that makes credit freezes — remember the outcry after Equifax — free upon request. Up to five years of credit monitoring will also be free of charge.

The law is supposed to tighten the rules on fines as well.

We’ll have to wait for the legislation to be reviewed and approved before we have the final legal details.

We’ll keep you posted.

North Carolina Has Lots of Breaches

Looking at the state’s 2017 annual breach report, produced by its Department of Justice, I was surprised to learn that over 1,000 breaches were reported in this state alone.

That’s an incredibly large number. For comparison purposes, take a peek at California’s breach report for the years 2012–2015. The incident counts are dramatically smaller: 178 in 2015.

I’m not sure what explains the difference, but perhaps NC simply has lots of law-abiding businesses, especially consumer-facing ones holding PII.

By the way, the current NC law covers an extensive list of identifiers, not only the usual social security, driver’s license, and account numbers, but also PINs, online passwords, digital signatures, and email addresses. This broad PII definition may have something to do with the NC data breach reporting spike we’re seeing.

In any case, if you combine their generous list of PII and the newer breach notification rules, then you’ll have to admit that NC has upped its digital security game and may even be number one, moving past the formidable California and its tough breach law.

And of course, go Wolfpack.

Want to be a legal eagle amongst your IT security peers when it comes to breach notification laws and ransomware? Download our comprehensive white paper on this fascinating subject!

Post-Davos Thoughts on the EU NIS Directive


I’ve been meaning to read the 80-page report published by the World Economic Forum (WEF) on the global risks humankind now faces. They’re the same folks who bring you the once-a-year gathering of the world’s bankers and other lesser humanoids held at a popular Swiss ski resort. I was told there was an interesting section on … data security.

And there was. Data security is part of a report intended to help our world leaders grapple with climate change, nuclear annihilation, pandemics, economic meltdowns, starvation, and terrorism.

How serious a risk are cyber attacks?

In terms of impact, digital warfare makes the WEF top-ten list of global issues, ranking in the sixth position, between water and food crises, and beating out the spread of infectious diseases in the tenth position. It’s practically a fifth horseman of the apocalypse.

Some of the worrying factoids that the WEF brought to the attention of presidents, prime ministers, chancellors, and kings were that in 2016 over 350 million malware variants were unleashed on the world, and that by 2020, malware may potentially find its way onto over 8.4 billion IoT devices.

There are about 7.6 billion of us now, and so we’ll soon be outnumbered by poorly secured, internet-connected, silicon-based gadgets. It’s not a very comforting thought.

The WEF then tried to calculate the economic damage of malware. One study they reference puts the global cost at $8 trillion over the next five years.

The gloomy WEF authors single out the economic impact of ransomware. Petya and NotPetya were responsible for large costs to many companies in 2017. Merck, FedEx, and Maersk, for example, each reported offsets to their bottom line of over $300 million last year as a result of NotPetya attacks.

Systemic Risk: We’re All Connected

However, the effects of malware extend beyond economics. One of the important points the report makes is that hackers are also targeting physical infrastructure.

WannaCry was used against the IT systems of railway providers, car manufacturers, and energy utilities. In other words, cyberattacks are disrupting the real world: lights going out, transportation halted, factory lines shut down, all because of malware.

And here’s where the WEF report gets especially frightening. Cyber attacks can potentially start a chain reaction of effects that we humans are not good at judging. They call it “systemic risk.”

They put it this way:

“Humanity has become remarkably adept at understanding how to mitigate countless conventional risks that can be relatively easily isolated and managed with standard risk management approaches. But we are much less competent when it comes to dealing with complex risks in systems characterized by feedback loops, tipping points and opaque cause-and-effect relationships that can make intervention problematic.”

You can come up with your own doomsday scenarios – malware infects stock market algorithms leading to economic collapse and then war – but the more important point, I think, is that our political leaders will be forced to start addressing this problem.

And yes I’m talking about more regulations or stricter standards on the IT systems used to run our critical infrastructure.

NIS Directive

In the EU, the rules of the road for protecting this infrastructure are far more evolved than in the US. We wrote about the Network and Information Security (NIS) Directive way back in 2016 when it was first approved by the EU Parliament.

The Directive asks EU member states to improve co-operation regarding cyber-attacks against critical sectors of the economy — health, energy, banking, telecom, transportation, as well as some online businesses — and to set minimum standards for cyber security preparedness, including incident notification to regulators. The EU countries had 21 months to “transpose” the directive into national laws.

That puts the deadline for these NIS laws at May 2018, which is just a few months away. Yes, May will be a busy month for IT departments as both the GDPR and NIS go into effect.

For example, the UK recently ended the consultation period for its NIS law. You can read the results of the report here. One key thing to keep in mind is that each national data regulator or authority will be asked to designate operators of “essential services”, EU-speak for critical infrastructure. They have six months, starting in May, to do this.

Anyway, the NIS Directive is a very good first step in monitoring and evaluating malware-based systemic risk. We’ll keep you posted as we learn more from the national regulators as they start implementing their NIS laws.



Our Most Underappreciated Blog Posts of 2017


Another year, another 1,293 data breaches involving over 174 million records. According to our friends at the Identity Theft Resource Center, 2017 made history by breaking 2016’s record-breaking 1,091 breaches. Obviously it’s been a year that many who directly defend corporate and government systems will want to forget.

Before we completely wipe 2017 from our memory banks, I decided to take one last look at the previous 12 months’ worth of IOS posts. While there are more than a few posts that did not receive the traffic we had hoped, they nevertheless contained some really valuable security ideas and practical advice.

In no particular order, here are my favorite underachieving posts of the 2017 blogging year.


Wade Baker Speaks – We did a lot of interviews with security pros this year — researchers, front-line IT warriors, CDOs, privacy attorneys. But I was most excited by our chat with Wade Baker. The name may not be familiar, but for years Baker produced the Verizon DBIR, this blog’s favorite source of breach stats. In this transcript, Wade shares great data-driven insights into the threat environment, data breach costs, and how to convince executives to invest in more data security.

Ann Cavoukian and GDPR – It’s hard to believe that the General Data Protection Regulation (GDPR) is only a few months away. You can draw a line from Cavoukian’s Privacy by Design ideas to the GDPR.  For companies doing business in the EU, it will soon be the case that PbD will effectively be the law. Read the Cavoukian transcript to get more inspired.

Diversity and Data Security – The more I learn about data security and privacy, the more I’m convinced that it will “take a village”.  The threat is too complex for it to be pigeon-holed into an engineering problem. A real-world approach will involve multiple disciplines — psychology, sociology, law, design, red-team thinking, along with computer smarts. In this interview with Allison Avery, Senior Organizational Development & Diversity Excellence Specialist at NYU Langone Medical Center, we learn that you shouldn’t have preconceived notions of who has the right cyber talents.

Infosec Education

PowerShell Malware – PowerShell is a great next-generation command-line shell. In the last few years, hackers have realized this as well and are using PowerShell for malware-free hacking. A few months ago I started looking into obfuscated PowerShell techniques, which allow hackers to hide the evil PowerShell and make it almost impossible for traditional scanners to detect. This is good information for IT people who need to get a first look at the new threat environment. In this two-part series, I referenced a Black Hat presentation given by Lee Holmes — yeah, that guy! Check out Lee’s comment on the post.

Varonis and Ransomware – This was certainly the year of weaponized ransomware, with WannaCry, Petya, et al. using the NSA-discovered EternalBlue exploit to hold data hostage on a global scale. In this post, we explain how our DatAlert software can be used to detect PsExec, which is used to spread the Petya variant of the malware. And in this other ransomware post, we also explain how to use DatAlert to detect the mass encryption of files and to limit your risks after a ransomware infection.
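To make the detection idea concrete, here’s a toy sketch of the kind of heuristic involved (my own illustration, not how DatAlert actually works): flag activity when the rate of file modifications inside a short trailing window far exceeds anything a human user would produce.

```python
from collections import deque

def make_detector(window=60, threshold=100):
    """Return a recorder for file-modification timestamps that reports
    True once more than `threshold` events land inside the trailing
    `window` seconds -- a crude proxy for mass encryption."""
    events = deque()

    def record(ts):
        events.append(ts)
        # Age out events that have fallen outside the window.
        while events and ts - events[0] > window:
            events.popleft()
        return len(events) > threshold

    return record

detector = make_detector(window=60, threshold=100)

# Normal activity: a file save every 10 seconds never trips the alarm.
normal = [detector(t) for t in range(0, 600, 10)]

# Ransomware-like burst: 300 modifications within a few seconds.
burst = [detector(600 + i * 0.01) for i in range(300)]
```

In practice you would also weigh signals like file-extension changes and the entropy of the written data; a pure rate threshold is noisy on its own.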

PowerShell as a Cyber Monitoring Tool – I spent a bit of effort in this long series explaining how to use PowerShell to classify data and monitor events — kind of a roll-your-own Varonis. Alas, it didn’t get the exposure I had hoped for. But there are some really great PowerShell tips, and sample code using Register-EngineEvent to monitor low-level file access events. A must-read if you’re a PowerShell DIY-er.


NIS, the Next Big EU Security Law – While we’ve all been focused on the EU GDPR, there are more EU data security rules that go into effect in 2018, for example, the Network and Information Security (NIS) Directive. EU countries have until May 2018 to “transpose” this directive into their own national laws. Effectively, the NIS Directive asks companies involved in critical infrastructure — energy, transportation, telecom, and Internet — to have data security procedures in place and to notify regulators when there’s a serious cyber incident. Unlike the GDPR, this directive is not just about data exposure but covers any significant cyber event, including DoS, ransomware, and data destruction.

GDPR’s 72-Hour Breach Notification – One particular GDPR requirement that’s been causing major headaches for IT is the new breach notification rules. In October, we received guidelines from the regulators. It turns out that there’s more flexibility than was first thought. For example, you can provide EU regulators partial information in the first 72 hours after discovery and more complete information as it becomes available. And there are many instances where companies will not have to additionally contact individuals if the personal data exposed is not financially harmful. It’s complicated, so read this post to learn the subtleties.

By the way, we’ve been very proud of our GDPR coverage. At least one of our posts has been snippetized by Google, which means that at least Google’s algorithms think our GDPR content is the cat’s meow. Just sayin’.


Man vs. Machine – Each week Cindy Ng leads a discussion with a few other Varonians, including Mike Buckbee, Killian Englert, and Kris Keyser. In this fascinating podcast, Cindy and her panelists take on the question of ethics in software and data security design. We know all too well that data security is often not thought about when products are sold to consumers — maybe only afterwards, following a hack. We can and should do a better job in training developers and introducing better data laws, for example the EU GDPR. But what is “good enough” for algorithms that think for themselves in, say, autonomous cars? I don’t have the answer, but it was great fun listening to this group talk about the issue.

Cybercrime Startups – It’s strange at first to think of hackers as entrepreneurs and their criminal team as a startup. But in fact there are similarities, and hacking in 2017 starts looking like a viable career option for some. In this perfect drive-time podcast, our panelists explore the everyday world of the cybercrime startup.

Fun Security Facts

Securing S3 – As someone who uses Amazon Web Services (AWS) to quickly test out ideas for blog posts, I’m a little in awe of Amazon’s cloud magic and also afraid to touch many of the configuration options. Apparently, I’m not the only one who gets lost in AWS, since there have been major breaches involving its heavily used data storage feature, known as S3. In this post, Mike covers S3’s buckets and objects and explains how to set up security policies. Find out how to avoid being an S3 victim in 2018!
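For a flavor of what an S3 exposure check looks like, here’s a minimal sketch that inspects ACL grants for the global grantee groups AWS uses for “everyone” access. The grant dictionaries mirror the shape S3’s get-bucket-acl API returns, but this is an illustration, not a drop-in audit tool.

```python
# Grantee URIs AWS uses for "everyone" and "any authenticated AWS user".
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_publicly_readable(grants):
    """Return True if any ACL grant gives READ or FULL_CONTROL
    to one of the public grantee groups."""
    return any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEE_URIS
        and grant.get("Permission") in ("READ", "FULL_CONTROL")
        for grant in grants
    )

# Owner-only ACL: private.
private_acl = [{"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
                "Permission": "FULL_CONTROL"}]

# Same ACL plus a world-readable grant: the classic leaky-bucket mistake.
leaky_acl = private_acl + [
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
```

With boto3, you would feed `s3.get_bucket_acl(Bucket=name)["Grants"]` into a check like this for each bucket in the account.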

Do Your GDPR Homework and Lower Your Chance of Fines


Advice that was helpful during your school days is also relevant when it comes to complying with the General Data Protection Regulation (GDPR): do your homework because it counts for part of your grade! In the case of the GDPR, your homework assignments involve developing and implementing privacy by design measures, and making sure these policies are published and known to management.

Taking good notes and doing homework assignments came to my mind when reading the new guideline published last month on GDPR fines. Here’s what the EU regulators have to say:

Rather than being an obligation of goal, these provisions introduce obligations of means, that is, the controller must make the necessary assessments and reach the appropriate conclusions. The question that the supervisory authority must then answer is to what extent the controller “did what it could be expected to do” given the nature, the purposes or the size of the processing, seen in light of the obligations imposed on them by the Regulation.

The supervising authority referenced above is what we used to call the data protection authority or DPA, which is in charge of enforcing the GDPR in an EU country. So the supervising authority is supposed to ask the controller, EU-speak for the company collecting the data, whether they did their homework — “expected to do” — when determining fines involved in a GDPR complaint.

Teachers Know Best

There are other factors in this guideline that affect the level of fines, including the number of data subjects, the seriousness of the damage (“risks to rights and freedoms”), the categories of data that have been accessed, and willingness to cooperate and help the supervisory authority. You could argue that some of this is out of your control once the hackers have broken through the first level of defenses.

But what you can control is the effort a company has put into their security program to limit the security risks.

I’m also reminded of what Hogan Lovells’ privacy attorney Sue Foster told us during an interview about the importance of “showing your work”. In another school-related analogy, Foster said you can get “partial credit” if you can show the regulators after an incident that you have security processes in place.

She also predicted we’d get more guidance, and that’s what the aforementioned document does: it explains what factors are taken into account when issuing fines in the GDPR’s two-tiered system of either 2% or 4% of global revenue. Thanks Sue!
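For reference, each tier caps fines at the greater of a fixed amount or a percentage of worldwide annual turnover: €10 million or 2% for the lower tier, €20 million or 4% for the upper tier (GDPR Article 83). A quick sketch of the arithmetic:

```python
def max_gdpr_fine(annual_turnover_eur, upper_tier=False):
    """Maximum administrative fine under GDPR Article 83: the greater
    of a fixed cap or a percentage of worldwide annual turnover."""
    if upper_tier:
        return max(20_000_000, 0.04 * annual_turnover_eur)
    return max(10_000_000, 0.02 * annual_turnover_eur)

# A EUR 2 billion company faces up to EUR 80 million at the upper tier,
big_fine = max_gdpr_fine(2_000_000_000, upper_tier=True)
# while a small firm is bounded by the fixed EUR 10 million cap instead.
small_fine = max_gdpr_fine(50_000_000)
```

The percentage prong only bites for large companies, which is exactly why the regulators’ “did what it could be expected to do” analysis matters so much for everyone else.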

Existing Security Standards Count

The guideline also contains some very practical advice on compliance. Realizing that many companies already rely on existing data standards, such as ISO 27001, the EU regulators are willing to give some partial credit if you follow these standards.

… due account should be taken of any “best practice” procedures or methods where these exist and apply. Industry standards, as well as codes of conduct in the respective field or profession are important to take into account. Codes of practice might give indication of the level of knowledge about different means to address typical security issues associated with the processing.

Those who want to read the fine print in the GDPR can refer to Article 40 (“Codes of Conduct”). In short, it says that standards associations can submit their security controls, say PCI DSS, to the European Data Protection Board (EDPB) for approval. If a controller then follows an officially approved “code of conduct”, this can dissuade the supervising authority from taking actions, including issuing fines, as long as the standards group — for example, the PCI Security Standards Council — has its own monitoring mechanism to check on compliance.

Based on this particular GDPR guideline, it will soon be the case that those who have done the homework of being PCI compliant will be in a better position to deal with EU regulators.

Certifiably GDPR

The GDPR, though, goes a step further. It leaves open a path to official certification of a controller’s data operations!

In effect, the supervising authorities have the power (through Articles 42 and 43) to certify a controller’s operations as GDPR compliant. The supervising authority itself can also accredit other standards organizations to issue these certifications.

In any case, the certifications will expire after three years at which point the company will need to re-certify.

I should add that these certifications are entirely voluntary, but there are obvious benefits for many companies. The intent is to leverage the private sector’s existing data standards, and to give companies a more practical approach to complying with the GDPR’s technical and administrative requirements.

The EDPB is also expected to develop certification marks and seals for consumers, as well as a registry of certified companies.

We’ll have to wait for more details to be published by the regulators on GDPR certification.

In the short term, companies that already have programs in place to comply with PCI DSS, ISO 27001, and other data security standards will likely be in a better position with respect to GDPR fines.

And in the very near future, a “European Data Protection Seal” might just become a sought after logo on company web sites.

Want to reduce your GDPR fines? Varonis helps support many different data security standards. Find out more!

[Podcast] Privacy Attorney Tiffany Li and AI Memory, Part II


This article is part of the series "[Podcast] Privacy Attorney Tiffany Li and AI Memory."



Tiffany C. Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties of getting AI to forget.

In this second part, we continue our discussion of GDPR and privacy, and examine ways to bridge the gap between tech and law. We then explore some cutting edge areas of intellectual property. Can AI algorithms own their creative efforts? Listen and learn.

Guidance for GDPR Right to be Forgotten

Cindy Ng

We continue our discussion with Tiffany Li, an attorney and Resident Fellow at Yale Law School’s Information Society Project. In part two, we discuss non-human creators of intellectual property and how they could potentially impact the right to be forgotten, as well as the benefits of multi-disciplinary training, where developers take a law class and lawyers take a tech class.

Andy Green

So do you think the regulators will have some more guidance specifically for the GDPR right to be forgotten?

Tiffany Li

The European regulators typically have been fairly good about providing external guidance outside of regulations and outside of decisions. Guidance documents that are non-binding have been very helpful in understanding different aspects of regulation. And I think that we will have more research done. What I would really love to see, though, is more interdisciplinary research. So one problem I think that we have in law generally, in technology law, is the sort of habit of operating in a law-and-policy-only silo. So we have the lawyers, we have the policymakers, we have the lobbyists, everyone there in a room talking about, for example, how we should protect privacy. And that’s wonderful, and I’ve been in that room many times.

But what’s missing often is someone who actually knows what that means on the technical end. For example, all the issues that I just brought up are not in that room with the lawyers and policymakers really, unless you bring in someone with a tech background, someone who works on these issues and actually knows what’s going on. So this is something that’s not just an issue with the right to be forgotten or just with EU privacy law, but really any technology law or policy issue. I think that we definitely need to bridge that gap between technologists and policymakers.

AI and Intellectual Property

Cindy Ng

Speaking of interdisciplinary, you recently wrote a really interesting paper on AI and intellectual property, and you describe the future dilemmas of what might arise in IP law specifically involving works by non-human creators. And I was wondering if you can introduce to our listeners the significance of your inquiry.

Tiffany Li

So this is a draft paper that I’ve been writing about AI and intellectual property. Specifically, I’m looking at the copyrightability of works that are created by non-human authors, which could include AI, but could also include animals, for example, or other non-human actors. Getting back to that same difference I mentioned earlier: on one hand we have an AI that is simply machine learning and super-advanced statistics, and on the other an AI that may be something close to a new type of intelligence. So my paper looks at this from two angles. First, we look at what current scholarship says about who should own creative works that are created by AI or non-humans. And here we have an interesting issue. For example, if you devise an AI system to compose music, which we’ve seen in a few different cases, the question then is who should own the copyright, or the IP rights generally, over the music that’s created?

One option is giving it to the designer of the AI system, on the theory that they created a system which is the main impetus for the work being generated in the first place. Another theory is that the person actually running the system, the person who literally flipped the switch and hit run, should own the rights, because they provided the creative spark behind the art or the creative work. Other theories exist as well. Some people say that there should be no rights to any of the work, because it doesn’t make sense to provide rights to those who are not the actual creators of the work. Others say that we should try to figure out a system for giving the AI the rights to the work. And this of course is problematic, because AI can’t own anything. And even if it could, even if we got to the world where AI is a sentient being, we don’t really know what they want. We can’t pay them. We don’t know how they would prefer to be incentivized for their creation, and so on. So a lot of these different theories don’t perfectly match up with reality.

But I think the prevailing ideas right now are either to create a contractual basis for figuring this out (for example, when you design your system, you sign a contract with whoever you sell it to that lays out all the rights neatly, so you bypass the legal issue entirely), or to think of it as a work-for-hire model. Think of the AI system as just an employee who is simply following the instructions of an employer. In that sense, for example, if you are an employee of Google and you develop something, you develop a really great product, you don’t own the product; Google owns that product, right? It’s under the work-for-hire model. So that’s one theory.

And what my research is finding is that none of these theories really makes sense, because we’re missing one crucial thing. And I think the crucial point they’re missing really goes back to the very beginnings of why we have copyright in the first place, or why we have intellectual property, which is that we want to incentivize the creation of more useful work. We want more artists, we want more musicians, and so on. So the key question then, if you look at works created by non-humans, isn’t whether we can contractually get around this issue; the key question is what we want to incentivize. Whether we want to incentivize work in general, art in general, or if for some reason we think that there’s something unique about human creation, that we want humans to continually be creating things. And those two different paradigms, I think, should be the way we look at this issue in the future. So it’s a little high level, but I think that’s an interesting distinction that we haven’t paid enough attention to yet when we think about the question of who should own intellectual property for works that are created by AI and non-humans generally.

Andy Green

If we give AIs some of these rights, then it almost conflicts with the right to be forgotten because now you would need the consent of the AI?

Tiffany Li

Sure. That’s definitely possible. We don’t know. I mean, we don’t have AI citizens yet except in Saudi Arabia.

Andy Green

I’ve heard about that, yeah.

Cindy Ng

So since we’re talking about AI citizens, if we do extend citizenship to AIs with intellectual property rights, does that mean they get other kinds of rights, such as freedom of speech and the right to vote? Or is that not the proper way to think about it? Are we treading into the territory of the science fiction movies we’ve seen, where humans and machines vie for superiority? I know we’re just playing around with ideas, but it would be really interesting to hear your insights, especially since it’s your specialty.

Tiffany Li

No problem. I mean, I’m in this field because I love playing around with those ideas. Even though I do continually mention that there is that division between the AI we have now and that futuristic sentient AI, I do think that eventually we will get there. There will be a point where we have AI that can think, for a certain definition of thinking, at least at the level of human beings. And because those intelligent systems can design themselves, it’s fairly easy to assume that they will then design even more intelligent systems. And we’ll get to that point where there will be super-intelligent AIs who are more intelligent than humans. So the question you ask then, I think, is really interesting. It’s the concept of whether we should be giving these potential future beings the same rights that we give human beings. And I think that’s interesting because it gets down to really a philosophical question, right? It’s not a question about privacy or security or even law. It’s a question of what we believe is important on a moral level, and of who we believe to be capable of either having morals or being part of a moral calculus.

So in my personal opinion, I believe if we do get to that point, if there are artificially intelligent beings who are as intelligent as humans, who we believe to be almost exactly the same as humans in every way in terms of having intelligence, being able to mimic or feel emotion, and so on, we should definitely look into expanding our definition of citizenship and fundamental rights. There is, of course, the opposite view, which is that there is something inherently unique about humanity and about life as we see it right now, biological, carbon-based life. But I think that’s a limited view, and one that doesn’t really serve us well if you consider the universe as a whole and the large expanse of time outside of just these few millennia that humans have been on this earth.

Multidisciplinary Training

Cindy Ng

And to wrap up and bring all our topics together, I want to bring it back to regulations and technology and training, and continue our playful thinking with this idea: should we require training for the developers who create technology, so that they take in principles such as the right to be forgotten and privacy by design? You even mentioned the moral obligation for developers to consider all of these elements, because what they’ll be creating will ultimately impact humans. And I wonder if they could get the kind of training that we require of doctors and lawyers, so that everyone is working from the same knowledge base. Could you see that happening? I wanted to know what your opinions are on this.

Tiffany Li

I love that mode of thought. I think that in addition to lawyers and policymakers needing to understand more from technologists, people working in tech definitely should think more about these ethical issues. And I think it’s starting — we’re seeing a trend of people in the technology community thinking about how their actions can affect the world at large. That may partially be because it’s in the mainstream news right now, because of the reaction to the last election and to ideas such as fake news and disinformation. But we see the tech industry changing, and we’re somewhat accepting the idea that maybe there should be responsibility or ethical considerations built into the role of being a technologist. What I like to think about is just the fact that regardless of whether you are a product developer, a privacy officer, or a lawyer at a tech company, for example, regardless of what role you have, every action that you take has an impact on the world at large.

And this is something that, you know, maybe gives too much moral responsibility to the day-to-day actions of most people. But if you consider that any small action within a company can affect the product, and any product can then affect all the users it reaches, you see this easy scaling up of your one action to an effect on the people around you, which can then affect maybe even larger areas and possibly the world. Which is not to say, of course, that we should live in fear of having to decide every single aspect of our lives based on its greater impact on the world. But I do think it’s important to remember, especially if you are in a role in which you’re dealing with things that might have a really direct impact on things that matter, like privacy, like free speech, like global human rights values, and so on.

I think it’s definitely important to consider ethics and technology. And if we can provide training, if we can make this part of the product design process, if we can make this part of what we expect when hiring people, sure, I think it would be great. Adding it to the curriculum — adding a tech or information ethics course to the general computer science curriculum, for example — would be great. I also think it would be great to have a tech course in the law school curriculum as well. Definitely, both sides can learn from each other. We do, in general, just need to bridge that gap.

Cindy Ng

So I just wanted to ask if there’s anything else you wanted to share that we didn’t cover? We covered so many different topics.

Tiffany Li

So I’d love to take a moment to introduce the work that I’m currently doing. I’m a Resident Fellow at Yale Law School’s Information Society Project, which is a research center dedicated to different legal issues involving the information society as we know it. I’m currently leading a new initiative called the Wikimedia and Yale Law School Initiative on Intermediaries and Information. This initiative is funded by a generous grant from the Wikimedia Foundation, which is the nonprofit that runs Wikipedia. And we’re doing some really interesting research right now on exactly what we just discussed: the role of tech companies, particularly these information intermediaries, these social media platforms, and so on.

These tech companies and their responsibilities, or their duties, towards users, towards movements, towards governments, and possibly towards the world and larger ideals. So it’s a really interesting new initiative, and I would definitely welcome feedback and ideas on these topics. If people want more information, you can head to our website at law.yale.edu/isp. And you can also follow me on Twitter @tiffanycli, that’s T-I-F-F-A-N-Y-C-L-I. I would love to hear from any of your listeners and to chat more about all of these fascinating issues.