Setting up network file sharing is one of those core IT practices that every Windows admin knows about and has implemented as part of their daily work. The basic mechanics of this have not dramatically changed since Windows Server 2003 and are relatively straightforward. However, after configuring the resource shares and the individual NTFS permissions for each folder, admins sometimes lose sight of the big picture as they handle daily permission requests on an ad-hoc basis.
Over time, as permissions are added to folders, the result is that permissions are set too broadly—to the delight of hackers and internal data thieves. The key reason is that admins and IT are generally not equipped to keep track of the current roles of workers, organizational changes that shift group authorizations, and job terminations—three of the most common occurrences that impact user access to file content.
It’s not for lack of focus or commitment on the part of IT, but simply that it’s hard to visualize and understand the mappings between users and their file permissions. This is often the result of complex permission hierarchies that make it difficult for IT staff to work this out quickly on their own without help from software automation.
Admins, of course, can review file activity records to see who is actually accessing files, and then decide whether those users should have access. As a rule, though, most companies don't set up file auditing (it's a resource hog), and even when it's enabled for a short period, the resulting logs can overwhelm admins' ability to parse the trails and come up with appropriate follow-up actions. However, there is a way out of this permission trap. In this post, we'll explore a four-step strategy that will make it far easier for IT admins to manage file sharing and folder permissions.
Rather than working on an ad-hoc basis, it's important for admins to have a foundational policy, and the simpler the better. Experts recommend thinking about folder permissions as having three states:
- Directly applied permissions — every access control entry is applied directly to the folder's access control list (ACL)
- Inherited permissions — permissions are inherited from the parent directory
- Hybrid — both directly applied and inherited permissions
When looking at your current implementation, work out which of the above states the folders you're interested in taming are currently in. Don't be surprised to find many folders in a hybrid state; it's not at all unusual. However, your goal should be to eliminate the hybrids and move toward a two-state or binary model: each folder should either inherit all of its permissions or none of them. The next step is to standardize your existing group permissions.
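To make the three states concrete, here's a minimal Python sketch that classifies a folder's ACL. The `Ace` class and the sample principals are hypothetical illustrations of the NTFS inheritance flag, not a real Windows API:

```python
from dataclasses import dataclass

@dataclass
class Ace:
    """One access control entry; `inherited` mirrors the NTFS inheritance flag."""
    principal: str
    rights: str
    inherited: bool

def acl_state(aces: list[Ace]) -> str:
    """Classify a folder's ACL as 'inherited', 'direct', or 'hybrid'."""
    if not aces:
        return "direct"  # empty ACL: nothing is inherited
    flags = {ace.inherited for ace in aces}
    if flags == {True}:
        return "inherited"
    if flags == {False}:
        return "direct"
    return "hybrid"

# A hybrid folder: one entry inherited from the parent, one applied directly.
folder = [
    Ace("DOMAIN\\Sales-RO", "Read", inherited=True),
    Ace("DOMAIN\\jsmith", "Modify", inherited=False),
]
print(acl_state(folder))  # → hybrid
```

In the binary model the goal is for `acl_state` to return only "inherited" or "direct" across your tree, never "hybrid".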
It's worth pointing out that you should use only group permissions; they are far easier to manage than individual permissions. Is it acceptable to have a group of only one member? Yes: the group will likely grow eventually, and you'll have established a policy that carries forward.
Here again a simple binary group policy is better: place users into either a read group or a read-write group. Of course, there should also be a separate administrative group, but 99% of users will fall into one of those two groups. One of the reasons it's hard to work out the actual permissions on a specific folder is that you have most likely nested groups inside other groups. Our advice is to avoid nesting where you can: it's better to assign a domain local or universal group to the ACL and add users directly to that group. In some cases, nested groups may be best (following Microsoft's recommended AGLP strategy), especially when there's an existing group that contains the right users and will be maintained by a group owner.
Over the years, there's been some confusion about how to handle the combination of NTFS permissions and Windows sharing permissions. Experts agree it's best to standardize share permissions and use the NTFS permissions to granularly manage access. For example, you'll want to set sharing permissions so that shares are accessible to all authenticated users, and then use the NTFS permissions to determine on a more granular basis who has access (whether over the network or directly on the server). As with groups, it's best to avoid 'nested shares'; ultimately they just introduce unnecessary complexity.
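One way to reason about the share-plus-NTFS combination is that a user's effective access over the network is the more restrictive of the two. A toy Python model makes the point; note that the linear `LEVELS` scale is a deliberate simplification of the real Windows permission lattice:

```python
# Rank access levels from most to least restrictive. Effective network
# access is the more restrictive of the share permission and the NTFS
# permission on the folder.
LEVELS = ["none", "read", "change", "full"]

def effective_access(share_perm: str, ntfs_perm: str) -> str:
    """Return the more restrictive of the two permission levels."""
    return LEVELS[min(LEVELS.index(share_perm), LEVELS.index(ntfs_perm))]

# Share opened wide for Authenticated Users; NTFS does the real gating.
print(effective_access("full", "read"))   # → read
print(effective_access("full", "none"))   # → none
```

This is why standardizing the share permission to one broad setting costs nothing: the NTFS side still determines what each user can actually do.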
The final element is to set up traverse permissions correctly for the shares. For example, if you’re trying to give someone access to a folder that’s several levels below a share, they’ll need traverse permissions all the way down the tree. Rather than trying to do that manually, it’s better to use an automated solution that keeps track of these and sets them correctly.
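As a sketch of what such a tool has to compute, the following standard-library Python snippet lists every intermediate folder that needs at least Traverse Folder permission; the server and share names are made up:

```python
from pathlib import PureWindowsPath

def traverse_chain(share_root: str, target_file: str) -> list[str]:
    """Every folder from the share root down to the target's parent;
    each one needs at least Traverse Folder permission for the user."""
    root = PureWindowsPath(share_root)
    folder = PureWindowsPath(target_file).parent
    if folder != root and root not in folder.parents:
        raise ValueError("target is not under the share root")
    chain = []
    while folder != root:
        chain.append(str(folder))
        folder = folder.parent
    chain.append(str(root))
    return list(reversed(chain))

for f in traverse_chain(r"\\fs01\dept", r"\\fs01\dept\finance\2024\q3\report.xlsx"):
    print(f)
```

Granting access three levels down thus implies at least three traverse entries above it, which is exactly the bookkeeping that's tedious to do by hand.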
With the permissions now squared away, can we simplify the actual structure of the shared areas? The answer IT experts give is, again, to take a simple binary approach: use large departmental or divisional shares, and then use specific project shares to allow employees from different departments to work together on an as-needed basis.
Part of the reason that data permissions are set too broadly is that IT can often only guess at whether a user is truly authorized to access content. So admins will err on the side of inclusiveness. A better approach is for IT to work more closely with the data owners—the users, generally managers, from the business side who know the context about the data, and are best positioned in the organization to say who should have access.
IT should initiate an initial entitlement review process with the data owners. This involves the owners reviewing who currently has access to a folder (typically by reviewing current group structures and possibly audit logs) and then deciding whether to remove users from a group. For IT, this is often a complex process, especially tracing users to groups, so automated solutions will make it easier.
It's important to keep in mind that entitlement reviews are not a one-time fix; they need to be performed continually to keep pace with changing user roles. As an example, it's common for some users to be given temporary access to project folders, perhaps because they were hired as short-term consultants or are employees assigned to a group on an as-needed basis. When the project is finished, access should be revoked.
Unfortunately, managers often forget to contact IT or assume that IT will remove access for them. These kinds of changes fall through the cracks and lead to permissions that don't reflect the current organizational structure and are ultimately broader than necessary. But with regular entitlement reviews, perhaps on a quarterly basis, these lapses can be caught and corrected by the owners.
There’s still more work for IT to do after setting up the folder access policies and engaging in periodic entitlement reviews. They also should be continuously monitoring shared folders. Why? Making a resource available on the network is a great way to boost collaboration between employees, but this also comes with security obligations.
With data breaches now a common occurrence, IT staff should be analyzing network file activity for signs that outside hackers or malware have taken over the credentials of internal users, or that internal users may be up to no good. In other words, IT should be reviewing file access activity with an eye toward unusual patterns: for example, spikes in activity, permission changes to existing folders, and sensitive content receiving above-average view counts. Here again, automation, especially real-time alerting, is a far better way to implement monitoring than manually reviewing logs.
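A real-time alerting product does this continuously, but the core idea behind spike detection can be sketched in a few lines of Python: flag a day whose event count sits far above a user's historical baseline. The numbers and the three-sigma threshold are illustrative, not a recommendation:

```python
from statistics import mean, stdev

def is_spike(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's event count if it sits more than `threshold` standard
    deviations above the historical mean. A crude stand-in for the
    real-time analytics a commercial tool would provide."""
    mu, sigma = mean(history), stdev(history)
    # Floor sigma at 1.0 so a perfectly flat baseline doesn't alert on noise.
    return today > mu + threshold * max(sigma, 1.0)

baseline = [40, 55, 48, 52, 45, 50, 47]  # files touched per day, last week
print(is_spike(baseline, 51))    # → False (a normal day)
print(is_spike(baseline, 500))   # → True  (possible exfiltration)
```

In practice the baseline would be per user and per share, and a flagged day would feed an alert rather than a print statement.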
On a more operational level, IT should also analyze share activity as a way to tighten up permissions (for example, finding users and groups whose folder access permissions are never used) and to spot whether sensitive data is accessible to, or being viewed by, unauthorized employees. The results of this analysis can then be brought up during entitlement reviews to help tighten access.
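Finding never-used access is essentially a set difference between what the ACL grants and what the audit trail shows. A tiny Python sketch, with made-up group names:

```python
# Groups granted access on a folder's ACL (from a permissions report).
granted = {"Finance-RW", "Finance-RO", "Legacy-Projects"}

# Groups whose members actually opened files there (from the audit trail).
observed = {"Finance-RW", "Finance-RO"}

# Never-exercised grants: candidates to revoke at the next entitlement review.
unused = granted - observed
print(unused)  # → {'Legacy-Projects'}
```

Unused grants are the cheapest permissions to remove, since by definition nobody will notice they're gone.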
While it's natural for IT to be busy setting up network file shares and managing existing ones, lifecycle issues can sometimes be pushed into the background. Remember: all data has a lifespan, and the older the content gets, the less relevant it becomes. So IT should also have data retention policies in place. This is not just a matter of saving disk space by removing and archiving stale data; it also has data security implications.
There’s an approach to data security known as privacy by design, which has had a strong influence on data compliance—both industry standards as well as legal regulations. One of the ideas in privacy by design is that companies should minimize the data they collect and then set retention limits for files and folders. The security advantage of putting a shelf life on data is that there would be less for thieves to steal. This is a basic defensive strategy, but an effective one.
To help put some bite into retention limits, IT pros suggest charging users for storage on a per-byte basis. If department heads or group managers then don't want to pay for their slice of shared storage out of their budgets, IT can remove the data or copy it to secondary storage.
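A per-byte chargeback can be as simple as a rate applied to consumed storage; the rate below is purely hypothetical:

```python
RATE_PER_GB_MONTH = 0.12  # hypothetical internal chargeback rate, USD

def monthly_charge(bytes_used: int) -> float:
    """Convert a department's consumed bytes into a monthly storage bill."""
    return round(bytes_used / 1024**3 * RATE_PER_GB_MONTH, 2)

print(monthly_charge(500 * 1024**3))  # → 60.0
```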
To start you thinking about a retention policy, we list below a few factors that should be taken into account:
- Determine the age at which each type of data that has not been accessed would be considered stale – 1 year? 2 years? 5 years?
- Implement a solution that can identify where stale data is located based on actual usage (not just file timestamps)
- Automate the classification of data based on content, activity, accessibility, data sensitivity and data owner involvement
- Automatically archive or delete data that meets your retention guidelines
- Automatically migrate data that is stale but contains sensitive information to a secure folder or archive with access limited to only those people who need to have access (e.g. the General Counsel)
- Make sure your solution can provide evidence (e.g. reports) of your defensible data retention and disposal policy
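The first two bullets above can be sketched together: given last-access times gathered from an activity audit trail (the inventory below is fabricated), select the paths older than the staleness cutoff:

```python
import time

DAY = 86400  # seconds per day

# Hypothetical inventory: path -> last access time, drawn from an activity
# audit trail rather than file timestamps (which are easily reset).
last_access = {
    "finance/2024/q3.xlsx": time.time() - 30 * DAY,
    "finance/2019/budget.xlsx": time.time() - 900 * DAY,
}

def stale(inventory: dict[str, float], max_age_days: int = 365) -> list[str]:
    """Paths not accessed within the retention window: candidates for
    archiving, deletion, or migration to a restricted archive."""
    cutoff = time.time() - max_age_days * DAY
    return [path for path, ts in inventory.items() if ts < cutoff]

print(stale(last_access))  # → ['finance/2019/budget.xlsx']
```

A real solution would add the classification and reporting steps from the list above; this only covers identification.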
Network file sharing is an essential service in any organization and the starting point for implementing collaborative solutions. However, shared content also comes with its own administrative and security overhead. Overall, IT should have policies in place for file sharing that encompass the ideas in this paper. We've discussed a basic model for folder permissions and groups, but your organization may evolve its own strategies; mileage may vary. Even with the simplest policies, though, the complexity of managing folder access rights for more than a few users requires automation to ensure the policies are effectively enforced.
David Gibson has more than 20 years of technology and marketing experience. He frequently speaks about cybersecurity and technology best practices at industry conferences, and has been quoted in The New York Times, USA Today, The Washington Post and numerous security news sources.