
Reducing Classified Spillage Incidents


    Overview

    This blog post delves into the complexities of managing classified spillage incidents in digital environments, where classified information is inadvertently stored or transmitted through unaccredited systems. It outlines the procedural and technical steps necessary for cybersecurity and IT professionals to identify, contain, and remediate such incidents, emphasizing the importance of a detailed investigation to trace the spillage’s origin and scope. The post also proposes strategic measures for preventing future incidents, including the establishment of a centralized classification review process and the implementation of a designated file system for document review, aimed at streamlining the classification of documents and minimizing the risk of spillage.

    Recommendations

    • Forensically clean mailboxes, local computers, network storage, and backups to remediate classified data spillage, tailoring the cleanup process to the incident’s complexity.
    • Establish a single, secure location for storing documents awaiting classification review, using a separate file system to simplify remediation and avoid backup contamination.
    • Utilize Access Control Lists (ACLs) and a specialized script to automatically transfer documents from a general upload folder to a restricted access folder, ensuring that only authorized personnel can access sensitive information.
    • Implement robust logging and monitoring for the classification review process, including the scheduled tasks and scripts used for file transfers, to facilitate spillage investigation and system integrity.
    • Regularly review and update classification and spillage response protocols to reflect evolving best practices and technological advancements, ensuring efficient and secure handling of classified information.

    Details

    A classified spillage (commonly referred to as a spill) incident occurs when information is transmitted, processed, or stored on an information system that is not accredited to contain that level of information.  Usually, this occurs when a classified document (confidential, secret, top secret, etc.) is created, stored, or emailed on unclassified systems or networks.

    The remediation of spillage incidents typically falls upon Cyber and IT employees.  During a spillage investigation, a timeline must be established of when the classified data was introduced to the unclassified system(s), as well as how the data was introduced (the vector).  These cleanup investigations can take hours to months depending on the size and scope of the incident.  For example, a classified document may be discovered on an unclassified computer, and during the investigation it is discovered that the document was emailed via the unclassified mail system to 15 users.  As the investigation continues, it is found that the document arrived a year ago and has since been emailed to hundreds of users.  Cybersecurity and IT employees must then clean mailboxes, local computers, network storage, and backups to remediate the spill.

    Many agencies that deal with classified data have specific employees who review information to determine whether something is classified and, if it is, how to classify it.  Usually, these employees work in a classification office, and employees who generate potentially classified documents ask the classification office to review their work to determine the classification level.  Documents may be sent to the classification office via email, placed on a shared network location, or hand-carried to a classification officer.  In a perfect world, all of these requests would be done within a classified environment; however, for a variety of reasons, that is not always possible.

    To help reduce the potential of a classified spillage incident, agencies should consider having a single point where documents are maintained while awaiting classification review.  Since many networks span the country, if not the globe, hand-carrying documents to a classification officer is not always possible.  While email is convenient, it is not recommended for classification reviews because cleaning Exchange is difficult, backups become problematic, and it is far too easy to forward messages, which compounds the cleanup.  If email is hosted externally (such as with Microsoft or Google), the cleanup in a cloud environment becomes even more complex.

    To help reduce the likelihood of a spill and to reduce the impact when one does occur, it is recommended to have a single location designated for all users to upload data awaiting classification review.  To make remediation easier if a spillage occurs, this single point should be a network shared location on its own file system, separate from other network shares. For example, if an agency has a 500 TB Storage Area Network (SAN), it can carve 100 GB of space out of the SAN and give it a file system separate from the rest of the SAN.  This 100 GB would be used for classification reviews and would be excluded from all backups.  This way, IT should never have to pull and destroy backups, which risks destroying good data commingled with the spilled data.

    Once the classification review file system is created, create two folders in this file system.  One folder should be named “UPLOAD” and be shared to all authenticated users.  This will become the internal dropbox for users to upload their documents for classification review.  The second folder should be named “RESTRICTED”, which will be locked down based on NTFS file permissions to only those that need to access the folder.  In a typical environment, this would be IT administrators, Cybersecurity, and the classification office.

    To maintain the Need to Know principle, documents within the UPLOAD directory will be automatically moved to the RESTRICTED folder via a script.  Once in the RESTRICTED folder, only those with a Need to Know will be able to view them.

    A script will be needed (an example is below), and a service account should be created.  By creating a scheduled task on a server that runs under this service account, the script can run at regular intervals to move files from UPLOAD to RESTRICTED.  The service account must have read/write permissions to the RESTRICTED folder in order to move the files over.

    Create the new file system and call the volume “CLASSIFICATION.”  Within the CLASSIFICATION volume, the two new folders discussed above will reside: UPLOAD and RESTRICTED.  An example is shown below:

    [Screenshot: View of the classification folder with two shared folders]
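
    If it is helpful, the equivalent setup can be sketched from the command line.  The commands below are illustrative only and assume the new volume is mounted as drive E:; the drive letter, paths, and share grants are assumptions that should be adapted to your environment:

    REM A minimal sketch: label the dedicated volume, create the two folders, and share them.
    REM NTFS permissions are tightened separately (see the icacls sketch further below).
    label E: CLASSIFICATION
    mkdir E:\UPLOAD
    mkdir E:\RESTRICTED
    net share UPLOAD=E:\UPLOAD /GRANT:"Authenticated Users",CHANGE
    net share RESTRICTED=E:\RESTRICTED /GRANT:"Administrators",FULL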

    The RESTRICTED folder permissions should be set so administrators have “full control” and the classification reviewers have “modify” permissions.  Preferably, the classification reviewers within the classification office should have their own Active Directory (AD) group so that only the group needs to be added.

    The UPLOAD directory needs special permissions to ensure the Need to Know is preserved.  IT administrators and Cybersecurity should have full control, the service account should have full control, and Authenticated Users need special permissions.  The special permissions allow users to delete individual files, but not folders.  This way, they cannot delete another user’s entire folder upload, but they can delete individual files that they place in the UPLOAD directory. The permissions for Authenticated Users are shown below:

    [Screenshot: File permissions for Authenticated Users on the UPLOAD directory]
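
    One possible way to script these permissions is with icacls.  The commands below are a sketch, not the exact ACLs from the screenshots: the {domain} placeholder, the reviewer group name, and the service account name are assumptions, and the rights should be verified against your environment:

    REM A hedged sketch of the NTFS permissions; group and account names are illustrative.
    REM RESTRICTED: administrators and the service account get Full Control; classification reviewers get Modify.
    icacls E:\RESTRICTED /inheritance:r /grant "BUILTIN\Administrators:(OI)(CI)F" "{domain}\Classification Reviewers:(OI)(CI)M" "{domain}\{service account}:(OI)(CI)F"
    REM UPLOAD: authenticated users can list, read, and add files; Delete is granted on files only (OI)(IO),
    REM so users can delete individual files but cannot delete folders.
    icacls E:\UPLOAD /inheritance:r /grant "BUILTIN\Administrators:(OI)(CI)F" "{domain}\{service account}:(OI)(CI)F" "Authenticated Users:(OI)(CI)RX" "Authenticated Users:(OI)(CI)(WD,AD)" "Authenticated Users:(OI)(IO)(D)"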

    Now that the folders are in place and the permissions are set, the script can be created using the service account discussed above.  An example script using Microsoft’s Robocopy is below:


    @echo off

    REM This script moves files from the \\{name of volume}\classification\Upload directory to the \\{name of volume}\classification\Restricted directory.
    REM The Upload directory has read/write permissions for all authenticated users. The Restricted directory is available to the classification office and CIRT.
    REM The service account running this script is {name of service account}
    REM Script written by Josh Moulin – Cyber Security xxx-xxx-xxxx

    REM ### START SCRIPT ###

    REM ### SETS DATE AND TIME VARIABLES FOR A UNIQUELY NAMED FOLDER TO ELIMINATE DUPLICATION ISSUES ###
    for /f "delims=" %%a in ('wmic OS Get localdatetime ^| find "."') do set dt=%%a
    set YYYY=%dt:~0,4%
    set MM=%dt:~4,2%
    set DD=%dt:~6,2%
    set HH=%dt:~8,2%
    set Min=%dt:~10,2%
    set Sec=%dt:~12,2%
    set stamp=%YYYY%-%MM%-%DD%_%HH%%Min%%Sec%

    REM ### MOVES ALL FILES AND FOLDERS FROM THE UPLOAD DIRECTORY TO A DATE-AND-TIME-STAMPED FOLDER IN THE RESTRICTED DIRECTORY ###
    Robocopy \\{name of volume}\classification\Upload\ \\{name of volume}\classification\Restricted\%stamp% /s /MOV /xa:SH /xf Thumbs.db /r:5 /w:30 /LOG+:\\{servername}\classification_transfer\log.txt

    REM ### REMOVES ANYTHING LEFT IN THE UPLOAD DIRECTORY AFTER THE TRANSFER ###
    del /f /s /q /a:sh \\{name of volume}\classification\Upload\
    for /D %%I in ("\\{name of volume}\classification\Upload\*") do rmdir /s /q "%%I"
    del /q \\{name of volume}\classification\Upload\*

    REM ### REMOVES THE EMPTY FOLDER CREATED IN RESTRICTED WHEN THE SCRIPT RUNS BUT HAS NOTHING TO TRANSFER ###
    for /f "usebackq delims=" %%d in (`"dir /ad /b /s "\\{name of volume}\classification\Restricted\" | sort /R"`) do rd "%%d"

    Microsoft’s Robust File Copy (Robocopy) is used to transfer the files. The options are as follows:

    /s – Copies all subdirectories except empty directories. This is done just in case the permissions change on the Upload folder and users are allowed to copy a folder into the directory.

    /MOV – This moves all files and then deletes the files after the confirmation of the move.

    /xa:SH – This moves all files EXCEPT those with the hidden and system file attributes. This prevents some malware from being moved and also avoids copying system files like Thumbs.db and other hidden files that can be difficult to delete.

    /xf Thumbs.db – This prohibits the transfer of any file named Thumbs.db. The Thumbs.db file is created by default by the Windows operating system in folders containing certain media, such as PowerPoint files, videos, and pictures. There is no reason to copy these files over.

    /r:5 – This instructs robocopy to retry failed copies 5 times before failing.

    /w:30 – This is the wait time (30 seconds) between each of the 5 retries above.

    /LOG+ – This turns on logging and appends output to the log file at \\{servername}\classification_transfer\log.txt.

    Del /f /s /q /a:sh – This runs the delete command again to ensure everything is deleted from the upload share. The /f forces the deletion of read-only files, the /s deletes matching files from all subdirectories, the /q runs the command in quiet mode, and the /a:sh deletes files with the system and hidden attributes.

    For additional information on the robocopy command, see this link:  https://technet.microsoft.com/en-us/library/cc733145.aspx.  For further information on the Del command, see this link:  https://technet.microsoft.com/en-us/library/cc771049.aspx.

    When this script executes, it will create a new folder within the \\{name of volume}\classification\Restricted folder that is named by the year, month, day, hour, minute, and second (e.g., 2015-08-20_113245). This naming convention was selected to eliminate the chance of duplicate folder names. Inside this new folder will be anything that was in the Upload folder at the time the script ran.  A screenshot is below:

    [Screenshot: Example of folders created by the script]

    Create the scheduled task using the service account.  It is recommended to have the scheduled task run every 30 minutes; however, it can run more or less frequently depending on your environment.  One thing to consider is that if the scheduled task runs while a user is uploading files, it may only transfer part of the upload.  By having the script run every 30 to 60 minutes, this potential is reduced, as most uploads will not take that long at normal network speeds.  The settings of the scheduled task should be as follows:

    [Screenshot: Suggested settings for the scheduled task]

    The scheduled task should be set up to run the batch script created above (the script using Robocopy):

    [Screenshot: The scheduled task launches the .bat file containing the Robocopy script above]
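
    For reference, a task with these settings can also be created from the command line.  This is a sketch that assumes the batch file is saved as C:\Scripts\classification_move.bat; the task name, path, and account placeholder are illustrative:

    REM A minimal sketch: run the move script every 30 minutes under the service account.
    REM The /RP * switch causes schtasks to prompt for the service account password.
    schtasks /Create /TN "Classification Transfer" /TR "C:\Scripts\classification_move.bat" /SC MINUTE /MO 30 /RU "{domain}\{service account}" /RP * /RL HIGHEST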

    Part of the script enables logging.  It is suggested to have the log file written to the system that runs the scheduled task.  This will assist in the event of a spill and make it easy to monitor the successes and failures of the script.  The script will continue to append to the log, which could also be sent to another network share if desired, or consumed by a SIEM such as Splunk:

    [Screenshot: Logging from the Robocopy script]

    It is also recommended to monitor the system running the scheduled task to ensure the task starts successfully.  If the service account is locked out or the system is down, the transfer process will stop working.  By monitoring the Windows event logs on the system running the script and checking for failures, alerts can be sent to the necessary individuals.
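
    As a starting point for that monitoring, the checks below are a sketch; the task name matches the example above, and the Task Scheduler operational log must be enabled for the event query to return results:

    REM A hedged sketch: show the task's status and last run result (a last run result of 0 means success).
    schtasks /Query /TN "Classification Transfer" /V /FO LIST
    REM Query the Task Scheduler operational log for task launch and action failures (event IDs 101 and 103).
    wevtutil qe Microsoft-Windows-TaskScheduler/Operational /q:"*[System[(EventID=101 or EventID=103)]]" /c:10 /rd:true /f:text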

    Implementing the above can be done within a few hours and can potentially save an agency thousands of dollars in man-hours of spillage cleanup, or fines for non-compliance.  Additionally, this process provides employees with a single, consistent approach to requesting classification reviews; reduces the potential for classified information to be spilled onto unclassified systems or backed up; reduces the exposure and disclosure of classified information to those who do not have a Need to Know; and reduces the complexity of cleanups.


    Josh Moulin

    Josh Moulin has been in the cybersecurity field for over two decades and worked in a variety of roles. He is the founder and principal of Natsar, a cybersecurity company in New York, USA. Previously, he has served in roles including the Senior VP of Operations at the Center for Internet Security (CIS), commander of an FBI cybercrimes task force, director of an ASCLD/LAB accredited digital forensics lab, Chief Information Officer (CIO) and Chief Information Security Officer (CISO) of a national security program within the United States nuclear weapons enterprise, and an Executive Partner at Gartner, the world’s largest research and advisory company. Josh is considered an expert in cybersecurity, risk management, and organizational leadership and frequently engages with companies around the world on these and other topics. He has a Master of Science Degree in Information Security Assurance and the following certifications: CAWFE, CEH, CFCE, CHFI, CISSP, CNDA, DFCP, GCFA, GCFR, GCIA, GIME, and GSEC.
