FileFly 4.0 Administration Guide

version 4.0 | document revision 1

1 Overview

1.1 Introduction

DataCore FileFly provides policy-based file tiering for Windows files and SMB shares. It automates and manages the movement of files from primary storage locations to lower-cost object storage residing on-premises or in public clouds. The source locations may be Windows File Servers and SMB file shares exported by NetApp and Isilon NAS devices. The destinations include DataCore Swarm Object Storage as well as public cloud storage providers adhering to the S3 protocol, such as Amazon S3, Microsoft Azure, Google Cloud Storage, Wasabi Cloud and others.
Files are migrated from primary storage locations to the object store. Files are demigrated transparently when accessed by a user or application. FileFly also provides a range of Disaster Recovery options.

1.1.0.1 What is Migration?

From a technical perspective, file migration can be summarized as follows: first, the file content and corresponding metadata are copied to secondary storage as an MWI file/object. Next, the original file is marked as a 'stub' and truncated to zero physical size (while retaining the original logical size for the benefit of users and the correct operation of applications). The resulting stub file remains on primary storage in this state until such time as a user or application requests access to the file content, at which point the data is automatically returned to primary storage.
Each stub encapsulates the location of the corresponding MWI data on secondary storage, without the need for a database or other centralized component.
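As a simple illustration of the stub state, a migrated file's logical size remains unchanged while its physical allocation drops to zero. On recent Windows versions this can be checked from an elevated PowerShell session (the path is purely illustrative):

(Get-Item 'D:\data\report.docx').Length    # logical size, unchanged after migration
fsutil file layout D:\data\report.docx     # NTFS layout, showing the reduced physical allocation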

1.2 Conventions used in this Book

References to labels, values and literals in the software are in 'quoted italics'.
References to actions, such as clicking buttons, are in bold.
References to commands and text typed in are in fixed font.
Notes are denoted: Note: This is a note.
Important notes are denoted: Important: Important point here.

1.3 System Components

Figure 1.1 provides an overview of a FileFly deployment. All communication between FileFly components is secured with Transport Layer Security (TLS). The individual components are described below.

 
Figure 1.1: FileFly System Overview

1.3.0.1 DataCore FileFly Admin Portal

FileFly Admin Portal is the system's policy manager. It provides a centralized web-based configuration interface, and is responsible for task scheduling, server monitoring and file reporting. It lies outside the data path for file transfers.

1.3.0.2 DataCore FileFly Agent

DataCore FileFly Agent performs file operations as directed by Admin Portal Policies. The FileFly Agent is also responsible for retrieving file data from secondary storage upon user / application access.
File operations include migration and demigration, as well as a range of operations to assist disaster recovery. Data is streamed directly between agents and storage without any intermediary staging on disk.
When installed in a Gateway configuration, FileFly Agent does not allow migration of files from that server.
Optionally, Gateways can be configured for High-Availability (HA).

1.3.0.3 DataCore FileFly FPolicy Server

FileFly FPolicy Server provides migration support for NetApp filers using the NetApp FPolicy protocol. This component is the equivalent of DataCore FileFly Agent for NetApp filers.
FileFly FPolicy Server may also be configured for High-Availability (HA).

1.3.0.4 DataCore FileFly LinkConnect Server

FileFly LinkConnect Server provides link-based migration support for Dell EMC OneFS shares, and offers an alternative method for migrating files from Windows Server volumes when an agent cannot be installed directly on the file server. This component performs a similar role to DataCore FileFly Agent for SMB shares.
FileFly LinkConnect Server may also be configured for High-Availability (HA).

1.3.0.5 DataCore FileFly DrTool

DataCore FileFly DrTool is an additional application that assists in Disaster Recovery.
Note: This functionality is not included with Community Edition licenses.

1.4 FileFly Admin Portal Concepts

DataCore FileFly Admin Portal is the web-based interface that provides central management of a FileFly deployment. It is installed as part of the FileFly Tools package.
When you enter the FileFly Admin Portal, the 'Dashboard' is displayed – see §1.5 for more detail. The remainder of this section follows the Admin Portal's navigation menu.

1.4.1 Servers

The 'Servers' page displays the installed and activated agents across the FileFly deployment. Health information and statistics are provided for each server or cluster node. Use this page when activating the other components in your system.
Click a Server's ellipsis control to:

  • view additional server information

  • configure storage plugins

  • add / retire / restart cluster nodes

  • upgrade a standalone server to high-availability

  • view detailed charts of recent activity

  • edit server-specific configuration (see Appendix E)

1.4.2 Sources

Sources describe volumes or folders to which Policies may be applied (e.g., locations on the network from which files may be Migrated).
A Source location is specified by a URI. Platform-specific information for all supported sources is detailed in Chapter 5. A filesystem browser is provided to assist in setting the URI location interactively.

1.4.2.1 Subdirectory Filtering

Within a given Source, individual directory subtrees may be included or excluded to provide greater control over which files are eligible for policy operations. Excluded directories are not traversed.
In the Source editor, the directory tree may be expanded and explored in the 'Subdirectory Filtering' section. By default, the entire source is included.

1.4.3 Destinations

Destinations are storage locations that Policies may write files to (e.g., locations on the network to which files are Migrated). Platform-specific information for all supported destinations is detailed in Chapter 5.
Optionally, a Destination may be configured to use Write Once Read Many (WORM) semantics for migration operations. No attempt is made thereafter to update the resultant secondary storage objects. This option is useful when the underlying storage device has WORM-like behavior, but is exposed using a generic protocol.

1.4.4 Rules

Rules allow a specific subset of files within a Source or Sources to be selected for processing.
Rules can match a variety of metadata: filename / pathname, size, timestamps / age, file owner, and attribute flags. A rule matches if all of its specified criteria match the file's metadata. However, rules can be negated or compounded as necessary to perform more complex matches.
You will be able to simulate your Rules against your Sources during Policy creation.
Some criteria are specified as comma-separated lists of patterns:

  • wildcard patterns, e.g. *.doc (see Appendix A.1)

  • regular expressions, e.g. /2004-06-[0-9][0-9]\.log/ (see Appendix A.2)

Note that:

  • files match if any one of the patterns in the list match

  • whitespace before and after each pattern is ignored

  • patterns are case-insensitive

  • filename patterns starting with '/' match the path from the point specified by the Source URI

  • filename patterns NOT starting with '/' match files in any subtree

  • literal commas within a pattern must be escaped with a backslash
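For example, the following wildcard pattern list (illustrative values) combines these behaviors:

*.tmp, /archive/*.bak, reports\,2019.xls

This matches .tmp files in any subtree, .bak files directly within the archive directory at the top of the Source, and files literally named 'reports,2019.xls' anywhere (note the escaped comma).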

1.4.5 Policies

A Policy specifies an operation to perform on a set of files. Depending on the type of operation, a Policy will specify Source(s) and/or Destination(s), and possibly Rules to limit the Policy to a subset of files.
Each operation has different parameters, refer to Chapter 4 for a full reference.

1.4.6 Tasks

A Task schedules one or more Policies for execution. Tasks can be scheduled to run at specific times, or can be run on-demand using the Quick Run control on the 'Dashboard'.
While a Task is running, its status is displayed in the 'Running Tasks' panel of the 'Dashboard'. When Tasks finish they are moved to the 'Recent Tasks' panel.
Operation statistics are updated in real-time as the task runs. Operations are automatically executed in parallel; see Appendix E for more details.
If multiple Tasks are scheduled to start simultaneously, Policies on each Source are grouped such that only a single traversal of each file system is required.

1.4.6.1 Completion Notification

When a Task finishes running, regardless of whether it succeeds or fails, a completion notification email may be sent as a convenience to the administrator. This notification email contains summary information similar to that available in the 'Recent Tasks' panel on the 'Dashboard'.
To use this feature, either:

  • check the 'Notify completion' option when configuring the Task, or

  • click the notify icon on a running task on the 'Dashboard'

1.4.7 Reports

Reports – generated by Gather Statistics Policies – contain charts detailing:

  • a 30-day review of access and change activity

  • a long-term trend chart to assist with planning migration strategy

  • a breakdown of the most common file types

  • optionally, a breakdown of file ownership

1.4.8 Recovery

The 'Recovery' page provides access to multiple versions of the recovery files produced by each Create Recovery File From Source/Destination Policy. Retention options may be adjusted in 'Settings'.
Refer to Chapter 6 for more information on performing recovery operations.

1.4.9 Settings

The FileFly Admin Portal 'Settings' page allows the configuration of a wide range of global settings including:

  • email notification

  • configuration backup (see §3.3)

  • work hours

  • Admin Portal logging

  • user interface language selection

It is also possible to suspend the scheduler, to prevent scheduled Tasks starting while maintenance procedures are being performed.
Server-specific settings and plugin configuration are available on the 'Servers' page.

1.4.10 Help

The 'Help' page provides version information, as well as links to documentation and support resources. You may also view the global log, or generate a system diagnostic file (support.zip) for use when contacting DataCore Support.

1.5 FileFly Admin Portal Dashboard

The 'Dashboard' provides a concise view of the FileFly system status, current activity, and recent task history. It may also be used to run Tasks on-demand using the Quick Run control.
The 'Notices' panel, displayed on the expandable graph bar, summarizes system issues that need to be addressed by the administrator. This panel will guide you through initial setup tasks such as license installation.
The circular 'Servers' display shows high-level health information for the servers/clusters in the FileFly deployment.

1.5.1 Storage Charts

'Primary' and 'Secondary' storage charts may be read together to gain insight into the impact of currently configured migration policies on primary and secondary storage consumption over time. Each bar indicates an amount of storage space consumed or released. Consumed storage is indicated by a positive bar, while released storage is shown in the negative. Stacked bars indicate the contributions of the different operations by color.
A Migration Policy consumes secondary storage in order to release primary storage.
Demigration consumes primary storage immediately, but defers release until later. Either the primary storage is released by a Quick-Remigrate, or the associated secondary storage is released by a Scrub.
In a complex environment, these charts provide insight into patterns of user-behavior and policy activity.
Click on a bar to zoom in to an hourly breakdown for the chosen day.

1.5.2 Other Charts

The 'Processed' line chart graphs both the rate of operations successfully performed and data processed over time. Data transfer and bytes Quick-Remigrated (i.e. without any transfer required) are shown separately.
The 'Operations' breakdown chart shows successful activity by operation type across the whole system over time. Additionally, per-server operations charts are available using the 'Servers' page – see §1.4.1.
The 'Operations' radar chart shows a visual representation of the relative operation profile across your deployment. Two figures are drawn, one for each of the two preceding 7-day periods. This allows behavioral change from week to week to be seen at a glance.

1.5.3 Task Control & History

Per-file operation details (including any error messages) may be viewed by clicking a Task's log icon. It is also possible to start and stop Tasks, update task configuration, or request a completion notification for a task that is already in progress.

2 Deployment

Refer to these instructions during initial deployment and when adding new components. For upgrade instructions, please refer to §3.7 instead.
For further information about each supported storage platform, refer to Chapter 5.

2.1 Installing FileFly Tools

The DataCore FileFly Tools package consists of the FileFly Admin Portal and the FileFly DrTool application (not licensed for Community Edition users). FileFly Tools must be installed before any other components.

2.1.0.1 System Requirements

  • A dedicated server with a supported operating system:

    • Windows Server 2019

    • Windows Server 2016

    • Windows Server 2012 R2

    • Windows Server 2012

  • Minimum 4GB RAM

  • Minimum 2GB disk space for log files

  • Active clock synchronization (e.g. using NTP)

2.1.0.2 Setup

  1. Run DataCore FileFly Tools.exe

  2. Follow the instructions on screen

After completing the installation process, FileFly Tools must be configured using the Admin Portal web interface. The FileFly Admin Portal will be opened automatically and can be found later using the Start Menu.
The web interface will lead you through the process of initial configuration: refer to the 'Notices' panel on the 'Dashboard' to verify all steps are completed.

2.2 Installing FileFly Agents

Each FileFly Agent server may fulfill one of two roles, selected at installation time.
In the 'FileFly Agent for migration' role, an agent assists the operating system in migrating and demigrating files. The agent must be installed on every machine from which files will be migrated.
In the 'FileFly Gateway agent' role, an agent provides access to external devices and storage services. While it does allow access to local disk and mounted SAN volumes, it does not provide local migration source support. Storage plugins will normally be deployed on Gateways.

2.2.1 High-Availability Gateway Configuration

A high-availability gateway configuration is recommended. Such FileFly Gateways must be activated as 'High-Availability FileFly Gateways'.

2.2.1.1 High-Availability Gateway DNS Setup

At least two FileFly Gateways are required for High-Availability.

  1. Add each FileFly Gateway server to DNS

  2. Create an FQDN that resolves to all of the IP addresses

  3. Use this FQDN when activating the HA Servers

  4. Use this FQDN (or a CNAME alias to it) in FileFly Destination URIs

Example:

  • gw-1.example.com → 192.168.0.1

  • gw-2.example.com → 192.168.0.2

  • gw.example.com → 192.168.0.1, 192.168.0.2
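On a Windows DNS server, the records above could be created with the DnsServer PowerShell module (a sketch using the illustrative zone and addresses from the example):

Add-DnsServerResourceRecordA -ZoneName 'example.com' -Name 'gw-1' -IPv4Address 192.168.0.1
Add-DnsServerResourceRecordA -ZoneName 'example.com' -Name 'gw-2' -IPv4Address 192.168.0.2
Add-DnsServerResourceRecordA -ZoneName 'example.com' -Name 'gw' -IPv4Address 192.168.0.1
Add-DnsServerResourceRecordA -ZoneName 'example.com' -Name 'gw' -IPv4Address 192.168.0.2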

Note: The servers that form the High-Availability Gateway cluster must NOT be members of a Windows failover cluster.

2.2.2 DataCore FileFly Agent for Windows Servers

2.2.2.1 System Requirements

  • Supported Windows Server operating system:

    • Windows Server 2019

    • Windows Server 2016

    • Windows Server 2012 R2

    • Windows Server 2012

  • Minimum 4GB RAM

  • Minimum 2GB disk space for log files

  • Active clock synchronization (e.g. using NTP)

Note: When installed in the Gateway role, a dedicated server is required, unless it is to be co-located on the FileFly Tools server. When co-locating, create separate DNS aliases to refer to the Gateway and the FileFly Admin Portal web interface.

2.2.2.2 Setup

  1. Run the DataCore FileFly Agent.exe

  2. Follow the instructions to activate the agent using FileFly Admin Portal

2.2.3 DataCore FileFly FPolicy Server for NetApp Filers

A DataCore FileFly FPolicy Server provides migration support for one or more NetApp Filers through the FPolicy protocol. This component is the equivalent of DataCore FileFly Agent for NetApp Filers. Typically FileFly FPolicy Servers are installed in a high-availability configuration.

2.2.3.1 System Requirements

  • A dedicated server with a supported operating system:

    • Windows Server 2019

    • Windows Server 2016

    • Windows Server 2012 R2

    • Windows Server 2012

  • Minimum 4GB RAM

  • Minimum 2GB disk space for log files

  • Active clock synchronization (e.g. using NTP)

2.2.3.2 Setup

Installation of the FileFly FPolicy Server software requires careful preparation of the NetApp Filer and the FileFly FPolicy Server machines. Instructions are provided in §5.3.

2.2.4 DataCore FileFly LinkConnect Server

A DataCore FileFly LinkConnect Server provides link-based migration support for one or more Dell EMC OneFS or Windows SMB shares. This component performs a similar role to DataCore FileFly Agent without the need for software to be installed directly on the NAS or file server.

2.2.4.1 System Requirements

  • A dedicated server with a supported operating system:

    • Windows Server 2019

    • Windows Server 2016

  • Minimum 2GB disk space for log files (on the system volume)

  • Minimum 1TB disk space for LinkConnect Cache (as a single NTFS volume)

  • RAM: 8GB base, plus:

    • 4GB per TB of LinkConnect Cache

    • 0.5GB per billion link-migrated files

  • Active clock synchronization (e.g. using NTP)
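For example, under these sizing guidelines a LinkConnect Server with a 2TB cache holding one billion link-migrated files would need approximately 8GB + (2 × 4GB) + 0.5GB = 16.5GB of RAM.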

2.2.4.2 Setup

Installation of the FileFly LinkConnect Server software requires careful configuration of both the NAS / file server and the FileFly LinkConnect Server machines. Instructions are provided in §5.4 for OneFS and §5.2 for Windows file servers. Other devices are not supported.

2.3 LinkConnect Client Deployment

2.3.0.1 Installation

Once one or more LinkConnect Servers have been deployed, the LinkConnect Client Driver must be installed on every Windows client that will access link-migrated files, as follows:

  1. Verify the client machine is joined to the Active Directory domain

  2. Run DataCore FileFly LinkConnect Client Driver.exe

  3. Follow the prompts

Alternatively, to ease deployment, the installer may be run in silent mode by specifying /S on the command line. Note that when upgrading the driver silently, the updated driver will not be loaded until the next reboot.
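For example, a silent installation might be scripted as follows (quoting the installer name because it contains spaces):

"DataCore FileFly LinkConnect Client Driver.exe" /S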
Important: Client Driver versions newer than the installed FileFly LinkConnect Server version should not be deployed.

2.3.0.2 Deployment Considerations

Access to NAS / file server shares containing files that have been link-migrated must use the domain credentials of the logged-in Windows desktop session. When a user accesses a link-migrated file, the client driver will transparently redirect the access to the FileFly LinkConnect Server if required. This redirected access will use the same logged-in Windows desktop session credentials.
Installation of the client driver will enable remote symlink evaluation in Windows. If remote symlink evaluation was disabled prior to client driver installation (this is the default behavior in Windows 10), the driver will continue to prevent remote symlink access for other symlinks. Do not disable remote symlink evaluation (e.g. by group policy) after installation, since doing so causes the client driver to stop functioning.

3 Usage

3.1 DNS Best Practice

Storage locations in DataCore FileFly are referred to by URI. Relationships between files must be maintained over a long period of time. Verify the FQDNs used in these URIs are valid long-term, even as individual server roles are changed or consolidated.
In a production deployment, always use Fully Qualified Domain Names (FQDNs) in preference to bare IP addresses.
It is recommended to create DNS aliases for each logical storage role for each server. For example, use different DNS aliases when storing your finance department's data as opposed to your engineering department's data – even if they initially reside on the same server.
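As a sketch, such role-based aliases might be created as CNAME records on a Windows DNS server (all names are illustrative):

Add-DnsServerResourceRecordCName -ZoneName 'example.com' -Name 'finance-data' -HostNameAlias 'fs1.example.com'
Add-DnsServerResourceRecordCName -ZoneName 'example.com' -Name 'engineering-data' -HostNameAlias 'fs1.example.com'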

3.2 Getting Started

3.2.1 Analyzing Volumes

Once the software has been installed, the first step in any new FileFly deployment is to analyze the characteristics of the primary storage volumes. The following steps describe how to generate file statistics reports for each volume.
In the FileFly Admin Portal web interface:

  1. Create Sources for each volume to analyze

  2. Create a 'Gather Statistics' Policy and select all defined Sources

  3. Create a Task for the 'Gather Statistics' Policy

    • For now, disable the schedule

  4. On the 'Dashboard', click the quick run icon

  5. Run the Task

  6. When the Task has finished, view the report(s) on the 'Reports' page

3.2.2 Migrating Files

Using the information from the reports, create a rule to select files for migration. A typical rule might limit migrations to files modified more than six months ago. The reports' long-term trend charts indicate the amount of data that would be migrated by a 'modified more than n months ago' rule – adjust the age cutoff as necessary to suit your filesystems.
To avoid unnecessary migration of active files, be conservative with your first Migration Rule – it can be updated to migrate more recently modified files on subsequent runs.
Once the Rule has been created:

  1. Create a Destination to store your migrated data

    • see Chapter 5 for platform-specific instructions

  2. Create a Migration Policy and add the Source(s), Rule, and Destination

  3. Use the 'Simulate rule matching…' button to explore the effect of your rule

  4. Create a Task for the new Policy

  5. Run the task

When the task is completed, check the corresponding 'Recent Tasks' entries on the 'Dashboard'. Click on the log icon to review any errors in detail.
Migration is typically performed periodically: configure a schedule on the Migration Task.

3.2.3 Next Steps

Chapter 4 describes all FileFly Policy Operations in detail and helps to get the most out of FileFly.
The remainder of this chapter gives guidance on using FileFly in a production environment.

3.3 Configuration Backup

This section describes how to back up the DataCore FileFly configuration (for primary and secondary storage backup considerations, see §3.4).

3.3.1 FileFly Tools

Backing up the DataCore FileFly Tools configuration preserves policy configuration and server registrations as well as per-server settings and storage plugin configuration.

3.3.1.1 Backup Process

Configuration backup can be scheduled on the Admin Portal's 'Settings' page. A default schedule is created at installation time to back up the configuration once a week.
Configuration backup files include:

  • Policy configuration

  • Server registrations

  • Per-Server settings, including plugin configuration, keys, etc.

    • Note: FileFly FPolicy Server configuration is not included – see §3.3.2.1

  • Recovery files

  • Settings from the Admin Portal 'Settings' page

  • Settings specified when FileFly Tools was installed

It is strongly recommended that these backup files are retrieved and stored securely as part of your overall backup plan. These backup files can be found at:
C:\Program Files\DataCore FileFly\data\AdminPortal\configBackups
Additionally, log files may be backed up from:

  • C:\Program Files\DataCore FileFly\logs\AdminPortal\

  • C:\Program Files\DataCore FileFly\logs\DrTool\
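As a minimal sketch, the backup and log files could be copied to a remote share as part of a scheduled job (the destination UNC path is an assumption; substitute your actual backup target):

robocopy "C:\Program Files\DataCore FileFly\data\AdminPortal\configBackups" \\backupserver\filefly\configBackups /e
robocopy "C:\Program Files\DataCore FileFly\logs" \\backupserver\filefly\logs /e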

3.3.1.2 Restore Process

  1. Verify the server to be restored has the same FQDN as the original server

  2. If present, uninstall DataCore FileFly Tools

  3. Run the installer: DataCore FileFly Tools.exe

    • use the same version that was used to generate the backup file

  4. On the 'Installation Type' page, select 'Restore from Backup'

  5. Choose the backup zip file and follow the instructions

  6. Optionally, log files may be restored from server backups to:

    • C:\Program Files\DataCore FileFly\logs\AdminPortal\

    • C:\Program Files\DataCore FileFly\logs\DrTool\

3.3.2 Per-Server Configuration

Backing up the configuration on each server will allow for easier redeployment of agents in the event of a disaster.

3.3.2.1 Windows Backup Process

On each Windows Server, back up the entire installation directory.
e.g. C:\Program Files\DataCore FileFly\

3.3.2.2 Windows Restore Process

On each replacement server:

  1. Reinstall with the same version of the installer

  2. Stop the 'DataCore FileFly Agent' service

  3. Restore the contents of the following directories from the backup:

    • C:\Program Files\DataCore FileFly\data\FileFly Agent\

    • C:\Program Files\DataCore FileFly\logs\FileFly Agent\

  4. Restart the 'DataCore FileFly Agent' service

3.4 Storage Backup

Each stub on primary storage is linked to a corresponding MWI file on secondary storage. During the normal process of migration and demigration, the relationship between the stub and MWI file is maintained.
The recommendations below ensure that this relationship remains consistent even after files are restored from backup.

3.4.1 Backup Planning

Verify the restoration of stubs is included as part of the backup & restore test regimen.
When using Scrub policies, verify the Scrub grace period is sufficient to cover the time from when a backup is taken to when the restore and Post-Restore Revalidate steps are completed (see §3.4.2).
It is strongly recommended to set the global minimum grace period accordingly to guard against the accidental creation of scrub policies with insufficient grace. This setting may be configured on the FileFly Admin Portal 'Settings' page.
Important: It is NOT possible to safely restore stubs or MigLinks from a backup set taken more than one grace period ago.

3.4.1.1 Additional Planning

To complement standard backup and recovery solutions, and to allow the widest range of recovery options, it is recommended to schedule a 'Create Recovery File From Source' Policy to run after each migration.

3.4.2 Restore Process

  1. Suspend the scheduler in FileFly Admin Portal

  2. Restore the primary volume

  3. Run a 'Post-Restore Revalidate' policy against the primary volume

    • To verify all stubs are revalidated, run this policy against the entire primary volume, NOT against the migration source

    • This policy is not required when only WORM destinations are in use

  4. Restart the scheduler in FileFly Admin Portal

If restoring the primary volume to a different server (a server with a different FQDN), the following preparatory steps are required:

  1. On the 'Servers' page, retire the old server (unless still in use for other volumes)

  2. Install FileFly Agent on the new server

  3. Update Sources as required to refer to the FQDN of the new server

  4. Perform the restore process as above

3.4.3 Platform-specific Considerations

3.4.3.1 Windows

Most enterprise Windows backup software respects FileFly stubs and backs them up correctly without causing any unwanted demigrations. For some backup software, it may be necessary to refer to the software documentation for options regarding Offline files.
When testing backup software configuration, test that backup of stubs does not cause unwanted demigration.
Additional backup testing may be necessary if Stub Deletion Monitoring is to be used. Please refer to Appendix E for more details.

3.4.3.2 NetApp Filers

Please consult §5.3.5 for information regarding snapshot restore on NetApp Filers.

3.5 Production Readiness Checklist

3.5.0.1 Backup

  1. Check your FileFly configuration is adequately backed up – see §3.3

  2. Review the storage backup and restore procedures described in §3.4

  3. Check backup software can back up stubs without triggering demigration

  4. Check backup software restores stubs and that they can be demigrated

  5. Schedule regular 'Create Recovery File From Source' Policies on your migration sources – see §4.10

3.5.0.2 Antivirus

Generally, antivirus software does not cause demigrations during normal file access. However, some antivirus software demigrates files when performing scheduled file system scans.
Prior to production deployment, always check that installed antivirus software does not cause unwanted demigrations. Some software must be configured to skip offline files in order to avoid these inappropriate demigrations. Consult the antivirus software documentation for further details.
If the antivirus software does not provide an option to skip offline files during a scan, DataCore FileFly Agent may be configured to deny demigration rights to the antivirus software. Refer to Appendix E for more information.
It may be necessary for some antivirus products to exempt the DataCore FileFly Agent process from real-time protection (scan-on-access). If the exclusion configuration requires the path of the executable to be specified, be sure to update the exclusion whenever FileFly is upgraded (since the path changes on upgrade).

3.5.0.3 Other System-wide Applications

Check for other applications that open all the files on the whole volume. Audit scheduled processes on file servers – if such processes cause unwanted demigration, it may be possible to block them (see Appendix E).

3.5.0.4 Monitoring and Notification

To facilitate proactive monitoring, it is recommended to:

  1. Configure email notifications to monitor system health and Task activity

  2. Enable syslog – see Appendix E

3.5.0.5 Platform Considerations

For further information on platform-specific interoperability considerations, please refer to the appropriate sections of Chapter 5.

3.6 Policy Tuning

Periodically re-assess file distribution and access behavior:

  1. Run 'Gather Statistics' Policies

    • Examine reports

  2. Examine Server statistics – see §1.4.1

    • For more detail, examine demigrates in file server agent.log files

Consider:

  • Are there unexpected peaks in demigration activity?

  • Are there any file types that should not be migrated?

  • Should different rules be applied to different file types?

  • Is the Migration Policy migrating data that is regularly accessed?

  • Are the Rules aggressive enough or too aggressive?

  • What is the data growth rate on primary and secondary storage?

  • Are there subtrees on the source file system that should be addressed by separate policies or excluded from the source entirely?

3.7 System Upgrade

Do not attempt a FileFly upgrade until all running tasks have been completed. The FileFly scheduler (responsible for launching scheduled tasks) must also be disabled before performing an upgrade.

When a FileFly deployment is upgraded from a previous version, FileFly Tools must always be upgraded first, followed by all Server components.
Run:

  • DataCore FileFly Tools.exe

3.7.1 Automated Server Upgrade

Where possible, it is advisable to upgrade Server agents using the automated upgrade feature by clicking the upgrade system icon on the 'Servers' page.
The automated process transfers installers to each server and performs the upgrades in parallel to minimize downtime. If a server fails or is offline during the upgrade, upgrade it manually later. Once the automated upgrade completes, the 'Servers' page will update to display the health of the upgraded servers.
Following the upgrade, resolve any warnings displayed on the 'Dashboard'.

3.7.2 Manual Server Upgrade

Follow the instructions appropriate for the platform of each server as described below.

3.7.2.1 FileFly Agent for Windows

  1. Run DataCore FileFly Agent.exe and follow the instructions

  2. Resolve any warnings displayed on the 'Dashboard'

3.7.2.2 FileFly NetApp FPolicy Server

  1. Run DataCore FileFly NetApp FPolicy Server.exe and follow the instructions

  2. Run DataCore FileFly NetApp Cluster-mode Config.exe and follow the instructions

  3. Resolve any warnings displayed on the 'Dashboard'

3.7.2.3 FileFly LinkConnect Server

  1. Run DataCore FileFly LinkConnect Server.exe and follow the instructions

  2. Resolve any warnings displayed on the 'Dashboard'

4 Policy Operation Reference

This chapter describes the various operations that may be performed on selected files by FileFly Admin Portal policies.

4.1 Gather Statistics Operation

Requires: Source(s)
Included in Community Edition: yes
Generate statistics report(s) for file sets at the selected Source(s). Optionally include statistics by file owner. By default, owner statistics are omitted, which generally results in a faster policy run. Additionally, rules may be used to specify a subset of files on which to report rather than the whole source.

4.2 Migrate Operation

Requires: Source(s), Rule(s), Destination
Included in Community Edition: yes
Migrate file data from selected Source(s) to a Destination. Stub files remain at the Source location as placeholders until files are demigrated. File content is transparently demigrated (returned to primary storage) when accessed by a user or application. Stub files retain the original logical size and file metadata. Files containing no data are not migrated.
Each Migrate operation is logged as a Migrate, Remigrate, or Quick-Remigrate.
A Remigrate is the same as a Migrate except it explicitly recognizes that a previous version of the file had been migrated in the past and that stored data pertaining to that previous version is no longer required and so is eligible for removal using a Scrub policy.
A Quick-Remigrate occurs when a file has been demigrated and NOT modified. In this case, it is not necessary to retransfer the data to secondary storage so the operation can be performed very quickly. Quick-remigration does not change the secondary storage location of the migrated data.
Optionally, quick-remigration of files demigrated within a specified number of days may be prevented. This option can be used to avoid quick-remigrations occurring in an overly aggressive fashion.
Additionally, this policy may be configured to pause during the globally configured work hours.
Note: For Sources using a FileFly LinkConnect Server, such as Dell EMC OneFS shares, see §4.9 instead.

4.3 Quick-Remigrate Operation

Requires: Source(s), Rule(s)
Included in Community Edition: yes
Quick-Remigrate demigrated files that do not require data transfer, enabling space to be reclaimed quickly. This operation acts only on files that have not been altered since the last migration.
Optionally, quick-remigration of files demigrated within a specified number of days may be prevented. This option can be used to avoid quick-remigrations occurring in an overly aggressive fashion.
Additionally, this policy may be configured to pause during the globally configured work hours.

4.4 Scrub Destination Operation

Requires: Destination (non-WORM)
Included in Community Edition: yes
Remove unnecessary stored file content from a migration destination. This is a maintenance policy that should be scheduled regularly to reclaim space.
A grace period must be specified which is sufficient to cover the time from when a backup is taken to when the restore and corresponding Post-Restore Revalidate policy would complete. The grace period effectively delays the removal of data sufficiently to accommodate the effects of restoring primary storage from backup to an earlier state. For example, if backups are retained for 14 days and a restore plus revalidation might take a further two days, a grace period of at least 16 days would be appropriate.
Use of scrub is usually desirable to maximize storage efficiency. In order to also maximize performance benefits from quick-remigration, it is advisable to schedule migration / quick-remigration policies more frequently than the grace period.
To avoid interactions with migration policies, Scrub tasks are automatically paused while migration-related tasks are in progress.
Scrub Policies may be configured to generate log output only without actually removing files.
Important: Source(s) MUST be backed up within the grace period.

4.5 Post-Restore Revalidate Operation

Requires: Source(s)
Included in Community Edition: yes
Scan all stubs present on a given Source, revalidating the relationship between the stubs and the corresponding files on secondary storage. This operation is required following a restore from backup and should be performed on the root of the restored source volume.
If only Write Once Read Many (WORM) destinations are in use, this policy is not required.
Important: This revalidation operation MUST be integrated into backup/restore procedures, see §3.4.1.

4.6 Demigrate Operation

Requires: Source(s), Rule(s)
Included in Community Edition: yes
Return migrated file content back to files on the selected Source(s). This is useful when a large batch of files must be demigrated in advance.
Prior to running a Demigrate policy, be sure that there is sufficient primary storage available to accommodate the demigrated data.
This operation may be used with both Migrated and Link-Migrated files.

4.7 Advanced Demigrate Operation

Requires: Source(s), Rule(s)
Included in Community Edition: yes
Demigrates files with advanced options:

  • Disconnect files from destination – remove destination information from demigrated files (both files demigrated by this policy and files that have already been demigrated); it is not possible to quick-remigrate these files

  • A Destination Filter may optionally be specified in order to demigrate/disconnect only files that were migrated to a particular destination

Prior to running an Advanced Demigrate policy, be sure that there is sufficient primary storage available to accommodate the demigrated data.

4.8 Premigrate Operation

Requires: Source(s), Rule(s), Destination
Included in Community Edition: yes
Premigrate file data from selected Source(s) to a Destination in preparation for migration. Files on primary storage are not converted to stubs until a Migrate or Quick-Remigrate Policy is run. Files containing no data are not premigrated.
This can assist with:

  • a requirement to delay the stubbing process until secondary storage backup or replication has occurred

  • reduction of excessive demigrations while still allowing an aggressive Migration Policy.

Premigration is, as the name suggests, intended to be followed by full migration/quick-remigration. If this is not done, a large number of files in the premigrated state may slow down further premigration policies, as the same files are rechecked each time.
By default, files already premigrated to another destination are skipped when encountered during a premigrate policy.
This policy may also be configured to pause during the globally configured work hours.
Note: Most deployments do not use this operation, but use a combination of Migrate and Quick-Remigrate instead.

4.9 Link-Migrate Operation

Requires: Source(s), Rule(s), Destination
Included in Community Edition: yes
For platforms that do not support standard stub-based migration, Link-Migrate file data from selected Source(s) to a Destination.
Files at the source location are replaced with FileFly-encoded links (MigLinks) which allow client applications to transparently read data without returning files to primary storage. If an application attempts to modify a link, the file is automatically returned to primary storage and then modified in-place. Files containing no data are skipped by this policy.
MigLinks present the original logical size and file metadata.
Since MigLinks remain links when read by client applications, there is no analogue of quick-remigration for link-migrate.
This policy may be configured to pause during the globally configured work hours.
Note: To perform link-migration to Swarm targets, the destination should use the s3generic scheme, see §5.8.

4.10 Create Recovery File From Source Operation

Requires: Source(s), Rule(s)
Included in Community Edition: no
Generate a disaster recovery file for DataCore FileFly DrTool by analyzing files at the selected Source(s). FileFly DrTool can use the generated file(s) to recover or update source files.
Note: Recovery files generated from Source account for renames.

4.11 Create Recovery File From Destination Operation

Requires: Destination
Included in Community Edition: no
Generate a disaster recovery file for DataCore FileFly DrTool by reading the index and analyzing files at the selected Destination without reference to the associated primary storage files.
Note: Recovery files from Destination may not account for renames.
Important: It is strongly recommended to use 'Create Recovery File From Source' in preference where possible.

4.12 Erase Cached Data Operation

Requires: Source(s), Rule(s)
Included in Community Edition: yes
Erases data cached for files by the Partial Demigrate feature (NetApp Sources only).
Important: The Erase Cached Data operation is not enabled by default. It must be enabled in 'Settings' → 'Additional Options'.

5 Source and Destination Reference

The following pages describe the characteristics of the Sources and Destinations supported by DataCore FileFly. Planning, setup, usage, and maintenance considerations are outlined for each storage platform.
IMPORTANT: Read any relevant sections of this chapter prior to deploying FileFly in a production environment.

5.1 Microsoft Windows

5.1.1 Migration Support

Windows NTFS volumes may be used as migration sources. On Windows Server 2016 and above, ReFS volumes are also supported as migration sources.
Windows stub files can be identified by the 'O' (Offline) attribute in Explorer. Depending on the version of Windows, files with this flag may be displayed with an overlay icon.
Note: If it is not possible to install the DataCore FileFly Agent directly on the file server, see §5.2 for an alternative solution using Link-Migration.

5.1.2 Planning

5.1.2.1 Prerequisites

  • A license that includes an appropriate entitlement for Windows

When creating a production deployment plan, please refer to §3.5.

5.1.2.2 Cluster Support

Clustered volumes managed by Windows failover clusters are supported. However, the Cluster Shared Volume (CSVFS) feature is NOT supported. On Windows Server 2012 and above, when configuring a 'File Server' role in the Failover Cluster Manager, 'File Server for general use' is the only supported File Server Type. The 'Scale-Out File Server for application data' File Server Type is NOT supported.
When using clustered volumes in FileFly URIs, verify that the resource FQDN appropriate to the volume is specified rather than the FQDN of any individual node.

5.1.3 Setup

5.1.3.1 Installation

See 'DataCore FileFly Agent for Windows Servers', §2.2.2

5.1.4 Interoperability

This section describes Windows-specific considerations only and should be read in conjunction with §3.5.

5.1.4.1 Microsoft Storage Replica

FileFly supports Microsoft Storage Replica.
If Storage Replica is configured for asynchronous replication, a disaster failover effectively reverts the volume to a previous point in time. This kind of failover is directly equivalent to a volume restore operation (albeit to a very recent state).
As with any restore, a Post-Restore Revalidate Policy (see §4.5) should be run across the restored volume within the scrub grace period window. This verifies the correct operation of any future scrub policies by accounting for discrepancies between the demigration state of the files on the (failed) replication source volume and the replication destination volume.
Important: integrate this process into your recovery procedures prior to the production deployment of asynchronous storage replication.

5.1.4.2 Microsoft DFS Namespaces (DFSN)

DFSN is supported. FileFly Sources must be configured to access volumes on individual servers directly rather than through a DFS namespace. Users and applications may continue to access files and stubs using DFS namespaces as normal.

5.1.4.3 Microsoft DFS Replication (DFSR)

DFSR is supported for:

  • Windows Server 2019

  • Windows Server 2016

  • Windows Server 2012 R2

FileFly Agents must be installed (selecting the migration role during installation) on EACH member server of a DFS Replication Group prior to running migration tasks on any of the group's Replication Folders.
If adding a new member server to an existing Replication Group where FileFly is already in use, FileFly Agent must be installed on the new server first.
When running policies on a Replicated Folder, sources should be defined such that each policy acts upon only one replica. DFSR replicates the changes to the other members as usual.
Read-only (one-way) replicated folders are NOT supported. However, read-only SMB shares can be used to prevent users from writing to a particular replica as an alternative.
Due to the way DFSR is implemented, care should be taken to avoid writing to stub files that are being concurrently accessed from another replica.
In the rare event that DFSR-replicated data is restored to a member from backup, verify DFSR services on all members are running and replication is fully up-to-date (check for the DFSR 'finished initial replication' Windows Event Log message), then run a Post-Restore Revalidate Policy using the same source used for migration.

5.1.4.4 Retiring a DFSR Replica

Retiring a replica effectively creates two independent copies of each stub, without updating secondary storage. To avoid any potential loss of data:

  1. Delete the contents of the retired replica (preferably by formatting the disk, or at least disable Stub Deletion Monitoring during the deletion)

  2. Run a Post-Restore Revalidate Policy on the remaining copy of the data

If it is strictly necessary to keep both, now independent, copies of the data and stubs, then run a Post-Restore Revalidate Policy on both copies separately (not concurrently).

5.1.4.5 Preseeding a DFSR Replicated Folder Using Robocopy

The most common use of Robocopy with FileFly stubs is to preseed or stage initial synchronization. When performing such a preseeding operation:

  • for new Replicated Folders, verify the 'Primary member' is set to be the original server, not the preseeded copy

  • both servers must have FileFly Agent installed before preseeding

  • add a "Process Exclusion" to Windows Defender for robocopy.exe (allow a while for the setting to take effect)

  • on the source server, preseed by running robocopy with the /b flag (to copy stubs as-is to the new server)

  • once preseeding is complete and replication is fully up-to-date (check for the DFSR 'finished initial replication' Windows Event Log message), it is recommended to run a Post-Restore Revalidate Policy on the original FileFly Source

Note: If the process above is aborted, be sure to delete all preseeded files and stubs (preferably by formatting the disk, or at least disable Stub Deletion Monitoring during the deletion) and then run a Post-Restore Revalidate Policy on the original FileFly Source.
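An illustrative preseeding sequence, run on the source server as an administrator (server and path names are assumptions; adjust to your environment):

Add-MpPreference -ExclusionProcess 'robocopy.exe'
robocopy D:\Share \\newserver\D$\Share /e /b /copyall /r:1 /w:1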

5.1.4.6 Robocopy (Other Uses)

Robocopy demigrates stubs as they are copied by default. This is the same behavior as Explorer copy-paste, xcopy, etc.
Robocopy with the /b flag (backup mode – must be performed as an administrator) copies stubs as-is.
Robocopy /b is not recommended. If stubs are copied in this fashion, the following must be considered:

  • for a copy from one server to another, both servers must have DataCore FileFly Agent installed

  • this operation is essentially a backup and restore in one step, and thus inappropriately duplicates stubs that are intended to be unique

    • after the duplication, one copy of the stubs should be deleted immediately

    • run a Post-Restore Revalidate policy on the remaining copy

    • this process renders the corresponding secondary storage files non-scrubbable, even after they are demigrated

  • to prevent Windows Defender triggering demigrations when the stubs are accessed in this fashion:

    • always run the robocopy from the source end (the file server with the stubs)

    • add a "Process Exclusion" to Windows Defender for robocopy.exe (allow a while for the setting to take effect)

5.1.4.7 Windows Data Deduplication

If a Windows source server uses both migration policies and Windows Data Deduplication, note that a given file can be either deduplicated or migrated, but not both at the same time. FileFly migration policies automatically skip files already deduplicated. Windows skips FileFly stubs when deduplicating.
When using both technologies, it is recommended to configure Data Deduplication and Migration based on file type such that the most efficacious strategy is chosen for each type of file.
Note: Microsoft's legacy Single Instance Storage (SIS) feature is not supported. Do not use SIS on the same server as DataCore FileFly Agent.

5.1.4.8 Windows Shadow Copy

Windows Shadow Copy – also known as Volume Snapshot Service (VSS) – allows previous versions of files to be restored, e.g. from Windows Explorer. This mechanism cannot be used to restore a stub. Restore stubs from backup instead – see §3.4.

5.1.5 Behavioral Notes

5.1.5.1 Symbolic Links

Symbolic links (symlinks) are skipped during traversal of the file system. This guarantees files are not seen – and thus acted upon – multiple times during a single execution of a given policy. If it is intended that a policy should apply to files within a directory referred to by a symbolic link, either verify the Source encompasses the real location at the link's destination, or specify the link itself as the Source.

5.1.5.2 Mount-DiskImage

On Windows 8 or above, VHD and ISO images may be mounted as normal drives using the PowerShell Mount-DiskImage cmdlet. This functionality can also be accessed using the Explorer context menu for an image file.
A known limitation of this cmdlet is that it does not permit sparse files to be mounted (see Microsoft KB2993573). Since migrated image files are always sparse, they must be demigrated prior to mounting. This can be achieved either by copying the file or by removing the sparse flag with the following command:
fsutil sparse setflag <file_name> 0
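For example, once the image file has been demigrated by clearing the sparse flag as above, it can be mounted as usual (illustrative path):

Mount-DiskImage -ImagePath 'D:\images\archive.vhd'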

5.1.6 Stub Deletion Monitoring

On Windows, the FileFly Agent can monitor stub deletions to identify secondary storage files that are no longer referenced in order to maximize the usefulness of Scrub Policies. This feature extends not only to stubs that are directly deleted by the user, but also to other cases of stub file destruction such as overwriting a stub or renaming a different file over the top of a stub.
Stub Deletion Monitoring is disabled by default. To enable it, please refer to Appendix E.

5.2 Microsoft Windows using FileFly LinkConnect Server

5.2.1 Link-Migration Support

This section details the configuration of a DataCore FileFly LinkConnect Server to enable Link-Migration of files from Windows Server SMB shares. This option should be used when it is not possible to install DataCore FileFly Agent directly on the Windows file server in question. For other cases – where FileFly Agent can be installed on the server – please refer to §5.1.
Refer to §4.2 and §4.9 for details of the Migrate and Link-Migrate operations respectively.
Link-Migration works by pairing a Windows SMB share with a corresponding LinkConnect Cache Share. Typically a top-level share on each Windows file server volume is mapped to a unique share (or subdivision) on a FileFly LinkConnect Server. Multiple file server shares may use Cache Shares / subdivisions on the same FileFly LinkConnect Server if desired.
Once this configuration is completed, Link-Migrate policies convert files on the source Windows Server SMB share to links pointing to the destination files using the LinkConnect Cache Share, according to configured rules.
Link-Migrated files can be identified by the 'O' (Offline) attribute in Explorer. Depending on the version of Windows, files with this flag may be displayed with an overlay icon.

5.2.2 Planning

5.2.2.1 Prerequisites

  • An NTFS Cache Volume of at least 1TB – see §2.2.4

  • A FileFly license that includes an entitlement for FileFly LinkConnect Server.

  • A supported secondary storage destination (excluding scsp and scspdirect)

When creating a production deployment plan, please refer to §3.5.

5.2.2.2 File Server System Requirements

  • Windows Server 2016 or higher

  • The server must NOT have the Active Directory Domain Services role

5.2.2.3 Client Requirements

Windows clients require a supported 64-bit Windows operating system:

  • Windows 10

  • Windows Server 2019

  • Windows Server 2016

  • Windows Server 2012 R2

In order to access link-migrated files, the LinkConnect Client Driver must be installed on each client machine – see §2.3.

5.2.2.4 Network

Place the DataCore FileFly LinkConnect Server on the same subnet and same switch as the corresponding Windows file server(s) to minimize latency.
Additionally, the FileFly LinkConnect Server must be joined to the same domain as the Windows file server and the Windows client machines.

5.2.2.5 Antivirus Considerations

Verify Windows Defender or any other antivirus product installed on the FileFly LinkConnect Server is configured to omit scanning/screening on the LinkConnect Cache Volume and any Windows file server SMB shares.

5.2.2.6 High-Availability for FileFly LinkConnect Server

Consider whether High-Availability (HA) is required in your environment (either now or in the future). If so, LinkConnect Servers must be installed in a DFSN configuration from the outset.
LinkConnect Cache Shares are configured for HA by exposing the share name at the domain level using DFSN. If not using HA, it is possible to use either a simple share on a standalone server, or a share exposed at the domain level using DFSN. The latter is always recommended to allow transition to an HA configuration in the future.

5.2.2.7 Regular Maintenance Activity

Each configured MigLink source is periodically scanned to perform maintenance tasks such as MigLink ACL propagation and Link Deletion Monitoring (see §5.2.2.8).
In an HA configuration, this scanning activity is performed by a single caretaker node, as can be seen on the Admin Portal Servers page. A standalone FileFly LinkConnect Server always performs the caretaker role.

5.2.2.8 Link Deletion Monitoring

Link Deletion Monitoring (LDM) identifies secondary storage files that are no longer referenced in order to facilitate the recovery of storage space by Scrub Policies. This feature extends not only to MigLinks that are demigrated or directly deleted by the user, but also to other cases such as overwriting a MigLink or renaming a different file over the top of a MigLink.
Unlike SDM, LDM requires a number of maintenance scans to determine that a given secondary storage file is no longer referenced. Interrupting the maintenance process (e.g. by restarting the caretaker node or transitioning the caretaker role) delays the detection of unreferenced secondary storage. For optimal and timely storage space recovery, verify LinkConnect Servers can run uninterrupted for extended periods.
Warning: in order to avoid LDM incorrectly identifying files as deleted – leading to unwanted data loss during Scrub – it is critical to verify users cannot move/rename MigLinks out of the scanned portion of the directory tree within the filesystem. This can be achieved by always creating the share used for your 'miglinkSource' at the root of the filesystem. An additional share may be created solely for this purpose.
To utilize LDM, it must first be enabled on a per-share basis.

5.2.3 Setup

5.2.3.1 Create a LinkConnect User

Provision a user on the Windows domain for the exclusive use of your LinkConnect service(s). This user does not need to be a member of Domain Admins.

5.2.3.2 Configure Windows File Server

On the file server:

  1. Add the LinkConnect User to the local Administrators group

  2. Add 'Full Control' permissions for this user to each share

    • be sure to configure the share, not the folder permissions
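A minimal PowerShell sketch of these two steps (the account and share names are illustrative):

Add-LocalGroupMember -Group 'Administrators' -Member 'EXAMPLE\svc-linkconnect'
Grant-SmbShareAccess -Name 'data' -AccountName 'EXAMPLE\svc-linkconnect' -AccessRight Full -Force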

5.2.3.3 Installation

On each FileFly LinkConnect Server machine:

  1. Add the user created above to the local 'Administrators' group

  2. Assign the 'Log on as a service' privilege to this user

  3. Run the DataCore FileFly LinkConnect Server.exe

  4. Follow the prompts to complete the installation

  5. Follow the instructions to activate the installation

    • the 'Servers' page will report the server as unconfigured until configuration is complete

5.2.3.4 Cache Share Creation

On your cache volume (e.g. X:), navigate to X:\1bf8ce99-8c8a-4092-9c98-2b9c850c57a1\shares.
To create each Cache Share:

  • Create a new folder with the desired share name

  • Right click → Properties → Sharing → Advanced Sharing…

  • Tick 'Share this folder'

  • Share name must match the folder name exactly (including case)

  • Permissions:

    • Everyone: Allow 'Read' only

    • No other permissions
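Alternatively, each Cache Share can be created from PowerShell (a sketch using the illustrative share name 'CacheA'; the permissions grant 'Everyone' read access only, matching the above):

New-Item -ItemType Directory -Path 'X:\1bf8ce99-8c8a-4092-9c98-2b9c850c57a1\shares\CacheA'
New-SmbShare -Name 'CacheA' -Path 'X:\1bf8ce99-8c8a-4092-9c98-2b9c850c57a1\shares\CacheA' -ReadAccess 'Everyone'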

5.2.3.5 Service Configuration

On the Admin Portal 'Servers' page, edit the configuration of the FileFly LinkConnect Server. In the 'Manual Overrides' panel, add the following options:

linkconnect.config.linkConnectAlias=ALIAS_FQDN

where ALIAS_FQDN is either the FQDN of the FileFly LinkConnect Server (standalone mode), or of the DFSN domain (standalone or high-availability).
For each share mapping, add:

linkconnect.config.MAPPING_NUMBER.miglinkSourceType=win

linkconnect.config.MAPPING_NUMBER.miglinkSource=WIN_FQDN/WIN_SHARE

linkconnect.config.MAPPING_NUMBER.linkConnectTarget=CACHE_SHARE\SUBDIV

linkconnect.config.MAPPING_NUMBER.key=SECRET_KEY

linkconnect.config.MAPPING_NUMBER.linkDeletionMonitoring.enabled=<bool>

where:

  • miglinkSourceType must be set to exactly win

  • MAPPING_NUMBER starts at 0 for the first share mapping in this file – mappings must be numbered consecutively

  • WIN_FQDN/WIN_SHARE describes the file server share to be mapped

  • CACHE_SHARE is a LinkConnect Cache Share name (created above)

    • this value is CASE-SENSITIVE

  • SUBDIV must be the single decimal digit 1

  • SECRET_KEY is at least 40 ASCII characters – this key protects against counterfeit link creation

    • recommendation: use a password generator set to 64 characters with 'All Chars'

  • linkDeletionMonitoring.enabled may be set to true or false to enable/disable Link Deletion Monitoring on this share – see warning above
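For illustration, a complete single-mapping configuration might look like the following (all names are hypothetical):

linkconnect.config.linkConnectAlias=links.example.com
linkconnect.config.0.miglinkSourceType=win
linkconnect.config.0.miglinkSource=winserver.example.com/pub
linkconnect.config.0.linkConnectTarget=CacheA\1
linkconnect.config.0.key=<generated 64-character secret>
linkconnect.config.0.linkDeletionMonitoring.enabled=true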

If clients access the storage using nested sub-shares rather than only the top-level configured MigLink Source share, the known sub-shares should be added as follows:

linkconnect.config.MAPPING_NUMBER.knownSubShares=share1,share2

This list of sub-shares can be updated later as more subdirectories are shared. Where MigLink access occurs on unexpected shares, warnings are written to the LinkConnect agent.log.
Save the configuration and restart the DataCore FileFly Agent service.
Important: Refer to §3.3.1 to verify the configuration on this FileFly LinkConnect Server is included in your backup. If the FileFly LinkConnect Server needs to be rebuilt, the secret key is required to enable previously link-migrated files to be securely accessed.

5.2.3.6 DFSN Configuration

If DFSN is to be used (even if not yet using HA), namespaces and folders must be configured as follows:

  1. Add a DFSN namespace:

    • the namespace must not be hosted on a LinkConnect node

    • the namespace name must match the LinkConnect Cache Share name exactly (including case)

    • the namespace must be 'Domain-based'

  2. Add a folder to the namespace:

    • folder name must be of the form: SUBDIV_MwClC_1 e.g. 1_MwClC_1

    • Add folder target:

      • \\NODE\CACHE_SHARE\SUBDIV_MwClC_1

      • where NODE is a LinkConnect node that exports CACHE_SHARE

      • where CACHE_SHARE matches the namespace name exactly (including case)

      • where SUBDIV_MwClC_1 matches the new folder name exactly (including case)

      • the folder target already exists – it was created by the FileFly LinkConnect Server in the previous section

      • DO NOT enable replication

    • For HA configurations, add additional targets to the same folder for the remaining LinkConnect node(s)

For example, \\example.com\CacheA\1_MwClC_1 may refer to both of the following locations:

\\server1.example.com\CacheA\1_MwClC_1

\\server2.example.com\CacheA\1_MwClC_1 (optional 2nd node)
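Provided the namespace itself already exists and follows the rules above, the folder and its targets may alternatively be created with the DFS Namespaces PowerShell module. A sketch, with hypothetical server names:

New-DfsnFolder -Path '\\example.com\CacheA\1_MwClC_1' -TargetPath '\\server1.example.com\CacheA\1_MwClC_1'
New-DfsnFolderTarget -Path '\\example.com\CacheA\1_MwClC_1' -TargetPath '\\server2.example.com\CacheA\1_MwClC_1'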

5.2.3.7 Recovery of Lost Secret Key

The LinkConnect configuration, including the secret key, for each FileFly LinkConnect Server is synchronized with the FileFly Admin Portal. These details are part of the Admin Portal configuration backup.
In rare cases where the keys have been completely lost and a DataCore FileFly LinkConnect Server needs to be rebuilt, it is possible to temporarily disable the Counterfeit Link Protection (CLP) and re-sign all links with a new key. To enable this behavior, recreate the configuration as above (with a new secret key), and add a line similar to the following:

linkconnect.config.disableSignatureSecurityUntil=2020-04-14T01:00:00Z

Regular scanning of the configured share mappings updates the links present in all scanned shares to use the new key, and any user-generated access to these links functions without verifying the signatures until the configured cutoff time, specified as Zulu Time (GMT). For a large system, it may be necessary to allow several days before the cutoff to enable the key update to complete. Users may continue to access the system during this period.

5.2.4 Usage

5.2.4.1 URI format

smb://{server}/{nas}/{share}/[{path}/]
Where:

  • server – FQDN of a FileFly LinkConnect Server that is configured to support the file server share

  • nas – Windows file server FQDN

  • share – Windows file server SMB share

  • path – path within the share

Example:
smb://link.example.com/winserver.example.com/pub/projects/

5.3 NetApp Filer

This section describes support for NetApp Filers.

5.3.1 Migration Support

Migration support for sources on NetApp Vservers (Storage Virtual Machines) is provided using NetApp FPolicy. This requires the use of a DataCore FileFly FPolicy Server. Client demigrations can be triggered using SMB or NFS client access.
Please note that NetApp Filers currently support FPolicy for Vservers with FlexVol volumes but not Infinite volumes.
When accessed using SMB on a Windows client, NetApp stub files can be identified by the 'O' (Offline) attribute in Explorer. Files with this flag may be displayed with an overlay icon. The icon may vary depending on the version of Windows on the client workstation.

5.3.2 Planning

5.3.2.1 Prerequisites

  • NetApp Filer(s) must be licensed for the particular protocol(s) to be used (FPolicy requires an SMB license)

  • A FileFly license that includes an entitlement for FileFly NetApp FPolicy Server

DataCore FileFly FPolicy Servers require EXCLUSIVE use of SMB connections to their associated NetApp Vservers. This means Explorer windows must not be opened, drives must not be mapped, nor should any UNC paths to the filer be accessed from the FileFly FPolicy Server machine. Failure to observe this restriction results in unpredictable FPolicy disconnections and interrupted service.
When creating a production deployment plan, please refer to §3.5.

5.3.2.2 Filer System Requirements

DataCore FileFly FPolicy Server requires that the Filer is running:

  • Data ONTAP version 9.x

5.3.2.3 Network

Each FileFly FPolicy Server should have exactly one IP address.
Place the FPolicy Servers on the same subnet and same switch as their corresponding Vservers to minimize latency.

5.3.2.4 Antivirus Considerations

Verify Windows Defender or any other antivirus product installed on FileFly FPolicy Server machines is configured to omit scanning/screening NetApp shares.
Antivirus access to NetApp files interferes with the correct operation of the FileFly FPolicy Server software. Antivirus protection should still be provided on client machines and/or the NetApp Vservers themselves as normal.

5.3.2.5 High-Availability for FileFly FPolicy Servers

It is strongly recommended to install DataCore FileFly FPolicy Servers in a High-Availability configuration. This configuration requires the installation of DataCore FileFly FPolicy Server on a group of machines which are addressed by a single FQDN. This provides High-Availability for migration and demigration operations on the associated Vservers.
A pair of FileFly FPolicy Servers operating in HA service all Vservers on a NetApp cluster.
Note: The servers that form the High-Availability FileFly FPolicy Server configuration must not be members of a Windows failover cluster.

5.3.2.6 DNS Configuration

All Active Directory Servers, DataCore FileFly FPolicy Servers, and NetApp Filers, must have both forward and reverse records in DNS.
All hostnames used in Filer and FileFly FPolicy Server configuration must be FQDNs.
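A quick way to confirm both directions of resolution is nslookup – query the name and then the address (hypothetical values shown):

nslookup fpolicy1.example.com
nslookup 192.0.2.21

Both queries should succeed, and the answers should agree, before proceeding.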

5.3.3 Setup

5.3.3.1 Setup Parameters

Before starting the installation, the following parameters must be considered:

  • Management Interface IP Address: the address for management access to the Vserver (not to be confused with cluster or node management addresses)

  • SMB Privileged User: a domain user for the exclusive use of FPolicy

5.3.3.2 Preparing Vserver Management Access

For each Vserver, verify 'Management Access' is allowed for at least one network interface. Check the network interface in OnCommand System Manager – if Management Access is not enabled, create a new interface just for Management Access. Note that using the same interface for management and data access may cause firewall problems.
Management authentication may be configured to use either passwords or client certificates. Management connections may be secured using TLS – this is mandatory when using certificate-based authentication.
For password-based authentication:

  1. Open a command line session to the cluster management address

  2. Add a user for Application 'ontapi' with Role 'vsadmin'

    • security login create -user-or-group-name <username> -application ontapi -authentication-method password -role vsadmin -vserver <vserver fqdn>

  3. Record the username and password for later use on the 'Management' tab in DataCore FileFly NetApp Cluster-mode Config

Alternatively, for certificate-based authentication:

  1. Create a client certificate with common name <Username>

  2. Open a command line session to the cluster management address

  3. Upload the CA Certificate (or the client certificate itself if self-signed):

    • security certificate install -type client-ca -vserver <vserver-name>

    • Paste the contents of the CA Certificate at the prompt

  4. Add a user for Application 'ontapi' with Role 'vsadmin'

    • security login create -username <Username> -application ontapi -authmethod cert -role vsadmin -vserver <vserver-name>

5.3.3.3 Configuring SMB Privileged Data Access

If it has not already been created, create the SMB Privileged User on the domain. Each FileFly FPolicy Server uses the same SMB Privileged User for all Vservers it manages.
Open a command line session to the cluster management address:

  1. Create a new local 'Windows' group

    • cifs users-and-groups local-group create -group-name <Name> -vserver <vserver fqdn>

  2. Assign ALL available privileges to the local group

    • cifs users-and-groups privilege add-privilege -user-or-group-name <Group Name> -privileges SeTcbPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeSecurityPrivilege SeChangeNotifyPrivilege -vserver <vserver fqdn>

  3. Add the SMB Privileged User to this group

    • cifs users-and-groups local-group add-members -group-name <Name> -member-names <Domain\User or Group Name> -vserver <vserver fqdn>

  4. Allow a few minutes for the change to take effect (or FPolicy Server operations may fail with access denied errors)
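As an illustration, with a hypothetical group FileFlyPriv, domain user EXAMPLE\svc-fpolicy, and Vserver vs1.example.com, the sequence might be:

cifs users-and-groups local-group create -group-name FileFlyPriv -vserver vs1.example.com
cifs users-and-groups privilege add-privilege -user-or-group-name FileFlyPriv -privileges SeTcbPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeSecurityPrivilege SeChangeNotifyPrivilege -vserver vs1.example.com
cifs users-and-groups local-group add-members -group-name FileFlyPriv -member-names EXAMPLE\svc-fpolicy -vserver vs1.example.com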

5.3.3.4 Installation

On each FileFly FPolicy Server machine:

  1. Close any SMB sessions open to Vserver(s) before proceeding

  2. Verify the SMB Privileged User has the 'Log on as a service' privilege

  3. Run the DataCore FileFly NetApp FPolicy Server.exe

  4. Follow the prompts to complete the installation

  5. Follow the instructions to activate the installation

5.3.3.5 Installing 'DataCore FileFly NetApp Cluster-mode Config'

  • Run the installer: DataCore FileFly NetApp Cluster-mode Config.exe

5.3.3.6 Configuring Components

Run DataCore FileFly NetApp Cluster-mode Config.
On the 'FPolicy Config' tab:

  • Enter the FQDN used to register the FileFly FPolicy Server(s) in FileFly Admin Portal

  • Enter the SMB Privileged User

On the 'Management' tab:

  • Provide the credentials for management access (see above)

On the 'Vservers' tab:

  • Click Add…

  • Enter the SMB and management interface details

  • If using TLS for Management, click Get Server CA

  • Click Apply to Filer

Once the configuration is complete, click Save.

5.3.3.7 Apply Configuration to FileFly FPolicy Servers

  1. Verify the netapp_clustered.cfg file has been copied to the correct location on all FileFly FPolicy Server machines

    • C:\Program Files\DataCore FileFly\data\FileFly Agent\netapp_clustered.cfg

  2. Restart the DataCore FileFly Agent service on each machine

5.3.4 Usage

SMB shares that will be used in FileFly Policies must be configured to Hide symbolic links. If a different setting is required for other SMB clients, create a new share at the same location just for FileFly traversal that does hide links. To modify the symlink behavior on a share:

  1. Open a command line session to the cluster management address

  2. For each share:

    • cifs share modify -share-name <sharename> -symlink-properties hide -vserver <vserver fqdn>
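The resulting setting can then be checked with the corresponding show command – a sketch, using -fields to limit the output to the relevant property:

cifs share show -share-name <sharename> -fields symlink-properties -vserver <vserver fqdn>

Each share to be used in Policies should report hide.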

5.3.4.1 URI Format

netapp://{FPolicy Server}/{NetApp Vserver}/{SMB Share}/[{path}/]
Where:

  • FPolicy Server – FQDN alias that points to all FileFly FPolicy Servers for the given Vserver

  • NetApp Vserver – FQDN of the Vserver's Data Access interface

  • SMB Share – NetApp SMB share name

Example:
netapp://fpol-svrs.example.com/vs1.example.com/data/

5.3.5 Snapshot Restore

5.3.5.1 Volume Restore

After an entire volume containing stubs is restored from the snapshot, a Post-Restore Revalidate Policy must be run, as per the restore procedure described in §3.4.

5.3.5.2 Individual Stub Restore

Users cannot perform self-service restoration of stubs. However, an administrator may restore specific stubs or sets of stubs from snapshots by following the procedure outlined below. Be sure to provide this procedure to all administrators.
IMPORTANT: The following instructions mandate the use of Robocopy specifically. Other tools, such as Windows Explorer copy or the 'Restore' function in the Previous versions dialog, DO NOT correctly restore stubs.
To restore one or more stubs from a snapshot-folder like:
\\<filer>\<share>\~snapshot\<snapshot-name>\<path>
to a restore folder on the same Filer like:
\\<filer>\<share>\<restore-path>
perform the following steps:

  1. Go to a FileFly FPolicy Server machine

  2. Open a command window

  3. robocopy <snapshot-folder> <restore-folder> [<filename>…] [/b]

  4. On a client machine (NOT the FileFly FPolicy Server), open all of the restored file(s) or demigrate them using a Demigrate Policy

    • Check that the file(s) have demigrated correctly

IMPORTANT: Until the demigration above is performed, the restored stub(s) may occupy space for the full size of the file.
As with any other FileFly restore procedure, be sure to run a Post-Restore Revalidate Policy across the volume before the next Scrub – see §3.4.
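For example, to restore a single stub report.docx from a hypothetical snapshot nightly.0 on filer1:

robocopy \\filer1.example.com\data\~snapshot\nightly.0\projects \\filer1.example.com\data\restored report.docx /b

then open or demigrate the restored file from a client machine, as described in step 4 above.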

5.3.6 Interoperability

5.3.6.1 NDMP Backup

NDMP Backup products require ONTAP 9.2+ for interoperability with FileFly.

5.3.6.2 Robocopy

Except when following the procedure in §5.3.5, Robocopy must not be used with the /b (backup mode) switch when copying FileFly NetApp stubs.
When in backup mode, robocopy attempts to copy stub files as-is rather than demigrating them as they are read. This behavior is not supported.
Note: The /b switch requires Administrator privilege – it is not available to normal users.

5.3.7 Behavioral Notes

5.3.7.1 Unix Symbolic Links

Unix Symbolic links (also known as symlinks or softlinks) may be created on a Filer using an NFS mount. Symbolic links are not seen during FileFly Policy traversal of a NetApp file system (since only shares which hide symbolic links are supported for traversal). If it is intended that a policy should apply to files within a folder referred to by a symbolic link, verify the Source encompasses the real location at the link's destination. A Source URI may NOT point to a symbolic link – use the real folder that the link points to instead.
Client-initiated demigrations using symbolic links operate as expected.

5.3.7.2 QTree and User Quotas

NetApp QTree and user quotas are measured in terms of logical file size. Thus, migrating files has no effect on quota usage.

5.3.7.3 Snapshot Traversal

FileFly automatically skips snapshot directories when traversing shares using the netapp scheme.

5.3.8 Skipping Sparse Files

It is often undesirable to migrate files that are highly sparse since sparseness is not preserved by the migration process.
To enable sparse files to be skipped during migration policies, tick 'Settings' → 'Additional Options' → 'Enable sparse file skipping' in FileFly Admin Portal.
Skipping sparse files may then be configured per migration policy.

5.3.9 Advanced Configuration

5.3.9.1 Alternative Engine IP Addresses

Alternative engine IP addresses may be provided on the FileFly NetApp Cluster-mode Config 'Advanced' tab if filer communication is to be performed on a different IP address than that used for Admin Portal to FPolicy Server communication. This allows each node to have two IP addresses. Care must be taken that ALL communication – in both directions – between filer and FileFly FPolicy Server node occurs using the engine address.
Ordinarily, one IP address per server is sufficient. Contact DataCore Support if an advanced network configuration is required.

5.3.9.2 Cache First Block

When migrating files, the first block of the file may optionally be cached. This allows small reads to file headers to be completed immediately, without accessing secondary storage. By default this feature is disabled. This feature may be enabled on the 'Advanced' tab. The 'Prefix size' field allows the amount cached on disk after a migration to be tuned.

5.3.10 Troubleshooting

5.3.10.1 Troubleshooting Management Login

  • Open a command line session to the cluster management address

  • security login show -vserver <vserver-name>

    • There should be an entry for the expected user for application 'ontapi' with role 'vsadmin'

5.3.10.2 Troubleshooting TLS Management Access

  • Open a command line session to the cluster management address

  • vserver context -vserver <vserver-name>

  • security certificate show

    • There should be a 'server' certificate for the Vserver management FQDN (NOT the bare hostname)

    • If using certificate-based authentication, there should be a 'client-ca' entry

  • security ssl show

    • There should be an enabled entry for the Vserver management FQDN (NOT the bare hostname)

5.3.10.3 Troubleshooting Vserver Configuration

Vserver configuration can be validated using DataCore FileFly NetApp Cluster-mode Config.

  • Open the netapp_clustered.cfg in FileFly NetApp Cluster-mode Config

  • Go to the 'Vservers' tab

  • Select a Vserver

  • Click Edit…

  • Click Verify

5.3.10.4 Troubleshooting 'ERR_ADD_PRIVILEGED_SHARE_NOT_FOUND'

If the FileFly FPolicy Server reports privileged share not found, there is a misconfiguration or SMB issue. Please attempt the following steps:

  • Check the configuration using the troubleshooting steps described above

  • Verify the FileFly FPolicy Server has no other SMB sessions to Vservers

    • run net use from Windows Command Prompt

    • remove all mapped drives

  • Reboot the server

  • Retry the failed operation

    • Check for new errors in agent.log
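For example, from a Command Prompt on the FPolicy Server:

net use
net use * /delete /y

The first command lists any existing SMB connections; the second removes all of them, including mapped drives.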

5.4 Dell EMC PowerScale OneFS

This section describes FileFly's capabilities when used with OneFS on Dell EMC PowerScale / Isilon platforms.

5.4.1 Link-Migration Support

OneFS does not provide an interface for performing FileFly stub-based migration. As an alternative, FileFly provides a link-based migration mechanism using a FileFly LinkConnect Server. See §4.9 for details of the Link-Migrate operation.
Link-Migration works by pairing a OneFS SMB share with a corresponding LinkConnect Cache Share. Typically a top-level share on each OneFS device is mapped to a unique share (or subdivision) on a FileFly LinkConnect Server. Multiple OneFS systems may use shares/subdivisions on the same FileFly LinkConnect Server if desired.
Once this configuration is completed, Link-Migrate policies convert files on the source OneFS share to links pointing to the destination files using the LinkConnect Cache Share, according to configured rules.
Link-Migrated files can be identified by the 'O' (Offline) attribute in Explorer. Depending on the version of Windows, files with this flag may be displayed with an overlay icon.

5.4.2 Planning

5.4.2.1 Prerequisites

  • An NTFS Cache Volume of at least 1TB – see §2.2.4

  • A FileFly license that includes an entitlement for FileFly LinkConnect Server.

  • A supported secondary storage destination (excluding scsp and scspdirect)

When creating a production deployment plan, please refer to §3.5.

5.4.2.2 NAS System Requirements

  • DataCore FileFly LinkConnect Server requires OneFS version 8.1.2.0 or higher

5.4.2.3 Client Requirements

Windows clients require a supported 64-bit Windows operating system:

  • Windows 10

  • Windows Server 2019

  • Windows Server 2016

  • Windows Server 2012 R2

In order to access link-migrated files, the LinkConnect Client Driver must be installed on each client machine – see §2.3.

5.4.2.4 Network

Place the DataCore FileFly LinkConnect Server on the same subnet and same switch as the corresponding OneFS system to minimize latency.
Additionally, the FileFly LinkConnect Server must be joined to the same domain as the OneFS NAS and the Windows client machines.

5.4.2.5 Antivirus Considerations

Verify Windows Defender or any other antivirus product installed on the FileFly LinkConnect Server is configured to omit scanning/screening on the LinkConnect Cache Volume and any OneFS SMB shares.

5.4.2.6 High-Availability for FileFly LinkConnect Server

Consider whether High-Availability (HA) is required in your environment (either now or in the future). If so, LinkConnect Servers must be installed in a DFSN configuration from the outset.
LinkConnect Cache Shares are configured for HA by exposing the share name at the domain level using DFSN. If not using HA, it is possible to use either a simple share on a standalone server, or a share exposed at the domain level using DFSN. The latter is always recommended to allow transition to an HA configuration in the future.

5.4.2.7 Regular Maintenance Activity

Each configured MigLink source is periodically scanned to perform maintenance tasks such as MigLink ACL propagation and Link Deletion Monitoring (see §5.4.2.8).
In an HA configuration, this scanning activity is performed by a single caretaker node, as can be seen on the Admin Portal Servers page. A standalone FileFly LinkConnect Server always performs the caretaker role.

5.4.2.8 Link Deletion Monitoring

Link Deletion Monitoring (LDM) identifies secondary storage files that are no longer referenced in order to facilitate the recovery of storage space by Scrub Policies. This feature extends not only to MigLinks that are demigrated or directly deleted by the user, but also to other cases such as overwriting a MigLink or renaming a different file over the top of a MigLink.
Unlike SDM, LDM requires a number of maintenance scans to determine that a given secondary storage file is no longer referenced. Interrupting the maintenance process (e.g. by restarting the caretaker node or transitioning the caretaker role) delays the detection of unreferenced secondary storage. For optimal and timely storage space recovery, verify LinkConnect Servers can run uninterrupted for extended periods.
Warning: in order to avoid LDM incorrectly identifying files as deleted – leading to unwanted data loss during Scrub – it is critical to verify users cannot move/rename MigLinks out of the scanned portion of the directory tree within the filesystem. This can be achieved by always creating the share used for your 'miglinkSource' at the root of the filesystem. An additional share may be created solely for this purpose.
To utilize LDM, it must first be enabled on a per-share basis.

5.4.3 Setup

5.4.3.1 Create a LinkConnect User

Provision a user on the Windows domain for the exclusive use of your LinkConnect service(s). This user does not need to be a member of Domain Admins.

5.4.3.2 Configure OneFS

Using the OneFS Storage Administration web console:

  1. Navigate to Access → Membership & Roles → Roles

  2. Edit the BackupAdmin role

    • add the LinkConnect user to this role

  3. Navigate to Protocols → Windows Sharing (SMB) → SMB Shares

  4. Edit the share to be paired with a LinkConnect Cache Share

    • Add the LinkConnect user as a new member

    • Specify 'Run as root' permission

    • Move the new member to the top of the members list

5.4.3.3 Installation

On each FileFly LinkConnect Server machine:

  1. Add the user created above to the local 'Administrators' group

  2. Assign the 'Log on as a service' privilege to this user

  3. Run the DataCore FileFly LinkConnect Server.exe

  4. Follow the prompts to complete the installation

  5. Follow the instructions to activate the installation

    • the Servers page will report that the server is unconfigured

5.4.3.4 Cache Share Creation

On your cache volume (e.g. X:), navigate to X:\1bf8ce99-8c8a-4092-9c98-2b9c850c57a1\shares.
To create each Cache Share:

  • Create a new folder with the desired share name

  • Right click → Properties → Sharing → Advanced Sharing…

  • Tick 'Share this folder'

  • Share name must match the folder name exactly (including case)

  • Permissions:

    • Everyone: Allow 'Read' only

    • No other permissions

5.4.3.5 Service Configuration

On the Admin Portal 'Servers' page, edit the configuration of the FileFly LinkConnect Server. In the 'Manual Overrides' panel, add the following options:

linkconnect.config.linkConnectAlias=ALIAS_FQDN

where ALIAS_FQDN is either the FQDN of the FileFly LinkConnect Server (standalone mode), or of the DFSN domain (standalone or high-availability).
For each share mapping, add:

linkconnect.config.MAPPING_NUMBER.miglinkSourceType=isilon

linkconnect.config.MAPPING_NUMBER.miglinkSource=ONEFS_FQDN/ONEFS_SHARE

linkconnect.config.MAPPING_NUMBER.linkConnectTarget=CACHE_SHARE\SUBDIV

linkconnect.config.MAPPING_NUMBER.key=SECRET_KEY

linkconnect.config.MAPPING_NUMBER.linkDeletionMonitoring.enabled=<bool>

where:

  • miglinkSourceType must be set to exactly isilon

  • MAPPING_NUMBER starts at 0 for the first share mapping in this file – mappings must be numbered consecutively

  • ONEFS_FQDN/ONEFS_SHARE describes the OneFS share to be mapped

  • CACHE_SHARE is a LinkConnect Cache Share name (created above)

    • this value is CASE-SENSITIVE

  • SUBDIV must be the single decimal digit 1

  • SECRET_KEY is at least 40 ASCII characters – this key protects against counterfeit link creation

    • recommendation: use a password generator set to 64 characters with 'All Chars'

  • linkDeletionMonitoring.enabled may be set to true or false to enable/disable Link Deletion Monitoring on this share – see warning above
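As in §5.2.3.5, a complete single-mapping configuration might look like the following (all names are hypothetical):

linkconnect.config.linkConnectAlias=links.example.com
linkconnect.config.0.miglinkSourceType=isilon
linkconnect.config.0.miglinkSource=onefs.example.com/pub
linkconnect.config.0.linkConnectTarget=CacheA\1
linkconnect.config.0.key=<generated 64-character secret>
linkconnect.config.0.linkDeletionMonitoring.enabled=true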

If clients access the storage using nested sub-shares rather than only the top-level configured MigLink Source share, the known sub-shares should be added as follows:

linkconnect.config.MAPPING_NUMBER.knownSubShares=share1,share2

This list of sub-shares can be updated later as more subdirectories are shared. Where MigLink access occurs on unexpected shares, warnings are written to the LinkConnect agent.log.
Save the configuration and restart the DataCore FileFly Agent service.
Important: Refer to §3.3.1 to verify the configuration on this FileFly LinkConnect Server is included in your backup. If the FileFly LinkConnect Server needs to be rebuilt, the secret key is required to enable previously link-migrated files to be securely accessed.

5.4.3.6 DFSN Configuration

If DFSN is to be used (even if not yet using HA), namespaces and folders must be configured as follows:

  1. Add a DFSN namespace:

    • the namespace must not be hosted on a LinkConnect node

    • the namespace name must match the LinkConnect Cache Share name exactly (including case)

    • the namespace must be 'Domain-based'

  2. Add a folder to the namespace:

    • folder name must be of the form: SUBDIV_MwClC_1 e.g. 1_MwClC_1

    • Add folder target:

      • \\NODE\CACHE_SHARE\SUBDIV_MwClC_1

      • where NODE is a LinkConnect node which exports CACHE_SHARE

      • where CACHE_SHARE matches the namespace name exactly (including case)

      • where SUBDIV_MwClC_1 matches the new folder name exactly (including case)

      • the folder target already exists – it was created by the FileFly LinkConnect Server in the previous section

      • DO NOT enable replication

    • For HA configurations, add additional targets to the same folder for the remaining LinkConnect node(s)

For example, \\example.com\CacheA\1_MwClC_1 may refer to both of the following locations:

\\server1.example.com\CacheA\1_MwClC_1

\\server2.example.com\CacheA\1_MwClC_1 (optional 2nd node)

5.4.3.7 Recovery of Lost Secret Key

The LinkConnect configuration, including the secret key, for each FileFly LinkConnect Server is synchronized with the FileFly Admin Portal. These details are part of the Admin Portal configuration backup.
However, in rare cases where the keys have been completely lost and a DataCore FileFly LinkConnect Server needs to be rebuilt, it is possible to temporarily disable the Counterfeit Link Protection (CLP) and re-sign all links with a new key. To enable this behavior, recreate the configuration as above (with a new secret key), and add a line similar to the following:

linkconnect.config.disableSignatureSecurityUntil=2020-04-14T01:00:00Z

Regular scanning of the configured share mappings updates the links present in all scanned shares to use the new key, and any user-generated access to these links functions without verifying the signatures until the configured cutoff time, specified as Zulu Time (GMT). For a large system, it may be necessary to allow several days before the cutoff to enable the key update to complete. Users may continue to access the system during this period.

5.4.4 Usage

5.4.4.1 URI format

smb://{server}/{nas}/{share}/[{path}/]
Where:

  • server – FQDN of a FileFly LinkConnect Server that is configured to support the OneFS share

  • nas – OneFS FQDN

  • share – OneFS SMB share

  • path – path within the share

Example:
smb://link.example.com/onefs.example.com/pub/projects/

5.5 DataCore Swarm

5.5.1 Introduction

DataCore Swarm provides a multi-tenanted object storage platform built upon Swarm storage nodes. Swarm may be used as a migration destination only.
Swarm (SCSP) traffic may optionally be encrypted in transit with TLS. Additionally, the plugin can employ client-side encryption to protect migrated data at rest.

5.5.2 Planning

Before proceeding with the installation, the following are required:

  • Cloud Gateway 3.0.0 or above

  • Swarm 8 or above

  • a license that includes entitlement for Swarm

5.5.2.1 Policy Limitations

The following Policy limitations apply to this scheme:

  • it may not be used as a Link-Migration destination

  • it may not be used as the new destination for Change Destination Tier policies

  • it may not be used as the new destination for Retarget Destination policies

5.5.2.2 Firewall

The TCP port used to access the Swarm Content Gateway using HTTP or HTTPS must be allowed by any firewalls between the DataCore FileFly Gateway and the Swarm endpoint. For further information regarding firewall configuration see Appendix B.

5.5.2.3 Named and Unnamed Objects

Swarm domain names used with FileFly must be valid FQDNs that resolve to one or more Content Gateway endpoints.
Migrated files may be stored as either unnamed objects (accessed by UUID), or as named objects residing in a bucket. Bucket creation must be performed ahead of time, prior to configuring FileFly.

5.5.2.4 Certificate

In order to utilize an HTTPS endpoint, the endpoint's Root CA certificate must be trusted by the relevant FileFly components. In most cases, the Root CA is already trusted as a pre-installed public root or enterprise-deployed CA. Where this is not the case, install the Root CA (or self-signed certificate) in the Local Computer Trusted Root Certification Authorities store on each Gateway and the Admin Portal machine.

5.5.2.5 Authentication

When using buckets, the configured credentials for accessing the bucket must also be permitted to perform HEAD requests at the root of the domain in order to obtain domain information. This must be considered when provisioning buckets.

5.5.3 Usage

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin is enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options such as proxy and encryption, as well as Domain credentials. Swarm Destinations require an index to be created prior to use. Once credentials have been supplied, click Create new index to create a new index and corresponding migration Destination.
Additional indexes can be added at a later date to further subdivide storage if required. Multiple migration destinations may be created in the same bucket by specifying different partition names.
Important: Each FileFly Admin Portal must have its own destination indexes; DO NOT share indexes across multiple FileFly implementations.

5.5.3.1 Metadata Options

Enable 'Include metadata headers' to store per-file HTTP metadata with the destination objects, such as original filename and location, content-type, owner, and timestamps – see §5.5.6 for details.
Also, enable 'Include Content-Disposition' to include the original filename for use when downloading the target objects directly using a web browser.

5.5.4 Legacy URIs

URIs created on previous versions of FileFly using the CloudScaler scheme continue to function as expected. Existing destinations should NOT be updated to use the scsp scheme. The CloudScaler scheme is an alias for the scsp scheme.

5.5.5 Disaster Recovery Considerations

During migration, each newly migrated file is recorded in the corresponding index. The index may be used in disaster scenarios where:

  1. stubs have been lost, and

  2. a Create Recovery File from Source file is not available, and

  3. no current backup of the stubs exists

Index performance is optimized for migrations and demigrations, not for Create Recovery File from Destination policies.
Create Recovery File from Source policies are the recommended means to obtain a Recovery file for restoring stubs. This method provides better performance and the most up-to-date stub location information.
It is recommended to regularly run Create Recovery File from Source policies following Migration policies.

5.5.6 Swarm Metadata Headers

The following metadata fields are supported:

  • X-Alt-Meta-Name – the original source file's filename (excluding directory path)

  • X-Alt-Meta-Path – the original source file's directory path (excluding the filename) in a platform-independent manner, such that '/' is used as the path separator and the path starts with '/', followed by drive/volume/share if appropriate, but does not end with '/' (unless the path represents the root directory)

  • X-FileFly-Meta-Partition – the Destination URI partition – if no partition is present, this header is omitted

  • X-Source-Meta-Host – the FQDN of the original source file's server

  • X-Source-Meta-Owner – the owner of the original source file in a format appropriate to the source system (e.g. DOMAIN\username)

  • X-Source-Meta-Modified – the Last Modified timestamp of the original source file at the time of migration in RFC3339 format

  • X-Source-Meta-Created – the Created timestamp of the original source file in RFC3339 format

  • X-Source-Meta-Attribs – a case-sensitive sequence of characters {AHRS} representing the original source file's file flags: Archive, Hidden, Read-Only, and System

    • all other characters are reserved for future use and should be ignored

  • Content-Type – the MIME Type of the content, determined based on the file-extension of the original source filename

Note: Timestamps may be omitted if the source file timestamps are not set.
Non-ASCII characters are stored using RFC2047 encoding, as described in the Swarm documentation. Swarm decodes these values prior to indexing in Elasticsearch.
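To illustrate, a file D:\projects\report.docx migrated from a hypothetical Windows server fs1.example.com might be stored with headers such as the following (values, including the drive representation in X-Alt-Meta-Path, are illustrative only):

X-Alt-Meta-Name: report.docx
X-Alt-Meta-Path: /D/projects
X-Source-Meta-Host: fs1.example.com
X-Source-Meta-Owner: EXAMPLE\jdoe
X-Source-Meta-Modified: 2019-11-02T09:30:00Z
X-Source-Meta-Attribs: A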

5.6 DataCore Swarm (Direct Node Access)

5.6.1 Introduction

The scspdirect scheme should only be used when accessing Swarm storage nodes directly. Swarm may be used as a migration destination only.
Swarm (SCSP) traffic is not encrypted in transit when using this scheme. Optionally, the plugin can employ client-side encryption to protect migrated data at rest.
Normally, Swarm is accessed using a Swarm Content Gateway, in which case the scsp scheme must be used instead, see §5.5.

5.6.2 Planning

Before proceeding with the installation, the following are required:

  • Swarm 8 or above

  • a license that includes entitlement for Swarm

5.6.2.1 Policy Limitations

The following Policy limitations apply to this scheme:

  • it may not be used as a Link-Migration destination

  • it may not be used as the new destination for Change Destination Tier policies

  • it may not be used as the new destination for Retarget Destination policies

5.6.2.2 Firewall

The Swarm storage node port must be allowed by any firewalls between the DataCore FileFly Gateway and the Swarm storage nodes. For further information regarding firewall configuration see Appendix B.

5.6.2.3 Domains and Endpoints

Swarm storage locations are accessed using a configured endpoint FQDN. Add several Swarm storage node IP addresses to DNS under a single endpoint FQDN (4-8 addresses are recommended). If Swarm domains are in use, the FQDN must be the name of the domain in which the FileFly data is stored. If domains are NOT in use (i.e. data is stored in the default cluster domain), it is strongly recommended that the FQDN be the name of the cluster for best Swarm performance.
When using multiple Swarm domains, verify that each domain FQDN is added to DNS as described above.
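For example, a hypothetical endpoint FQDN swarm.example.com might be published with one A record per selected storage node:

swarm.example.com.  IN  A  192.0.2.11
swarm.example.com.  IN  A  192.0.2.12
swarm.example.com.  IN  A  192.0.2.13
swarm.example.com.  IN  A  192.0.2.14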

5.6.2.4 Named and Unnamed Objects

Migrated files may be stored as either unnamed objects (accessed by UUID), or as named objects residing in a bucket. Bucket creation must be performed ahead of time, prior to configuring FileFly.

5.6.3 Usage

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin is enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options and encryption settings. Swarm Destinations require an index to be created prior to use: click Create new index to create a new index and corresponding migration Destination.
Additional indexes can be added at a later date to further subdivide storage if required. Multiple migration destinations may be created in the same bucket by specifying different partition names.
Important: Each FileFly Admin Portal must have its own destination indexes; DO NOT share indexes across multiple FileFly implementations.

5.6.3.1 Metadata Options

Enable 'Include metadata headers' to store per-file HTTP metadata with the destination objects, such as original filename and location, content-type, owner, and timestamps – see §5.5.6 for details.
Also, enable 'Include Content-Disposition' to include the original filename for use when downloading the target objects directly using a web browser.

5.6.4 Legacy URIs

URIs created on previous versions of FileFly using the swarm scheme will continue to function as expected. Existing destinations should NOT be updated to use the scspdirect scheme. The swarm scheme is an alias for the scspdirect scheme.

5.6.5 Disaster Recovery Considerations

Refer to §5.5.5.

5.7 Amazon S3

5.7.1 Introduction

Amazon S3 may be used as a migration destination only.
S3 traffic is encrypted in transit with TLS. Additionally, the plugin can employ client-side encryption to protect migrated data at rest.
This section pertains strictly to Amazon S3; for other S3-compatible services, see §5.8.

5.7.2 Planning

Before proceeding with the installation, the following will be required:

  • an Amazon Web Services (AWS) Account

  • a license that includes entitlement for Amazon S3

Dedicated buckets – without versioning enabled – should be used for FileFly migration data. However, do not create any S3 buckets at this stage.

5.7.2.1 Firewall

The HTTPS port (TCP port 443) must be allowed by any firewalls between the DataCore FileFly Gateway and the Internet.

5.7.3 Usage

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin will be enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options such as proxy and encryption, as well as S3 account credentials. Once credentials have been supplied, click on the manage buckets icon to create buckets and edit bucket-specific settings.
When the configuration is complete, click the 'create migration destination' icon next to the desired bucket.
Partitions may be used to subdivide a bucket into multiple migration destinations. A greater number of smaller migration destinations may be helpful in a recovery scenario where destinations can be recovered in order of priority.

5.7.3.1 Transfer Acceleration

Transfer acceleration allows data to be uploaded using the fastest data center for your location, regardless of the actual location of the bucket.
This per-bucket option provides a way to upload data to a bucket in a remote AWS region while minimizing the adverse effects on migration policies that would otherwise be caused by the correspondingly higher latency of using the remote region.
Additional AWS charges may apply for using transfer acceleration at upload time, but for archived data, these initial charges may be significantly outweighed by reduced storage costs in the target region. For further details, please consult AWS pricing.

5.7.3.2 Infrequent Access Storage Class

This per-bucket option allows eligible files to be uploaded directly into Infrequent Access Storage (STANDARD_IA) instead of the Standard storage class. This can dramatically reduce costs for infrequently accessed data.
Please consult AWS pricing for further details.

5.7.3.3 Migration Layout

By default, migrated data is stored in the Standard migration layout within the object store. The standard layout supports encryption at rest.
Alternatively, migrated data may be stored in a manner that preserves the original filename information. This layout does not support encryption and is subject to limitations such as path/filename length imposed by the object store. This option is useful in specific circumstances where data at a migration destination must be read directly by other applications. Files are stored under <bucket>/<partition>/FILES. FileFly-specific metadata is stored under <bucket>/<partition>/HDR and should not be made accessible to other applications.

5.7.4 Extended Metadata Fields

Extended metadata fields are written when the 'Migrate with original filenames' option is selected for a migration destination bucket.

The following fields are written:

  • x-amz-meta-orig-host – source server FQDN

  • x-amz-meta-orig-name – original filename (without path)

  • x-amz-meta-orig-modified-time – Modified timestamp

  • x-amz-meta-orig-created-time – Creation timestamp

  • x-amz-meta-orig-attribs – subset of characters {AHRS} representing the original source file's flags

  • Content-Disposition (optional) – original name for web browser download

Security details are written as appropriate:

  • x-amz-meta-orig-owner – file owner, e.g. Domain\JoeUser

  • x-amz-meta-orig-sddl – Microsoft SDDL format security descriptor

Notes:

  • headers will be sent in UTF-8 using RFC2047 encoding as necessary to unambiguously represent the original metadata values (in accordance with the HTTP/1.1 specification – see RFC2616/2.2)

  • due to Amazon-specific limitations, sequences of adjacent whitespace within x-amz-meta-orig-name may be returned as a single space by some client software

  • all timestamps are stored as UTC in RFC3339 format

5.8 Generic S3 Endpoint

5.8.1 Introduction

Other generic or third-party storage devices and services that support the Amazon S3 protocol may be addressed using the 'Generic S3 Endpoint' feature. Such endpoints may be used as migration destinations only.
S3 traffic may optionally be encrypted in transit with TLS. Additionally, the plugin can employ client-side encryption to protect migrated data at rest.

5.8.2 Planning

Important: Prior to production deployment, please confirm with DataCore that the chosen device or service is certified for compatibility and covered by the support agreement.
Prerequisites:

  • suitable S3 API credentials

  • a license that includes entitlement for generic S3 endpoints

Dedicated buckets – without versioning enabled – should be used for FileFly migration data. However, do not create any S3 buckets at this stage.

5.8.2.1 Firewall

The S3 port must be allowed by any firewalls between the DataCore FileFly Gateway and the storage endpoint.

5.8.2.2 Certificate

In order to utilize an HTTPS endpoint, the endpoint's Root CA certificate must be trusted by the relevant FileFly components. In most cases, the Root CA will already be trusted as a pre-installed public root or enterprise-deployed CA. Where this is not the case, install the Root CA (or self-signed certificate) in the Local Computer Trusted Root Certification Authorities store on each Gateway and the Admin Portal machine.

5.8.3 Usage

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin will be enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options such as proxy and encryption, as well as S3 account credentials. Once credentials have been supplied, click on the manage buckets icon to create buckets and edit bucket-specific settings.
When the configuration is complete, click the 'create migration destination' icon next to the desired bucket.
Partitions may be used to subdivide a bucket into multiple migration destinations. A greater number of smaller migration destinations may be helpful in a recovery scenario where destinations can be recovered in order of priority.

5.8.3.1 Omit ISO date from path

Normally, when FileFly migrates a file to S3, a timestamp is included in each resulting S3 object key (name). Amazon S3 implements a flat, uniform keyspace – there is no concept of a directory structure within an Amazon storage bucket. However, some S3-compatible devices map the keyspace to an underlying directory structure or other non-uniform or hierarchical namespace. On such systems, the inclusion of the timestamp may result in excessive directory creation which may adversely impact performance and/or resource consumption. For such devices, use the 'Omit ISO date from path' option to omit the timestamp.

5.8.3.2 Virtual Host Access

The S3 protocol supports a virtual-host-style bucket access method, for example, https://bucket.s3.example.com rather than only https://s3.example.com/bucket. This facilitates connecting to a node in the correct region for the bucket, rather than requiring a redirect.
Generally, the 'Use Virtual Host Access' option should be enabled (the default) to verify optimal performance and correct operation. If the generic S3 endpoint in question does not support this feature at all, Virtual Host Access may be disabled.
Note: When using Virtual Host Access in conjunction with HTTPS (recommended) it is important to verify the endpoint's TLS certificate has been created correctly. For example, if the endpoint FQDN is s3.example.com, the certificate must contain Subject Alternative Names (SANs) for both s3.example.com and *.s3.example.com.
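Where the OpenSSL command line tool is available, the certificate presented for a virtual-host-style request can be inspected as follows (endpoint and bucket names are hypothetical):

openssl s_client -connect s3.example.com:443 -servername bucket.s3.example.com < NUL | openssl x509 -noout -text

Confirm the listed Subject Alternative Names include both s3.example.com and *.s3.example.com.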

5.8.3.3 Migration Layout

By default, migrated data is stored in the Standard migration layout within the object store. The standard layout supports encryption at rest.
Alternatively, migrated data may be stored in a manner that preserves the original filename information. This layout does not support encryption, and is subject to limitations such as path/filename length imposed by the object store. This option is useful in specific circumstances where data at a migration destination must be read directly by other applications. Files are stored under <bucket>/<partition>/FILES. FileFly-specific metadata is stored under <bucket>/<partition>/HDR and should not be made accessible to other applications.

5.8.4 Extended Metadata Fields

Please refer to §5.7.4 for S3 metadata field details.

5.9 Microsoft Azure Storage

5.9.1 Introduction

Microsoft Azure may be used as a migration destination only.
Azure traffic is encrypted in transit with TLS. Additionally, the plugin can employ client-side encryption to protect migrated data at rest.

5.9.2 Planning

Before proceeding with the installation, the following will be required:

  • a Microsoft Azure Account

  • a Storage Account within Azure – both General Purpose and Blob Storage (with Hot and Cool access tiers) account types are supported

  • a FileFly license that includes an entitlement to Microsoft Azure

5.9.2.1 Firewall

The HTTPS port (TCP port 443) must be allowed by any firewalls between the DataCore FileFly Gateway and the Internet.

5.9.3 Usage

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin will be enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options such as proxy and encryption, as well as Azure Storage Accounts. Once credentials have been supplied, click on the manage containers icon to create and view containers.
When the configuration is complete, click the 'create migration destination' icon next to the desired container.

5.9.3.1 Advanced Encryption Options

The 'Allow unencrypted filenames' option greatly increases performance when creating Recovery files from an Azure Destination. This is facilitated by recording stub filenames in Azure metadata in unencrypted form, even when encryption at rest is enabled.

5.10 Google Cloud Storage

5.10.1 Introduction

Google Cloud Storage is used only as a migration destination with FileFly.
Google Cloud Storage traffic is encrypted in transit with TLS. Additionally, the plugin can employ client-side encryption to protect migrated data at rest.

5.10.2 Planning

Before proceeding with the installation, the following will be required:

  • a Google Account

  • a FileFly license that includes entitlement to Google Cloud Storage

5.10.2.1 Firewall

The HTTPS port (TCP port 443) must be allowed by any firewalls between the DataCore FileFly Gateway and the Internet.

5.10.3 Storage Bucket Preparation

Using the Google Cloud Platform web console, create a new Service Account in the desired project for the exclusive use of FileFly. Create a P12 format private key for this Service Account. Record the Service Account ID and store the downloaded private key file securely for use in later steps.
Create a Storage Bucket exclusively for FileFly data.
For FileFly use, bucket names must:

  • be 3-40 characters long

  • contain only lowercase letters, numbers and dashes (-)

  • not begin or end with a dash

  • not contain adjacent dashes

Edit the bucket's permissions to add the new Service Account as a member with the 'Storage Object Admin' role.
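Where the Google Cloud SDK is installed, the bucket creation and permission steps may alternatively be scripted – a sketch, with hypothetical project, bucket, and Service Account names:

gsutil mb -p example-project gs://filefly-data-1
gsutil iam ch serviceAccount:filefly@example-project.iam.gserviceaccount.com:objectAdmin gs://filefly-data-1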

5.10.4 Usage

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin will be enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options such as proxy and encryption, as well as Google Storage Accounts. Once credentials have been supplied, click on the manage buckets icon to register previously created buckets.
When the configuration is complete, click the 'create migration destination' icon next to the desired bucket.

6 Disaster Recovery

6.1 Introduction

The FileFly DrTool application allows for the recovery of files where normal backup and restore procedures have failed. Storage backup recommendations and considerations are covered in §3.4.
It is recommended to regularly run a 'Create Recovery File From Source' Policy to generate an up-to-date list of source–destination mappings.
FileFly DrTool is installed as part of DataCore FileFly Tools.
Note: Community Edition licenses do not include FileFly DrTool functionality.

6.2 Recovery Files

Recovery files are normally generated by running 'Create Recovery File From Source' Policies in FileFly Admin Portal. To open a file previously generated by FileFly Admin Portal:

  1. Open DataCore FileFly DrTool from the Start Menu

  2. Go to File → Open From FileFly Admin Portal… → Recovery File From Source

  3. Select a Recovery file to open

Older versions of Recovery files may be found using the 'Recovery' page in FileFly Admin Portal.

6.3 Filtering Results

In FileFly DrTool, click Filter to filter results by source file properties. Filter options are described below.
Note: When a Filter is applied, Save only saves the filtered results.

6.3.0.1 Scheme Pattern

In the 'Scheme Pattern' field, use the name of the Scheme only (e.g. win, not win:// or win://servername). This field may be left blank to return results for all schemes.
This field matches against the scheme section of a URI:

  • {scheme}://{servername}/[{path}]

6.3.0.2 Server Pattern

In the 'Server Pattern' field, use the full server name or a wildcard expression.
This field matches against the servername section of a URI:

  • {scheme}://{servername}/[{path}]

Examples:

  • server65.example.com – will match only the specified server

  • *.finance.example.com – will match all servers in the 'finance' subdomain

6.3.0.3 File Pattern

The 'File Pattern' field will match either filenames only (and search within all directories), or filenames qualified with directory paths in the same manner as filename patterns in FileFly Admin Portal Rules – see Appendix A.
For the purposes of file pattern matching, the top-level directory is considered to be the top level of the entire URI path. This may be different from the top-level of the original Source URI.

6.3.0.4 Using the Analyze Button

Analyze assists in creating simple filters.

  1. Click Analyze

    • Analyze will display a breakdown by scheme, server, and file type

  2. Select a subset of the results by making a selection in each column

  3. Click Filter to create a filter based on the selection

6.4 Recovering Files

6.4.0.1 Selected Files

To recover files interactively:

  • Select the results for which files will be recovered

  • Click Edit → Recover File…

6.4.0.2 All Files

All files may be recovered either as a batch process using the command line (see §6.7) or interactively as follows:

  • Click Edit → Recover All Files…

Note: Missing folders will be recreated as required to contain the recovered files. However, these folders will not have ACLs applied to them so care should be taken when recreating folder structures in sensitive areas.

6.5 Recovering Files to a New Location

When recovering to a new location, always use an up-to-date Recovery file generated by a 'Create Recovery File From Source' Policy.
To rewrite source file URIs to the new location, use the -csu command line option to update the prefix of each URI. Once these URI substitutions have been applied (and checked in the GUI) files may be recovered as previously outlined. The -csu option is further detailed in §6.7.
Important: DO NOT create stubs in a new location and then continue to use the old location. To avoid incorrect reference counts, only one set of stubs should exist at any given time.

6.6 Updating Sources to Reflect Destination URI Change

In FileFly DrTool, source files may be updated to reflect a destination URI change through the use of the -cmu command line option – detailed in §6.7.
To apply the destination URI substitution to existing files on the source, select 'Update All Source Files…' from the Edit menu. When given the option, elect to update substituted entries only.
Note: This operation must always be performed using an up-to-date Recovery file generated by a 'Create Recovery File From Source' Policy.

6.7 Using FileFly DrTool from the Command Line

Important: DO NOT create stubs in a new location and then continue to use the old location. To avoid incorrect reference counts, only one set of stubs should exist at any given time.
Use an Administrator command prompt. By default, DrTool is located in:
C:\Program Files\DataCore FileFly\AdminTools\DrTool\

6.7.0.1 Interactive Usage

DrTool [Recovery file] [extra options]
Opens the FileFly DrTool in interactive (GUI) mode with the desired options and optionally opens a Recovery file.

6.7.0.2 Batch Usage

DrTool [<operation> <Recovery file>] [<options>]
Runs FileFly DrTool without a GUI to perform a batch operation on all entries in the input file.
Note: The Recovery file provided as input is usually created by saving (possibly filtered) results to the hard disk from the interactive DrTool GUI.

6.7.0.3 Command Line Options

  • operation – one of the following:

    • -recoverFiles

    • -updateSource

      • if combined with -cmu, only matching entries will be updated

    • -updateSourceAll

      • all entries will be updated, even when -cmu is specified

    • if operation is omitted, the GUI will be opened with any supplied options

  • Recovery file – the file to open

  • options (related to the operation) are:

    • -csu {from} {to} – changes the Source URI prefix; this option may be specified multiple times

    • -cmu {from} {to} – changes the Migrated URI prefix; this option may be specified multiple times

6.7.0.4 Examples

All the following examples are run from the FileFly DrTool directory.

  • DrTool -recoverFiles result.txt – recover all files from the result.txt file

  • DrTool -updateSource result.txt -cmu scsp://oldfqdn/ scsp://newfqdn/ – update existing files to point to a new storage location

  • DrTool -recoverFiles result.txt -csu win://old1/ win://new1/ -csu win://old2/ win://new2/ -cmu scsp://oldfqdn/ scsp://newfqdn/ – recover files to different servers and update the secondary storage location simultaneously

6.8 Querying a Destination

While it is strongly recommended to obtain Recovery files from a 'Create Recovery File From Source' Policy, where this has been overlooked it is possible to obtain Recovery files from the destination. However, some changes in the source file system, such as renames and deletions, may not be reflected in these results.

6.8.0.1 Querying the Destination from FileFly Admin Portal

Run a 'Create Recovery File From Destination' Policy, see §4.10.

Appendix A Pattern Matching Reference

This appendix details the specifics of the pattern-matching syntax for filename and owner patterns used in Rules (see §1.4.1).

A.1 Wildcard Patterns

The following wildcards are accepted:

  • ? – matches one character (except '/')

  • * – matches zero or more characters (except '/')

  • ** – matches zero or more characters, including '/'

  • /**/ – matches zero or more directory components

Literal commas within a pattern must be escaped with a backslash.
Examples of Supported Wildcard Patterns:

  • * – all filenames

  • *.doc – filenames ending with .doc (including '.doc')

  • ?*.doc – filenames ending with .doc (excluding '.doc')

  • *.do? – filenames matching *.doc, *.dot, *.dop, etc. but not e.g. *.docx

  • ???.* – filenames beginning with any three characters, followed by a period, followed by any number of characters

  • *\,* – filenames containing a comma

Examples of Using * and ** in Wildcard Patterns:

  • /*.doc – matches files ending with .doc directly within the Source URI location, but not within its subdirectories

  • public/* – matches all files directly within any directory named 'public'

  • public/** – matches all files at any depth within any directory named 'public'

  • public/**/*.pdf – matches all .pdf files at any depth within any directory named 'public'

  • /home/*.archived/** – matches the contents of any directory ending with '.archived' directly within the home directory (<Source URI>/home)

  • /*/public/** – matches all files at any depth within any directory named 'public' where the public directory is exactly one level deep within the Source

  • /*/**/public/** – matches all files at any depth within any directory named 'public' where the public directory is at least one level deep within the Source

A.1.1 Directory Exclusion Patterns

As shown above, wildcard patterns ending with '/**' match all files in a particular tree.
Directory inclusion and exclusion can also be performed using Subdirectory Filtering (see §1.4.2.1).
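For example, a pattern such as temp/** (directory name purely illustrative) matches – and may therefore be used to include or exclude – all files at any depth within any directory named 'temp'.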

A.2 Regular Expressions

More complex pattern matching can be achieved using regular expressions. Patterns in this format must be enclosed in a pair of '/' characters, e.g. /[a-z].*/
To assist with correctly matching file path components, the '/' character is only matched if used explicitly. Specifically:

  • . does NOT match the '/' char

  • the subpattern (.|/) is equivalent to the normal regex '.' (i.e. ALL characters)

  • [^abc] does NOT match '/' (i.e. it behaves like [^/abc])

Additionally,

  • Commas must be escaped with a backslash

  • Patterns are matched case-insensitively

To improve readability, it is recommended to avoid regex matching where wildcard matching is sufficient.
Examples of Regular Expressions:

  • /.*/ – all filenames

  • /.*\.doc/ – filenames ending with .doc

  • /~[w$].+/ – filenames beginning with ~w or ~$, followed by one or more characters

  • /.*\.[0-9]{3}/ – filenames with an extension of three digits

  • /public/.+\.html?/ – .htm and .html files directly within any 'public' directory

  • /public/(.|/)*/ – equivalent to wildcard pattern public/**

  • /public/((.|/)+/)?index\.html/ – equivalent to public/**/index.html

Appendix B Network Ports

The default ports required for FileFly operation are listed below.

B.1 FileFly Tools

The following ports must be free before installing FileFly Tools:

  • 443 (Admin Portal web interface – configurable during installation)

  • 8005

The following ports are used for outgoing connections:

  • 4604-4609 (inclusive)

Any firewall should be configured to allow incoming and outgoing communication on the above ports.

B.2 FileFly Agent / FileFly FPolicy Server / FileFly LinkConnect Server

The following ports must be free before installing FileFly server components:

  • 4604-4609 (inclusive)

Any firewall should be configured to allow incoming and outgoing communication on the above ports.
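
For example, on Windows Server the agent ports could be opened with a PowerShell firewall rule along these lines (a sketch only – the rule name is illustrative and local security policy should take precedence):
New-NetFirewallRule -DisplayName "DataCore FileFly Agent" -Direction Inbound -Action Allow -Protocol TCP -LocalPort "4604-4609"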

B.2.0.1 Other Ports

FileFly plugins may require other ports to be opened in any firewalls to access storage devices/services from FileFly Gateway machines.
Please consult specific device or service documentation for further information.

Appendix C Admin Portal Security Configuration

C.1 Updating the Admin Portal TLS Certificate

The webserver TLS certificate may be updated using the following procedure:

  1. Go to C:\Program Files\DataCore FileFly\AdminTools\

  2. Run Update Webserver Certificate

  3. Provide a PKCS#12 certificate and private key pair

Important: the new certificate MUST match the original Admin Portal FQDN specified at install time.
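
If the certificate and private key are held as separate PEM files, a suitable PKCS#12 file can be produced with OpenSSL, for example (filenames are illustrative):
openssl pkcs12 -export -in server.crt -inkey server.key -out admin-portal.p12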

C.2 Password Reset

Normally, the administration password is changed on the 'Settings' page as needed.
However, should the system administrator forget the username or password entirely, the credentials may be reset as follows:

  1. Go to C:\Program Files\DataCore FileFly\AdminTools\

  2. Run Reset Web Password

  3. Follow the instructions to provide new credentials

Note: If FileFly Admin Portal has been configured to use LDAP for authentication (e.g. to use Active Directory login), then passwords should be changed/reset by the directory administrator – this section applies only to local credentials configured during installation.

C.3 Authentication with Active Directory

Active Directory authentication is configured during the installation of FileFly Tools.

Appendix D Service Probe

To remotely test whether the DataCore FileFly Webapps service is responding, perform an HTTP GET request on the following resource:
https://<serverFQDN>[:<port>]/ffap/probe
For example, to probe with curl:
curl -i -k 'https://server.example.com/ffap/probe'
If the service is operating normally, it will respond with 200 OK.
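For example, to probe from PowerShell 7 or later (the server name is illustrative; -SkipCertificateCheck corresponds to curl's -k):
Invoke-WebRequest -Uri 'https://server.example.com/ffap/probe' -SkipCertificateCheck | Select-Object StatusCode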

Appendix E Advanced FileFly Agent Configuration

FileFly Agents may be configured on a per-server basis using the Admin Portal 'Servers' page.
When the configuration options are saved, they are pushed to the target server to be loaded on the next service restart. In the case of a cluster, all nodes will receive the same updated configuration.

E.0.0.1 Logging

Log location and rotation options may be adjusted if required. Debug mode may impact performance and should only be enabled following advice from DataCore Support.
Additionally, FileFly can be configured to send UDP syslog messages in either RFC5424 or RFC3164 format. Syslog output is not enabled by default.

E.0.0.2 Stub Deletion Monitoring

As described in §5.1.6, on Windows file systems, FileFly can monitor stub deletion events in order to make corresponding secondary storage files eligible for removal using Scrub Policies.
This feature is not enabled by default. It must be enabled on a per-volume basis, either by specifying volume GUIDs (preferred) or drive letters. Volume GUIDs may be determined by running the Windows mountvol command or the PowerShell command Get-WmiObject -Class win32_volume. For Windows clustered volumes, the cluster volume must be specified using a volume GUID.
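For example, mount points and their corresponding volume GUID paths may be listed with:
Get-WmiObject -Class win32_volume | Select-Object Name, DeviceID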
Note: This feature should not be configured to monitor events on backup destination volumes. In particular, some basic backup tools such as Windows Server Backup copy individual files to VHDX backup volumes in a manner which is not supported and so such volumes must not be configured for Stub Deletion Monitoring. Of course, deletions may still be monitored on source data volumes.

E.0.0.3 Parallelization Tuning

When a Policy is executed on a Source, operations will automatically be executed in parallel. Parallelization parameters may be tuned for each Server if necessary.

E.0.0.4 Deny Demigrations

Applications may be denied the right to demigrate stubs. Such an application – specified either by application binary name or full path – will be unable to access a stub and demigrate the file contents (an error will be returned to the application instead).
Note: Only local applications (applications running directly on the file server) may be blocked.

E.0.0.5 Enabled / Available Plugins

Storage plugins may be configured and enabled as necessary for each server. For plugin-specific details, refer to the appropriate section of Chapter 5.

E.0.0.6 Manual Overrides

Additional options may be manually entered as specified elsewhere by the product documentation or under the direction of a FileFly support engineer.

E.0.0.7 Upload Configuration

Under some circumstances, it may be necessary to upload a configuration file under the direction of a FileFly support engineer. The configuration for this server will be REPLACED in its entirety.

Appendix F Troubleshooting

Before contacting DataCore Support, please review log files for error messages.

F.1 Log Files

F.1.0.1 Admin Portal Logs

Admin Portal logs describing each attempted policy operation are accessed through the Recent Tasks panel on the 'Dashboard'. This is the first place to look when investigating a Policy problem.
The FileFly Admin Portal also maintains a 'Global Log' (accessible from the 'Help' page) which summarizes Policy start/stop activity.
For other issues, including failure of user-initiated demigrations, it will often be necessary to consult the FileFly Agent logs on the servers in question.

F.1.0.2 Server Statistics

In addition to log files, FileFly Admin Portal provides per-cluster and per-node charts of operation successes and failures on each Server's 'Server Details' page. This includes information about failed demigrates over time, which may be useful in conjunction with a Server's log files when troubleshooting user-initiated demigration issues.

F.1.0.3 FileFly Agent Logs

Location: C:\Program Files\DataCore FileFly\logs\FileFly Agent
There are two types of FileFly Agent log file. The agent.log contains all FileFly Agent messages, including startup, shutdown, and error information, as well as details of each individual file operation (migrate, demigrate, etc.). Use this log to determine which operations have been performed on which files and to check any errors that may have occurred.
The messages.log contains a subset of the FileFly Agent messages, related to startup, shutdown, critical events and system-wide notifications.
Log messages in both logs are prefixed with a timestamp and thread tag. The thread tag (e.g. <A123>) can be used to distinguish messages from concurrent threads of activity.
Log files are regularly rotated to keep the size of individual log files manageable. Old rotations are compressed as gzip (.gz) files, and can be read using many common tools such as 7-zip, WinZip, or zless. To adjust logging parameters, including how much storage to allow for log files before removing old rotations, see Appendix E.
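For example, messages belonging to a single thread of activity can be isolated with findstr (the thread tag shown is illustrative):
findstr /C:"<A123>" agent.log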
Log information for operations performed as the result of an Admin Portal Policy will also be available using the web interface.

F.1.0.4 DrTool Logs

Location: C:\Program Files\DataCore FileFly\logs\DrTool
FileFly DrTool operations such as recovering files are logged in this location. FileFly DrTool will provide the exact name of the log file in the interface.

F.2 Interpreting Errors

Logged errors are typically recorded in an 'error tree' format, which enables administrators to diagnose issues in the environment or configuration, while also providing sufficient detail for further investigation by support engineers if necessary.
Error trees are structured to show WHAT failed, and WHY, at various levels of detail. This section provides a rough guide to extracting the salient features from an error tree.
Each numbered line consists of the following fields:

  • WHAT failed – e.g. a migration operation failed

  • WHY the failure occurred – the '[ERR_ADD…]' code

  • optionally, extra DETAILS about the failure – e.g. the path to a file

As can be seen in the example below, most lines only have a WHAT component, as the reason is further explained by the following line.

F.2.0.1 A Simple Error

ERROR demigrate win://server.test/G/source/data.dat
[0] ERR_DMAGENT_DEMIGRATE_FAILED [] []
[1] ERR_DMMIGRATESUPPORTWIN_DEMIGRATE_FAILED [] []
[2] ERR_DMAGENT_DEMIGRATEIMP_FAILED [] []
[3] ERR_DMAGENT_COPYDATA_FAILED [] []
[4] ERR_DMSTREAMWIN_WRITE_FAILED [ERR_ADD_DISK_FULL] [112: There is not enough space on the disk (or a quota has been reached).]
To expand the error above into English:

  • demigration failed for the file: win://server.test/G/source/data.dat

  • because copying the data failed

  • because one of the writes failed with a disk full error

    • the full text of the Windows error (112) is provided

So, the G: drive on server.test is full (or a quota has been reached).

F.2.0.2 Errors with Multiple Branches

Some errors result in further action being taken which may itself fail. Errors with multiple branches are used to convey this to the administrator. Consider an error with the following structure:
[0] ERR...
[1] ERR...
[2] ERR...
[3] ERR...
[4] ERR...
[5] ERR...
[6] ERR...
[3] ERR...
[4] ERR...
[5] ERR...
Whatever ultimately went wrong in line 6 caused the operation in question to fail. However, the function at line 2 chose to take further action following the error – possibly to recover from the original error or to clean up after it. This action also failed, the details of which are given by the additional errors in lines 3, 4, and 5 at the end.

F.2.0.3 Check the Last Line First

For many errors, the most salient details are to be found in the last line of the error tree (or the last line of the first branch of the error tree). Consider the following last line:
[11] ERR_DMSOCKETUTIL_GETROUNDROBINCONNECTEDSOCKET_FAILED [ERR_ADD_COULD_NOT_RESOLVE_HOSTNAME] [host was [svr1279.example.com]]
It is fairly clear that this error represents a failure to resolve the server hostname svr1279.example.com. As with any other software, the administrator's next steps will include checking the spelling of the DNS name, the server's DNS configuration, and whether the hostname is indeed present in DNS.
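For example, resolution of the hostname can be verified directly from the affected server:
nslookup svr1279.example.com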

F.3 Contacting Support

If an issue cannot be resolved after reviewing the logs, contact DataCore Support at: https://support.datacore.com/swarm
Include the following information in any support request (where possible):

  1. DataCore FileFly version

  2. Swarm version

  3. A description of the issue – be sure to include:

    • how long the issue has been present

    • how regularly the issue occurs

    • any changes made to the environment or configuration

    • any specific circumstances which trigger the issue

    • whether the issue occurs for a particular file and/or server

  4. Operating System(s)

  5. Saved system info (.NFO) from msinfo32.exe (see the command line example after this list)

  6. Plugins in use

  7. Source and Destination URIs

  8. Applicable Log Files

    • see Appendix F.1 for log locations

    • include Admin Portal logs

    • include source agent logs

    • include destination/gateway agent logs

    • include all nodes in each agent cluster

    • zip the entire log folders wherever possible

  9. A system configuration file (support.zip) generated from the Admin Portal 'Help' page

  10. Any other error messages – include screenshots if necessary
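
For item 5, the .NFO file may also be saved from the command line, for example (output path illustrative):
msinfo32 /nfo C:\temp\filefly-system.nfo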

Important: Failure to include all relevant details will delay the resolution of your issue.

Appendix G Glossary

ACL - Access Control List; file/folder/share level metadata encapsulating permissions granted to users or other entities
CA - Certificate Authority; specifically an X.509 certificate which issues (signs) other certificates such that during certificate validation a chain of trust may be established by verifying the signatures along the certificate chain up to a trusted Root CA certificate, e.g. to facilitate secure connection to a webserver - see also Root CA
Caretaker - a designated node within a cluster that performs maintenance tasks which must run on a single node at a time
CLP - Counterfeit Link Protection
Demigrate - to return migrated file content data to its original location, e.g. in response to user access
DFS - Microsoft's Distributed File System; comprised of DFSN and DFSR
DFSN - DFS Namespace; a Windows mechanism allowing for the presentation of multiple SMB shares as a single logical share
DFSR - DFS Replication; an SMB share-based file replication technology, see also Storage Replica as an alternative from Windows Server 2016 onwards
DR - Disaster Recovery
Enterprise CA - a privately-created Root Certificate Authority, promulgated as a trusted Root CA across an organization
FPolicy - a component of NetApp Data ONTAP that enables the extension of native Filer functionality by other applications
FPolicy Server - a server that connects to a NetApp Filer using the FPolicy protocol in order to provide extended functionality
FQDN - Fully Qualified Domain Name, e.g. server1.example.com
GUID - A globally unique identifier
HA - High-Availability; specifically the provision of redundant instances of a resource in a manner that guarantees the availability of service, even in the event of the failure of a particular instance
LDM - Link Deletion Monitoring
Link-Migrate - to transparently relocate file content data to secondary storage, replacing the original file with a MigLink
MigLink - a placeholder for a file that has been Link-Migrated; applications accessing the MigLink are transparently redirected to the corresponding FileFly LinkConnect Server to facilitate data access
Migrate - to transparently relocate file content data to secondary storage without removing the file itself; the existing file becomes a stub
MWI file - a file on secondary storage that encapsulates the file content data of a corresponding primary storage stub file or MigLink
NTP - Network Time Protocol, a protocol for clock synchronization between computer systems over a network
Quick-Remigrate - to quickly return a previously demigrated (but unmodified) file back to its migrated state without the need to re-transfer file content data
Recovery File - a text file describing the relationships between stubs/MigLinks and their corresponding MWI files
Root CA - a Certificate Authority at the end (root) of a chain of certificates; a Root CA is self-signed and must be trusted per se by the validating server (e.g. by inclusion in the computer's Trusted Root Certificate Authorities store)
Scheduler - the Admin Portal component responsible for starting scheduled Tasks
SDM - Stub Deletion Monitoring
Self-Signed Certificate - an X.509 certificate which is not attested to by another Certificate Authority, i.e. its Issuer is the same as its Subject; such certificates include Root CAs as well as 'standalone' self-signed server certificates such as may be created automatically during an application's installation process. Self-signed server certificates should generally be replaced with properly issued certificates from a trusted source.
Storage Replica - a Windows Server volume replication technology offering synchronous or asynchronous replication modes
Stub - a file whose content data has been transparently migrated to a secondary storage location
Syslog - a protocol used to send system log or event messages, e.g. to a centralized Syslog collector
TLS - Transport Layer Security; a protocol used for establishing secure connections between servers (formerly known as SSL)
UUID - A universally unique identifier

© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.