
version 4.0 | document revision 1

...

FileFly FPolicy Server provides migration support for NetApp filers via the NetApp FPolicy protocol. This component is the equivalent of DataCore FileFly Agent for NetApp filers.
FileFly FPolicy Server may also be configured for High-Availability (HA).

...

A Task schedules one or more Policies for execution. Tasks can be scheduled to run at specific times, or can be run on-demand via the Quick Run control on the 'Dashboard'.
While a Task is running, its status is displayed in the 'Running Tasks' panel of the 'Dashboard'. When Tasks finish they are moved to the 'Recent Tasks' panel.
Operation statistics are updated in real time as the Task runs. Operations are automatically executed in parallel; see E for more details.
If multiple Tasks are scheduled to start simultaneously, Policies on each Source are grouped such that only a single traversal of each file system is required.
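As a purely illustrative sketch of this grouping behavior (the data shapes and function name here are hypothetical, not part of the FileFly API), Policies scheduled at the same time can be bucketed by Source so that each file system is traversed only once:

```python
from collections import defaultdict

def group_policies_by_source(scheduled):
    """Group (source, policy) pairs so each Source is traversed once.

    `scheduled` is a list of (source_uri, policy_name) tuples; both
    names are illustrative, not part of the product's API.
    """
    by_source = defaultdict(list)
    for source_uri, policy_name in scheduled:
        by_source[source_uri].append(policy_name)
    # One traversal per Source, evaluating all of its Policies together.
    return dict(by_source)
```

With this grouping, two Policies targeting the same Source cost only a single traversal of that file system.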

...

The 'Dashboard' provides a concise view of the FileFly system status, current activity and recent task history. It may also be used to run Tasks on-demand via the Quick Run control.
The 'Notices' panel, displayed on the expandable graph bar, summarizes system issues that need to be addressed by the administrator. This panel will guide you through initial setup tasks such as license installation.
The circular 'Servers' display shows high-level health information for the servers / clusters in the FileFly deployment.

...

The 'Processed' line chart graphs both the rate of operations successfully performed and data processed over time. Data transfer and bytes Quick-Remigrated (i.e. without any transfer required) are shown separately.
The 'Operations' breakdown chart shows successful activity by operation type across the whole system over time. Additionally, per-server operations charts are available via the 'Servers' page – see 1.4.1.
The 'Operations' radar chart shows a visual representation of the relative operation profile across your deployment. Two figures are drawn, one for each of the two preceding 7-day periods. This allows behavioral change from week to week to be seen at a glance.

...

  • A dedicated server with a supported operating system:

    • Windows Server 2019

    • Windows Server 2016

    • Windows Server 2012 R2

    • Windows Server 2012

  • Minimum 4GB RAM

  • Minimum 2GB disk space for log files

  • Active clock synchronization (e.g. via NTP)

2.1.0.2 Setup

...

After completing the installation process, FileFly Tools must be configured via the Admin Portal web interface. The FileFly Admin Portal will be opened automatically and can be found later via the Start Menu.
The web interface will lead you through the process of initial configuration: refer to the 'Notices' panel on the 'Dashboard' to verify all steps are completed.

...

  • Supported Windows Server operating system:

    • Windows Server 2019

    • Windows Server 2016

    • Windows Server 2012 R2

    • Windows Server 2012

  • Minimum 4GB RAM

  • Minimum 2GB disk space for log files

  • Active clock synchronization (e.g. via NTP)

Note: When installed in the Gateway role, a dedicated server is required, unless it is to be co-located on the FileFly Tools server. When co-locating, create separate DNS aliases to refer to the Gateway and the FileFly Admin Portal web interface.

...

  1. Run the DataCore FileFly Agent.exe

  2. Follow the instructions to activate the agent via the FileFly Admin Portal

2.2.3 DataCore FileFly FPolicy Server for NetApp Filers

...

  • A dedicated server with a supported operating system:

    • Windows Server 2019

    • Windows Server 2016

    • Windows Server 2012 R2

    • Windows Server 2012

  • Minimum 4GB RAM

  • Minimum 2GB disk space for log files

  • Active clock synchronization (e.g. via NTP)

2.2.3.2 Setup

...

  • A dedicated server with a supported operating system:

    • Windows Server 2019

    • Windows Server 2016

  • Minimum 2GB disk space for log files (on the system volume)

  • Minimum 1TB disk space for LinkConnect Cache (as a single NTFS volume)

  • RAM: 8GB base, plus:

    • 4GB per TB of LinkConnect Cache

    • 0.5GB per billion link-migrated files

  • Active clock synchronization (e.g. via NTP)

2.2.4.2 Setup

Installation of the FileFly LinkConnect Server software requires careful configuration of both the NAS / file server and the FileFly LinkConnect Server machines. Instructions are provided in 5.4 for OneFS and 5.2 for Windows file servers. Other devices are not supported.

...

Requires: Source(s), Rule(s), Destination
Included in Community Edition: yes
Migrate file data from selected Source(s) to a Destination. Stub files remain at the Source location as placeholders until files are demigrated. File content is transparently demigrated (returned to primary storage) when accessed by a user or application. Stub files retain the original logical size and file metadata. Files containing no data are not migrated.
Each Migrate operation is logged as a Migrate, Remigrate, or Quick-Remigrate.
A Remigrate is the same as a Migrate, except that it records that a previous version of the file was migrated in the past; stored data pertaining to that previous version is no longer required, and so is eligible for removal via a Scrub policy.
A Quick-Remigrate occurs when a file has been demigrated and NOT modified. In this case it is not necessary to retransfer the data to secondary storage so the operation can be performed very quickly. Quick-remigration does not change the secondary storage location of the migrated data.
Optionally, quick-remigration of files demigrated within a specified number of days may be prevented. This option can be used to avoid overly aggressive quick-remigration.
Additionally, this policy may be configured to pause during the globally configured work hours.
Note: For Sources using a FileFly LinkConnect Server, such as Dell EMC OneFS shares, see 4.9 instead.
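As an illustration only (not the product's internal logic), the decision among Migrate, Remigrate and Quick-Remigrate, including the optional recent-demigration hold-off, can be sketched as follows; all function and parameter names here are hypothetical:

```python
def classify_migrate(previously_migrated, modified_since_demigration,
                     days_since_demigration=None, holdoff_days=0):
    """Return the operation logged for a file selected by a Migrate policy.

    Illustrative decision logic only, derived from the manual's
    description -- not the FileFly implementation.
    """
    if not previously_migrated:
        return "Migrate"
    if modified_since_demigration:
        # Content changed: data must be retransferred; the previous
        # secondary copy becomes eligible for a later Scrub policy.
        return "Remigrate"
    if days_since_demigration is not None and days_since_demigration < holdoff_days:
        # Optional hold-off: skip files demigrated too recently.
        return None
    # Unmodified since demigration: no data transfer is required.
    return "Quick-Remigrate"
```

Note that only the Remigrate branch implies a retransfer of file content; Quick-Remigrate restores the migrated state without moving data.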

...

FileFly supports Microsoft Storage Replica.
If Storage Replica is configured for asynchronous replication, a disaster failover effectively reverts the volume to a previous point in time. As such, this kind of failover is directly equivalent to a volume restore operation (albeit to a very recent state).
As with any restore, a Post-Restore Revalidate Policy (see 4.5) should be run across the restored volume within the scrub grace period window. This verifies correct operation of any future scrub policies by accounting for discrepancies between the demigration state of the files on the (failed) replication source volume and the replication destination volume.
Important: integrate this process into your recovery procedures prior to production deployment of asynchronous storage replication.

...

DFSN is supported. FileFly Sources must be configured to access volumes on individual servers directly rather than through a DFS namespace. Users and applications may continue to access files and stubs via DFS namespaces as normal.

...

On Windows 8 or above, VHD and ISO images may be mounted as normal drives using the PowerShell Mount-DiskImage cmdlet. This functionality can also be accessed via the Explorer context menu for an image file.
A known limitation of this cmdlet is that it does not permit sparse files to be mounted (see Microsoft KB2993573). Since migrated image files are always sparse, they must be demigrated prior to mounting. This can be achieved either by copying the file or by removing the sparse flag with the following command:
fsutil sparse setflag <file_name> 0

...

On Windows, the FileFly Agent can monitor stub deletions to identify secondary storage files that are no longer referenced in order to maximize the usefulness of Scrub Policies. This feature extends not only to stubs that are directly deleted by the user, but also to other cases of stub file destruction such as overwriting a stub or renaming a different file over the top of a stub.
Stub Deletion Monitoring is disabled by default. To enable it, please refer to E.

Anchor
platformref:winNAS
platformref:winNAS
5.2 Microsoft Windows

...

using FileFly LinkConnect Server

This section details the configuration of a DataCore FileFly LinkConnect Server to enable Link-Migration of files from Windows Server SMB shares. This option should be used when it is not possible to install DataCore FileFly Agent directly on the Windows file server in question. For other cases – where FileFly Agent can be installed on the server – please refer to 5.1.
Refer to 4.2 and 4.9 for details of the Migrate and Link-Migrate operations respectively.
Link-Migration works by pairing a Windows SMB share with a corresponding LinkConnect Cache Share. Typically a top-level share on each Windows file server volume is mapped to a unique share (or subdivision) on a FileFly LinkConnect Server. Multiple file server shares may use Cache Shares / subdivisions on the same FileFly LinkConnect Server if desired.
Once this configuration is completed, Link-Migrate policies convert files on the source Windows Server SMB share to links pointing to the destination files via using the LinkConnect Cache Share, according to configured rules.
Link-Migrated files can be identified by the 'O' (Offline) attribute in Explorer. Depending on the version of Windows, files with this flag may be displayed with an overlay icon.

...

If clients access the storage via nested sub-shares rather than only the top-level configured MigLink Source share, the known sub-shares should be added as follows:

...

Migration support for sources on NetApp Vservers (Storage Virtual Machines) is provided via NetApp FPolicy. This requires the use of a DataCore FileFly FPolicy Server. Client demigrations can be triggered via SMB or NFS client access.
Please note that NetApp Filers currently support FPolicy for Vservers with FlexVol volumes but not Infinite volumes.
When accessed via SMB on a Windows client, NetApp stub files can be identified by the 'O' (Offline) attribute in Explorer. Files with this flag may be displayed with an overlay icon. The icon may vary depending on the version of Windows on the client workstation.

...

DataCore FileFly FPolicy Servers require EXCLUSIVE use of SMB connections to their associated NetApp Vservers. This means Explorer windows must not be opened, drives must not be mapped, nor should any UNC paths to the filer be accessed from the FileFly FPolicy Server machine. Failure to observe this restriction results in unpredictable FPolicy disconnections and interrupted service.
When creating a production deployment plan, please refer to 3.5.

...

Verify that Windows Defender or any other antivirus product installed on FileFly FPolicy Server machines is configured to omit scanning/screening of NetApp shares.
Antivirus access to NetApp files will interfere with the correct operation of the FileFly FPolicy Server software. Antivirus protection should still be provided on client machines and/or the NetApp Vservers themselves as normal.

...

It is strongly recommended to install DataCore FileFly FPolicy Servers in a High-Availability configuration. This configuration requires the installation of DataCore FileFly FPolicy Server on a group of machines which are addressed by a single FQDN. This provides High-Availability for migration and demigration operations on the associated Vservers.
Typically, a pair of FileFly FPolicy Servers operating in HA will service all of the Vservers on a NetApp cluster.
Note: The servers that form the High-Availability FileFly FPolicy Server configuration must not be members of a Windows failover cluster.

...

For each Vserver, verify 'Management Access' is allowed for at least one network interface. Check the network interface in OnCommand System Manager – if Management Access is not enabled, create a new interface just for Management Access. Note that using the same interface for management and data access may cause firewall problems.
Management authentication may be configured to use either passwords or client certificates. Management connections may be secured via TLS – this is mandatory when using certificate-based authentication.
For password-based authentication:

...

If it has not already been created, create the SMB Privileged User on the domain. Each FileFly FPolicy Server uses the same SMB Privileged User for all Vservers that it manages.
Open a command line session to the cluster management address:

...

Users cannot perform self-service restoration of stubs. However, an administrator may restore specific stubs or sets of stubs from snapshots by following the procedure outlined below. Be sure to provide this procedure to all administrators.
IMPORTANT: The following instructions mandate the use of Robocopy specifically. Other tools, such as Windows Explorer copy or the 'Restore' function in the 'Previous Versions' dialog, WILL NOT correctly restore stubs.
To restore one or more stubs from a snapshot-folder like:
\\<filer>\<share>~snapshot\<snapshot-name>\<path>
to a restore folder on the same Filer like:
\\<filer>\<share>\<restore-path>
perform the following steps:

...

Unix symbolic links (also known as symlinks or softlinks) may be created on a Filer via an NFS mount. Symbolic links are not seen during FileFly Policy traversal of a NetApp file system (since only shares which hide symbolic links are supported for traversal). If it is intended that a policy should apply to files within a folder referred to by a symbolic link, verify the Source encompasses the real location at the link's destination. A Source URI may NOT point to a symbolic link – use the real folder that the link points to instead.
Client-initiated demigrations via symbolic links will operate as expected.

5.3.7.2 QTree and User Quotas

...

5.3.7.3 Snapshot Traversal

FileFly automatically skips snapshot directories when traversing shares using the netapp scheme.

...

OneFS does not provide an interface for performing FileFly stub-based migration. As an alternative, FileFly provides a link-based migration mechanism via a FileFly LinkConnect Server. See 4.9 for details of the Link-Migrate operation.
Link-Migration works by pairing a OneFS SMB share with a corresponding LinkConnect Cache Share. Typically a top-level share on each OneFS device is mapped to a unique share (or subdivision) on a FileFly LinkConnect Server. Multiple OneFS systems may use shares/subdivisions on the same FileFly LinkConnect Server if desired.
Once this configuration is completed, Link-Migrate policies convert files on the source OneFS share to links pointing to the destination files via the LinkConnect Cache Share, according to configured rules.
Link-Migrated files can be identified by the 'O' (Offline) attribute in Explorer. Depending on the version of Windows, files with this flag may be displayed with an overlay icon.

...

Each configured MigLink source is periodically scanned to perform maintenance tasks such as MigLink ACL propagation and Link Deletion Monitoring (see below).
In an HA configuration, this scanning activity is performed by a single caretaker node, as can be seen on the Admin Portal 'Servers' page. A standalone FileFly LinkConnect Server always performs the caretaker role.

...

Link Deletion Monitoring (LDM) identifies secondary storage files that are no longer referenced in order to facilitate recovery of storage space by Scrub Policies. This feature extends not only to MigLinks that are demigrated or directly deleted by the user, but also to other cases such as overwriting a MigLink or renaming a different file over the top of a MigLink.
Unlike SDM, LDM requires a number of maintenance scans to determine that a given secondary storage file is no longer referenced. Note that interrupting the maintenance process (e.g. by restarting the caretaker node or transitioning the caretaker role) delays the detection of unreferenced secondary storage. For optimal and timely storage space recovery, verify LinkConnect Servers can run uninterrupted for extended periods.
Warning: in order to avoid LDM incorrectly identifying files as deleted – leading to unwanted data loss during Scrub – it is critical to verify users cannot move/rename MigLinks out of the scanned portion of the directory tree within the filesystem. This can be achieved by always creating the share used for your 'miglinkSource' at the root of the filesystem. An additional share may be created solely for this purpose.
To utilize LDM, it must first be enabled on a per-share basis.

...

  • miglinkSourceType must be set to exactly isilon

  • MAPPING_NUMBER starts at 0 for the first share mapping in this file – mappings must be numbered consecutively

  • ONEFS_FQDN/ONEFS_SHARE describes the OneFS share to be mapped

  • CACHE_SHARE is a LinkConnect Cache Share name (created above)

    • this value is CASE-SENSITIVE

  • SUBDIV must be the single decimal digit 1

  • SECRET_KEY is at least 40 ASCII characters – this key protects against counterfeit link creation

    • recommendation: use a password generator with 64 'All Chars'

  • linkDeletionMonitoring.enabled may be set to true or false to enable/disable Link Deletion Monitoring on this share – see warning above
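Since SECRET_KEY must contain at least 40 ASCII characters, a key can be generated with a cryptographically secure random source. The sketch below is illustrative: the helper name and the exact punctuation set are assumptions, so confirm which characters your configuration file accepts before use.

```python
import secrets
import string

def generate_secret_key(length=64):
    """Generate a random ASCII key suitable for a SECRET_KEY setting.

    The manual requires at least 40 ASCII characters; 64 gives extra
    margin (matching the '64 All Chars' recommendation). The punctuation
    subset below is an assumption, not a documented requirement.
    """
    if length < 40:
        raise ValueError("SECRET_KEY must be at least 40 characters")
    alphabet = string.ascii_letters + string.digits + "!#$%&()*+,-./:;<=>?@[]^_{|}~"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```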

If clients access the storage via nested sub-shares rather than only the top-level configured MigLink Source share, the known sub-shares should be added as follows:

...

This list of sub-shares can be updated later as more subdirectories are shared. Where MigLink access occurs on unexpected shares, warnings are written to the LinkConnect agent.log.
Save the configuration and restart the DataCore FileFly Agent service.
Important: Refer to 3.3.1 to verify the configuration on this FileFly LinkConnect Server is included in your backup. If the FileFly LinkConnect Server needs to be rebuilt, the secret key is required to enable previously link-migrated files to be securely accessed.

...

  1. Add a DFSN namespace:

    • the namespace must not be hosted on a LinkConnect node

    • the namespace name must match the LinkConnect Cache Share name exactly (including case)

    • the namespace must be 'Domain-based'

  2. Add a folder to the namespace:

    • folder name must be of the form: SUBDIV_MwClC_1 e.g. 1_MwClC_1

    • Add folder target:

      • \\NODE\CACHE_SHARE\SUBDIV_MwClC_1

      • where NODE is a LinkConnect node which exports CACHE_SHARE

      • where CACHE_SHARE matches the namespace name exactly (including case)

      • where SUBDIV_MwClC_1 matches the new folder name exactly (including case)

      • the folder target already exists – it was created by the FileFly LinkConnect Server in the previous section

      • DO NOT enable replication

    • For HA configurations, add additional targets to the same folder for the remaining LinkConnect node(s)
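As an informal check of the naming convention above, the folder name and folder target can be derived from NODE, CACHE_SHARE and SUBDIV; the helper name below is illustrative only:

```python
def dfsn_folder_and_target(node, cache_share, subdiv):
    """Build the DFSN folder name and folder target UNC path.

    Follows the convention described above: folder name SUBDIV_MwClC_1
    and target \\NODE\CACHE_SHARE\SUBDIV_MwClC_1. Remember that
    CACHE_SHARE is case-sensitive and must match the namespace name.
    """
    folder = f"{subdiv}_MwClC_1"
    target = rf"\\{node}\{cache_share}\{folder}"
    return folder, target
```

For example, with SUBDIV 1, a node `lc01` and cache share `LCCACHE`, the folder is `1_MwClC_1` and the target is `\\lc01\LCCACHE\1_MwClC_1` (the node and share names are hypothetical).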

...

The LinkConnect configuration, including the secret key, for each FileFly LinkConnect Server is synchronized with the FileFly Admin Portal. These details are part of the Admin Portal configuration backup.
However, in rare cases where the keys have been completely lost and a DataCore FileFly LinkConnect Server needs to be rebuilt, it is possible to temporarily disable the Counterfeit Link Protection (CLP) and re-sign all links with a new key. To enable this behavior, recreate the configuration as above (with a new secret key), and add a line similar to the following:

...

Regular scanning of the configured share mapping updates all scanned links to use the new key, and any user-generated access to these links functions without verifying the signatures until the configured cutoff time, specified as Zulu Time (GMT). For a large system, it may be necessary to allow several days before the cutoff to enable the key update to complete. Users may continue to access the system during this period.

...

Before proceeding with the installation, the following are required:

  • Cloud Gateway 3.0.0 or above

  • Swarm 8 or above

  • a license that includes an entitlement for Swarm

...

The TCP port used to access the Swarm Content Gateway via HTTP or HTTPS must be allowed by any firewalls between the DataCore FileFly Gateway and the Swarm endpoint. For further information regarding firewall configuration, see 8.

...

In order to utilize an HTTPS endpoint, the endpoint's Root CA certificate must be trusted by the relevant FileFly components. In most cases the Root CA is already trusted as a pre-installed public root or an enterprise-deployed CA. Where this is not the case, install the Root CA (or self-signed certificate) in the Local Computer Trusted Root Certification Authorities store on each Gateway and the Admin Portal machine.

...

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin is to be enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options such as proxy and encryption, as well as Domain credentials. Swarm Destinations require an index to be created prior to use. Once credentials have been supplied, click Create new index to create a new index and corresponding migration Destination.
Additional indexes can be added at a later date to further subdivide storage if required. Multiple migration destinations may be created in the same bucket by specifying different partition names.
Important: Each FileFly Admin Portal must have its own destination indexes; DO NOT share indexes across multiple FileFly implementations.

...

URIs created on previous versions of FileFly using the cloudscaler scheme will continue to function as expected. Existing destinations should NOT be updated to use the scsp scheme. The cloudscaler scheme is an alias for the scsp scheme.

...

  • X-Alt-Meta-Name – the original source file's filename (excluding directory path)

  • X-Alt-Meta-Path – the original source file's directory path (excluding the filename) in a platform-independent form: '/' is used as the path separator, the path starts with '/', followed by drive/volume/share if appropriate, and does not end with '/' (unless the path represents the root directory)

  • X-FileFly-Meta-Partition – the Destination URI partition – if no partition is present, this header is omitted

  • X-Source-Meta-Host – the FQDN of the original source file's server

  • X-Source-Meta-Owner – the owner of the original source file in a format appropriate to the source system (e.g. DOMAIN\username)

  • X-Source-Meta-Modified – the Last Modified timestamp of the original source file at the time of migration in RFC3339 format

  • X-Source-Meta-Created – the Created timestamp of the original source file in RFC3339 format

  • X-Source-Meta-Attribs – a case-sensitive sequence of characters {AHRS} representing the original source file's file flags: Archive, Hidden, Read-Only and System

    • all other characters are reserved for future use and should be ignored

  • Content-Type – the MIME Type of the content, determined based on the file-extension of the original source filename

Note: Timestamps may be omitted if the source file timestamps are not set.
Non-ASCII characters are stored using RFC2047 encoding, as described in the Swarm documentation. Swarm decodes these values prior to indexing in Elasticsearch.
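The path and attribute conventions above can be sketched in Python. This is an illustration of the stated rules only, not the plugin's implementation; the function names, and the use of a drive prefix such as `C:` as the volume component, are assumptions:

```python
def alt_meta_path(volume, directory_parts):
    """Build an X-Alt-Meta-Path value: '/'-separated, leading '/',
    volume/drive/share first, no trailing '/' except for the root."""
    parts = [p for p in ([volume] + list(directory_parts)) if p]
    return "/" + "/".join(parts)

def source_meta_attribs(archive=False, hidden=False, readonly=False, system=False):
    """Build the case-sensitive {AHRS} flag string for X-Source-Meta-Attribs."""
    return ("A" if archive else "") + ("H" if hidden else "") + \
           ("R" if readonly else "") + ("S" if system else "")
```

For instance, a file in `Users\docs` on drive `C:` would carry the path value `/C:/Users/docs`, and an archived system file would carry the attribute string `AS`.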

...

The scspdirect scheme should only be used when accessing Swarm storage nodes directly. Swarm may be used as a migration destination only.
Swarm (SCSP) traffic is not encrypted in transit when using this scheme. Optionally, the plugin can employ client-side encryption to protect migrated data at rest.
Normally, Swarm is accessed via a Swarm Content Gateway, in which case the scsp scheme must be used instead; see 5.5.

...

Before proceeding with the installation, the following are required:

  • Swarm 8 or above

  • a license that includes an entitlement for Swarm

...

Swarm storage locations are accessed via a configured endpoint FQDN. Add several Swarm storage node IP addresses to DNS under a single endpoint FQDN (4-8 addresses are recommended). If Swarm domains are in use, the FQDN must be the name of the domain in which the FileFly data is stored. If domains are NOT in use (i.e. data is stored in the default cluster domain), it is strongly recommended that the FQDN be the name of the cluster for best Swarm performance.
When using multiple Swarm domains, verify that each domain FQDN is added to DNS as described above.
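To sanity-check that an endpoint FQDN really resolves to multiple storage-node addresses, a small lookup using the standard library can help; the helper name is illustrative, and the FQDN used in any real check would be your own endpoint name:

```python
import socket

def resolve_endpoint(fqdn, port=80):
    """Return the distinct IP addresses behind an endpoint FQDN.

    A quick way to confirm the DNS entry carries several storage-node
    addresses (4-8 are recommended above).
    """
    infos = socket.getaddrinfo(fqdn, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})
```

Running this against the configured endpoint FQDN should list each storage node address added to DNS.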

...

In DataCore FileFly Admin Portal, navigate to the 'Servers' page and configure the Server on which the plugin is to be enabled. In the 'Configuration' panel, select the plugin from the 'Enabled Plugins' or 'Available Plugins' list as appropriate.
Configure the plugin to specify options and encryption settings. Swarm Destinations require an index to be created prior to use: click Create new index to create a new index and corresponding migration Destination.
Additional indexes can be added at a later date to further subdivide storage if required. Multiple migration destinations may be created in the same bucket by specifying different partition names.
Important: Each FileFly Admin Portal must have its own destination indexes; DO NOT share indexes across multiple FileFly implementations.

...

Transfer acceleration allows data to be uploaded via the fastest data center for your location, regardless of the actual location of the bucket.
This per-bucket option provides a way to upload data to a bucket in a remote AWS region while minimizing the adverse effects on migration policies that would otherwise be caused by the correspondingly higher latency of using the remote region.
Additional AWS charges may apply for using transfer acceleration at upload time, but for archived data these initial charges may be significantly outweighed by reduced storage costs in the target region. For further details, please consult AWS pricing.

...

Older versions of Recovery files may be found via the 'Recovery' page in FileFly Admin Portal.

...

FileFly Agents may be configured on a per-server basis via the Admin Portal 'Servers' page.
When the configuration options are saved, they are pushed to the target server to be loaded on the next service restart. In the case of a cluster, all nodes will receive the same updated configuration.

...

Location: C:\Program Files\DataCore FileFly\logs\FileFly Agent
There are two types of FileFly Agent log file. The agent.log contains all FileFly Agent messages, including startup, shutdown, and error information, as well as details of each individual file operation (migrate, demigrate, etc.). Use this log to determine which operations have been performed on which files and to check any errors that may have occurred.
The messages.log contains a subset of the FileFly Agent messages, related to startup, shutdown, critical events and system-wide notifications.
Log messages in both logs are prefixed with a timestamp and thread tag. The thread tag (e.g. <A123>) can be used to distinguish messages from concurrent threads of activity.
Log files are regularly rotated to keep the size of individual log files manageable. Old rotations are compressed as gzip (.gz) files, and can be read using many common tools such as 7-zip, WinZip, or zless. To adjust logging parameters, including how much storage to allow for log files before removing old rotations, see E.
Log information for operations performed as the result of an Admin Portal Policy is also available via the web interface.
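A parser for these log lines can be sketched with a regular expression. The thread tag format (`<A123>`) comes from the description above, but the exact layout of the rest of the line, and the sample path in the test, are assumptions for illustration:

```python
import re

# Timestamp, thread tag (e.g. <A123>), then the message body.
# The surrounding line layout is an assumption, not a documented format.
LOG_LINE = re.compile(
    r"^(?P<timestamp>\S+ \S+) <(?P<thread>[A-Z]\d+)> (?P<message>.*)$"
)

def parse_log_line(line):
    """Split an agent.log line into timestamp, thread tag and message."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None
```

Grouping parsed records by the `thread` field is a convenient way to reassemble the interleaved activity of concurrent operations.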

F.1.0.4 DrTool Logs

...

ACL - Access Control List; file/folder/share level metadata encapsulating permissions granted to users or other entities
CA - Certificate Authority; specifically an X.509 certificate which issues (signs) other certificates such that during certificate validation a chain of trust may be established by verifying the signatures along the certificate chain up to a trusted Root CA certificate, e.g. to facilitate secure connection to a webserver - see also Root CA
Caretaker - a specific node within a cluster that performs maintenance tasks which must run on only a single node at a time
CLP - Counterfeit Link Protection
Demigrate - to return migrated file content data to its original location, e.g. in response to user access
DFS - Microsoft's Distributed File System; comprised of DFSN and DFSR
DFSN - DFS Namespace; a Windows mechanism allowing for the presentation of multiple SMB shares as a single logical share
DFSR - DFS Replication; an SMB share-based file replication technology, see also Storage Replica as an alternative from Windows Server 2016 onwards
DR - Disaster Recovery
Enterprise CA - a privately-created Root Certificate Authority, promulgated as a trusted Root CA across an organization
FPolicy - a component of NetApp Data ONTAP which enables extension of native Filer functionality by other applications
FPolicy Server - a server which connects to a NetApp Filer via the FPolicy protocol in order to provide extended functionality
FQDN - Fully Qualified Domain Name, e.g. server1.example.com
GUID - Globally unique identifier
HA - High-Availability; specifically the provision of redundant instances of a resource in a manner which guarantees availability of service, even in the event of the failure of a particular instance
LDM - Link Deletion Monitoring
Link-Migrate - to transparently relocate file content data to secondary storage, replacing the original file with a MigLink
MigLink - a placeholder for a file that has been Link-Migrated; applications accessing the MigLink are transparently redirected to the corresponding FileFly LinkConnect Server to facilitate data access
Migrate - to transparently relocate file content data to secondary storage without removing the file itself; the existing file becomes a stub
MWI file - a file on secondary storage which encapsulates the file content data of a corresponding primary storage stub file or MigLink
NTP - Network Time Protocol, a protocol for clock synchronization between computer systems over a network
Quick-Remigrate - to quickly return a previously demigrated (but unmodified) file back to its migrated state without the need to re-transfer file content data
Root CA - a Certificate Authority at the end (root) of a chain of certificates; a Root CA is self-signed and must be trusted per se by the validating server (e.g. by inclusion in the computer's Trusted Root Certificate Authorities store)
Recovery File - a text file describing the relationships between stubs/MigLinks and their corresponding MWI files
Scheduler - the Admin Portal component responsible for starting scheduled Tasks
SDM - Stub Deletion Monitoring
Self-Signed Certificate - an X.509 certificate which is not attested to by another Certificate Authority, i.e. its Issuer is the same as its Subject; such certificates include Root CAs as well as 'standalone' self-signed server certificates such as may be created automatically during an application's installation process. Self-signed server certificates should generally be replaced with properly issued certificates from a trusted source.
Stub - a file whose content data has been transparently migrated to a secondary storage location
Storage Replica - a Windows Server volume replication technology offering synchronous or asynchronous replication modes
Syslog - a protocol used to send system log or event messages, e.g. to a centralized Syslog collector
TLS - Transport Layer Security; a protocol used for establishing secure connections between servers (formerly known as SSL)
UUID - Universally unique identifier