Listing Cache Service

Overview

This document provides a comprehensive overview of the Listing Cache Service (LCS) in Swarm Gateway deployments. LCS is a high availability (HA) feature designed to improve listing performance for large datasets by caching pseudo-folder listings. It was introduced with Gateway version 8.3.0 and supersedes the earlier, non-HA Listing Cache (LC) functionality available since Gateway 8.2.0. LCS enhances scalability and performance by reducing the load on Elasticsearch during repeated folder listing operations while ensuring 100% listing consistency. This document explains the architecture, hardware requirements, limitations, deployment steps, and monitoring methods for LCS.

Note

In this document, LC refers to the non-HA Listing Cache feature introduced in Gateway v8.2.0, and LCS refers to the Listing Cache Service, a dedicated, HA-capable service introduced in Gateway v8.3.0.

Important

Use Content Gateway 8.3.1 with Swarm 17.0.2 or higher for any new or upgrade scenarios.

Deployment Scenarios

LCS can be deployed in different ways depending on your current environment and operational requirements.

Info

Before proceeding with the LCS installation, it is assumed that you have a fully configured and working Swarm cluster (that is, SCS, Gateway, Elasticsearch, and Telemetry are already deployed).

The following scenarios outline the most common approaches:

Adding LCS to an Existing Gateway Deployment

If you have Content Gateways already deployed without LC and want to add LCS:

  1. Deploy at least two LCS instances. Refer to Setup LCS for more details.

  2. Upgrade to Content Gateway v8.3.1 (if not already running). Refer to the Upgrade Gateway documentation for more details.

  3. Reconfigure the gateway.cfg of existing Content Gateways to use the LCS cluster as the LC layer. Refer to Configure Gateway to Use LCS for more details.

HA Swarm Deployment with LCS

If you are deploying Swarm for the first time and want to include LCS:

  1. Deploy at least two standard Content Gateways. Refer to the Gateway Configuration documentation for more details.

  2. Deploy at least two LCS instances. Refer to Setup LCS for more details.

  3. On the standard Content Gateways, configure gateway.cfg to use the LCS cluster as the LC layer. Refer to Configure Gateway to Use LCS for more details.

Migrating from Existing LC to LCS

If you are currently using LC (non-HA) and want to move to LCS (HA-capable):

  • Follow Migrating from LC to LCS after reviewing the architecture section.

  • This process includes deploying a minimum of two LCS instances and reconfiguring your existing Content Gateways.

Architecture and Functionality

LCS addresses scalability and performance challenges in folder listing operations by offloading pseudo-folder listing queries from Elasticsearch. In large-scale buckets, determining the presence of subfolders can result in high CPU consumption due to millions of objects being queried. LCS reduces this load by acting as a dedicated caching layer deployed between the Gateways and Elasticsearch.

Warning

Although it is possible to co-locate LCS functionality with Gateway servers, this approach is not recommended for high-throughput environments.

Example HA Swarm Deployment

(Diagram: example HA Swarm deployment with LCS.)

Two Roles in an LCS-Enabled Architecture

When implementing LCS, Content Gateways operate in one of two distinct roles:

  1. Standard Content Gateway (Client)

    • Connects to an LCS cluster to retrieve cached listing data.

  2. LCS Server

    • Runs the LCS role plus a RabbitMQ instance.

    • LCS servers form a RabbitMQ cluster used for service discovery and inter-node coordination.

Limitations

LCS depends on Gateways for all object metadata changes. To keep cached listings consistent, all object writes and deletes must therefore pass through a Gateway. Limitations include:

  • S3 Lifecycle Policies: Objects deleted via lifecycle policies bypass Gateway, leading to stale listings.

  • Swarm Delete Lifepoints: Similar to lifecycle policies, deletes are not communicated to LCS.

  • Recursive Deletes: Domain or bucket-level recursive deletes are not yet LCS-aware, leading to stale listings.

  • Custom Delimiters: Supported, but do not benefit from consistency guarantees. Synchronous indexing is recommended.

  • Swarm Replication: Only supported if remote Gateways are used. Direct-to-Swarm replication is unsupported as it would result in stale listings at the target cluster.

  • Load Distribution: Load is balanced by domain + bucket, not usage.

  • Failover Behavior: If an LCS instance fails:

    • Requests are automatically redirected to remaining LCS instances.

    • Temporary performance degradation is expected for some listing requests while the surviving LCS nodes rebuild (“inflate”) their cache.
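The load-distribution point above can be illustrated with a sketch (illustration only, not the actual LCS algorithm): balancing by domain + bucket means each domain/bucket pair maps to one owning instance regardless of how much traffic that pair receives, so a single very hot bucket cannot be spread across instances.

```shell
# Illustration only: map each domain/bucket pair to a fixed node index
# using a stable hash. Usage: pick_node <domain> <bucket> <node_count>
pick_node() {
  h=$(printf '%s/%s' "$1" "$2" | cksum | awk '{ print $1 }')
  echo "$1/$2 -> node $((h % $3))"
}
pick_node demo.example photos 2
pick_node demo.example backups 2
```

The mapping is deterministic: the same domain/bucket pair always lands on the same node index.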

Warning

Native S3 replication is not supported by Veeam.

When Not to Use LCS

LCS is not suitable for the following scenarios:

  • Domains or buckets are frequently deleted and recreated.

  • Buckets use active S3 lifecycle policies.

  • Swarm delete lifepoints are required.

  • Pseudo-folder structures are not used, and a flat object namespace is in place.

Hardware Requirements

| Role | CPU | Memory | Java Heap | Swap Size | Disk Cache |
|---|---|---|---|---|---|
| Gateway-Base | 4 | 8 GB | 4 GB | 8 GB | No |
| Gateway-CSP | 8 | 16 GB | 10 GB | 16 GB | No |
| LCS-Base | 4 | 8 GB | 6 GB | 8 GB | Yes, 200 GB XFS on SSD |
| LCS-CSP | 8 | 16 GB | 10 GB | 16 GB | Yes, 200 GB XFS on SSD |

Make sure to set the swap size correctly for LCS server VMs.

Note

A minimum of 2 Gateways and 2 LCS servers is required for HA. RabbitMQ runs on the first two LCS instances. You can scale LCS instances up or down non-disruptively.

Deployment Steps

1. Provision Hardware

Provision two or more servers based on the hardware requirements. For Java Heap configuration, edit /etc/sysconfig/cloudgateway.

For normal workloads that require 6 GB heap, set:

HEAP_MIN="6144m"
HEAP_MAX="6144m"

For CSP type workloads, set:

HEAP_MIN="10240m"
HEAP_MAX="10240m"
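The heap values above are expressed in megabytes; a quick sketch of the GB-to-setting conversion:

```shell
# HEAP_MIN/HEAP_MAX are given in megabytes: 6 GB -> 6144m, 10 GB -> 10240m.
for gb in 6 10; do
  echo "${gb} GB -> HEAP_MIN=HEAP_MAX=\"$((gb * 1024))m\""
done
```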

2. Create Disk Cache Partition (bare-metal only)

Execute the following commands on each LCS server:

vgcreate swarmspool /dev/sdb
lvcreate -L 200G -n diskcache swarmspool
mkfs.xfs /dev/swarmspool/diskcache
mkdir /var/spool/caringo
mount /dev/swarmspool/diskcache /var/spool/caringo/

Add to /etc/fstab:

/dev/mapper/swarmspool-diskcache /var/spool/caringo xfs defaults 0 0

3. Setup RabbitMQ Service

RabbitMQ is used by the LCS servers for service discovery only: at startup it tells each LCS instance how many LCS instances are running in the environment, and afterwards it is used only to detect LCS nodes joining or going offline.

Repeat the following steps for the first two LCS instances only.

Install RabbitMQ

If you are using the SwarmContentGateway VM template, you can skip the software installation step below and proceed to the configuration step.

Due to compatibility issues with the latest Erlang version on Rocky Linux 9, you must install Erlang version erlang-26.2.5-1.el9.rpm.

 

For Rocky Linux 8/RHEL 8

Add the yum repo for Erlang:

curl -s https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh | sudo bash
dnf update -y

Install Erlang:

dnf install -y erlang

Download and install RabbitMQ:

wget https://github.com/rabbitmq/rabbitmq-server/releases/download/v4.1.4/rabbitmq-server-4.1.4-1.el8.noarch.rpm
rpm --import https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc
dnf install -y rabbitmq-server-4.1.4-1.el8.noarch.rpm

 

Configure Node Name

Edit /etc/rabbitmq/rabbitmq-env.conf:

RABBITMQ_NODENAME=rabbit@<BACKEND IP ADDRESS>
RABBITMQ_USE_LONGNAME=true

Note that RabbitMQ must use the storage backend network, and this file must be created on all RabbitMQ nodes.
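The two settings above can be templated per node. A minimal sketch (the helper name and sample IP are illustrative) that prints the file contents for a given backend IP:

```shell
# Hypothetical helper: print rabbitmq-env.conf contents for a node's
# backend (storage network) IP. On the actual node, redirect the output
# to /etc/rabbitmq/rabbitmq-env.conf.
gen_rabbitmq_env() {
  cat <<EOF
RABBITMQ_NODENAME=rabbit@$1
RABBITMQ_USE_LONGNAME=true
EOF
}
gen_rabbitmq_env 172.29.1.27
```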

Restart RabbitMQ for the change to take effect:

systemctl restart rabbitmq-server

Open Firewall Ports

Execute the following commands on each RabbitMQ server:

firewall-cmd --add-port 4369/tcp --permanent
firewall-cmd --add-port 5672/tcp --permanent
firewall-cmd --add-port 25672/tcp --permanent
firewall-cmd --add-port 8061/tcp --permanent
firewall-cmd --reload
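The same openings can be expressed as a loop; `echo` is kept in front so this sketch prints the commands rather than applying them. The ports are 4369 (epmd), 5672 (AMQP), 25672 (RabbitMQ inter-node), and 8061 (the LCS API port noted later in this document).

```shell
# Print the firewall commands for the required RabbitMQ/LCS ports.
# Drop the leading `echo` to actually apply them on a server.
LCS_PORTS="4369 5672 25672 8061"
for port in $LCS_PORTS; do
  echo firewall-cmd --add-port "${port}/tcp" --permanent
done
echo firewall-cmd --reload
```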

Enable the rabbitmq-server service

systemctl enable rabbitmq-server

Note

Make sure you have done this on two LCS instances before proceeding with the HA configuration.

4. Configure HA

To configure RabbitMQ in an HA cluster, perform the following steps on LCS2:

  1. Stop the RabbitMQ app

    rabbitmqctl stop_app
  2. Reset the RabbitMQ node.

    rabbitmqctl reset
  3. Stop the RabbitMQ service

    systemctl stop rabbitmq-server
  4. Synchronize the Erlang cookie from LCS1 to LCS2:

    scp root@LCS1:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq
  5. Start RabbitMQ service.

    systemctl start rabbitmq-server
  6. Join the RabbitMQ cluster

    rabbitmqctl join_cluster rabbit@<LCS1-BACKEND-IP-ADDRESS>
  7. Start the RabbitMQ app

    rabbitmqctl start_app
  8. Verify cluster status

    rabbitmqctl cluster_status | grep -A4 "Running Nodes"
    # Make sure you see your 2 LCS instances, for example:
    # Running Nodes
    # rabbit@172.29.1.27
    # rabbit@172.29.1.28
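The final check can be scripted with a small parser. A sketch (the sample text below mimics the expected output; on a live node, pipe real `rabbitmqctl cluster_status` output instead):

```shell
# Count the nodes listed under "Running Nodes" in cluster_status output.
# Live usage: rabbitmqctl cluster_status | count_running
count_running() {
  awk '/Running Nodes/ { found = 1; next }
       /^$/ { found = 0 }
       found && /^rabbit@/ { n++ }
       END { print n + 0 }'
}
printf 'Running Nodes\nrabbit@172.29.1.27\nrabbit@172.29.1.28\n' | count_running
```

A result of 2 confirms both LCS instances have joined the cluster.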

5. Add RabbitMQ User

This only needs to be done on one LCS instance.

rabbitmqctl add_user datacore ourpwdofchoicehere
rabbitmqctl set_permissions datacore ".*" ".*" ".*"

This example sets up a RabbitMQ datacore/ourpwdofchoicehere user account without a virtual host (vhost).

Info

A production setup needs a more secure password.

6. Setup LCS

At this point it is assumed you have followed the regular Gateway configuration steps; at a minimum this includes configuring the adminDomain, hosts, indexerHosts, and management password.

You must disable the “scsp”, “s3”, “metering”, and “cluster_admin” roles.

Example gateway.cfg files are provided in Appendix C.

Install LCS

Download and install Gateway v8.3.1+ on LCS servers. Update gateway.cfg:

[lcs]
enabled=true
bindPort=8060
# NOTE: a second port 8061 gets opened for the actual LCS API server
brokerHost=<FIRST 2 LCS BACKEND IP ADDRESSES>
brokerUser=datacore
brokerPassword=ourpwdofchoicehere
bindAddress=BACKEND_IP_OF_LCS # Added in GW 8.3.1
serverNumThreads=500 # adjust this to match the load your gateways will push

[storage_cluster]
...
# 8.3.0 specific
disableIndexWaitRefresh=true
...

Once LCS1 is configured, you can copy the gateway.cfg file to LCS2, as they will be identical.

Info

Additional LCS instances can be deployed at a later time without requiring configuration changes on existing instances. The two IP addresses specified in the brokerHost parameter facilitate auto-discovery.

7. Configure Gateway to use remote LCS

Warning

Do not disable or re-enable LCS usage on Content Gateways without restarting the LCS service.

When LCS is disabled on the Gateways, they stop notifying LCS of object write and delete operations. As a result, the LCS cache becomes increasingly stale as new requests are processed.

In HA deployments, restart LCS instances in a rolling manner to avoid client interruptions. Do not restart all LCS servers simultaneously.

Update /etc/caringo/cloudgateway/gateway.cfg:

[gateway]
...
rootBackendId=listing_cache

[listing_cache]
type=ListingCache
childBackendId=storage_cluster
brokerHost=<FIRST 2 LCS BACKEND IP ADDRESSES>
brokerUser=datacore
brokerPassword=ourpwdofchoicehere

# if you need XMD, add the following; it is turned OFF by default
[object_locking]
extrinsicMetadata=enabled

[storage_cluster]
disableDocIdLookup=false
extrinsicIndexRefresh=false
...

Info

  • Additional LCS instances can be deployed later without requiring configuration changes on the existing Gateway nodes. The two IP addresses specified in the brokerHost parameter enable auto-discovery.

  • The Gateway no longer incurs the additional CPU, memory, and disk resource requirements previously associated with the Listing Cache (LC) functionality. These resource requirements have now been offloaded to the dedicated LCS servers.

8. Validate Deployment

  • Verify data access and listings.

  • Check logs for errors or warnings.

  • Monitor performance metrics to ensure the hardware is sufficient.

9. Go Live

  • Enable LCS for production.

  • Monitor system health and usage.

Migrating from LC to LCS

If you previously used LC (v1.0) and want to enable HA capabilities using LCS:

  1. Deploy a minimum of two LCS instances.

  2. Reconfigure your existing Gateways to use LCS.

  3. Revert any load balancer configuration related to domain-to-Gateway pinning.

Note

The [storage_cluster] disableListingCache=false setting is no longer supported. To retain non-HA in-process LC functionality (e.g., for single-Gateway environments or testing), use the following configuration:

[gateway]
...
rootBackendId=listing_cache

[listing_cache]
type=ListingCache
childBackendId=storage_cluster
useLocalListingCache=true

[storage_cluster]
...
# 8.3.0 specific
disableIndexWaitRefresh=true
...

Validation and Monitoring

Monitor Cache Hit Rate

If telemetry and Grafana are available, review the Listing Cache dashboard. A high cache hit rate indicates effective caching of folder listings.

Check Response Time

Compare the response times before and after enabling the Listing Cache. Improved response times, particularly for frequently accessed pseudo-folders, indicate that the cache is functioning as expected.
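As a minimal sketch of such a comparison (the timing values below are illustrative and the endpoint in the comment is a placeholder), averaged response times before and after enabling LCS can be compared like this:

```shell
# Average listing response times (seconds). Collect real values with e.g.
#   curl -s -o /dev/null -w '%{time_total}\n' 'http://<gateway>/<bucket>?prefix=a/&delimiter=/'
avg() { awk '{ s += $1; n++ } END { printf "%.3f", s / n }'; }

before=$(printf '2.40\n2.10\n2.70\n' | avg)   # sample timings without LCS
after=$(printf '0.30\n0.20\n0.25\n' | avg)    # sample timings with LCS
echo "avg before=${before}s after=${after}s"
```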

Monitor Memory and CPU Utilization

Increased memory usage and consistent CPU activity are expected in a caching system. However, high resource consumption may suggest under-provisioning and could warrant scaling out the Listing Cache Service by adding instances. This can be done non-disruptively under load.

Conversely, if memory and CPU usage on LCS instances remain consistently low, it may indicate over-provisioning. In such cases, consider scaling in by removing instances to optimize resource usage. This adjustment can also be performed non-disruptively during normal operations.

Validate LCS Cluster Status

After completing the LCS installation, validate that all LCS instances are online and participating in the cluster.

Ensure that metrics are enabled in your configuration:

[metrics]
metricsEnabled=true

If metrics collection is enabled, LCS exposes status information that can be queried locally on each LCS node.

Run the following command on an LCS server:

curl -s http://127.0.0.1:9100/metrics | grep listingcacheservice_peers
Expected Output

The command returns metrics showing the LCS node status:

# HELP caringo_listingcacheservice_peers Online server peers.
# TYPE caringo_listingcacheservice_peers gauge
caringo_listingcacheservice_peers{status="owners",} N.0
caringo_listingcacheservice_peers{status="fallbacks",} 0.0
caringo_listingcacheservice_peers{status="ownersgone",} 0.0
caringo_listingcacheservice_peers{status="online",} N.0
caringo_listingcacheservice_peers{status="leaving",} 0.0
caringo_listingcacheservice_peers{status="fallbacksgone",} 0.0

Validation Criteria

The owners metric should match the number of configured LCS nodes.

  • If you configured N nodes, you should see caringo_listingcacheservice_peers{status="owners",} N.0

  • This confirms all configured nodes are actively participating in the caching service

The value reported for owners represents the number of LCS instances actively participating in caching and the value reported for online represents the number of LCS instances currently available.

Both values must match the number of deployed LCS instances.

For example:

  • With 2 LCS instances, both values should be 2.0

  • With 3 LCS instances, both values should be 3.0

If these values do not match the expected number of LCS instances, review the LCS configuration and ensure all instances are running and reachable before proceeding to production.
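The owners/online comparison can be scripted. A sketch (sample metric lines are piped in below; on a live node, pipe the output of the real metrics endpoint instead):

```shell
# Parse the peers metrics and confirm owners == online and both are > 0.
# Live usage: curl -s http://127.0.0.1:9100/metrics | check_lcs_peers
check_lcs_peers() {
  awk '
    /^caringo_listingcacheservice_peers/ && /status="owners"/ { owners = $NF }
    /^caringo_listingcacheservice_peers/ && /status="online"/ { online = $NF }
    END {
      printf "owners=%s online=%s\n", owners, online
      exit !(owners == online && owners + 0 > 0)
    }'
}
printf '%s\n' \
  'caringo_listingcacheservice_peers{status="owners",} 2.0' \
  'caringo_listingcacheservice_peers{status="online",} 2.0' \
  | check_lcs_peers
```

The function exits non-zero when the counts differ or are zero, so it can gate an automated post-install check.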

Appendix A: Combined Gateway and LCS Role

Not Recommended

This mode of deployment is not recommended for heavier workloads or for any CSP/MSP deployments.

You may combine Gateway and LCS roles on the same server for small-scale deployments. Use the combined configuration below:

[gateway]
...
rootBackendId=listing_cache

[listing_cache]
type=ListingCache
childBackendId=storage_cluster
brokerHost=<FIRST 2 GW BACKEND IP ADDRESSES>
brokerUser=datacore
brokerPassword=ourpwdofchoicehere

[lcs]
enabled=true
bindPort=8060
# NOTE: a second port 8061 gets opened for the actual LCS API server

[storage_cluster]
...
# 8.3.0 specific
disableIndexWaitRefresh=true
...

Note

The hardware requirements for both the Swarm Gateway and the LCS, as listed in the Hardware Requirements table, must be combined. Ensure the server has sufficient resources to support both services running simultaneously.

Appendix B: Telemetry Configuration for LCS Monitoring

All Listing Cache Service (LCS) servers expose Prometheus metrics on port 9100. To enable effective monitoring using the reference dashboards, ensure that each LCS server is added to the existing gateway Prometheus job definitions.

Additionally, the node_exporter service on port 9095 must also be configured for each LCS server within the existing gateway-nodeexporter job in Prometheus.

Example: Adding Two LCS Servers

In this example, two new LCS instances with BACKEND IP addresses 172.29.10.28 and 172.29.10.29 are being added.

Edit the Prometheus configuration file:

vi /etc/prometheus/prometheus.yaml

Cloud Content Gateway Job Definition:

- job_name: 'swarmcontentgateway'
  static_configs:
    - targets: ['172.29.10.26:9100','172.29.10.27:9100','172.29.10.28:9100','172.29.10.29:9100']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance

Cloud Gateway Node Exporter Job Definition:

- job_name: 'gateway-nodeexporter'
  scrape_interval: 30s
  static_configs:
    - targets: ['172.29.10.26:9095','172.29.10.27:9095','172.29.10.28:9095','172.29.10.29:9095']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance

After making these changes, restart Prometheus to apply the updated configuration:

systemctl restart prometheus

Swarm Telemetry VM version 17.0.3 includes pre-provisioned Grafana dashboards for the new LCS metrics. If you are using a custom Grafana instance for monitoring, import the following dashboard IDs:

| Dashboard ID | Description |
|---|---|
| 24029 | Datacore Swarm Listing Cache Service v8.3 |
| 24030 | Datacore Swarm Gateway v8.3 |

Appendix C: Example gateway.cfg Files for Multiple Roles

Gateway with LCS client configuration

[gateway]
adminDomain = admin.swarm.demo.internal
threads = 200
rootBackendId=listing_cache

[storage_cluster]
locatorType = static
hosts = 172.29.19.68 172.29.19.69
indexerHosts = 172.29.1.20
managementUser = admin
managementPassword = datacore
# if you need XMD, add the following; it is turned OFF by default
disableDocIdLookup=false
extrinsicIndexRefresh=false

[scsp]
enabled = true
bindAddress = 0.0.0.0
bindPort = 80

[s3]
enabled = true
bindAddress = 0.0.0.0
bindPort = 8090

[cluster_admin]
enabled = true
bindAddress = 0.0.0.0
bindPort = 91

[metering]
enabled = true

[quota]
enabled = false
smtpHost = localhost
mailFrom = donotreply@localhost

[dynamic_features]
maxInvokes = 10
tempDir=/var/spool/features

[debug]
auditLogVersion = 4

[listing_cache]
type=ListingCache
childBackendId=storage_cluster
brokerHost=172.29.1.28 172.29.1.29
brokerUser=datacore
brokerPassword=ourpwdofchoicehere

# if you need XMD, add the following
[object_locking]
extrinsicMetadata=enabled

LCS server configuration

[gateway]
adminDomain = admin.swarm.demo.internal

[storage_cluster]
locatorType = static
hosts = 172.29.19.68 172.29.19.69
indexerHosts = 172.29.1.20
managementUser = admin
managementPassword = datacore
# 8.3.0 specific
disableIndexWaitRefresh=true

[metering]
enabled = false

[quota]
enabled = false

[debug]
auditLogVersion = 4

[lcs]
enabled=true
bindPort=8060
brokerHost=172.29.1.27 172.29.1.28
brokerUser=datacore
brokerPassword=ourpwdofchoicehere
bindAddress=172.29.1.27 # BIND to BACKEND IP OF LCS
serverNumThreads=500 # adjust this to match the load your gateways will push

Appendix D: Prometheus Metrics for LCS Monitoring

This section outlines the available Prometheus metrics for both the LCS client (gateway role) and server (LCS role) components.

LCS Client Metrics (Gateway Role)

caringo_listingcacheclient_request
  Request latencies for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheclient_request_retries
  Retry counts for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheclient_request_redirects
  Redirect counts for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheclient_request_timeouts
  Timeout counts for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheclient_request_errors
  Error counts for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheclient_ops_pending
  Client read/write operations pending. Labels: operation=[read, write]
caringo_listingcacheclient_endpoints
  Active connections to listingcache service endpoints.
caringo_listingcacheclient_endpoints_connects_ok
  Successful connects to listingcache service endpoints.
caringo_listingcacheclient_endpoints_connects_errors
  Failed connects to listingcache service endpoints.
caringo_listingcacheclient_blockmap_version
  Current blockMap version.

LCS Server Metrics (LCS Role)

caringo_listingcacheservice_request
  Request latencies for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheservice_request_redirects
  Redirect counts for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheservice_request_errors
  Error counts for write/delete/list. Labels: method=[write, delete, list, list-fallback, lookupMeta, lookupConflict]
caringo_listingcacheservice_connections
  Active connections to listingcache service.
caringo_listingcacheservice_connections_errors
  Error count of connections to listingcache service.
caringo_listingcacheservice_blockmap_version
  Current blockMap version.
caringo_listingcacheservice_pings_send
  Server ping send count.
caringo_listingcacheservice_pings_recv
  Server ping receive count.
caringo_listingcacheservice_pings_errors
  Server ping error count.
caringo_listingcacheservice_peers
  Online server peers. Labels: status=[online, owners, fallbacks, leaving, ownersgone, fallbacksgone]
caringo_listingcacheservice_replies_pending
  Server replies pending.

LC Metrics (LCS Role)

caringo_listingcache_request (Summary)
  Request counts and latencies for write/delete/list, versioned/nonversioned. Labels: method=[write, delete, list], mode=[V, NV]
caringo_listingcache_request_errors (Counter)
  Request error counts for write/delete/list, versioned/nonversioned. Labels: method=[write, delete, list], mode=[V, NV]
caringo_listingcache_listed_recs (Counter)
  Total number of records returned by the listing cache, versioned/nonversioned. Labels: mode=[V, NV]
caringo_listingcache_backend_query (Summary)
  Counts and latencies of ES queries for priming/listing, versioned/nonversioned. Labels: method=["list", "prime"], mode=[V, NV]
caringo_listingcache_backend_query_recs (Counter)
  Number of ES records queried for priming/listing, versioned/nonversioned. Labels: method=["list", "prime"], mode=[V, NV]
caringo_listingcache_cache_query (Summary)
  Counts and latencies of SqliteDB queries for priming/listing, versioned/nonversioned. Labels: method=["list", "prime", "reconciliation"], mode=[V, NV]
caringo_listingcache_cache_query_recs (Counter)
  Number of SqliteDB records queried for priming/listing, versioned/nonversioned. Labels: method=["list", "prime", "reconciliation"], mode=[V, NV]
caringo_listingcache_flushes_pending (Gauge)
  Folder updates pending flush to SqliteDB disk cache.
caringo_listingcache_flushes_done (Counter)
  Folder updates flushed to SqliteDB disk cache.
caringo_listingcache_trims_pending (Gauge)
  Folders pending trim in memory cache.
caringo_listingcache_trims_done (Counter)
  Folders trimmed in memory cache.
caringo_listingcache_folder_pulls_pending (Gauge)
  Folders marked to be internally pulled into cache.
caringo_listingcache_folder_pulls_done (Counter)
  Folders internally pulled into cache.
caringo_listingcache_mem_cached (Gauge)
  Folders currently in memory cache.
caringo_listingcache_mem_evicted (Counter)
  Folders evicted from memory cache.
caringo_listingcache_dbhandle_cached (Gauge)
  SqliteDB handles currently in memory cache.
caringo_listingcache_dbhandle_evicted (Counter)
  SqliteDB handles evicted from memory cache.
caringo_listingcache_buckets_cached
  Buckets currently in disk cache.
caringo_listingcache_disk_cached (Gauge)
  SqliteDBs currently in disk cache.
caringo_listingcache_disk_evicted (Counter)
  Folders evicted from disk cache.
caringo_listingcache_disk_cached_bytes (Gauge)
  Size in bytes of SqliteDBs currently in disk cache.
caringo_listingcache_disk_evicted_bytes (Counter)
  Size in bytes of SqliteDBs evicted from disk cache.
caringo_listingcache_reconciliations_done (Counter)
  Number of cache records reconciled (versionid mismatches corrected based on etag). Labels: origin=[backend, cache]
caringo_listingcache_memory_used (Gauge)
  Memory use as perceived by the listing cache.
caringo_listingcache_disk_free (Gauge)
  Disk free space as perceived by the listing cache.

© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.