The Apache Hadoop S3 connector "S3A" works with Content Gateway S3. Here is a small but complete example, using a single-node Hadoop system that can easily be run on any Docker server. It demonstrates bucket listing, distcp, and a simple MapReduce job against a bucket in a Swarm domain.
Make sure your domain (mydomain.example.com) and bucket (hadoop-test) have been created, and that your /etc/hosts or DNS is configured so that mydomain.example.com resolves to your Content Gateway server. Hadoop will then reach the bucket through the Gateway's S3 port.
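Hadoop learns about the Gateway through the S3A properties in core-site.xml. The sketch below is a minimal example; the endpoint port (8080), the credentials, and the choice to disable SSL are assumptions for a test setup, so substitute your Gateway's actual S3 port and a real token or access key pair:

```xml
<!-- core-site.xml: minimal S3A settings for Content Gateway S3 (sketch) -->
<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <!-- assumption: Gateway S3 listens on port 8080; use your actual port -->
    <value>http://mydomain.example.com:8080</value>
  </property>
  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.s3a.connection.ssl.enabled</name>
    <!-- assumption: plain HTTP for a local test; enable SSL in production -->
    <value>false</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```

With these in place, S3A URLs of the form s3a://hadoop-test/... resolve through the Gateway.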
As the Hadoop documentation cautions: "S3 is not a filesystem. The Hadoop S3 filesystem bindings make it pretend to be a filesystem, but it is not. It can act as a source of data, and as a destination, though in the latter case, you must remember that the output may not be immediately visible."
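The three operations mentioned above can be sketched as the following commands, run as the hadoop user on the single-node system. These assume the S3A configuration is in place, that a local file /tmp/input.txt exists to copy in, and that your Hadoop distribution ships the standard MapReduce examples jar; paths and names are illustrative:

```shell
# List the bucket through S3A (sketch; requires a running cluster and Gateway)
hadoop fs -ls s3a://hadoop-test/

# Copy data into the bucket with distcp
hadoop distcp file:///tmp/input.txt s3a://hadoop-test/input.txt

# Run the stock wordcount MapReduce example, reading from and writing to the bucket
hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    wordcount s3a://hadoop-test/input.txt s3a://hadoop-test/wordcount-out

# Inspect the result; per the warning above, output may take a moment to appear
hadoop fs -cat s3a://hadoop-test/wordcount-out/part-r-00000
```

Note that the job writes its output under a new prefix (wordcount-out); MapReduce will refuse to run if the output path already exists.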