Python is a popular and accessible programming language; it is easy to read and quick to show you what went wrong and where. Python is widely used in scripting and web development, and many of those same qualities make it a great fit for Swarm as well.
Swarm has an SDK for Python 2.6 that uses the httplib modules. The SDK has great examples of connection pooling and header manipulation, among other things, but let's go through something simpler.
Requests is a Python module written by Kenneth Reitz; it makes sending HTTP/1.1 requests to web servers super simple. Requests uses urllib3 under the hood and automatically does smart things like connection pooling and chunked transfers, and it includes .netrc support. Requests officially supports Python 2.7 & 3.4–3.7, and runs great on PyPy.
Let's go through a few examples.
You should have Python installed; installing Requests is easy if you have pip:
pip install requests
See the Requests documentation: http://docs.python-requests.org/en/master/user/install/#install
First, we pull in the modules to be used in the script. To keep it simple, we are pulling the complete modules, rather than specifying only the parts that we need.
import requests
import json
import sys
Next, we assign two variables to hold our credentials. To read and write to a cluster that is fronted by Content Gateway, all requests should be authenticated.
user = "username"
password = "password"
Now we will GET the domain, authenticating with our username and password. This assigns the variable "r" the value of the response:
r = requests.get('https://tlokko.cloud.caringo.com', auth=(user, password))
To see the output of "r", we can print it:

print(r)

On success, this prints:

<Response [200]>
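Printing the Response object only shows the status line; to branch on success in a script, you can check r.status_code (or r.ok) directly. A minimal sketch, not from the original article:

```python
import requests

def succeeded(r):
    """Return True for 2xx/3xx responses, False for 4xx/5xx (mirrors r.ok)."""
    return r.status_code < 400

# In the article's flow this would be used right after the GET, e.g.:
# if not succeeded(r):
#     r.raise_for_status()  # raises requests.exceptions.HTTPError with details
```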
To get the body of the response, we use r.text. Because we are just getting a Swarm domain, the body of the response will be what we would expect if we performed a GET against a node directly.

print(r.text)
To see the headers received on the response, we use the r.headers attribute, which holds all of the headers returned with the response:

print(r.headers)
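One handy detail: r.headers is a case-insensitive dictionary, so lookups work no matter how the server capitalized the header names. A quick sketch using the same structure requests uses (the header values here are made up for illustration):

```python
from requests.structures import CaseInsensitiveDict

# r.headers is a CaseInsensitiveDict; simulate one with illustrative values:
headers = CaseInsensitiveDict({'Content-Type': 'application/castorcontext',
                               'Content-Length': '0'})

print(headers['content-type'])        # same entry as 'Content-Type'
print(headers.get('CONTENT-LENGTH'))  # .get avoids a KeyError for absent headers
```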
We will add a variable to hold the direct URL:
domain1 = "https://tlokko.cloud.caringo.com"
r = requests.get(domain1, auth=(user, password))
print(r.headers)
So far we have just read the domain object itself; now, let's write some data.
We're using Gateway, so we want to write our data to a bucket, which we need to create first. We will create a bucket named jim in our domain by POSTing to domain1 + bucket, which is https://tlokko.cloud.caringo.com/jim/:

bucket = "/jim/"
headers = {'Content-Type':'application/castorcontext'}
r = requests.post(domain1 + bucket, data='', headers=headers, auth=(user,password))
print(r.headers)
To write a named object, we will add new variables:
urlbucket = domain1 + bucket
filename = "patoooie"
headers = {'Content-Type':'text/html'}
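Because the URL is built by plain string concatenation, a missing trailing slash in bucket would silently produce the wrong path. If you prefer, urllib.parse.urljoin (Python 3; urlparse.urljoin on Python 2) can normalize the slashes for you; a sketch using the article's example names:

```python
from urllib.parse import urljoin

domain1 = "https://tlokko.cloud.caringo.com"
bucket = "jim"
filename = "patoooie"

# urljoin normalizes the separators, so we cannot accidentally build
# ".../jimpatoooie" by forgetting a trailing slash on the bucket:
url = urljoin(domain1 + "/", bucket + "/") + filename
print(url)  # https://tlokko.cloud.caringo.com/jim/patoooie
```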
The object creation request is like the bucket creation request, except that we add text in the data field, which becomes the body of the object once it is uploaded.
r = requests.post(urlbucket + filename, data="test", headers=headers, auth=(user,password))
To test that this worked, we browse to the URL:
http://tlokko.cloud.caringo.com/jim/patoooie
The Gateway also supports token-based authentication. To use it, we first create a new token in the Content Portal UI, an SCSP token with no secret key.
Then we'll create a web session, which persists cookies and TCP connections across multiple requests.
Along with the session and the cookie, we set urlbucket, filename, and headers:

s = requests.Session()
cookies = {'token': '<token-value>'}
urlbucket = domain1 + bucket
filename = "jivemasta"
filename2 = "secondtest"
headers = {'Content-Type':'text/html'}
Now we can write the object twice using the session and the cookie that holds our auth token, printing the results:

r = s.post(urlbucket + filename, data="chick ugg", headers=headers, cookies=cookies)
print(r.text)
print(r.headers)
r = s.post(urlbucket + filename2, data="chick ugga", headers=headers, cookies=cookies)
print(r.text)
Both the first and second requests use the token-based authentication method rather than HTTP basic auth.
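Since we already have a Session, the token cookie can also be installed on the session once, instead of being passed as cookies= on every call; both approaches send the same Cookie header. A small sketch ('<token-value>' is a placeholder, as above):

```python
import requests

s = requests.Session()
# Attach the auth-token cookie to the session itself; subsequent
# s.get()/s.post() calls send it automatically:
s.cookies.set('token', '<token-value>')
print(s.cookies.get('token'))
```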
Now we can work with files. To POST a local file to our bucket, we first create a variable called files to define our upload file.

files = {'file': open('/home/tony/Downloads/rackd.zip', 'rb')}
r = s.post(urlbucket + "rackd.zip", files=files, headers=headers, cookies=cookies)
print(r.text)
Once it runs successfully, we'll get:

<html><body>New stream created</body></html>
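One caveat worth knowing about requests itself: files= wraps the upload in a multipart/form-data body, while data= sends the file's bytes as the raw request body. Depending on how you want the object stored, one or the other may be appropriate. Preparing both requests locally (no network needed; the URL here is illustrative) shows the difference:

```python
import requests

# data= sends the bytes verbatim as the request body:
raw = requests.Request('POST', 'https://example.com/jim/rackd.zip',
                       data=b'raw object body').prepare()

# files= encodes the same bytes inside a multipart/form-data envelope:
multi = requests.Request('POST', 'https://example.com/jim/rackd.zip',
                         files={'file': b'raw object body'}).prepare()

print(raw.headers.get('Content-Type'))    # None: no multipart wrapping
print(multi.headers.get('Content-Type'))  # multipart/form-data; boundary=...
```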
Finally, we will download a file. This is a GET request. We also need to specify where to write the content:
r = s.get(urlbucket + "rackd.zip", cookies=cookies)
open('/home/tony/Downloads/rackdfromrequests.zip', 'wb').write(r.content)
The open call opens a new file for writing; we tell it to write r.content, which is the body of the GET response.
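Note that r.content buffers the whole object in memory before writing it out. For large files, requests can stream the body in chunks via stream=True and r.iter_content(); here is a sketch of the write loop as a small helper (the commented lines show how it would plug into the article's download, and are not run here):

```python
import io

def save_stream(chunks, fileobj):
    """Write an iterable of byte chunks to fileobj; return total bytes written."""
    total = 0
    for chunk in chunks:
        if chunk:  # skip empty keep-alive chunks
            fileobj.write(chunk)
            total += len(chunk)
    return total

# With requests this would be used as:
# with s.get(urlbucket + "rackd.zip", cookies=cookies, stream=True) as r:
#     with open('/home/tony/Downloads/rackdfromrequests.zip', 'wb') as f:
#         save_stream(r.iter_content(chunk_size=65536), f)

buf = io.BytesIO()
print(save_stream([b'abcd', b'', b'ef'], buf))  # 6
```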