Rackspace API

From Christoph's Personal Wiki

Revision as of 19:07, 28 March 2014

This article will be a somewhat random assortment of API calls to various Rackspace services. I plan to organize this article and add nearly all possible calls. Most of the API calls will be made using `curl` commands, but other resources will be used as well.

Notes

You will notice in many of the examples given in this article that I mix the order of my requests and headers and sometimes include whitespace (e.g., "-X PUT" or '-H "X-Auth-Token: $MYRAXTOKEN"') and sometimes do not (e.g., "-XPUT" or '-H"X-Auth-Token:$MYRAXTOKEN"'). This is on purpose, as I am attempting to illustrate that `curl` and the API do not care about the order of these arguments or about most whitespace (there are some exceptions and I will point those out when necessary). I am also enclosing the endpoint URL in quotes. This, too, is usually not necessary, but it delineates the endpoint (+container/object/parameters) more clearly (and, if you are naughty and use spaces in your filenames {why are you?}, you will need to use quotes).

Environment variables

Note: This is probably not the most secure way to store your account username, API Key, account number, or token. However, it makes it simpler to illustrate using the Rackspace API in this article.

You can find all of the following (except for your 24-hour-valid token) in the Rackspace Cloud Control Panel under the "Account Settings" section.

MYRAXUSERNAME=your_account_username
MYRAXAPIKEY=your_unique_api_key  # _never_ give this out to anyone!
MYRAXACCOUNT=integer_value  # e.g., 123456
MYRAXTOKEN=your_24_hour_valid_token  # see below

I will be using the above environment variables for the remainder of this article.

Regions

The following are the Rackspace API endpoint regions (aka, the data centres where your servers/data/etc live):

  • DFW - Dallas DC
  • HKG - Hong Kong DC
  • IAD - Northern Virginia DC
  • LON - London DC
  • ORD - Chicago DC
  • SYD - Sydney DC

Authentication

For every API call, you must first obtain an authentication token. Tokens are valid for 24 hours, and you must re-authenticate once your token has expired. You can think of your token as a temporary "password" to access Rackspace services on your account. To authenticate, you only need your Rackspace account username ($MYRAXUSERNAME) and your API Key ($MYRAXAPIKEY).

  • Simple authentication against version 1.0 of the API (this should return, among other things, an "X-Auth-Token"):
$ curl -D - -H "X-Auth-Key: $MYRAXAPIKEY" \
       -H "X-Auth-User: $MYRAXUSERNAME" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       https://identity.api.rackspacecloud.com/v1.0
  • Authenticate the same way as above, but time namelookup, connect, starttransfer, and total API calls:
$ curl -D - -H "X-Auth-Key: $MYRAXAPIKEY" \
       -H "X-Auth-User: $MYRAXUSERNAME" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -w "time_namelookup: %{time_namelookup}\ntime_connect: %{time_connect}\ntime_starttransfer: %{time_starttransfer}\ntime_total: %{time_total}\n" \
       https://identity.api.rackspacecloud.com/v1.0
  • Authenticate and receive a full listing of API v2.0 endpoints (for all of the Rackspace services active on your account):
$ curl -X POST https://identity.api.rackspacecloud.com/v2.0/tokens \
       -d '{ "auth":{ "RAX-KSKEY:apiKeyCredentials":{ "username":"'$MYRAXUSERNAME'", "apiKey":"'$MYRAXAPIKEY'" } } }' \
       -H "Content-type: application/json"
  • Authenticate the same way as above, but use compression during the send/receive:
$ curl -i -X POST \
       -H 'Host: identity.api.rackspacecloud.com' \
       -H 'Accept-Encoding: gzip,deflate' \
       -H 'X-LC-Request-ID: 16491440' \
       -H 'Content-Type: application/json; charset=UTF-8' \
       -H 'Content-Length: 119' \
       -H 'Accept: application/json' \
       -H 'User-Agent: libcloud/0.13.0 (Rackspace Monitoring)' \
       --data-binary '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "'$MYRAXUSERNAME'", "apiKey": "'$MYRAXAPIKEY'"}}}' \
       --compress https://identity.api.rackspacecloud.com:443/v2.0/tokens
  • Save the "token" as an environmental variable (remember, this token will only be valid for 24 hours):
$ MYRAXTOKEN=`curl -s -XPOST https://identity.api.rackspacecloud.com/v2.0/tokens \
    -d'{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"'$MYRAXUSERNAME'","apiKey":"'$MYRAXAPIKEY'"}}}' \
    -H"Content-type:application/json" | \
    python -c 'import sys,json;data=json.loads(sys.stdin.read());print data["access"]["token"]["id"]'`

The token in the previous command is usually found at the beginning of the output and looks something like this: {"access":{"token":{"id":"abcd","expires":"2014-01-25T01:05:46.851Z" (where "abcd" is actually a string of 32 alphanumeric characters; that string is your token). The "expires" value tells you how long your token is valid (it will be 24 hours from when you first authenticated).

I will use this $MYRAXTOKEN environment variable (from the last command) for the remainder of this article.
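The Python one-liner in the previous command uses Python 2 print syntax. As a minimal sketch in Python 3, assuming only the v2.0 response structure shown above, the token and its expiry can be extracted like this:

```python
import json

def extract_token(auth_response):
    """Pull the token ID and its expiry out of a parsed v2.0
    authentication response (structure as shown above)."""
    token = auth_response["access"]["token"]
    return token["id"], token["expires"]

# Example with the (anonymized) response fragment shown above:
body = '{"access":{"token":{"id":"abcd","expires":"2014-01-25T01:05:46.851Z"}}}'
token_id, expires = extract_token(json.loads(body))
```

In practice you would feed it the JSON body returned by the `curl` authentication call above.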

Cloud Servers

Note: See the NextGen API docs or the FirstGen API docs for more details and examples. Take note of the API version numbers (e.g., "v1.0", "v2", etc.), as they are important and some are deprecated.

  • Cloud Servers "flavors": A flavor is an available hardware configuration for a server. Each flavor has a unique combination of disk space, memory capacity, and priority for CPU time.
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID               | Name                    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 2                | 512MB Standard Instance | 512       | 20   | 0         | 512  | 1     | 80.0        | N/A       |
| 3                | 1GB Standard Instance   | 1024      | 40   | 0         | 1024 | 1     | 120.0       | N/A       |
| 4                | 2GB Standard Instance   | 2048      | 80   | 0         | 2048 | 2     | 240.0       | N/A       |
| 5                | 4GB Standard Instance   | 4096      | 160  | 0         | 2048 | 2     | 400.0       | N/A       |
| 6                | 8GB Standard Instance   | 8192      | 320  | 0         | 2048 | 4     | 600.0       | N/A       |
| 7                | 15GB Standard Instance  | 15360     | 620  | 0         | 2048 | 6     | 800.0       | N/A       |
| 8                | 30GB Standard Instance  | 30720     | 1200 | 0         | 2048 | 8     | 1200.0      | N/A       |
| performance1-1   | 1 GB Performance        | 1024      | 20   | 0         |      | 1     | 200.0       | N/A       |
| performance1-2   | 2 GB Performance        | 2048      | 40   | 20        |      | 2     | 400.0       | N/A       |
| performance1-4   | 4 GB Performance        | 4096      | 40   | 40        |      | 4     | 800.0       | N/A       |
| performance1-8   | 8 GB Performance        | 8192      | 40   | 80        |      | 8     | 1600.0      | N/A       |
| performance2-120 | 120 GB Performance      | 122880    | 40   | 1200      |      | 32    | 10000.0     | N/A       |
| performance2-15  | 15 GB Performance       | 15360     | 40   | 150       |      | 4     | 1250.0      | N/A       |
| performance2-30  | 30 GB Performance       | 30720     | 40   | 300       |      | 8     | 2500.0      | N/A       |
| performance2-60  | 60 GB Performance       | 61440     | 40   | 600       |      | 16    | 5000.0      | N/A       |
| performance2-90  | 90 GB Performance       | 92160     | 40   | 900       |      | 24    | 7500.0      | N/A       |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+

Note that Performance Cloud Servers ("performance{1,2}") do not have a swap partition enabled by default. The "Disk" column is your system disk/partition and the "Ephemeral" column is your data disk/partition in an automatic setup. Standard instances do not come with a data disk. You can, of course, manually partition your virtual hard drives. You will use the flavor "ID" when choosing which flavor ("flavorId") to use when creating a server.
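Since you choose a flavor by its "ID", a small helper that picks the smallest flavor satisfying a memory requirement can be handy. This is only an illustrative sketch: the flavor list below is a hand-copied subset of the table above, not fetched from the API.

```python
# Hand-copied subset of the flavor table above: (ID, Memory_MB).
FLAVORS = [
    ("2", 512),
    ("3", 1024),
    ("4", 2048),
    ("performance1-1", 1024),
    ("performance1-2", 2048),
    ("performance1-4", 4096),
]

def smallest_flavor(min_memory_mb, flavors=FLAVORS):
    """Return the flavor ID with the least memory that still meets the
    requested minimum, or None if no flavor is large enough."""
    candidates = [f for f in flavors if f[1] >= min_memory_mb]
    if not candidates:
        return None
    return min(candidates, key=lambda f: f[1])[0]
```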

  • List all of the NextGen servers on your account for a given region:
$ REGION=dfw
$ curl -XGET https://${REGION}.servers.api.rackspacecloud.com/v2/${MYRAXACCOUNT}/servers \
       -H "X-Auth-Token: ${MYRAXTOKEN}" \
       -H "Accept: application/json" | python -m json.tool
  • List all of the FirstGen servers on your account (regardless of region):
$ curl -s https://servers.api.rackspacecloud.com/v1.0/${MYRAXACCOUNT}/servers \
       -H "X-Auth-Token: ${MYRAXTOKEN}" | python -m json.tool
  • Create a FirstGen Cloud Server (in your account's default region; note the FirstGen-style integer "imageId"/"flavorId" and the v1.0 endpoint):
$ curl -s https://servers.api.rackspacecloud.com/v1.0/${MYRAXACCOUNT}/servers -X POST \
       -H "Content-Type: application/json" \
       -H "X-Auth-Token: ${MYRAXTOKEN}" \
       -d "{\"server\":{\"name\":\"testing\",\"imageId\":12345678,\"flavorId\":4}}" | python -m json.tool

The above should return something similar to the following (note that "status": "BUILD" will become "ACTIVE" once the build process is complete):

{
    "server": {
        "addresses": {
            "private": [
                "10.x.x.x"
            ],
            "public": [
                "184.x.x.x"
            ]
        },
        "adminPass": "lR2aB3g5Xtesting",
        "flavorId": 4,
        "hostId": "ffffff2cd2470205bbe2f6b6f7f83d14",
        "id": 87654321,
        "imageId": 12345678,
        "metadata": {},
        "name": "testing",
        "progress": 0,
        "status": "BUILD"
    }
}
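Since "status" starts out as "BUILD", you will typically poll the server's details until it flips to "ACTIVE". A generic polling sketch follows; the `fetch_status` callable is a placeholder standing in for a GET on the server-details URL, not a real API binding:

```python
import time

def wait_for_status(fetch_status, wanted="ACTIVE",
                    interval=0.0, max_attempts=60):
    """Poll `fetch_status()` (a callable returning the current server
    status string) until it returns `wanted`; return the final status.
    Raises if the build errors out or never finishes."""
    for _ in range(max_attempts):
        status = fetch_status()
        if status == wanted:
            return status
        if status == "ERROR":
            raise RuntimeError("server build failed")
        time.sleep(interval)
    raise TimeoutError("gave up waiting for status %r" % wanted)

# Example with a fake fetcher that becomes ACTIVE on the third poll:
statuses = iter(["BUILD", "BUILD", "ACTIVE"])
final = wait_for_status(lambda: next(statuses))
```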
  • Delete a Cloud Server:
$ curl -i -X DELETE -H "X-Auth-Token: ${MYRAXTOKEN}" \
       https://${REGION}.servers.api.rackspacecloud.com/v2/${MYRAXACCOUNT}/servers/${SERVER_ID}
  • Resize a Cloud Server (e.g., to flavor "2"):
$ curl -i -vv -X POST \
       -H "X-Auth-Token: ${MYRAXTOKEN}" -H "Content-Type: application/json" \
       https://${REGION}.servers.api.rackspacecloud.com/v2/${MYRAXACCOUNT}/servers/${SERVER_ID}/action \
       -d '{"resize":{"flavorRef":"2"}}'
  • Update a FirstGen Cloud Server's name and/or admin password:
$ curl -v -s -XPUT https://servers.api.rackspacecloud.com/v1.0/${MYRAXACCOUNT}/servers/${SERVER_ID} \
       -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: ${MYRAXTOKEN}" \
       -d '{"server":{"name":"SERVER_NAME", "adminPass": "NEW_PASSWORD"}}'
# Normal Response Code: 204

Cloud Files

Note: See the Cloud Files docs for more details and examples.

  • Create container (e.g., region DFW):
$ REGION=dfw; curl -XPUT -H "X-Auth-Token: $MYRAXTOKEN" \
  https://storage101.${REGION}1.clouddrive.com/v1/MossoCloudFS_ffff-ffff-ffff-ffff-ffff/container_name

Note: If you are always going to create containers (and put objects) in the same data centre/region (e.g., DFW), you can shorten your `curl` command like so:

$ ENDPOINT_URL=https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_ffff-ffff-ffff-ffff-ffff
$ curl -XPUT -H "X-Auth-Token: $MYRAXTOKEN" "${ENDPOINT_URL}/${CONTAINER}"

Your API URL endpoint will be identical for all regions (except for the "dfw", "ord", etc. part). As such, I will use that "ENDPOINT_URL" for the remaining examples in this Cloud Files section.

  • Upload ("PUT") a file to a given container:
$ CONTAINER=foo; curl -X PUT -T /path/to/your/local/file.txt -D - \
   -H "Content-Type: text/plain" \
   -H "X-Auth-Token: ${MYRAXTOKEN}" \
   -H "X-Object-Meta-Screenie: Testing PUT Object" \
   "${ENDPOINT_URL}/${CONTAINER}/file.txt"
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Last-Modified: Fri, 27 Dec 2013 19:53:35 GMT
Content-Length: 0
Etag: d41d8cd98fdfeg5fe9800998ecf8427e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1
Date: Fri, 27 Dec 2013 19:53:35 GMT

Note: If you do not explicitly define the "Content-Type" header, the API will try to guess the object's type. If it cannot guess the type of the object you are uploading, it will probably set it to "application/octet-stream".
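Rather than letting the API guess, you can compute the "Content-Type" yourself before uploading. Python's standard `mimetypes` module performs the same kind of extension-based guessing, with the same "application/octet-stream" fallback described above:

```python
import mimetypes

def content_type_for(filename):
    """Guess a Content-Type header value from the filename's extension,
    falling back to application/octet-stream for unknown types."""
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"
```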

  • Download ("GET") an object ("file") from a given container:
$ CONTAINER=foo; curl -X GET -o /path/where/to/download/file.txt "${ENDPOINT_URL}/${CONTAINER}/file.txt" -H "X-Auth-Token: ${MYRAXTOKEN}"

Note: You do not have to include the "-X GET" in the above command, since that is `curl`'s default request method. However, I like to include it, as it makes it very clear (to me, my log files, my HISTORY, etc.) what I was requesting.

  • List all containers in a given region (note: the "-D -" dumps the header response to STDOUT. You can remove that part if you do not need/want it):
$ REGION=dfw; curl -X GET -D - -H "X-Auth-Token:${MYRAXTOKEN}" "${ENDPOINT_URL}"
  • Controlling a large list of containers (i.e., pagination).

If you have hundreds or thousands of containers in your Cloud Files, you may not want/need to list them all out (as in the previous example). This is where setting limits, markers, and offsets (aka "pagination") becomes useful. (Note: The maximum list of containers the API will return is 10,000. Therefore, you must use pagination if you have more than that and wish to list them all.)

For an example, say you have the following Cloud Files containers:

foo
bar
baz
qux

To list only the first two (i.e., "foo" and "bar"), use the "?limit=2" parameter, like so:

$ curl -X GET -D - -H "X-Auth-Token:${MYRAXTOKEN}" "${ENDPOINT_URL}/?limit=2"
foo
bar

To list the next two containers (i.e., "baz" and "qux"), use the last container name from the previous command as the "marker" in the following command:

$ LIMIT=2; MARKER=bar; curl -X GET -D - -H "X-Auth-Token:${MYRAXTOKEN}" "${ENDPOINT_URL}/?limit=${LIMIT}&marker=${MARKER}"
baz
qux
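The marker-based pagination above is easy to wrap in a loop: fetch a page, remember the last name returned, and use it as the marker for the next request until a short (or empty) page comes back. A sketch with the page-fetching request abstracted out (the `list_page(limit, marker)` callable is a placeholder for the ?limit=...&marker=... GET shown above):

```python
def all_containers(list_page, limit=2):
    """Collect every container name by paging with limit/marker.

    `list_page(limit, marker)` must return the next batch of names
    (at most `limit` of them), like the ?limit=...&marker=... request.
    """
    names, marker = [], None
    while True:
        page = list_page(limit, marker)
        names.extend(page)
        if len(page) < limit:       # short page: nothing left
            return names
        marker = page[-1]           # last name becomes the next marker

# Fake backend mimicking the foo/bar/baz/qux example above:
CONTAINERS = ["foo", "bar", "baz", "qux"]

def fake_page(limit, marker):
    start = CONTAINERS.index(marker) + 1 if marker else 0
    return CONTAINERS[start:start + limit]
```

Remember the 10,000-container cap on a single listing mentioned above; a loop like this is the only way to enumerate more than that.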
  • Get information on a given container (e.g., container object count, the timestamp of when the container was created, bytes used, metadata, etc.):
$ CONTAINER=foo; curl -I "${ENDPOINT_URL}/${CONTAINER}" -H "X-Auth-Token:${MYRAXTOKEN}"
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 21
Accept-Ranges: bytes
X-Container-Meta-Access-Log-Delivery: false
X-Timestamp: 1369387765.22382
X-Container-Bytes-Used: 10655668
Content-Type: text/plain; charset=utf-8
X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1
Date: Fri, 27 Dec 2013 18:40:32 GMT

A status code of 204 (No Content) indicates success. Status code 404 (Not Found) is returned when the requested container does not exist.

So, the container named "foo" has 21 objects in it and they are using 10655668 bytes (~10.66 MB) of space. Note that the "X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1" also tells you which data centre this container is stored in (i.e., the "dfw1" characters at the end of that string). Of course, you already knew that, as you told it which DC to use in your `curl` command (however, that might be useful information to store in, say, log files).

This container was created at Unix/POSIX timestamp "1369387765.22382" (i.e., 2013-05-24 04:29:25.223820 in local time). You can use the following Python code to translate from a Unix timestamp to your local time:

>>> import datetime
>>> print datetime.datetime.fromtimestamp(1369387765.22382)
2013-05-24 04:29:25.223820

Or, just use the standard Linux `date` command:

$ date -d @1369387765.2238
Fri May 24 04:29:25 CDT 2013
#~OR~
$ date -ud @1369387765.2238  # If you, like me, prefer UTC time
Fri May 24 09:29:25 UTC 2013

You can also get information on a given object ("file") within a given container:

$ CONTAINER=foo; curl -I "${ENDPOINT_URL}/${CONTAINER}/testfile2" -H "X-Auth-Token:${MYRAXTOKEN}"
HTTP/1.1 200 OK
Content-Length: 0
Accept-Ranges: bytes
Last-Modified: Fri, 27 Dec 2013 19:59:42 GMT
Etag: d41d8cd98fd34fscfe9800998ecf8427e
X-Timestamp: 1388174382.97599
Content-Type: text/plain
X-Object-Meta-Screenie: Testing PUT Object
X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1
Date: Fri, 27 Dec 2013 19:59:56 GMT

Note that the "X-Object-Meta-Screenie" header was set in the previous upload/put object example (see above).

  • Print out the number of bytes all containers for each region are using:
$ for region in dfw1 iad3 ord1; do \
    echo $region $(curl -s -I "https://storage101.${region}.clouddrive.com/v1/MossoCloudFS_ffff-ffff-ffff-ffff-ffff" \
      -H "X-Auth-Token:${MYRAXTOKEN}"|grep ^X-Account-Bytes-Used|sed 's/.*: \(.*\)/\1/'); \
  done
dfw1 14745761
iad3 2046304
ord1 104906613

Note: Obviously, it would be better to do the above in a bash script (or, better yet, with an SDK like pyrax).
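As a hedged sketch of what that shell loop's moving parts look like in Python (the storage hostname pattern and placeholder account path mirror the command above; `parse_bytes_used` just reads the response header that the `grep`/`sed` pipeline was extracting):

```python
# Placeholder account path, anonymized exactly as in the curl command above.
ACCOUNT_PATH = "MossoCloudFS_ffff-ffff-ffff-ffff-ffff"

def storage_url(region):
    """Build the per-region Cloud Files endpoint used in the loop above
    (region strings like 'dfw1', 'iad3', 'ord1')."""
    return "https://storage101.%s.clouddrive.com/v1/%s" % (region, ACCOUNT_PATH)

def parse_bytes_used(headers):
    """Read X-Account-Bytes-Used from a response-header mapping,
    defaulting to 0 when the header is absent."""
    return int(headers.get("X-Account-Bytes-Used", 0))
```

A real script would issue a HEAD request to `storage_url(region)` for each region and pass the response headers to `parse_bytes_used`.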

Cloud Image API export

In this section, I will show all of the steps necessary to export an image of a Cloud Server from one DC and then import it into another DC to be used to create new Cloud Servers from that imported image.

The examples in this section will do the following:

  • Create a Cloud Server in DFW;
  • Create an image of that Cloud Server in DFW;
  • Export that image to a Cloud Files container (also in DFW);
  • Download the exported image (which is a VHD) from the Cloud Files container in DFW;
  • Upload the VHD to a Cloud Files container in IAD;
  • Import that VHD as a saved image in IAD; and
  • Create a new Cloud Server in IAD using this exported image

NOTE: I used the API (simple cURL commands) for the entire process. Although you can accomplish some of the steps via the Cloud Control Panel (e.g., creating a Cloud Server and creating images of a Cloud Server), the actual export and import of the image must be done via the API.

  • Step #0: Setup your environment variables
$ ACCOUNT=012345
$ USERNAME=myraxusername
$ API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
$ EXPORT_REGION=dfw
$ IMPORT_REGION=iad
$ EXPORT_SERVER_ENDPOINT=https://$EXPORT_REGION.servers.api.rackspacecloud.com/v2/$ACCOUNT/servers
$ IMPORT_SERVER_ENDPOINT=https://$IMPORT_REGION.servers.api.rackspacecloud.com/v2/$ACCOUNT/servers
$ EXPORT_IMAGE_ENDPOINT=https://$EXPORT_REGION.images.api.rackspacecloud.com/v2/$ACCOUNT
$ IMPORT_IMAGE_ENDPOINT=https://$IMPORT_REGION.images.api.rackspacecloud.com/v2/$ACCOUNT
$ EXPORT_CONTAINER_ENDPOINT=https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
$ IMPORT_CONTAINER_ENDPOINT=https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
$ EXPORT_CONTAINER="image-exports"
$ IMPORT_CONTAINER="image-imports"
$ BASE_IMAGE_ID=fd8e4f18-9270-4f43-8932-c3719ae2f7fd # CentOS 6.5 stock image
$ FLAVOR_ID="performance1-1"
$ TOKEN=`curl -s -XPOST https://identity.api.rackspacecloud.com/v2.0/tokens \
      -d'{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"'$USERNAME'","apiKey":"'$API_KEY'"}}}' \
      -H"Content-type:application/json" |\
      python -c 'import sys,json;data=json.loads(sys.stdin.read());print data["access"]["token"]["id"]'`
  • Step #1: Create a Cloud Server in DFW
$ EXPORT_SERVER_NAME="dfw-export-server"
$ curl -s -X POST $EXPORT_SERVER_ENDPOINT \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -d "{\"server\":{\"name\":\"$EXPORT_SERVER_NAME\",\"imageRef\":\"$BASE_IMAGE_ID\",\"flavorRef\":\"$FLAVOR_ID\",\"OS-DCF:diskConfig\":\"AUTO\"}}" |\
      python -m json.tool

Make any configuration changes to this server before proceeding to the next step.

  • Step #2: Create an image of the Cloud Server in DFW
$ EXPORT_SERVER_ID=c879cbb7-cdaa-4774-b333-9d5efc5deb78
$ EXPORT_IMAGE_NAME="dfw-export-image"
$ curl -s -X POST $EXPORT_SERVER_ENDPOINT/$EXPORT_SERVER_ID/action \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -d "{\"createImage\":{\"name\":\"$EXPORT_IMAGE_NAME\"}}"
  • Step #3: Export image to Cloud Files container in DFW

NOTE: Make sure the "EXPORT_CONTAINER" exists in Cloud Files (in DFW) before you run the following command, and set "EXPORT_IMAGE_ID" to the UUID of the image created in Step #2 (e.g., via the Cloud Control Panel or an image-list API call).

$ curl -s -X POST \
      $EXPORT_IMAGE_ENDPOINT/tasks \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"export\",\"input\":{\"receiving_swift_container\":\"$EXPORT_CONTAINER\",\"image_uuid\":\"$EXPORT_IMAGE_ID\"}}" |\
      python -mjson.tool

You can check on the status of your export with the following command (keep checking until its status goes from "processing" to "success"):

$ curl -s -X GET -H "X-Auth-Token: $TOKEN" $EXPORT_IMAGE_ENDPOINT/tasks | python -m json.tool
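The tasks listing comes back as JSON, so a small helper can pull each task's status out of it while you wait for "success". The payload shape below is a minimal assumption for illustration, based on the export request above, not the full task schema:

```python
import json

def task_statuses(tasks_body):
    """Map task id -> status from a GET .../tasks response body."""
    data = json.loads(tasks_body)
    return {t["id"]: t["status"] for t in data.get("tasks", [])}

# Minimal example payload (shape assumed for illustration):
payload = '{"tasks": [{"id": "t-1", "type": "export", "status": "processing"}]}'
```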

  • Step #4: Download the exported image (VHD) from the Cloud Files container in DFW
$ VHD_FILENAME=8e9db6d7-0454-4d44-8b32-817c90212297.vhd
$ curl -o /tmp/$VHD_FILENAME \
      -H "X-Auth-Token: $TOKEN" \
      "$EXPORT_CONTAINER_ENDPOINT/$EXPORT_CONTAINER/$VHD_FILENAME"

After the file has been successfully downloaded to your local computer, check the integrity of the VHD by inspecting its header and footer with the following commands:

$ head -c 512 /tmp/$VHD_FILENAME | hexdump -C
00000000  63 6f 6e 65 63 74 69 78  00 00 00 02 00 01 00 00  |conectix........|
00000010  00 00 00 00 00 00 02 00  1a c3 dc 6a 74 61 70 00  |...........jtap.|
00000020  00 01 00 03 00 00 00 00  00 00 00 05 00 00 00 00  |................|
00000030  00 00 00 05 00 00 00 00  a2 8a 10 3f 00 00 00 03  |...........?....|
00000040  ff ff f0 d0 20 81 21 a0  08 fb 4c 2c 99 57 fd b9  |.... .!...L,.W..|
00000050  2d 6e 83 38 00 00 00 00  00 00 00 00 00 00 00 00  |-n.8............|
00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000200
$ tail -c 512 /tmp/$VHD_FILENAME | hexdump -C
00000000  63 6f 6e 65 63 74 69 78  00 00 00 02 00 01 00 00  |conectix........|
00000010  00 00 00 00 00 00 02 00  1a c3 dc 6a 74 61 70 00  |...........jtap.|
00000020  00 01 00 03 00 00 00 00  00 00 00 05 00 00 00 00  |................|
00000030  00 00 00 05 00 00 00 00  a2 8a 10 3f 00 00 00 03  |...........?....|
00000040  ff ff f0 d0 20 81 21 a0  08 fb 4c 2c 99 57 fd b9  |.... .!...L,.W..|
00000050  2d 6e 83 38 00 00 00 00  00 00 00 00 00 00 00 00  |-n.8............|
00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000200

The output of `head` and `tail` should be identical, and you should see the "conectix" cookie in the first bytes of each.
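The same header/footer check can be scripted: read the first and last 512 bytes of the file, confirm both begin with the "conectix" cookie, and confirm they are identical. A sketch using plain file I/O (no Rackspace specifics):

```python
def vhd_looks_intact(path):
    """Return True if the file's first and last 512 bytes are identical
    and both start with the VHD 'conectix' cookie."""
    with open(path, "rb") as f:
        header = f.read(512)
        f.seek(-512, 2)          # seek to 512 bytes before end-of-file
        footer = f.read(512)
    return header == footer and header.startswith(b"conectix")

# Example: build a tiny fake "VHD" with the same 512-byte block at both ends.
import os, tempfile
block = b"conectix" + b"\x00" * 504
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(block + b"\xff" * 2048 + block)
ok = vhd_looks_intact(tmp)
os.remove(tmp)
```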

  • Step #5: Upload VHD to Cloud Files container in IAD

NOTE: Make sure the "IMPORT_CONTAINER" exists in Cloud Files (in IAD) before you run the following command.

$ curl -X PUT -T /tmp/$VHD_FILENAME \
      -H "X-Auth-Token: $TOKEN" \
      "$IMPORT_CONTAINER_ENDPOINT/$IMPORT_CONTAINER/$VHD_FILENAME"
  • Step #6: Import the VHD from the Cloud Files container (in IAD)
$ VHD_NOTES="image-exported-from-dfw"
$ curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/$VHD_FILENAME\"}}" |\
      python -mjson.tool
  • Step #7: Create a new Cloud Server from the imported VHD/image (in IAD)
$ IMPORT_SERVER_NAME="iad-import-server"
$ IMPORT_IMAGE_ID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee # UUID of the image imported in Step #6
$ curl -s -X POST $IMPORT_SERVER_ENDPOINT \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -d "{\"server\":{\"name\":\"$IMPORT_SERVER_NAME\",\"imageRef\":\"$IMPORT_IMAGE_ID\",\"flavorRef\":\"$FLAVOR_ID\",\"OS-DCF:diskConfig\":\"AUTO\"}}" |\
      python -m json.tool

The process is now complete. You have successfully exported an image of your Cloud Server from DFW and created a new Cloud Server from that (imported) image in IAD.

External links