NOTE: This article was written in 2014 and is no longer maintained.
Latest revision as of 17:30, 19 September 2019
This article will be a somewhat random assortment of example RESTful API calls to various Rackspace products and services. I plan to organize this article and add nearly all possible calls. Most of the API calls will be made using `curl` commands, but other resources will be used as well. These examples, tutorials, HOWTOs, etc. of Rackspace API calls are (almost) always shown using the latest version of the RESTful API.
Notes
You will notice in many of the examples given in this article that I mix the order of my requests and headers and sometimes include whitespace (e.g., "-X PUT" or '-H "X-Auth-Token: $MYRAXTOKEN"') and sometimes do not (e.g., "-XPUT" or '-H"X-Auth-Token:$MYRAXTOKEN"'). This is on purpose, as I am attempting to illustrate that `curl` and the API do not care about the order of these or about most whitespace (there are some exceptions, and I will point those out when necessary). I also sometimes enclose the endpoint URL in quotes. This, too, is usually not necessary, but it delineates the endpoint (+container/object/parameters) more clearly (and, if you are naughty and use spaces in your filenames {why are you?}, you will need to use quotes).
Environment variables
Note: This is probably not the most secure way to store your account username, API Key, account number, or token. However, it makes it simpler to illustrate using the Rackspace API in this article.
You can find all of the following (except for your 24-hour-valid token) in the Rackspace Cloud Control Panel under the "Account Settings" section.
MYRAXUSERNAME=your_account_username
MYRAXAPIKEY=your_unique_api_key      # _never_ give this out to anyone!
MYRAXACCOUNT=integer_value           # e.g., 123456
MYRAXTOKEN=your_24_hour_valid_token  # see below
I will be using the above environment variables for the remainder of this article.
Regions
The following are the Rackspace API endpoint regions (aka, the data centres where your servers/data/etc. live):
- DFW - Dallas DC
- HKG - Hong Kong DC
- IAD - Northern Virginia DC
- LON - London DC
- ORD - Chicago DC
- SYD - Sydney DC
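Since every endpoint in this article embeds one of these region codes, it can be handy to build the URLs programmatically. A minimal Python 3 sketch (the `servers_endpoint` helper and the account number are my own illustration; the URL pattern is the one used in the examples below):

```python
# Region codes from the list above, mapped to their data centres.
RAX_REGIONS = {
    "DFW": "Dallas",
    "HKG": "Hong Kong",
    "IAD": "Northern Virginia",
    "LON": "London",
    "ORD": "Chicago",
    "SYD": "Sydney",
}

def servers_endpoint(region, account):
    """Build the NextGen Cloud Servers endpoint URL for a given region."""
    if region.upper() not in RAX_REGIONS:
        raise ValueError("unknown region: %s" % region)
    return ("https://%s.servers.api.rackspacecloud.com/v2/%s/servers"
            % (region.lower(), account))

print(servers_endpoint("DFW", "123456"))
# https://dfw.servers.api.rackspacecloud.com/v2/123456/servers
```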
Authentication
For every API call, it is first necessary to obtain an authentication token. These tokens are valid for 24 hours, and you must re-authenticate when yours expires. You can think of your token as a temporary "password" to access Rackspace services on your account. To authenticate, you only need your Rackspace account username ($MYRAXUSERNAME) and your API Key ($MYRAXAPIKEY).
- Simple authentication against version 1.0 of the API (this should return, among other things, an "X-Auth-Token"):
$ curl -D - -H "X-Auth-Key: $MYRAXAPIKEY" \
       -H "X-Auth-User: $MYRAXUSERNAME" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       https://identity.api.rackspacecloud.com/v1.0
- Authenticate the same way as above, but time namelookup, connect, starttransfer, and total API calls:
$ curl -D - -H "X-Auth-Key: $MYRAXAPIKEY" \
       -H "X-Auth-User: $MYRAXUSERNAME" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -w "time_namelookup: %{time_namelookup}\ntime_connect: %{time_connect}\ntime_starttransfer: %{time_starttransfer}\ntime_total: %{time_total}\n" \
       https://identity.api.rackspacecloud.com/v1.0
- Authenticate and receive a full listing of API v2.0 endpoints (for all of the Rackspace services active on your account):
$ curl -X POST https://identity.api.rackspacecloud.com/v2.0/tokens \
       -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"'$MYRAXUSERNAME'","apiKey":"'$MYRAXAPIKEY'"}}}' \
       -H "Content-type: application/json"
- Authenticate the same way as above, but use compression during the send/receive:
$ curl -i -X POST \
       -H 'Host: identity.api.rackspacecloud.com' \
       -H 'Accept-Encoding: gzip,deflate' \
       -H 'X-LC-Request-ID: 16491440' \
       -H 'Content-Type: application/json; charset=UTF-8' \
       -H 'Accept: application/json' \
       -H 'User-Agent: libcloud/0.13.0 (Rackspace Monitoring)' \
       --data-binary '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "'$MYRAXUSERNAME'", "apiKey": "'$MYRAXAPIKEY'"}}}' \
       --compress https://identity.api.rackspacecloud.com:443/v2.0/tokens
- Save the "token" as an environment variable (remember, this token will only be valid for 24 hours):
$ MYRAXTOKEN=`curl -s -XPOST https://identity.api.rackspacecloud.com/v2.0/tokens \
    -d'{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"'$MYRAXUSERNAME'","apiKey":"'$MYRAXAPIKEY'"}}}' \
    -H"Content-type:application/json" | \
    python -c 'import sys,json;data=json.loads(sys.stdin.read());print data["access"]["token"]["id"]'`
The token in the previous command is usually found at the beginning of the output and looks something like this: {"access":{"token":{"id":"abcd","expires":"2014-01-25T01:05:46.851Z" (where "abcd" is a string of 32 alphanumeric characters that is your token). The "expires" value tells you how long your token is valid (it will be 24 hours from when you first authenticated).
I will use this $MYRAXTOKEN environment variable (from the last command) for the remainder of this article.
Cloud Servers
Note: See the NextGen API docs or the FirstGen API docs for more details and examples. Take note of the API version numbers (e.g., "v1.0", "v2", etc.), as they are important and some are deprecated.
- Cloud Servers "flavors": A flavor is an available hardware configuration for a server. Each flavor has a unique combination of disk space, memory capacity, and priority for CPU time.
- Old flavors
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID               | Name                    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 2                | 512MB Standard Instance | 512       | 20   | 0         | 512  | 1     | 80.0        | N/A       |
| 3                | 1GB Standard Instance   | 1024      | 40   | 0         | 1024 | 1     | 120.0       | N/A       |
| 4                | 2GB Standard Instance   | 2048      | 80   | 0         | 2048 | 2     | 240.0       | N/A       |
| 5                | 4GB Standard Instance   | 4096      | 160  | 0         | 2048 | 2     | 400.0       | N/A       |
| 6                | 8GB Standard Instance   | 8192      | 320  | 0         | 2048 | 4     | 600.0       | N/A       |
| 7                | 15GB Standard Instance  | 15360     | 620  | 0         | 2048 | 6     | 800.0       | N/A       |
| 8                | 30GB Standard Instance  | 30720     | 1200 | 0         | 2048 | 8     | 1200.0      | N/A       |
| performance1-1   | 1 GB Performance        | 1024      | 20   | 0         |      | 1     | 200.0       | N/A       |
| performance1-2   | 2 GB Performance        | 2048      | 40   | 20        |      | 2     | 400.0       | N/A       |
| performance1-4   | 4 GB Performance        | 4096      | 40   | 40        |      | 4     | 800.0       | N/A       |
| performance1-8   | 8 GB Performance        | 8192      | 40   | 80        |      | 8     | 1600.0      | N/A       |
| performance2-120 | 120 GB Performance      | 122880    | 40   | 1200      |      | 32    | 10000.0     | N/A       |
| performance2-15  | 15 GB Performance       | 15360     | 40   | 150       |      | 4     | 1250.0      | N/A       |
| performance2-30  | 30 GB Performance       | 30720     | 40   | 300       |      | 8     | 2500.0      | N/A       |
| performance2-60  | 60 GB Performance       | 61440     | 40   | 600       |      | 16    | 5000.0      | N/A       |
| performance2-90  | 90 GB Performance       | 92160     | 40   | 900       |      | 24    | 7500.0      | N/A       |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
- New flavors
+------------------+-------------------------+-----------+------+-----------+---------+-------+-------------+-----------+
| ID               | Name                    | Memory_MB | Disk | Ephemeral | Swap_MB | VCPUs | RXTX_Factor | Is_Public |
+------------------+-------------------------+-----------+------+-----------+---------+-------+-------------+-----------+
| 2                | 512MB Standard Instance | 512       | 20   | 0         |         | 1     |             | N/A       |
| 3                | 1GB Standard Instance   | 1024      | 40   | 0         |         | 1     |             | N/A       |
| 4                | 2GB Standard Instance   | 2048      | 80   | 0         |         | 2     |             | N/A       |
| 5                | 4GB Standard Instance   | 4096      | 160  | 0         |         | 2     |             | N/A       |
| 6                | 8GB Standard Instance   | 8192      | 320  | 0         |         | 4     |             | N/A       |
| 7                | 15GB Standard Instance  | 15360     | 620  | 0         |         | 6     |             | N/A       |
| 8                | 30GB Standard Instance  | 30720     | 1200 | 0         |         | 8     |             | N/A       |
| compute1-15      | 15 GB Compute v1        | 15360     | 0    | 0         |         | 8     |             | N/A       |
| compute1-30      | 30 GB Compute v1        | 30720     | 0    | 0         |         | 16    |             | N/A       |
| compute1-4       | 3.75 GB Compute v1      | 3840      | 0    | 0         |         | 2     |             | N/A       |
| compute1-60      | 60 GB Compute v1        | 61440     | 0    | 0         |         | 32    |             | N/A       |
| compute1-8       | 7.5 GB Compute v1       | 7680      | 0    | 0         |         | 4     |             | N/A       |
| general1-1       | 1 GB General Purpose v1 | 1024      | 20   | 0         |         | 1     |             | N/A       |
| general1-2       | 2 GB General Purpose v1 | 2048      | 40   | 0         |         | 2     |             | N/A       |
| general1-4       | 4 GB General Purpose v1 | 4096      | 80   | 0         |         | 4     |             | N/A       |
| general1-8       | 8 GB General Purpose v1 | 8192      | 160  | 0         |         | 8     |             | N/A       |
| io1-120          | 120 GB I/O v1           | 122880    | 40   | 1200      |         | 32    |             | N/A       |
| io1-15           | 15 GB I/O v1            | 15360     | 40   | 150       |         | 4     |             | N/A       |
| io1-30           | 30 GB I/O v1            | 30720     | 40   | 300       |         | 8     |             | N/A       |
| io1-60           | 60 GB I/O v1            | 61440     | 40   | 600       |         | 16    |             | N/A       |
| io1-90           | 90 GB I/O v1            | 92160     | 40   | 900       |         | 24    |             | N/A       |
| memory1-120      | 120 GB Memory v1        | 122880    | 0    | 0         |         | 16    |             | N/A       |
| memory1-15       | 15 GB Memory v1         | 15360     | 0    | 0         |         | 2     |             | N/A       |
| memory1-240      | 240 GB Memory v1        | 245760    | 0    | 0         |         | 32    |             | N/A       |
| memory1-30       | 30 GB Memory v1         | 30720     | 0    | 0         |         | 4     |             | N/A       |
| memory1-60       | 60 GB Memory v1         | 61440     | 0    | 0         |         | 8     |             | N/A       |
| onmetal-compute1 | OnMetal Compute v1      | 32768     | 32   | 0         |         | 20    |             | N/A       |
| onmetal-memory1  | OnMetal Memory v1       | 524288    | 32   | 0         |         | 24    |             | N/A       |
| performance1-1   | 1 GB Performance        | 1024      | 20   | 0         |         | 1     |             | N/A       |
| performance1-2   | 2 GB Performance        | 2048      | 40   | 20        |         | 2     |             | N/A       |
| performance1-4   | 4 GB Performance        | 4096      | 40   | 40        |         | 4     |             | N/A       |
| performance1-8   | 8 GB Performance        | 8192      | 40   | 80        |         | 8     |             | N/A       |
| performance2-120 | 120 GB Performance      | 122880    | 40   | 1200      |         | 32    |             | N/A       |
| performance2-15  | 15 GB Performance       | 15360     | 40   | 150       |         | 4     |             | N/A       |
| performance2-30  | 30 GB Performance       | 30720     | 40   | 300       |         | 8     |             | N/A       |
| performance2-60  | 60 GB Performance       | 61440     | 40   | 600       |         | 16    |             | N/A       |
| performance2-90  | 90 GB Performance       | 92160     | 40   | 900       |         | 24    |             | N/A       |
+------------------+-------------------------+-----------+------+-----------+---------+-------+-------------+-----------+
Note that Performance Cloud Servers ("performance{1,2}") do not have a swap partition enabled by default. The "Disk" column is your system disk/partition and the "Ephemeral" column is your data disk/partition in an automatic setup. Standard instances do not come with a data disk. You can, of course, manually partition your virtual hard drives. You will use the flavor "ID" when choosing which flavor ("flavorId") to use when creating a server.
- List all of the NextGen servers on your account for a given region:
$ REGION=dfw
$ curl -XGET https://${REGION}.servers.api.rackspacecloud.com/v2/${MYRAXACCOUNT}/servers \
       -H "X-Auth-Token: ${MYRAXTOKEN}" \
       -H "Accept: application/json" | python -m json.tool
- List all of the FirstGen servers on your account (regardless of region):
$ curl -s https://servers.api.rackspacecloud.com/v1.0/${MYRAXACCOUNT}/servers \
       -H "X-Auth-Token: ${MYRAXTOKEN}" | python -m json.tool
- Create a Cloud Server via the FirstGen (v1.0) API (in your account's default region):
$ curl -s https://servers.api.rackspacecloud.com/v1.0/${MYRAXACCOUNT}/servers -X POST \
       -H "Content-Type: application/json" \
       -H "X-Auth-Token: ${MYRAXTOKEN}" \
       -d "{\"server\":{\"name\":\"testing\",\"imageId\":12345678,\"flavorId\":4}}" | python -m json.tool
The above should return something similar to the following (note that "status": "BUILD" will become "ACTIVE" once the build process is complete):
{ "server": { "addresses": { "private": [ "10.x.x.x" ], "public": [ "184.x.x.x" ] }, "adminPass": "lR2aB3g5Xtesting", "flavorId": 4, "hostId": "ffffff2cd2470205bbe2f6b6f7f83d14", "id": 87654321, "imageId": 12345678, "metadata": {}, "name": "testing", "progress": 0, "status": "BUILD" } }
- Delete a Cloud Server:
$ curl -i -X DELETE -H "X-Auth-Token: ${MYRAXTOKEN}" \
       https://${REGION}.servers.api.rackspacecloud.com/v2/${MYRAXACCOUNT}/servers/${SERVER_ID}
- Resize (up/down) a Cloud Server:
$ curl -i -vv -X POST \
       -H "X-Auth-Token: ${MYRAXTOKEN}" -H "Content-Type: application/json" \
       https://${REGION}.servers.api.rackspacecloud.com/v2/${MYRAXACCOUNT}/servers/${SERVER_ID}/action \
       -d '{"resize":{"flavorRef":"2"}}'
- Change the administrative password:
$ curl -v -s -XPUT https://servers.api.rackspacecloud.com/v1.0/${MYRAXACCOUNT}/servers/${SERVER_ID} \
       -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: ${MYRAXTOKEN}" \
       -d '{"server":{"name":"SERVER_NAME", "adminPass": "NEW_PASSWORD"}}'
# Normal Response Code: 204
Cloud Files
- Note: Cloud Files is an "Object Storage" application level system that uses an API to interact with it. For an introduction to Object Storage, see this blog post.
- Note: See the Cloud Files API docs for more details and examples.
- Create container (e.g., region DFW):
$ REGION=dfw; curl -XPUT -H "X-Auth-Token: $MYRAXTOKEN" \
       https://storage101.${REGION}1.clouddrive.com/v1/MossoCloudFS_ffff-ffff-ffff-ffff-ffff/container_name
Note: If you are always going to create containers (and put objects) in the same data centre/region (e.g., DFW), you can shorten your `curl` command like so:
$ ENDPOINT_URL=https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_ffff-ffff-ffff-ffff-ffff
$ curl -XPUT -H "X-Auth-Token: $MYRAXTOKEN" "${ENDPOINT_URL}/${CONTAINER}"
Your API URL endpoint will be identical for all regions (except for the "dfw", "ord", etc. part). As such, I will use that "ENDPOINT_URL" for the remaining examples in this Cloud Files section.
- Upload ("PUT") a file to a given container:
$ CONTAINER=foo; curl -X PUT -T /path/to/your/local/file.txt -D - \
       -H "Content-Type: text/plain" \
       -H "X-Auth-Token: ${MYRAXTOKEN}" \
       -H "X-Object-Meta-Screenie: Testing PUT Object" \
       "${ENDPOINT_URL}/${CONTAINER}/file.txt"
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Last-Modified: Fri, 27 Dec 2013 19:53:35 GMT
Content-Length: 0
Etag: d41d8cd98fdfeg5fe9800998ecf8427e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1
Date: Fri, 27 Dec 2013 19:53:35 GMT
Note: If you do not explicitly define the "Content-Type" header, the API will try to guess the object type. If it cannot guess which type the object you are uploading is, it will probably set it as "application/octet-stream".
- Download ("GET") an object ("file") from a given container:
$ CONTAINER=foo; curl -X GET -o /path/where/to/download/file.txt \
       "${ENDPOINT_URL}/${CONTAINER}/file.txt" -H "X-Auth-Token: ${MYRAXTOKEN}"
Note: You do not have to include the "-X GET" in the above command, since that is the default request in `curl`. However, I like to include it, as it makes it very clear (to me, my log files, my HISTORY, etc.) what I was requesting.
- List all containers in a given region (note: the "-D -" dumps the header response to STDOUT; you can remove that part if you do not need/want it):
$ REGION=dfw; curl -X GET -D - -H "X-Auth-Token:${MYRAXTOKEN}" "${ENDPOINT_URL}"
- Controlling a large list of containers (i.e., pagination).
If you have hundreds or thousands of containers in your Cloud Files, you may not want/need to list them all out (as in the previous example). This is where setting limits, markers, and offsets (aka "pagination") becomes useful. (Note: The maximum list of containers the API will return is 10,000. Therefore, you must use pagination if you have more than that and wish to list them all.)
For an example, say you have the following Cloud Files containers:
foo
bar
baz
qux
To list only the first two (i.e., "foo" and "bar"), use the "?limit=2" parameter, like so:
$ curl -X GET -D - -H "X-Auth-Token:${MYRAXTOKEN}" "${ENDPOINT_URL}/?limit=2"
foo
bar
To list the next two containers (i.e., "baz" and "qux"), use the last container name from the previous command as the "marker" in the following command:
$ LIMIT=2; MARKER=bar; curl -X GET -D - -H "X-Auth-Token:${MYRAXTOKEN}" "${ENDPOINT_URL}/?limit=${LIMIT}&marker=${MARKER}"
baz
qux
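The limit/marker loop above generalizes: keep requesting pages, feeding the last name of each page back in as the next marker, until an empty page comes back. A Python 3 sketch of that logic, simulated against a local list (the hypothetical `fetch_page` function stands in for the `curl` call; it is not part of the Cloud Files API):

```python
# The container names from the example above, in the order the API lists them.
CONTAINERS = ["foo", "bar", "baz", "qux"]

def fetch_page(limit, marker=None):
    """Stand-in for a ?limit=&marker= API call against a local list."""
    start = 0 if marker is None else CONTAINERS.index(marker) + 1
    return CONTAINERS[start:start + limit]

# Walk every page, exactly as you would with repeated curl calls.
all_names, marker = [], None
while True:
    page = fetch_page(2, marker)
    if not page:          # an empty page means we are done
        break
    all_names.extend(page)
    marker = page[-1]     # last name of this page becomes the next marker

print(all_names)
```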
- Get information on a given container (e.g., container object count, the timestamp of when the container was created, bytes used, metadata, etc.):
$ CONTAINER=foo; curl -I "${ENDPOINT_URL}/${CONTAINER}" -H "X-Auth-Token:${MYRAXTOKEN}"
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 21
Accept-Ranges: bytes
X-Container-Meta-Access-Log-Delivery: false
X-Timestamp: 1369387765.22382
X-Container-Bytes-Used: 10655668
Content-Type: text/plain; charset=utf-8
X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1
Date: Fri, 27 Dec 2013 18:40:32 GMT
A status code of 204 (No Content) indicates success. Status code 404 (Not Found) is returned when the requested container does not exist.
So, the container named "foo" has 21 objects in it, and they are using 10655668 bytes (~10.66 MB) of space. Note that the "X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1" also tells you which data centre this container is stored in (i.e., the "dfw1" characters at the end of that string). Of course, you already knew that, as you told it which DC to use in your `curl` command (however, that might be useful information to store in, say, log files).
This container was created in Unix/POSIX time (UTC) on "1369387765.22382" (i.e., 2013-05-24 04:29:25.223820 in local time). You can use the following Python code to translate from a Unix timestamp to your local time:
$ python -c 'import datetime; print datetime.datetime.fromtimestamp(1369387765.22382)'
#~OR~
$ echo 1369387765.22382 | python -c 'import sys,datetime;d=sys.stdin.read();print datetime.datetime.fromtimestamp(float(d))'
2013-05-24 04:29:25.223820
Or, just use the standard Linux `date` command:
$ date -d @1369387765.2238
Fri May 24 04:29:25 CDT 2013
#~OR~
$ date -ud @1369387765.2238  # If you, like me, prefer UTC time
Fri May 24 09:29:25 UTC 2013
You can also get information on a given object ("file") within a given container:
$ CONTAINER=foo; curl -I "${ENDPOINT_URL}/${CONTAINER}/testfile2" -H "X-Auth-Token:${MYRAXTOKEN}"
HTTP/1.1 200 OK
Content-Length: 0
Accept-Ranges: bytes
Last-Modified: Fri, 27 Dec 2013 19:59:42 GMT
Etag: d41d8cd98fd34fscfe9800998ecf8427e
X-Timestamp: 1388174382.97599
Content-Type: text/plain
X-Object-Meta-Screenie: Testing PUT Object
X-Trans-Id: txfffffffffffffffffffff-ffffffffffdfw1
Date: Fri, 27 Dec 2013 19:59:56 GMT
Note that the "X-Object-Meta-Screenie" header was set in the previous upload/put object example (see above).
- Print out the number of bytes all containers for each region are using:
$ for region in dfw1 iad3 ord1; do \
    echo $region $(curl -s -I "https://storage101.${region}.clouddrive.com/v1/MossoCloudFS_ffff-ffff-ffff-ffff-ffff" \
      -H "X-Auth-Token:${MYRAXTOKEN}" | grep ^X-Account-Bytes-Used | sed 's/.*: \(.*\)/\1/'); \
  done
dfw1 14745761
iad3 2046304
ord1 104906613
Note: Obviously, it would be better to do the above in a bash script (or, better yet, with an SDK like pyrax).
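If you do script it, you might also parse the HEAD response yourself rather than chaining grep/sed. A Python 3 sketch (the header text is a fabricated sample, and the `account_bytes_used` helper is my own, not an SDK function):

```python
def account_bytes_used(head_response):
    """Extract the X-Account-Bytes-Used value from raw HEAD response text."""
    for line in head_response.splitlines():
        if line.lower().startswith("x-account-bytes-used:"):
            return int(line.split(":", 1)[1].strip())
    return None  # header not present (e.g., an error response)

sample = ("HTTP/1.1 204 No Content\r\n"
          "X-Account-Bytes-Used: 14745761\r\n"
          "Content-Type: text/plain\r\n")
print(account_bytes_used(sample))  # 14745761
```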
Cloud Databases (DBaaS)
see: Cloud Databases
Cloud Images API export/import
In this section, I will show all of the steps necessary to export an image of a Cloud Server from one DC and then import it into another DC to be used to create new Cloud Servers from that imported image. This feature, Cloud Images, was released by Rackspace on 27-March-2014. Based on the OpenStack® Glance project, Cloud Images lets you use a RESTful API to discover, import, export, and share images of your Cloud Servers. Having portable images gives you more control and flexibility to build and deploy your applications in the cloud.
The examples in this section will do the following:
- Create a Cloud Server in DFW;
- Create an image of that Cloud Server in DFW;
- Export that image to a Cloud Files container (also in DFW);
- Download the exported image (which is a VHD) from the Cloud Files container in DFW;
- Upload the VHD to a Cloud Files container in IAD;
- Import that VHD as a saved image in IAD; and
- Create a new Cloud Server in IAD using this exported image
NOTE: I used the API (simple cURL commands) for the entire process. Although you can accomplish some of the steps via the Cloud Control Panel (e.g., creating a Cloud Server and creating images of a Cloud Server), the actual export and import of the image must be done via the API.
- Step #0: Setup your environment variables
$ ACCOUNT=012345
$ USERNAME=myraxusername
$ API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
$ EXPORT_REGION=dfw
$ IMPORT_REGION=iad
$ EXPORT_SERVER_ENDPOINT=https://${EXPORT_REGION}.servers.api.rackspacecloud.com/v2/${ACCOUNT}/servers
$ IMPORT_SERVER_ENDPOINT=https://$IMPORT_REGION.servers.api.rackspacecloud.com/v2/$ACCOUNT/servers
$ EXPORT_IMAGE_ENDPOINT=https://$EXPORT_REGION.images.api.rackspacecloud.com/v2/$ACCOUNT
$ IMPORT_IMAGE_ENDPOINT=https://$IMPORT_REGION.images.api.rackspacecloud.com/v2/$ACCOUNT
$ EXPORT_CONTAINER_ENDPOINT=https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
$ IMPORT_CONTAINER_ENDPOINT=https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
$ EXPORT_CONTAINER="image-exports"
$ IMPORT_CONTAINER="image-imports"
$ BASE_IMAGE_ID=fd8e4f18-9270-4f43-8932-c3719ae2f7fd  # CentOS 6.5 stock image
$ FLAVOR_ID="performance1-1"
$ TOKEN=`curl -s -XPOST https://identity.api.rackspacecloud.com/v2.0/tokens \
    -d'{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"'$USERNAME'","apiKey":"'$API_KEY'"}}}' \
    -H"Content-type:application/json" |\
    python -c 'import sys,json;data=json.loads(sys.stdin.read());print data["access"]["token"]["id"]'`
- Step #1: Create a Cloud Server in DFW
$ EXPORT_SERVER_NAME="dfw-export-server"
$ curl -s -X POST $EXPORT_SERVER_ENDPOINT \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -d "{\"server\":{\"name\":\"$EXPORT_SERVER_NAME\",\"imageRef\":\"$BASE_IMAGE_ID\",\"flavorRef\":\"$FLAVOR_ID\",\"OS-DCF:diskConfig\":\"AUTO\"}}" |\
       python -m json.tool
Make any configuration changes to this server before proceeding to the next step.
- Step #2: Create image of Cloud Server in DFW
$ EXPORT_SERVER_ID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeee
$ EXPORT_IMAGE_NAME="dfw-export-image"
$ curl -s -X POST $EXPORT_SERVER_ENDPOINT/$EXPORT_SERVER_ID/action \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -d "{\"createImage\":{\"name\":\"$EXPORT_IMAGE_NAME\"}}"
- Step #3: Export image to Cloud Files container in DFW
NOTE: Make sure the "EXPORT_CONTAINER" exists in Cloud Files (in DFW) before you run the following command.
$ EXPORT_IMAGE_ID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeee  # the image ID returned in Step #2
$ curl -s -X POST $EXPORT_IMAGE_ENDPOINT/tasks \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -d "{\"type\":\"export\",\"input\":{\"receiving_swift_container\":\"$EXPORT_CONTAINER\",\"image_uuid\":\"$EXPORT_IMAGE_ID\"}}" |\
       python -mjson.tool
You can check on the status of your export with the following command (keep checking until its status goes from "processing" to "success"):
$ curl -s -X GET -H "X-Auth-Token: $TOKEN" $EXPORT_IMAGE_ENDPOINT/tasks | python -m json.tool
- Step #4: Download the exported image (VHD) from the Cloud Files container in DFW
$ VHD_FILENAME=aaaaaaaa-bbbb-cccc-dddd-eeeeeeee.vhd
$ curl -o /tmp/$VHD_FILENAME \
       -H "X-Auth-Token: $TOKEN" \
       "$EXPORT_CONTAINER_ENDPOINT/$EXPORT_CONTAINER/$VHD_FILENAME"
After the file has successfully been downloaded to your local computer, you will want to check on the integrity of the VHD by looking at the VHD header/footer with the following commands:
$ head -c 512 $VHD_FILENAME | hexdump -C
00000000  63 6f 6e 65 63 74 69 78  00 00 00 02 00 01 00 00  |conectix........|
00000010  00 00 00 00 00 00 02 00  1a c3 dc 6a 74 61 70 00  |...........jtap.|
00000020  00 01 00 03 00 00 00 00  00 00 00 05 00 00 00 00  |................|
00000030  00 00 00 05 00 00 00 00  a2 8a 10 3f 00 00 00 03  |...........?....|
00000040  ff ff f0 d0 20 81 21 a0  08 fb 4c 2c 99 57 fd b9  |.... .!...L,.W..|
00000050  2d 6e 83 38 00 00 00 00  00 00 00 00 00 00 00 00  |-n.8............|
00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000200
$ tail -c 512 $VHD_FILENAME | hexdump -C
00000000  63 6f 6e 65 63 74 69 78  00 00 00 02 00 01 00 00  |conectix........|
00000010  00 00 00 00 00 00 02 00  1a c3 dc 6a 74 61 70 00  |...........jtap.|
00000020  00 01 00 03 00 00 00 00  00 00 00 05 00 00 00 00  |................|
00000030  00 00 00 05 00 00 00 00  a2 8a 10 3f 00 00 00 03  |...........?....|
00000040  ff ff f0 d0 20 81 21 a0  08 fb 4c 2c 99 57 fd b9  |.... .!...L,.W..|
00000050  2d 6e 83 38 00 00 00 00  00 00 00 00 00 00 00 00  |-n.8............|
00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000200
The output of `head` and `tail` should be identical, and you should see that "conectix" in the output's first bytes.
- Step #5: Upload VHD to Cloud Files container in IAD
NOTE: Make sure the "IMPORT_CONTAINER" exists in Cloud Files (in IAD) before you run the following command.
$ curl -X PUT -T /tmp/$VHD_FILENAME \
       -H "X-Auth-Token: $TOKEN" \
       "$IMPORT_CONTAINER_ENDPOINT/$IMPORT_CONTAINER/$VHD_FILENAME"
- Step #6: Import the VHD from the Cloud Files container (in IAD)
$ VHD_NOTES="image-exported-from-dfw"
$ curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/$VHD_FILENAME\"}}" |\
       python -mjson.tool
- Step #7: Create a new Cloud Server from the imported VHD/image (in IAD)
$ IMPORT_SERVER_NAME="iad-import-server"
$ IMPORT_IMAGE_ID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeee
$ curl -s -X POST $IMPORT_SERVER_ENDPOINT \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -d "{\"server\":{\"name\":\"$IMPORT_SERVER_NAME\",\"imageRef\":\"$IMPORT_IMAGE_ID\",\"flavorRef\":\"$FLAVOR_ID\",\"OS-DCF:diskConfig\":\"AUTO\"}}" |\
       python -m json.tool
The process is now complete. You have successfully exported an image of your Cloud Server from DFW and created a new Cloud Server from that (imported) image in IAD.
Image import general requirements
- The image must be a single file in the VHD file format.
- Note: A Cloud Files object, whether a Dynamic Large Object or a Static Large Object, is considered to be a single file for this purpose.
- Images must not expand to a system disk larger than 40 GB, which is the maximum system disk size for Rackspace Performance Cloud Servers.
- If you have exported an image from the Rackspace open cloud, it will already be in the VHD format required for import.
- Note: Images with system disks larger than 40 GB can be exported, but cannot be imported into the Rackspace open cloud.
Image import error codes
- 396 - invalid VHD (non-VHD file or VHD file that does not contain all the correct headers)
- 413 - invalid size (virtual size > 40 GB)
- 609 - invalid size (physical size > 40 GB)
- 721 - invalid chain depth (has one or more parents)
- 523 - invalid disk type (only "Fixed" and "Dynamic" types allowed, as of July 2014)
- 111 - any other error
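A few of these conditions can be pre-checked locally before you bother uploading. A hypothetical Python 3 sketch that inspects the VHD footer and mirrors the cookie (396), virtual size (413), and disk type (523) checks above; the field offsets follow the VHD footer layout (current size is an 8-byte big-endian integer at offset 48, disk type a 4-byte integer at offset 60), but the function and its return codes are illustrative only, not an official validator:

```python
import struct

MAX_BYTES = 40 * 1024**3  # the 40 GB import limit described above

def preflight(footer):
    """Return 0 if the 512-byte VHD footer looks importable, else an error code."""
    if footer[:8] != b"conectix":
        return 396                       # not a VHD
    virtual_size = struct.unpack(">Q", footer[48:56])[0]
    if virtual_size > MAX_BYTES:
        return 413                       # virtual size > 40 GB
    disk_type = struct.unpack(">I", footer[60:64])[0]
    if disk_type not in (2, 3):          # 2 = Fixed, 3 = Dynamic
        return 523
    return 0

# Build a fake footer: cookie + 20 GB virtual size + "Dynamic" disk type.
footer = bytearray(512)
footer[0:8] = b"conectix"
footer[48:56] = struct.pack(">Q", 20 * 1024**3)
footer[60:64] = struct.pack(">I", 3)
print(preflight(bytes(footer)))  # 0
```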
Related links
- Glance API - Use Cases
- Rackspace Cloud Images
- Cloud Images API documentation
- Transferring images between regions of the Rackspace open cloud
- Cloud Images Frequently Asked Questions (FAQs)
- Preparing an image for import into the Rackspace open cloud
- Cloud Share Image — a simple way to share images between two Rackspace accounts
Role Based Access Control (RBAC)
Although there is currently not a way to get overall activity/audit reports for a given Role Based Access Control (RBAC) user, you can get a list of actions for a given service or product (e.g., a Cloud Server).
For an example: Say you have created an RBAC user called "bob" and you have given this user a custom product role for Next Generation Servers with an Admin role (i.e., "bob" can view, create, edit, or delete Next Generation Servers). Now, let's say user "bob" performs the following actions on your account:
- Creates a Next Generation Performance Cloud Server (let's call this server "foobar"); and
- Reboots the "foobar" Cloud Server
You are monitoring your account and you notice a new server list (either through the Cloud Control Panel or by getting a list of your servers via API calls). You would like to see who created that server and which types of actions have been done on this server. Note that the only types of actions you will be able to list are those that can be done via the API.
Assuming you have already authenticated and have received your 24-hour valid token ("TOKEN"), you can get a list of RBAC users on your account with the following:
$ AUTH_ENDPOINT=https://identity.api.rackspacecloud.com
$ curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" "$AUTH_ENDPOINT/v2.0/users" | python -m json.tool
The above will return a list of your RBAC users and some details about them. It will look something like the following (I am only showing one result for brevity):
{ "users": [ { "RAX-AUTH:defaultRegion": "DFW", "RAX-AUTH:domainId": "123456", "created": "2014-07-01T04:20:10.548Z", "email": "bob@example.com", "enabled": true, "id": "ffffffffffffffffffffffffffffffff", "updated": "2014-07-01T04:20:11.320Z", "username": "bob" } ] }
The RBAC user id ("id" value in the above result) is how we will match actions taken for a given service/product with which user performed them.
So, let's say we want to get a list of actions performed on the server "foobar" (and the server's id is "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"). We can run the following `curl` command to query the API for this server like so:
$ ACCOUNT=123456
$ REGION=dfw
$ SERVICE_ENDPOINT=https://$REGION.servers.api.rackspacecloud.com/v2/$ACCOUNT
$ SERVER_ID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
$ curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" \
       "$SERVICE_ENDPOINT/servers/$SERVER_ID/os-instance-actions" \
       | python -m json.tool
The above command should return something that looks like the following:
{ "instanceActions": [ { "action": "reboot", "instance_uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "message": null, "project_id": "123456", "request_id": "req-bb31ea03-4c8c-4052-8e16-19d2c3b93bf6", "start_time": "2014-07-01T04:46:05.000000", "user_id": "ffffffffffffffffffffffffffffffff" }, { "action": "create", "instance_uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "message": null, "project_id": "123456", "request_id": "req-d7cd1f14-3c79-4e55-c958-316fd4f71b67", "start_time": "2014-07-01T04:27:08.000000", "user_id": "ffffffffffffffffffffffffffffffff" } ] }
As you can see, we first got the RBAC "user_id" for "bob" ("ffffffffffffffffffffffffffffffff" in our example). We then matched this user_id against a list of actions taken on our "foobar" server. We see that user "bob" created the server on our account ("123456") on "2014-07-01T04:27:08.000000" and then rebooted this same server on "2014-07-01T04:46:05.000000".
You can get a complete list of actions that can be taken or queried for Cloud Servers in the Cloud Servers API documentation.
In the above examples, I only showed how to get a list of actions for a given Cloud Server. You can do similar things for the other products and services Rackspace offers via the API (e.g., Load Balancers, DNS, Cloud Files, Cloud Databases, etc.). Instead of using `curl` for these calls, you could use one of the Rackspace SDKs.
Also, you could set up your own "logging" system by using something like cron jobs to query your account, say, once a day for a list of servers (or any other product). You could have a script parse the log for "create" actions and have it notify you via email when a Cloud Server is created (or deleted/rebooted/etc.) on your account (and which of your RBAC users created the server).
See also
- Cloud Block Storage API Examples (CBS)
- Cloud Databases API Examples (DBaaS)
- Cloud Monitoring API Examples (MaaS)
- Cloud Images API Examples