Export data from Xively to CSV

I’ve been using Xively (back then it was Cosm) for my data logging Arduinos since 2012, so when they announced that “LogMeIn has made the decision to retire Xively Personal from its current line of products effective January 15, 2018 at 12:00PM ET“, I thought I’d better get my data out of there.

Unfortunately it doesn’t look like they have a tool/API that can export everything easily. They do, however, have an API call that returns up to 1000 entries in one go, so I hacked together a bash script that fetches the data in 6-hour chunks and appends it all to a CSV file. (Update: as Al noticed in the comments, the script only fetched half a day’s worth of data; it’s now updated to grab whole days! Thanks Al!)

#!/bin/bash
# d = start date
d=2012-10-28
# e = end date
e=2017-12-20
# api = api key
api=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# feed = feed id
feed=XXXXX
# data = data stream
data=Temperature
# name = filename for output
name=Test
#
while [ "$d" != "$e" ]; do
    echo "----- New day -----"
    # four 6-hour chunks cover the whole day
    for t in T00:00:00 T06:00:00 T12:00:00 T18:00:00; do
        echo "$d - $t"
        url="http://api.xively.com/v2/feeds/${feed}/datastreams/${data}.csv?key=${api}&start=${d}${t}&duration=6hours&interval=1&limit=1000&interval_type=discrete"
        echo "$url"
        wget "$url" -O - >> "${name}.csv"
        # guard against a chunk that doesn't end with a newline
        echo "" >> "${name}.csv"
    done
    d=$(date -I -d "$d + 1 day")
done
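To run it, save the script to a file (export.sh below is just an example name), make it executable, and start it; a line count afterwards is a cheap sanity check that rows actually arrived:

```shell
chmod +x export.sh    # export.sh is an assumed filename for the script above
./export.sh
wc -l Test.csv        # Test.csv matches the name= variable in the script
```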

Now that you have the data in CSV format you can open it directly in a spreadsheet like Excel, or you can import it into other databases!


This entry was posted in Linux. Bookmark the permalink.

8 Responses to Export data from Xively to CSV

  1. Al says:

    Good post, I’ve been working around this for a couple of days also. Just a quick comment: this script only downloads half a day’s data because it does two six-hour chunks starting at midnight and therefore finishing at noon. If you want all of a day’s data you need to add two more six-hour chunks for noon and 18:00 hours.

    Thanks for the pointer.

  2. Al says:

    Good work, thanks for this.
    Just one comment: I think that as written this script only downloads half a day’s data, because it does two six-hour chunks starting at midnight and therefore finishing at noon. If you want a whole day’s data you need to add two more six-hour chunks for noon and 18:00 hours.

  3. lluis says:

    thanks for this post. If I have a feed with 9 data streams, will this script download the data from all 9 sensors?
    Where do I have to save this script, and from where do I execute it?

    Thanks and best regards.

    • nkh says:

      You need to run it once (and only once, otherwise it’ll just append the same data to your csv file once more) for each sensor (defined by the “data” parameter).

      So essentially you need to edit the script every time before you run it for each sensor (it’s crude, I know).

      It’s a bash script; I run it in a Linux environment (a NAS), but you should be able to get it working on other operating systems. Have a look here: https://en.wikipedia.org/wiki/Bash_(Unix_shell)
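One way around editing the script for every sensor (a sketch only, with hypothetical stream names, and untested against the live API since the service is being retired) is to loop over the datastreams and write one file per stream:

```shell
#!/bin/bash
api=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
feed=XXXXX
# hypothetical stream names - replace with your own datastream IDs
for data in Temperature Humidity Pressure; do
    # one output file per stream, so nothing is ever appended twice
    wget "http://api.xively.com/v2/feeds/${feed}/datastreams/${data}.csv?key=${api}&start=2012-10-28&duration=6hours&interval=1&limit=1000&interval_type=discrete" \
        -O - >> "${data}.csv" || echo "download failed for ${data}" >&2
done
```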

  4. Lluis says:

    Thanks a lot

  5. you bet says:

    I have come to a pretty similar solution.
    This one makes use of curl (install it if necessary) and generates one file per chunk.
    I have been able to retrieve 6 years of daily data with a 1 minute interval in a surprisingly short time.


    #!/bin/bash
    #
    # Retrieval of production data stored on Xively
    # Start on 6-2-2012
    # End on 28-2-2012 for now
    # Starting times 4:00, 10:00, 15:00 with chunks of 6, 5, 6 hours respectively. This is due to my data being PV generation power. See below for an easy modification

    # change to a suitable directory to store the csv files
    cd ~/bla/blah/recup_xively/

    # put your apikey
    apikey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # Replace the number xxxxx with your channel (feed) ID and yy with your datastream ID (name)
    url=https://api.xively.com/v2/feeds/xxxxx/datastreams/yy.csv

    # start date. Set as appropriate; format YYYY-m-d
    sdate=$(date -d 2012-2-6 +%F)
    # current date
    curr_date=$sdate
    # end date Set as appropriate; format YYYY-m-d
    edate=$(date -d "2012-2-8 +1day" +%F)

    # function which retrieves one chunk of data. Each chunk creates one file whose name is Prefix+date+time_start
    # parameters : date, time, duration (minutes), file prefix

    function une_tranche
    {
    curl -w "%{response_code}\n" --request GET $url"?key="$apikey"&start="$1"T"$2"&duration="$3"minutes&interval=0&limit=999" -o $4$1T$2.csv
    }

    # function which retrieves a full day. To retrieve a complete day (midnight to midnight), add a call to une_tranche in the IF statement and change the starting times and durations there. Put 359 minutes for a 6 hour duration.
    # parameter : date

    function un_jour
    {
    if [[ $(une_tranche $1 04:00Z 359 prod) != 200 || $(une_tranche $1 10:00Z 299 prod) != 200 || $(une_tranche $1 15:00Z 359 prod) != 200 ]]
    then
    exit
    fi
    }
    while [[ $curr_date != $edate ]]
    do
    echo $curr_date
    un_jour $curr_date
    curr_date=$(date -d "$curr_date +1day" +%F)
    done
    #
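    Since this variant writes one file per chunk, the pieces can be stitched back together into a single CSV afterwards; because the filenames embed the ISO date and time, the shell glob already sorts them chronologically (assuming the prod prefix used in the script above):

```shell
# concatenate all per-chunk files into one CSV, in date order
cat prod*.csv > all_data.csv
```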

  6. Jonny says:

    Good job. I thought better of Xively, but they are randomly terminating the connection at times ranging from minutes to hours.
    They can’t even play fair to help us get our data back at short notice.
