Location Services


Location-based mobile app development and Bluetooth-based asset tracking with Meridian. Gathering analytics and business intelligence from Wi-Fi with Analytics and Location Engine (ALE).

Using the Meridian API: Storing the JSON Output (Local Caching)


    Posted Apr 23, 2019 03:52 AM
    When working with an API to view and manipulate returned data, the natural starting point in your script is the API interaction itself. For complex projects, by the time you have settled on how your code will work, you might have called the API hundreds if not thousands of times during iterative testing. When the data set is large, this takes valuable time away from coding, so bringing the data closer to your code is a good idea. You can cache it locally in a file!
    In previous examples I’ve used the Meridian API to extract information about Beacons, such as battery level, and used the Maps data to indicate which map a specific beacon is associated with. In both of those examples, the first thing I verified was that the request to the Meridian API returned the correct data; then I got to work on manipulating the data in Python. While learning to code and iterating toward the results I wanted, I wasted a lot of time waiting for big data sets to be returned by the API. As soon as I moved to a locally cached version of the same data, I realised how fast the process could be (it truly is night and day). Here is how I got it done.
    For each data set I was collecting from the API, I created a Python script to collect the data and store it in a new file. This file became the locally cached version I could read from instead of requesting data from the API every time. Converting new scripts to use the local cache instead of the API was quite simple.
    Once the .json files contain the needed data, they can be used within other Python scripts just like an API data source.
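    As a minimal sketch of that pattern (the function name `load_cached` and the `fetch` callable are illustrative, not part of the Meridian API; swap in your own `requests` call to the endpoint you need):

```python
import json
import os

def load_cached(path, fetch):
    """Return JSON data from a local cache file, calling fetch()
    and writing the result only when the file does not exist yet."""
    if os.path.exists(path):
        with open(path, 'r') as f:
            return json.load(f)
    data = fetch()  # e.g. your request to the Meridian API
    with open(path, 'w') as f:
        json.dump(data, f)
    return data
```

    On the first run, `fetch` is called once and the result saved to disk; every later run reads the file straight back, so the API is never hit again until you delete the cache file.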
    Let’s take a look at an example of how this could be done within the code example from Using the Meridian API: Checking Beacon Battery Levels.
    The object that contains the Maps API data in getBeaconsPower_lesson.py is 'maps_info'. It’s set up as a list on Line 24, ready for “extending” (data from API calls is added to the list) by the 'get_maps' function.
    First, let’s open the Maps data file maps.json and load it into 'maps_info':
    infile_map = 'maps.json'
    with open(infile_map, 'r') as rm:
        maps_info = json.load(rm)
    These three lines can replace Line 24 and the entire get_maps function. That’s 3 lines instead of 11. Line 42 also calls 'get_maps', so let’s chop that out too! If you’re going to need those lines later for the “production” script, I recommend just commenting them out by adding a # to the beginning of each line. A shortcut to do this in PyCharm on a Mac is ⌘/ (very handy).
    Now that 'maps_info' is loaded, Line 91 uses it straight from memory. Much faster!
    The same can be done for the Beacons data. I found the best object to load the Beacons data into is 'beacons_data':
    with open(infile, 'r') as r:
        beacons_data = json.load(r)
    The object 'beacons_data' is used heavily in the function 'get_beacons'. By caching the data we can skip the Beacons API call on Line 64 and the whole 'check_for_more' function (Line 72, which calls the function, and Lines 75-83, which define it).
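    Once loaded, the cached Beacons data behaves just like the API payload. As a hedged example tying this back to the battery-check script (the 'mac' and 'battery_level' field names are assumptions about the JSON you cached, and the helper name is illustrative; adjust both to match your own data):

```python
import json

def low_battery_macs(path, threshold=20):
    """Load cached Beacons JSON and return the MAC addresses of
    beacons reporting a battery level below the threshold.
    'mac' and 'battery_level' are assumed field names; adjust
    them to match the payload you cached from Meridian."""
    with open(path, 'r') as r:
        beacons_data = json.load(r)
    return [b['mac'] for b in beacons_data
            if b.get('battery_level', 100) < threshold]
```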
    Give your Python script a try with cached data and see how much faster it runs. Where the data is unlikely to change between API requests, you can make your script much more efficient by using a locally stored version of it.
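    If your data does change occasionally, one simple refinement is to trust the cache only for a limited time. This sketch (illustrative names, not Meridian code) returns None when the file is stale or missing, so the caller knows it’s time to hit the API again:

```python
import json
import os
import time

def load_if_fresh(path, max_age_seconds=3600):
    """Return cached JSON if the file is newer than max_age_seconds,
    otherwise None to signal that a fresh API request is needed."""
    if (os.path.exists(path)
            and time.time() - os.path.getmtime(path) < max_age_seconds):
        with open(path, 'r') as f:
            return json.load(f)
    return None
```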
    Let me know in the comments below what kinds of data make sense to cache locally in your project.