Compounding API calls


One of the most common issues that SyncHub users experience - particularly when first backdating historical data - is the dreaded “API throttling limit” error, and the pain of seeing your sync grind to a halt with nothing to do except wait until the throttle is lifted.

These API limits are imposed by the cloud platforms that we connect to, and are intended to regulate and “spread out” server load across their platform. They are in fact a “good thing”, as they help to ensure consistent and reliable service for subscribers like SyncHub. However, as with parking fines, they are pretty frustrating when they happen to you.

Of course, although SyncHub is often granted higher API limits than most third-party applications, we are still bound by them. So, without being able to raise the ceiling on calls - what can you do to reduce your API usage when syncing your data?

In this article, we will discuss the number one cause of excessive API usage - compounding API calls.

A look under the hood

We’ve already spoken about nested data here. Many endpoints have subsets, which are optional blocks of nested data associated with a broader endpoint. Consider these common scenarios:

  • invoices have invoice items

  • orders have order lines

  • journals have journal entries

  • jobs have tasks

  • etc. etc.

As you can see, nested data is very common amongst most business domains. But nested data itself is not the problem. Let’s break down an actual API call in more detail - using Unleashed’s Assembly endpoint.

The Assembly endpoint has an “Assembly line” subset. Normally this isn’t an issue, as most subsets simply modify the endpoint to return a larger block of data in each call - meaning the same number of calls is made in total and the throttle isn’t reached any faster.

Here you can see the subset included in the parent API call, getting more data with the same number of calls.

 
2021-02-22_11h21_23.png
 

Using this structure, if an Assembly contained one hundred line items, they would all still be returned in a single API call. But not all subsets follow this pattern.
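To make that concrete, here is a minimal sketch of what a nested call looks like from the consumer’s side. The URL, parameter names and fields below are illustrative placeholders only - not the real Unleashed API:

import requests

# One request returns the parent records *and* their nested lines in the same payload.
# The endpoint, parameters and auth below are illustrative placeholders only.
response = requests.get(
    "https://api.example.com/assemblies",
    params={"include": "AssemblyLines", "pageSize": 100},
    headers={"Authorization": "Bearer <access token>"},
)

for assembly in response.json()["Items"]:
    # The lines arrive inside the same payload - no extra API calls are needed.
    for line in assembly.get("AssemblyLines", []):
        print(assembly["Guid"], line["ProductCode"], line["Quantity"])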

Let’s say we want to get History for Bank transactions from Xero (in actuality, the Xero History endpoints work very differently, but let’s imagine for now that they follow the structure above while we retrieve our transactions).

Scenario one - requesting a list of bank transactions (page size = 1 and History is nested)

Data | # API calls | Notes
Bank transactions | 1 | Page size is one, so one transaction is returned
History | 0 | No additional API calls are required, because History is included in the main payload
Total | 1 |

This is great - the History is returned with the transaction and we only need one API call. Now let’s see how it behaves in reality. Say we want to retrieve a single transaction and its History…

Scenario two - requesting a list of bank transactions (page size = 1 and History is not nested)

Data | # API calls | Notes
Bank transactions | 1 | Page size is one, so one transaction is returned
History (×1) | 1 | An extra call is required to pull down the History
Total | 2 |

Already, you can see that we are doubling the number of API calls required to build out our data. Still, this is survivable - doubling the load isn’t great, but you might not notice the effect if you aren’t trying to pull down too much data. This is usually a manageable situation.

But watch what happens when we increase the page size to something more realistic - say, 100 items/page.

Scenario three - requesting a list of bank transactions (page size = 100 and History is not nested)

Data | # API calls | Notes
Bank transactions | 1 | Page size is one hundred, so one hundred transactions are returned, each requiring History
History (×100) | 100 | Each History requires a separate call
Total | 101 |

Ouch.

 
 

Compounding the compounding

What just happened? In this scenario, we had to request a History for each individual transaction that came down from the Bank transactions endpoint. So even though the payload containing one hundred transactions only needed a single call to acquire, we still needed one hundred further calls to get each History collection. It is easy to see how this pattern can explode the number of API calls a sync requires, especially when you consider that I have been using unrealistically small numbers for demonstration.
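As a rough sketch, this is what the pattern looks like from the caller’s side once the subset is no longer nested - one call per page, plus one call per record. The URLs and field names here are placeholders, not Xero’s actual API:

import requests

API = "https://api.example.com"  # illustrative placeholder, not the real Xero API
calls = 0

# One call retrieves a page of one hundred transactions...
page = requests.get(f"{API}/banktransactions", params={"pageSize": 100}).json()
calls += 1

# ...but each transaction then needs its own call to retrieve its History.
for txn in page["BankTransactions"]:
    history = requests.get(f"{API}/banktransactions/{txn['Id']}/history").json()
    calls += 1

print(calls)  # 101 calls to fully populate a single page of one hundred transactions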

In the real world we might be requesting two hundred pages of Bank transactions, each containing one hundred transactions, each of which has a History needing its own call - a grand total of more than twenty thousand API calls. Just to get a single endpoint backdated. That’s a pretty tall order when your daily limit might be only five thousand calls.
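The arithmetic behind those numbers is simple enough to sketch out (the figures are just the illustrative ones above):

pages = 200         # pages of Bank transactions to backdate
page_size = 100     # transactions per page
daily_limit = 5000  # a typical daily API allowance

total_calls = pages + pages * page_size   # 200 page calls + 20,000 History calls
print(total_calls)                        # 20,200 calls
print(total_calls / daily_limit)          # roughly four days just to stay under the limit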

If you are familiar with how paging can improve performance, you can see that this structure completely negates its advantages. We still have to make one call per item, no matter how many pages we break the items into.

Observing it in action

You can see this for yourself, using SyncHub’s API Explorer. Calling one day’s worth of Bank transactions starts out like this:

2021-02-23_10h05_34.png

Note the three API calls on the right - one for each page.

Now, observe what happens when we ask for History as well:

2021-02-23_10h12_23.png

Look at all those extra API calls - and we can’t even capture them all in a single screenshot.

Want more? Guess what happens if there are large numbers of History records for a particular Bank transaction… we page them, of course - resulting in even more API calls.

But…why?

It comes down to design decisions made by services when they build their APIs. We’ve outlined the pros and cons of nested data here - situations like the one described above are not necessarily the result of a poor design decision by the API provider. It’s simply that one API cannot service all the different requirements of all its consumers.

The onus is therefore on us to mitigate and adapt. So, here’s what we do at SyncHub…

Option 1 - just turn it off

The easiest solution is simply not to request the data. SyncHub allows these call-hungry subsets to be deactivated to prevent them from demolishing your daily limit. So if you don’t need to do any reporting on this data you can simply exclude it and move on.

Our Dashboard clearly highlights endpoints that exhibit this behaviour, making identification simple. Make sure you are in nerd mode to see the warning.

API Warning Screenshot.png

Option 2 - prioritize your data

If you do need this data, then you have some options to manage the load…

  • stage endpoints to run consecutively instead of concurrently

  • reduce your sync window sizes to minimize the amount of data pulled down in each “run”

  • reduce your run frequency so you are making fewer runs per day

Using these methods you can at least get blocks of usable data in a reasonable amount of time. In particular, this will allow you to choose the data that is most important to you and ensure you get it down first. It will still require the same total number of calls to get fully caught up as it would otherwise, however.
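To get a feel for how these levers interact, here is a back-of-the-envelope sketch of daily usage for an endpoint that exhibits the compounding pattern. The model and figures are purely illustrative - this is not how SyncHub actually schedules its runs:

# Crude estimate only: each record needs one extra call for its subset,
# plus one call per page of records.
def estimated_daily_calls(records_per_run, runs_per_day, page_size=100):
    pages_per_run = -(-records_per_run // page_size)   # ceiling division
    calls_per_run = pages_per_run + records_per_run    # page calls + one subset call per record
    return calls_per_run * runs_per_day

# Smaller sync windows (fewer records per run) and fewer runs per day
# both slow down how quickly a daily limit is consumed.
print(estimated_daily_calls(records_per_run=1000, runs_per_day=24))  # 24240
print(estimated_daily_calls(records_per_run=250, runs_per_day=4))    # 1012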

Beyond this your only recourse - and one we highly recommend - is to send strongly worded messages to your cloud services demanding they design their APIs with data accessibility in mind.

Good luck and happy syncing.
