MCP (this is not the data access protocol you’re looking for)
The Model Context Protocol (MCP) has emerged as a promising standard for unifying data access across diverse platforms. By providing a standardized way for AI agents to communicate with different data sources, MCP promises to make data more accessible and interoperable - something I’m a big fan of.
However, MCP is only one cog in a larger machine, especially when it comes to complex, conversational AI queries. In this article, I’ll take you through a typical implementation of an MCP wrapper, then show why this will quickly fail under anything but the most trivial of use cases. Finally, I’ll present the solution that SyncHub has come up with, which addresses this problem and provides what I think is a truly useful augmentation for conversational AI.
MCP in the real world
Let’s take a fictional character, Alex, who uses a popular cloud accounting platform, BusinessOne, to manage his finances…
As well as its cloud software, BusinessOne has offered an excellent API for years, providing programmatic access to accounting data. The API has two main endpoints:
| Endpoint | Filters | Examples |
|---|---|---|
| /invoices | CreatedFrom, CreatedUntil, CustomerID, InvoiceID, Reference | Return all invoices for a given month; return all invoices for a specific customer; return a single invoice by ID or reference |
| /customers | Search, CustomerID | Return a single customer (by ID); return all customers whose name or email matches the given search filter |
In addition, BusinessOne implements a couple of other extremely common practices for public APIs:

- The API returns only the first 100 records in a result set. The user may request additional results by appending a "pagenumber" variable to the query.
- API consumers can make a maximum of 120 API requests per minute, and a maximum of 5,000 requests per day.
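To make these constraints concrete, here is a minimal sketch of what a well-behaved client has to do against such an API. The `fetch_page` callback, the page size, and the throttle are taken from the limits above; everything else (names, the fake endpoint) is illustrative, not BusinessOne's real API.

```python
import time

PAGE_SIZE = 100          # the API returns at most 100 records per page
REQUESTS_PER_MINUTE = 120

def fetch_all(fetch_page):
    """Drain a paged endpoint, throttling to stay under the per-minute limit.

    `fetch_page(page_number)` stands in for an HTTP call such as
    GET /invoices?pagenumber=N, returning a list of records.
    """
    records, page = [], 1
    while True:
        batch = fetch_page(page)
        records.extend(batch)
        if len(batch) < PAGE_SIZE:        # a short page means we are done
            break
        page += 1
        time.sleep(60 / REQUESTS_PER_MINUTE)  # simple client-side throttle

    return records

# Simulated endpoint: 510 invoices means six round-trips (5 full pages + 1 partial).
_invoices = [{"id": n} for n in range(510)]

def fake_invoices_page(page):
    start = (page - 1) * PAGE_SIZE
    return _invoices[start:start + PAGE_SIZE]
```

Note that every page is a separate round-trip that counts against the daily quota - a detail that becomes important shortly.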
Enter MCP…
The API has been used by developers for ten years to integrate invoicing into their systems. Now, in 2025, BusinessOne decides to add an MCP wrapper and - as if by magic - Alex doesn’t need a software developer to interact with his accounting system. He can answer questions like these from right within his ChatGPT chat window…
But wait a second…
Let’s examine that last question a little more closely - “Does ACME have any outstanding charges?”. Given the MCP endpoints, the LLM would need to break this down into two questions:
1. Get me the ID for the customer matching "ACME"
2. Get me all invoices for this ID
Okay, fair enough - two API calls is probably manageable. But, what if ACME is a long-standing client, with thousands of invoices in the system? The LLM would need to download all of these, and then iterate over them to find those with outstanding charges. Doable, but we’re starting to see some cracks in the system.
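The two-step dance looks something like this. The in-memory data and field names (such as `amount_due`) are assumptions for illustration - BusinessOne's real payloads will differ - but the shape of the calls matches the two endpoints above.

```python
# Toy stand-ins for the two MCP tools the LLM would invoke.
CUSTOMERS = [
    {"id": "C1", "name": "ACME Ltd"},
    {"id": "C2", "name": "Globex"},
]
INVOICES = [
    {"customer_id": "C1", "reference": "INV-001", "amount_due": 0.0},
    {"customer_id": "C1", "reference": "INV-002", "amount_due": 250.0},
]

def search_customers(search):
    """Step 1: /customers?Search=... - name or email match."""
    return [c for c in CUSTOMERS if search.lower() in c["name"].lower()]

def get_invoices(customer_id):
    """Step 2: /invoices?CustomerID=... - all invoices for one customer."""
    return [i for i in INVOICES if i["customer_id"] == customer_id]

acme = search_customers("ACME")[0]
outstanding = [i for i in get_invoices(acme["id"]) if i["amount_due"] > 0]
```

The filtering in the last line is the hidden cost: the API has no "outstanding only" filter, so every invoice for the customer must come down the wire before the LLM can discard the paid ones.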
Another example
At first glance, this question seems extremely simple and easily handled by the above MCP server:
“Give me a breakdown of all my revenue, by customer”
But it conceals a multitude of data sins, because to serve this, the LLM would need to:
- Download all the invoices, for all time. Remember, they are paged in groups of 100, so even though Alex has generated only just over 500 invoices in the entirety of his business, this is still six API calls.
- But uh-oh - the invoice payload doesn't return the customer's name, only the CustomerID. To make the breakdown truly useful, the LLM is going to have to ping back to the API to get customer details for each unique CustomerID in the invoice payloads.
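The arithmetic above is worth making explicit: one paged scan of /invoices plus one /customers lookup per unique CustomerID. A back-of-the-envelope sketch (the customer counts are made up for illustration):

```python
import math

PAGE_SIZE = 100  # the API returns at most 100 records per page

def calls_needed(invoice_count, unique_customers):
    """Rough API-call count for 'revenue by customer' via the raw API:
    one request per page of invoices, plus one customer lookup per ID."""
    invoice_pages = math.ceil(invoice_count / PAGE_SIZE)
    return invoice_pages + unique_customers

# Alex today: ~510 invoices across, say, 40 customers.
print(calls_needed(510, 40))   # 6 pages + 40 lookups = 46 calls for ONE question
```

At 46 calls per question, the 5,000-call daily quota sounds generous - until you remember every conversational follow-up ("now just last quarter", "now exclude refunds") repeats the whole dance.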
You can see how this is quickly going to blow out. After a few questions, Alex is going to hit the API limits imposed by BusinessOne, plus he’s going to be waiting many minutes for all those API calls to execute each time he asks a question - not exactly the “conversational” AI that people envisage.
Oh, and you know how you’re charged by the token for your AI subscription? MCP calls consume tokens just as regular text does - so Alex better warm up that credit card.
“…Hi, what would you like to know?”
SyncHub solves this problem by preloading data into a relational database and running in the background to sync modifications. We have all your data on-hand and ready when you need it. So, instead of mapping an MCP endpoint to each of your data structures, we provide just one - “what would you like to know?”.
We take this question, translate it into SQL, execute it against your database, and return exactly the results that your LLM needs - no more, and no less. Complex and multi-faceted queries are handled efficiently, providing comprehensive and timely responses that ad-hoc collections of MCP servers can never match.
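The difference is easy to see in miniature. With the data already synced into a relational store, the revenue-by-customer question from earlier collapses to a single query. This is a toy sketch of the idea - the schema, the canned question-to-SQL mapping, and SQLite itself are illustrative assumptions, not SyncHub's actual implementation.

```python
import sqlite3

# A toy warehouse: accounting data already synced into a local relational store.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE invoices (customer_id TEXT, total REAL);
    INSERT INTO customers VALUES ('C1', 'ACME Ltd'), ('C2', 'Globex');
    INSERT INTO invoices VALUES ('C1', 100), ('C1', 250), ('C2', 75);
""")

def ask(question):
    """Single MCP tool: natural-language question in, rows out.
    In the real system an LLM translates the question into SQL; here one
    canned translation stands in for that step."""
    sql = {
        "revenue by customer": """
            SELECT c.name, SUM(i.total) AS revenue
            FROM invoices i JOIN customers c ON c.id = i.customer_id
            GROUP BY c.name ORDER BY revenue DESC
        """,
    }[question]
    return db.execute(sql).fetchall()
```

One round-trip, zero pagination, and the database - not the LLM - does the joining and aggregating.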
Oh - and SyncHub downloads data from all your cloud services, not just your accounting software, and all behind that single MCP endpoint - “what would you like to know?”.
What’s next?
My point here is not to belittle MCP - far from it. MCP is a great solution for standardized data access and quite frankly it’s a miracle that all the major players appear to have coalesced on this single framework in only a year or so.
But while MCP holds great promise for standardizing data access, it’s not a silver bullet for data accessibility. Fundamental limitations in both (typical) API structures and normalized data models still need to be addressed - whether you use MCP or not. And this is most true when dealing with the unpredictable nature of conversational AI.
By understanding its limitations and exploring complementary solutions, like SyncHub's data warehousing approach, businesses can truly unlock the full potential of their data.