There Is a Bright Future for AI-Driven Integration
AI is reshaping the enterprise landscape. Already, developer productivity, digital labor, email marketing and website creation seem ripe for a major transformation. It is also well understood that general AI foundation models like GPT-4 and Falcon-40B need to be fine-tuned or prompt-tuned for enterprise-specific tasks: they must be fed curated data that either adjusts some subset of the model's parameters or changes its output based on new task information given in prompts.
However, training the models is only one part of the story. Enterprise applications today live and die on access to current enterprise data. For example, an e-commerce website might return the status of a logged-in customer's orders, or a chat application might process the return of a product. In neither case can anything useful be done without real connectivity to (integration with) one or more enterprise applications. First, we'll speak to how an integration layer completes AI applications.
In addition, these integrations do not magically appear. They have to be coded, and they have to be tested and maintained. Later, we'll speak to how integrations themselves can be built better with the help of AI.
AI Without Integration is Incomplete
How would an AI application return useful information? AI without integration is like a fish without water.
In the above figure, the natural language question "When will my package arrive?" must be parsed by a foundation model, which generates a GraphQL request. That request accesses an enterprise data source (and, in this case, third-party systems such as FedEx), and the response is then used as the input for generating the final answer.
The above example, while simple, shows that AI foundation models must be complemented by integration and API technologies. As readers of articles from one of the authors know, we have a particular bias for GraphQL APIs. And in this case, they are especially useful, since the AI application can be trained to call one universal GraphQL API rather than deal with the subtleties of formats, authorization and sideways information passing that it would face if it had to learn multiple backends.
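The flow described above can be sketched as a small pipeline. Everything here is a stand-in: the function names, the GraphQL schema and the canned shipment data are all hypothetical, and a real system would call a foundation model and a live GraphQL endpoint instead of these stubs.

```python
def question_to_graphql(question: str) -> str:
    """Stand-in for a foundation model that translates a natural language
    question into a query against one universal GraphQL schema."""
    if "package" in question.lower() or "arrive" in question.lower():
        return "{ customer { order { shipment { carrier eta } } } }"
    raise ValueError("question not understood")

def execute_graphql(query: str) -> dict:
    """Stand-in for the GraphQL layer that fans out to enterprise systems
    and third parties such as FedEx. Returns canned data for illustration."""
    return {"customer": {"order": {"shipment":
            {"carrier": "FedEx", "eta": "in two days"}}}}

def answer(question: str) -> str:
    """End-to-end: question -> GraphQL -> enterprise data -> grounded reply."""
    data = execute_graphql(question_to_graphql(question))
    shipment = data["customer"]["order"]["shipment"]
    # A second model call would normally phrase this; a template suffices here.
    return f"Your package ships via {shipment['carrier']} and should arrive {shipment['eta']}."
```

The point of the sketch is the shape of the flow: the model's job is to produce one well-formed query against a single schema, and everything backend-specific stays behind the GraphQL layer.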
Integration Without AI is Incomplete
However, the converse is also true. For each of the personas and task sets in the integration space, there are benefits in the application of AI:
- Developers are the primary focus of this effort in the industry today. Even prior to the rise of AI, domains like API management and application integration had evolved toward low-code/no-code tooling for creating integrations, enabling citizen developers with less skill and experience to use them. AI provides the ability to further augment and empower those developers in more advanced or historically specialist scenarios.
- Administrators, operations folks and site reliability engineers (SREs) of integration deployments will also benefit from the application of AI. Anomaly detection on operational metrics such as API response codes, transaction rates and queue depths, and on system logs, is a scenario that machine learning models are well evolved to support, giving the administrator a sixth sense for observing and maintaining the health of a system.
- Product managers and business owners, who are often on the less technical end of the spectrum, also benefit from the low-code and generative capabilities described above, which support them in self-serving their needs for query and analysis of data to identify business trends and new revenue streams.
In all cases there are various aspects that require close watching as AI technology matures:
First, the models have to be trustworthy. The art and science of trust in AI is being developed rapidly, but of course, the rate and pace of innovation in the core AI algorithms is moving even faster. At some point, the trust research will have to catch up with the model research.
Related to this is determinism and repeatability. In scenarios such as generating a mapping between two data objects, it is not desirable that a different mapping be created each time you ask the same question, and yet that is the case today for many foundation models as they balance probability between multiple competing options.
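The repeatability concern can be illustrated with a toy next-token chooser (all names here are hypothetical, and a real deployment would pin decoding settings in the model API rather than reimplement sampling): at temperature zero the choice is greedy and therefore repeatable, while any positive temperature samples, so the same mapping question can yield different answers on different runs.

```python
import random

def pick(options: list[tuple[str, float]], temperature: float,
         rng: random.Random) -> str:
    """Choose among weighted candidates, e.g. candidate mappings between
    two data objects, where a higher weight means more probable."""
    if temperature == 0:
        # Greedy decoding: always the highest-weight candidate, hence repeatable.
        return max(options, key=lambda kv: kv[1])[0]
    # Sharpen or flatten the distribution, then sample: runs may differ.
    weights = [w ** (1 / temperature) for _, w in options]
    return rng.choices([c for c, _ in options], weights=weights, k=1)[0]

mappings = [("name -> fullName", 0.55), ("name -> displayName", 0.45)]
# Temperature 0 returns "name -> fullName" regardless of the seed;
# temperature 1.0 may return either candidate depending on the draw.
```

The trade-off is that greedy decoding buys repeatability at the cost of never surfacing the second-best option, which is why many tools expose temperature as a knob rather than fixing it.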
Critical to the effectiveness of AI capabilities is correctness. There are many well-known examples where content generated by AI is plausible at first glance, but flawed in practice. As such, today a skilled expert is often still needed to review, debug and rectify the AI-generated artifact, but as the technology matures, we expect to see growing confidence in the validity of the output that will reduce the need for human oversight.
Next, the cost of inferencing, which is often not talked about, will become the dominant OpEx. Enterprises will have to learn to trade off the size of the model and the size of the prompt (linear and quadratic influences, respectively, on the cost of inferencing) against the quality of the output: is it worth going from an 8B-parameter model to a 100B-parameter model for a 2% lift in the quality of the output?
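That trade-off can be put into rough numbers. The sketch below assumes, as above, that cost scales linearly with parameter count and quadratically with prompt length; the baseline figures are arbitrary and do not reflect any vendor's pricing.

```python
def relative_cost(params_b: float, prompt_tokens: int,
                  base_params_b: float = 8.0, base_tokens: int = 1000) -> float:
    """Back-of-envelope inferencing cost relative to a baseline model and
    prompt, assuming linear scaling in parameter count and quadratic
    scaling in prompt length."""
    return (params_b / base_params_b) * (prompt_tokens / base_tokens) ** 2

# Same 1,000-token prompt, 8B vs. 100B parameters: 12.5x the cost,
# to be weighed against whatever quality lift the larger model delivers.
print(relative_cost(100, 1000))  # -> 12.5
# Same 8B model, but a prompt twice as long: 4x the cost.
print(relative_cost(8, 2000))    # -> 4.0
```

Even this crude arithmetic shows why prompt engineering and model right-sizing become budget questions, not just quality questions.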
Sensitivity of data ownership is also a key concern for many enterprises. Foundation models work most effectively when they can be trained using the largest corpus of available examples, but if those examples contain sensitive customer information or represent a competitive advantage to the enterprise, then care must be taken in how that data will be further used by the model owner.
There is a bright future for AI-driven integration, both in the application of integration to provide access to enterprise data for use by AI tools and also for application of AI to benefit the delivery of integration scenarios.
We will be publishing a whole series of articles on the topic of the influence of AI on APIs and integration, and as some of you might know, StepZen was acquired by IBM, so we will be bringing on some additional API and integration experts, such as Matt Roberts, the CTO for IBM’s Integration portfolio.