How to Easily Add AI to Your Applications
AI is powerful, but its innate complexity can give potential business users pause. For many organizations wanting to derive value from AI quickly, managed services that allow them to integrate AI with their applications may be the answer.
The good news is that such managed AI services, which mask complexity from the user, are available. Two key things to consider before going this route are:
1. What capability does the company want to build (or what process does it want to automate)?
2. How does this need to run, given the use case, location and regulatory requirements?
Start with the End in Mind
For example, a manufacturer that wants to automate processes to reduce human error, cut costs and increase efficiency in building products will need AI that works with images. Images can be captured with stationary cameras or drones, depending on the state and location of the finished goods being photographed. Computer vision AI is required for these applications.
Training Versus Inferencing
One advantage of managed AI services is that they take care of training and inferencing. The user simply supplies data to fine-tune the model and calls an API on the service to run inferencing against that model.
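To make the API-level interaction concrete, here is a minimal sketch of packaging an image for an inference call and parsing the detections out of the response. The JSON field names (`modelId`, `image`, `detections`) are illustrative assumptions, not any specific vendor's API.

```python
import base64
import json

def build_inference_request(image_bytes: bytes, model_id: str) -> str:
    """Package an image for an HTTP inference call as JSON with a base64 payload."""
    return json.dumps({
        "modelId": model_id,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    })

def parse_inference_response(body: str) -> list:
    """Pull the list of detected objects out of the service's JSON response."""
    return json.loads(body).get("detections", [])

# Example round trip against a canned response (no network call needed)
request = build_inference_request(b"\x89PNG...", "pipe-detector-v1")
sample_response = json.dumps({
    "detections": [{"label": "pipe", "confidence": 0.94}]
})
print(parse_inference_response(sample_response))
# [{'label': 'pipe', 'confidence': 0.94}]
```

The application never touches training infrastructure; it only builds a request and interprets a response.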
This approach is very developer-friendly, allowing the organization to experiment and implement quickly. In addition to automatically handling the underlying infrastructure and software for training and inferencing, the services scale those invisible components as needed, with the user interacting at an API level. They are also extremely cost-effective, usually charging only per API call.
The types of data and methods of data capture will differ depending on whether the organization is training or inferencing. For inferencing, the data capture setup mirrors what will run in production, such as a preset autonomous drone route. If a model requires custom training, the data needs to represent the full range of possible scenarios. For images, it is essential to consider lighting, angles, shading and the many stages the process may encounter, such as different lengths of steel pipes and beams.
Where Do I Get Training Data?
Useful data is often scattered across different systems within a business. Occasionally, external data from a specific outside repository or public sources will also be needed.
Imported data arrives in different forms and requires cleansing and restructuring to fit the project. Once those methods are worked out, the user can create a pipeline that automates the flow from data import to data cleansing to data labeling. A labeled data set is the starting point for the AI customization process.
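The import, cleanse and label stages can be sketched as small composable steps. The record fields and cleansing rules below are illustrative assumptions; a real pipeline would pull from the organization's own systems.

```python
def import_records(raw_rows):
    """Pretend importer: each row arrives as a dict from some source system."""
    return list(raw_rows)

def cleanse(records):
    """Drop rows missing an image path and normalize the remaining fields."""
    return [
        {"image": r["image_path"].strip(), "source": r.get("source", "unknown")}
        for r in records
        if r.get("image_path")
    ]

def label(records, label_name):
    """Attach a class label, producing the starting point for customization."""
    return [{**r, "label": label_name} for r in records]

raw = [
    {"image_path": " img_001.jpg ", "source": "drone"},
    {"image_path": None},  # incomplete row, removed during cleansing
]
dataset = label(cleanse(import_records(raw)), "steel_pipe")
print(dataset)
# [{'image': 'img_001.jpg', 'source': 'drone', 'label': 'steel_pipe'}]
```

Chaining the steps as functions makes it easy to automate the whole flow once each stage is worked out.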
Following the computer vision example, the user could bring in many pipe images showing different angles, lighting, shapes, lengths, quantities and other criteria.
Once source images are collected, the company will create a data set that can be used for custom training. In this process, the AI must be trained for object detection: spotting where a particular object is within an image. This is done by labeling source images with metadata, using what are called bounding boxes. These provide the exact coordinates of the object(s) within an image, telling the model what to look for while taking into account the criteria and scenarios mentioned earlier.
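One common way to record a bounding-box label is as normalized corner coordinates plus a class name. The exact schema varies by service; the structure below is an illustrative assumption, not a specific vendor's format.

```python
# A single labeled image: the file name plus bounding-box metadata.
labeled_image = {
    "image": "pipes_0042.jpg",
    "annotations": [
        {
            "label": "steel_pipe",
            # (x, y) of the top-left and bottom-right corners, expressed as
            # fractions of image width/height so the label survives resizing
            "bounding_box": {"x1": 0.12, "y1": 0.30, "x2": 0.58, "y2": 0.44},
        }
    ],
}

print(labeled_image["annotations"][0]["label"])
# steel_pipe
```

A full data set is simply a collection of such records, one per source image.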
A completely labeled data set is typically placed in object storage, where the computer vision AI service references it when running a model customization. Fine-tuning usually takes 24 hours or less.
When testing the model, users must watch the "confidence factor": the model reports, in percentage terms, how confident it is that it identified an object. Model accuracy can be gauged by these reported percentages; usually, 90% or greater indicates a very consistent and accurate finding. Users should determine which thresholds work best for a given scenario and build that logic into the application, since the confidence factor is included in the API response when the application calls the managed AI service.
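Building that threshold logic into the application can be as simple as filtering the response. The response shape below is an illustrative assumption; the 0.90 cutoff mirrors the guideline above but should be tuned per scenario.

```python
CONFIDENCE_THRESHOLD = 0.90  # tune per use case; 90% is a common starting point

def confident_detections(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep only detections at or above the confidence cutoff."""
    return [d for d in detections if d["confidence"] >= threshold]

response = {
    "detections": [
        {"label": "pipe", "confidence": 0.97},
        {"label": "pipe", "confidence": 0.62},  # too uncertain, discarded
    ]
}
print(confident_detections(response["detections"]))
# [{'label': 'pipe', 'confidence': 0.97}]
```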
The API response will include other details of the findings. In the computer vision case, it provides the exact coordinates of where on the image it found the object. This enables businesses to do more with the results: for example, drawing the bounding box on the image to show where the AI found the object, or counting how many objects it found.
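Services often return box coordinates normalized to the 0-1 range, so drawing on the original image means converting them back to pixels first. The field names here are illustrative assumptions.

```python
def to_pixel_box(box, image_width, image_height):
    """Map normalized corner coordinates onto a concrete image size."""
    return (
        round(box["x1"] * image_width),
        round(box["y1"] * image_height),
        round(box["x2"] * image_width),
        round(box["y2"] * image_height),
    )

detection = {
    "label": "pipe",
    "bounding_box": {"x1": 0.12, "y1": 0.30, "x2": 0.58, "y2": 0.44},
}
print(to_pixel_box(detection["bounding_box"], 1920, 1080))
# (230, 324, 1114, 475)
```

The resulting pixel tuple is what a drawing library would take to overlay the box on the image for operators to review.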
The AI Service Is Ready to Go, Now What?
Finally, businesses must consider where and how to deploy AI. They need to think about the data source in production and how quickly the AI needs to respond to meet internal and customer needs. If the AI runs in the public cloud, it takes time for data to travel to the cloud, pass through the AI service and return a response the application can act on.
If a drone or camera is used, how often should images be updated? What is the expected response time? For images sent every minute, a 30- to 60-second response time is a reasonable expectation for the public cloud. If lower latency, higher volume or closer-to-real-time responses are required, the AI service should be deployed closer to the data source.
Other considerations include regulatory, sovereignty, security and other restrictions. In such cases, all data and AI services must operate either in a cloud region that meets those requirements or in a service that accommodates different geographic deployment models without compromising capabilities.
While there is broad AI support across all mainstream programming languages, Python has many great libraries to help streamline packaging an application that uses these types of AI services. You can find many live labs from Oracle that make AI services easy to learn and use.