During a recent Kscope21 session, “Predict Customer Churn Using Oracle Autonomous Database, Machine Learning, and APEX”, I shared that it has always been my personal desire to create web applications that are not only purposeful, but also have an element of “intelligence”. With the new features recently introduced in the Oracle Machine Learning product offering, it is now relatively easy to introduce ML as a component of your Oracle Application Express (APEX) applications. This is the first of several posts that I am writing as a follow-up to the presentations I did at this year’s (virtual) conference.
The whys and hows of creating our first Machine Learning (ML) model using the AutoML UI are detailed in our presentation, so I highly encourage you to review it for the required background information. Oracle has also published several tutorials to help you get going with the technology. In this blog post, the goal is to provide details on the code needed to consume the ML model that we operationalised using OML Services, specifically:
- How to authenticate with the REST services
- Identifying the REST service’s inputs and outputs; and finally
- Using the Deployed Model
It is also worth pointing out that AutoML, OML Services, and many of the other Oracle technologies described in this post are, at this time, only available on the Oracle Autonomous Database (ADB), including the two Always Free instances that you get with every Oracle Cloud account.
To secure OML Service endpoints, clients must use an OML Service account and obtain an access token. You may create this user and enable access to OML Services using either the Oracle ML User Administration console or Database Actions.
The authentication element consists of:
- Two application items, `G_TOKEN` and `G_TOKEN_EXPIRY`, that are used to store the access token and its expiry measured in epoch seconds.
- An on-page-load (before header) application process with the following PL/SQL code:

```sql
if :G_TOKEN is null
  or :G_TOKEN_EXPIRY is null
  or pkg_omlservices_util.f_convert_to_epoch(sysdate) > :G_TOKEN_EXPIRY
then
  pkg_omlservices_util.p_authenticate(
    p_token_url => :G_TOKEN_URL
    , p_username => 'readmore'
    , p_password => pkg_secrets.gc_password
    , p_token => :G_TOKEN
    , p_expiry_epoch => :G_TOKEN_EXPIRY
  );
end if;
```
- The application process calls a custom procedure, `pkg_omlservices_util.p_authenticate`, that embodies the following code:
```sql
create or replace package body pkg_omlservices_util
as
  procedure p_authenticate(
    p_token_url in varchar2
    , p_username in varchar2
    , p_password in varchar2
    , p_token out varchar2
    , p_expiry_epoch out number
  )
  as
    l_response clob;
  begin
    -- The token endpoint expects a JSON request body.
    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/json';

    l_response := apex_web_service.make_rest_request(
      p_url => p_token_url
      , p_http_method => 'POST'
      , p_body => json_object(
          'grant_type' value 'password'
          , 'username' value p_username
          , 'password' value p_password
        )
    );

    if apex_web_service.g_status_code = 200 then
      apex_json.parse(p_source => l_response);
      p_token := apex_json.get_varchar2(p_path => 'accessToken');
      -- Store the expiry as an absolute epoch timestamp.
      p_expiry_epoch := pkg_omlservices_util.f_convert_to_epoch(sysdate)
        + apex_json.get_number(p_path => 'expiresIn');
    else
      raise_application_error(-20001, 'Authentication request failed. '
        || 'HTTP status code returned: ' || apex_web_service.g_status_code);
    end if;
  end p_authenticate;
end pkg_omlservices_util;
/
```
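The package also references a helper function, `f_convert_to_epoch`, whose implementation is not shown in the excerpt above. A minimal sketch of what it might look like, assuming time zone offsets can be ignored for this purpose:

```sql
-- Hypothetical implementation; the original is not shown in this post.
-- Converts an Oracle DATE to Unix epoch seconds.
function f_convert_to_epoch(p_date in date) return number
as
begin
  return round((p_date - date '1970-01-01') * 86400);
end f_convert_to_epoch;
```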
The URL to direct the access token request to is set and passed using the APEX substitution string `G_TOKEN_URL`. This URL is constructed from a few components unique to your environment. You will need the following information:
- `REGION_CODE`. The region identifier of your ADB instance, for example, `us-ashburn-1`.
- `TENANT_OCID`. The OCID of your tenancy, which begins with `ocid1.tenancy.oc1..`.
- `DATABASE_NAME`. The database name given to your ADB instance.
Then, use the template below and replace the placeholders with the appropriate values.
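As a sketch, assuming the standard shape of OML Services URLs on the Autonomous Database, the token endpoint resembles the following; treat the exact host format as an assumption and verify it against your instance's details:

```
https://<TENANT_OCID>-<DATABASE_NAME>.adb.<REGION_CODE>.oraclecloudapps.com/omlusers/api/oauth2/v1/token
```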
This approach maintains a valid access token for as long as it has not expired, and requests a new one only when it does.
During the presentation, I demonstrated how you can use the OML AutoML User Interface (UI) to create ML models with no code. Once you are satisfied with the model's predictive performance, you can deploy the model, thus exposing it through OML Services as a REST endpoint. To consume the REST service, there are three essential pieces of information that we will require:
- The REST endpoint URL.
- The request’s input parameters and how to transmit them.
- The response’s expected output format.
You can access this information from the metadata published after the models are deployed.
The endpoint URL can be derived from the Open API Specification for the service, which you can access by clicking the link labelled “A”. Below is a shortened version of the JSON output to help highlight the important pieces of information.
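As a sketch, the relevant fragment of the specification resembles the following (abbreviated, with the deployment URI `svml` used in this post; the schema name is a placeholder):

```json
{
  "openapi": "3.0.1",
  "paths": {
    "/svml/score": {
      "post": {
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": { "$ref": "#/components/schemas/INPUT_SCHEMA" }
            }
          }
        }
      }
    }
  }
}
```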
The template for the OML Service endpoint URLs is:
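As an illustration, OML Services scoring endpoints generally follow the pattern below, where the host portion is the same as for the token endpoint and `<MODEL_URI>` is the URI assigned when the model was deployed (`svml` in this post); verify the exact form against your own Open API output:

```
https://<OML_SERVER>/omlmod/v1/deployment/<MODEL_URI>/score
```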
The path that we will use to perform the scoring operation is `/svml/score`, which you can find in the `paths` attribute. It accepts an HTTP `POST` request with a required body containing the input parameters in JSON format.
From the input schema, you can glean that the endpoint requires an `inputRecords` array containing one or more objects, each with seven numerical input variables. These are the features that AutoML selected during the training process. An example of the JSON payload included in an HTTP request is:
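As an illustration, with hypothetical feature names standing in for the seven features AutoML selected, the payload resembles:

```json
{
  "inputRecords": [
    {
      "FEATURE_1": 0.25,
      "FEATURE_2": 12,
      "FEATURE_3": 3,
      "FEATURE_4": 150.75,
      "FEATURE_5": 1,
      "FEATURE_6": 0,
      "FEATURE_7": 42
    }
  ]
}
```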
Finally, from the specifications, we know that the service will return an array, `scoringResults`, that should be of the same size as the `inputRecords` submitted in the request. Each element will contain an array of `classifications` objects (there are two in our binary classifier) with the attributes `label` and `probability`. Below is an example of the prediction outputs returned after a successful call to the OML Service.
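For a binary classifier scoring a single input record, the response generally has this shape (the labels and probabilities below are illustrative only):

```json
{
  "scoringResults": [
    {
      "classifications": [
        { "label": "0", "probability": 0.18 },
        { "label": "1", "probability": 0.82 }
      ]
    }
  ]
}
```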
Armed with the information above, we can now construct a wrapper in PL/SQL, for example, a package `pkg_omlservices_predict` with a function that calls the scoring endpoint.
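A minimal sketch of such a wrapper follows. The function name `f_active_in_six_months` is the one used by the demo page; the placeholder parameter names, the assumed `G_SCORE_URL` application item, and the single-record payload shape are illustrative assumptions:

```sql
create or replace package body pkg_omlservices_predict
as
  function f_active_in_six_months(
    -- Hypothetical parameter names; the real signature must match the
    -- seven features AutoML selected.
    p_feature_1 in number
    , p_feature_2 in number
    , p_feature_3 in number
    , p_feature_4 in number
    , p_feature_5 in number
    , p_feature_6 in number
    , p_feature_7 in number
  ) return clob
  as
    l_response clob;
  begin
    -- Send the access token obtained earlier with every scoring request.
    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/json';
    apex_web_service.g_request_headers(2).name := 'Authorization';
    apex_web_service.g_request_headers(2).value := 'Bearer ' || v('G_TOKEN');

    l_response := apex_web_service.make_rest_request(
      -- G_SCORE_URL is an assumed application item holding the
      -- /svml/score endpoint URL.
      p_url => v('G_SCORE_URL')
      , p_http_method => 'POST'
      , p_body => json_object(
          'inputRecords' value json_array(
            json_object(
              'FEATURE_1' value p_feature_1
              , 'FEATURE_2' value p_feature_2
              , 'FEATURE_3' value p_feature_3
              , 'FEATURE_4' value p_feature_4
              , 'FEATURE_5' value p_feature_5
              , 'FEATURE_6' value p_feature_6
              , 'FEATURE_7' value p_feature_7
            )
          )
        )
    );

    -- Return the raw JSON response for the caller to parse.
    return l_response;
  end f_active_in_six_months;
end pkg_omlservices_predict;
/
```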
For a simple demonstration of how to use the OML Service in APEX, I created a page that allows users to enter the seven required inputs and then calls the `pkg_omlservices_predict.f_active_in_six_months` function to obtain the predicted labels and their probabilities.
I can then render the desired output after parsing the `scoringResults` JSON array returned by the OML Service, for example, using a classic report with the following SQL query:
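A sketch of such a query, with hypothetical page item names supplying the inputs:

```sql
with results as (
  select pkg_omlservices_predict.f_active_in_six_months(
           :P1_FEATURE_1, :P1_FEATURE_2, :P1_FEATURE_3, :P1_FEATURE_4
           , :P1_FEATURE_5, :P1_FEATURE_6, :P1_FEATURE_7
         ) as response
  from dual
)
select jt.label
       , jt.probability
from results r
     , json_table(
         r.response
         , '$.scoringResults[*].classifications[*]'
         columns (
           label varchar2(10) path '$.label'
           , probability number path '$.probability'
         )
       ) jt
```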
It is worth noting that developers can submit multiple rows of input data and obtain all the results at once. For example, if you wanted to cache the predicted outcomes in a local table, you could batch them into one HTTP request and then parse the multiple results, thereby reducing some overhead.
This is only the beginning and I am super excited about all the tools that Oracle has put at our disposal. Do more and do good. If you require additional help, or would like to collaborate on other APEX+ML adventures, please do not hesitate to contact me via Twitter or through Insum.