Thinking Anew

Making the Move to the Autonomous Database


In my previous post, I mentioned that I have created my Oracle Cloud Free Tier account and provisioned an Autonomous Database (ADB). Now, I’m ready to move my existing Oracle Application Express (APEX) workspace and applications to this new environment. Below, I describe the steps I took to move the (1) workspace, (2) applications, and (3) data from the old database to the new one.

Scope

To better illustrate the move process, let’s create some fictitious information about what I am about to move.

Property                                  Value
Source server hostname                    dev.contoso.com
Source database name                      XE
Database dump directory on source server  /tmp/cloud-migration
Dumpfile filename                         cloud-migration.dmp
Source data pump target directory         CLOUD_MIGRATION_DIR
Target database name                      ATP01
Oracle Cloud Region                       Ashburn, United States
Object Storage namespace                  abcdefgh
Bucket name                               cloud-migration
Workspace ID                              1234567890
Workspace name                            CONTOSO_WEB
Workspace administrator username          admin
Workspace schema name                     contoso_apps
Auxiliary schema names                    logger
APEX applications                         1000 - Contoso Web; 1400 - Contoso Content Management System
Working directory on workstation          /home/jdoe/cloud-migration
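For convenience, the values above can also be captured as shell variables, so the commands later in this post can be adapted by editing a single block. These are, of course, the fictitious values from the table:

```shell
# Fictitious values from the table above, collected as shell variables so the
# later commands in this walkthrough can be copy-adapted in one place.
SRC_HOST="dev.contoso.com"
SRC_DUMP_DIR="/tmp/cloud-migration"
DUMPFILE="cloud-migration.dmp"
OS_NAMESPACE="abcdefgh"
OS_BUCKET="cloud-migration"
WORKSPACE_ID="1234567890"
WORKDIR="/home/jdoe/cloud-migration"
echo "Fetching ${DUMPFILE} from ${SRC_HOST}:${SRC_DUMP_DIR} into ${WORKDIR}"
```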

Note

The source server does not expose the database port 1521. Connecting to the database from my workstation requires an SSH tunnel.

Tools Required

Install the following software in the environment where the migration tasks will be performed:

  • SQLcl
  • An SSH client (for the tunnel and file transfers)
  • Oracle Instant Client, including the Data Pump utilities

Export Components from Old Database

Export APEX Workspace and Applications

  1. Start SQLcl:

    sql /nolog
  2. Create SSH tunnel and local port forwarding:

    sshtunnel oracle@dev.contoso.com -L 1523:localhost:1521
  3. Connect to the database as contoso_apps:

    connect contoso_apps/password@//localhost:1523/XE
  4. Obtain the workspace ID:

    select to_char(workspace_id) from apex_workspaces where workspace = 'CONTOSO_WEB';
  5. Export the workspace:

    apex export -workspaceid 1234567890 -expWorkspace

    This will produce a file named after the workspace ID, e.g. w1234567890.sql

  6. Export all APEX applications in the workspace:

    apex export -workspaceid 1234567890

    Optionally, export only specific applications by ID using the -applicationid parameter. The export process produces files named after the application IDs, e.g. f1000.sql and f1400.sql.

  7. If there are any REST services to be migrated, then execute the following:

    spool rest_services.sql
    rest export
    spool off
  8. Logout.

  9. If REST services were exported, then execute the following:

    echo "/" >> rest_services.sql
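The trailing slash matters: without it, SQL*Plus and SQLcl read the spooled PL/SQL block but never execute it when the file is run later. A quick local illustration (the file content here is simulated, not an actual REST export):

```shell
# Simulate a spooled rest_services.sql and append the "/" terminator that
# makes the PL/SQL block execute when the script is run during the import.
demo=$(mktemp -d)
cat > "$demo/rest_services.sql" <<'EOF'
begin
  -- (spooled REST export content would appear here)
  null;
end;
EOF
echo "/" >> "$demo/rest_services.sql"
tail -n 1 "$demo/rest_services.sql"
```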

Export Schemas

Login to the source database server using SSH as user oracle. The subsequent commands in this section will be executed on the source server.

  1. Login to the database as SYSDBA:

    sqlplus / as sysdba
  2. Create the target data dump directory:

    host mkdir /tmp/cloud-migration
  3. Create the directory object:

    create directory CLOUD_MIGRATION_DIR as '/tmp/cloud-migration';
  4. Grant the system user READ and WRITE privileges:

    grant read, write on directory CLOUD_MIGRATION_DIR to system;
  5. Exit SQL*Plus.

  6. Run the export data pump utility (expdp) to export the schema objects:

    expdp system schemas=logger,contoso_apps \
    directory=CLOUD_MIGRATION_DIR dumpfile=cloud-migration.dmp \
    logfile=exp_cloud-migration.log
  7. Logout from the source server.

Preparing for Import

  1. Grab the dumpfile (and log file if necessary):

    scp -r oracle@dev.contoso.com:/tmp/cloud-migration/* .
  2. The local working directory /home/jdoe/cloud-migration should now contain the following files:

    • cloud-migration.dmp
    • exp_cloud-migration.log
    • f1000.sql
    • f1400.sql
    • rest_services.sql
    • w1234567890.sql
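Before moving on, it is worth confirming that every artifact actually made it over. Here is a small sketch; the directory is simulated with mktemp and deliberately left incomplete, but in practice you would run the loop inside the real working directory:

```shell
# Check the working directory for each expected export artifact and report
# anything missing. Simulated here with a temp dir holding a partial copy.
workdir=$(mktemp -d)
touch "$workdir/cloud-migration.dmp" "$workdir/w1234567890.sql"
missing=""
for f in cloud-migration.dmp exp_cloud-migration.log f1000.sql f1400.sql rest_services.sql w1234567890.sql; do
  [ -e "$workdir/$f" ] || missing="$missing $f"
done
echo "missing:$missing"
```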

Import Components into ADB

Import the Workspace

  1. Login to the ADB’s APEX Administration Services as the admin user.
  2. Use the Manage Workspaces utilities to import the workspace from the file w1234567890.sql.
  3. During the import process, create the workspace schema. Set the password and space quota as required.

Prerequisites for Using IMPDP with ADB

Data can be imported into an ADB using Oracle Data Pump. However, since we do not have access to the database server’s file system, Object Storage must be used to deliver the dumpfile.

The official instructions to prepare for data pump can be found here. The following is a summary account of what I did.

Object Storage

  1. Access the Oracle Cloud Object Storage service.
  2. Create a bucket and name it cloud-migration.
  3. Take note of the Object Storage namespace, which is required for determining the URI for the dumpfile.
  4. Upload the dumpfile cloud-migration.dmp to the bucket.
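With the region, namespace, and bucket known, the dumpfile’s native Object Storage URI, which impdp will need later, can be assembled like this (values are the fictitious ones from this post):

```shell
# Assemble the native Object Storage object URI for the uploaded dumpfile.
region="us-ashburn-1"
namespace="abcdefgh"
bucket="cloud-migration"
object="cloud-migration.dmp"
uri="https://objectstorage.${region}.oraclecloud.com/n/${namespace}/b/${bucket}/o/${object}"
echo "$uri"
```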

Set Cloud Credentials for ADB

  1. Generate an Auth Token for a user in the tenant with the necessary access to the bucket and its contents.
  2. Set the Cloud Object Storage credentials:
    begin
      dbms_cloud.create_credential(
          credential_name => 'DEF_CRED_NAME'
        , username => 'jdoe'
        , password => 'the_generated_auth_token'
      );
    end;
    /

Instant Client Setup for ADB

Official instructions for setting up your system to connect to the database using the tools provided by the Oracle Instant Client, including the Oracle Data Pump utilities needed for importing the data, can be found here.

Here’s a quick overview of what I did:

  1. Download the Client Credentials (Wallet) from the ADB’s Service Console.
  2. Set the environment variable TNS_ADMIN:

    export TNS_ADMIN=/home/jdoe/.wallets/atp01

    The Oracle Wallet, sqlnet.ora and tnsnames.ora files will be stored in this path. The directory contents are required by the Oracle Instant Client tools to resolve the network service names for connecting to the ADB. The environment variable must be set prior to running any of these applications.

  3. Create the wallet directory:

    mkdir -p $TNS_ADMIN
  4. Extract the contents of the ZIP file into the wallet directory:

    unzip wallet_ATP01.zip -d $TNS_ADMIN
  5. The wallet contains a sqlnet.ora file. Replace the path ?/network/admin, specified for the parameter DIRECTORY, with the wallet’s path (TNS_ADMIN).

    sed -i .bak "s|?/network/admin|$TNS_ADMIN|g" $TNS_ADMIN/sqlnet.ora
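If you want to check the sed expression before touching the real wallet files, here is a dry run on a throwaway copy; the sqlnet.ora content below is a minimal stand-in, not the full file that ships in the wallet:

```shell
# Rewrite ?/network/admin to the wallet path on a disposable copy and show
# the result, without modifying any real wallet files.
demo=$(mktemp -d)
printf 'WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))\n' > "$demo/sqlnet.ora"
TNS_ADMIN="/home/jdoe/.wallets/atp01"
sed "s|?/network/admin|$TNS_ADMIN|g" "$demo/sqlnet.ora" > "$demo/sqlnet.ora.fixed"
cat "$demo/sqlnet.ora.fixed"
```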

Create Auxiliary Schemas

  1. Launch SQLcl:

    sql /nolog
  2. Login to the ADB using SQLcl as admin, the administrator account created when the database was provisioned.

    connect admin/password@atp01_high
  3. Create the auxiliary user and grant needed privileges and quota. For example:

    -- Create user and set quota on DATA, the tablespace for users in ADB.
    create user logger identified by "secret" quota 50M on data;

    -- Grant privileges required by Logger.
    grant connect, create view, create job, create table, create sequence
    , create trigger, create procedure, create any context to logger;
  4. Logout from the database.

Import Schema Objects

  1. Run the import data pump utility (impdp), avoiding the recreation of certain objects as specified by the exclude option:

    impdp admin@atp01_high directory=data_pump_dir credential=def_cred_name \
    dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/abcdefgh/b/cloud-migration/o/cloud-migration.dmp \
    schemas=logger,contoso_apps parallel=16 transform=segment_attributes:n \
    transform=dwcs_cvt_iots:y transform=constraint_use_default_index:y \
    exclude=cluster,db_link,user,tablespace_quota,default_role
    • Details about the DBMS_CLOUD Package File URI format can be found here. The namespace was noted in an earlier section of this post.
    • The exclusions skip the creation of users and the modification of certain user properties. Since the schemas were already created in earlier sections of this article, re-creating them here would be redundant and would likely result in errors.
    • Take note of the data pump logs. If necessary, download a copy for further analysis and future reference.
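One cheap way to triage the import log is to grep it for ORA- lines. A sketch with a fabricated sample log follows; ORA-31684 (object already exists) is typical here and usually harmless given the pre-created schemas:

```shell
# Scan a Data Pump log for ORA- errors. The log content is fabricated for
# demonstration; in practice, point grep at the real import log file.
demo=$(mktemp -d)
cat > "$demo/import.log" <<'EOF'
Master table "ADMIN"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
ORA-31684: Object type USER:"LOGGER" already exists
Job completed with 1 error(s)
EOF
grep -c "^ORA-" "$demo/import.log"
```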

Import Applications

  1. Launch SQLcl:

    sql /nolog
  2. Login to the ADB as the workspace schema user:

    connect contoso_apps/password@atp01_high
  3. Set the workspace:

    exec apex_util.set_workspace('CONTOSO_WEB');
  4. Load and execute each application export file to import the application to the target workspace:

    @f1000.sql
    @f1400.sql
  5. Import the REST services if any:

    @rest_services.sql
  6. Exit SQLcl.

Post-Migration Tasks

After migrating the applications and data, here are some recommended tasks:

  • Check for missing grants and assign them as necessary.
  • Re-create any database links required by the applications.
  • Search for objects like packages and triggers that did not compile successfully during the import process. Fix any issues and recompile the objects.
  • Note the issues highlighted in the previous post concerning workspace developer/administrator access.
  • Test all applications.

Hope I didn’t miss any steps. If you notice any omissions or mistakes, please let me know in the comments below.

Thanks for reading!