DBMS_CLOUD Subprograms and REST APIs
This section covers the DBMS_CLOUD subprograms and REST APIs provided with Autonomous Database.

To run DBMS_CLOUD subprograms with a user other than ADMIN you need to grant EXECUTE privileges to that user. For example, run the following command as ADMIN to grant privileges to adb_user:

GRANT EXECUTE ON DBMS_CLOUD TO adb_user;

The DBMS_CLOUD package is made up of the following:

- DBMS_CLOUD for Access Management
  The subprograms for credential management within the DBMS_CLOUD package, including creating, deleting, and updating credentials.
- DBMS_CLOUD for Objects and Files
  The subprograms for object and file management within the DBMS_CLOUD package.
- DBMS_CLOUD for Bulk File Management
  The subprograms for bulk file operations within the DBMS_CLOUD package.
- DBMS_CLOUD REST APIs
  This section covers the DBMS_CLOUD REST APIs provided with Autonomous Database.
Parent topic: DBMS_CLOUD Package
DBMS_CLOUD for Access Management
The subprograms for credential management within the DBMS_CLOUD package, including creating, deleting, and updating credentials.
Subprogram | Description |
---|---|
CREATE_CREDENTIAL Procedure | This procedure stores cloud service credentials in Autonomous Database. |
DROP_CREDENTIAL Procedure | This procedure removes an existing credential from Autonomous Database. |
REFRESH_VAULT_CREDENTIAL Procedure | This procedure immediately refreshes the vault secret of a vault secret credential to get the latest version of the vault secret for the specified credential_name. |
UPDATE_CREDENTIAL Procedure | This procedure updates cloud service credential attributes in Autonomous Database. |
- CREATE_CREDENTIAL Procedure
  This procedure stores cloud service credentials in Autonomous Database.
- DROP_CREDENTIAL Procedure
  This procedure removes an existing credential from Autonomous Database.
- REFRESH_VAULT_CREDENTIAL Procedure
  This procedure refreshes the vault secret of a vault secret credential.
- UPDATE_CREDENTIAL Procedure
  This procedure updates an attribute with a new value for a specified credential_name.
Parent topic: DBMS_CLOUD Subprograms and REST APIs
CREATE_CREDENTIAL Procedure
This procedure stores cloud service credentials in Autonomous Database.
Use stored cloud service credentials to access the cloud service for data
loading, for querying external data residing in the cloud, or for other cases when
you use DBMS_CLOUD
procedures
with a credential_name
parameter. This procedure is overloaded:
- Use the Oracle Cloud Infrastructure-related parameters, including user_ocid, tenancy_ocid, private_key, and fingerprint, only when you are using Oracle Cloud Infrastructure Signing Keys authentication.
- Use the params parameter for one of the following:
  - Amazon Resource Names (ARNs) credentials
  - Google Analytics or Google BigQuery credentials
  - Vault secret credentials for use with a supported vault:
    - Oracle Cloud Infrastructure Vault
    - Azure Key Vault
    - AWS Secrets Manager
    - GCP Secret Manager
Syntax
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name IN VARCHAR2,
username IN VARCHAR2,
password IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name IN VARCHAR2,
user_ocid IN VARCHAR2,
tenancy_ocid IN VARCHAR2,
private_key IN VARCHAR2,
fingerprint IN VARCHAR2);
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name IN VARCHAR2,
params IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
credential_name | The name of the credential to be stored. The credential_name value must conform to Oracle object naming conventions. |
username | The username argument, together with password, specifies the cloud service credentials; see the service-specific sections below for what to supply. |
password | The password argument, together with username, specifies the cloud service credentials. |
user_ocid | Specifies the user's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the User's OCID. |
tenancy_ocid | Specifies the tenancy's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the Tenancy's OCID. |
private_key | Specifies the generated private key. Private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for details on generating a key pair in PEM format. |
fingerprint | Specifies a fingerprint. After a generated public key is uploaded to the user's account the fingerprint is displayed in the console. Use the displayed fingerprint for this argument. See How to Get the Key's Fingerprint and How to Generate an API Signing Key for more details. |
params | Specifies credential parameters for one of the following: Amazon Resource Names (ARNs) credentials; Google Analytics or Google BigQuery credentials; or vault secret credentials with Oracle Cloud Infrastructure Vault, Azure Key Vault, AWS Secrets Manager, or GCP Secret Manager. |
Usage Notes
- This operation stores the credentials in the database in an encrypted format.
- You can see the credentials in your schema by querying the user_credentials table.
- The ADMIN user can see all the credentials by querying the dba_credentials table.
- You only need to create credentials once unless your cloud service credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.
- This procedure is overloaded. If you provide one of the key based authentication attributes, user_ocid, tenancy_ocid, private_key, or fingerprint, the call is assumed to be an Oracle Cloud Infrastructure Signing Key based credential.
- You can list credentials from the view ALL_CREDENTIALS. For example, run the following command to list credentials:

  SELECT credential_name, username, comments FROM all_credentials;
Oracle Cloud Infrastructure Credentials (Auth Tokens)
For Oracle Cloud
Infrastructure the username
is your Oracle Cloud Infrastructure user name. The
password
is your Oracle Cloud Infrastructure auth token. See
Working with Auth Tokens.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password' );
END;
/
Use Auth Token based credentials when you are authenticating calls to OCI Object Storage. For calls to any other type of Oracle Cloud Infrastructure cloud service, use Oracle Cloud Infrastructure Signing Key Based Credentials.
For OCI Object Storage, the username parameter value must include the Identity domain and the user name from your profile. You can find the Identity domain associated with a user in the Oracle Cloud Infrastructure Console.
For example:
oracleidentitycloudservice/adb_user@example.com
With the default Identity domain you are not required to include the domain name Default. For example:
adb_user@example.com
Oracle Cloud Infrastructure Signing Key Based Credentials
Use the Oracle Cloud
Infrastructure signing key related parameters, including: user_ocid
,
tenancy_ocid
, private_key
, and
fingerprint
with Oracle Cloud
Infrastructure Signing Keys authentication.
For example:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'OCI_KEY_CRED',
    user_ocid       => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
    tenancy_ocid    => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
    private_key     => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
    fingerprint     => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
Private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for more information.
Oracle Cloud Infrastructure Object Storage Classic Credentials
If your source files reside in Oracle Cloud
Infrastructure Object Storage Classic, the username
is your Oracle Cloud Infrastructure
Classic user name and the password
is your Oracle Cloud Infrastructure
Classic password.
Amazon Web Services (AWS) Credentials
If your source files reside in Amazon S3 or you
are calling an AWS API, the username
is your AWS access key ID and
the password
is your AWS secret access key. See AWS Identity and Access Management.
Microsoft Azure Credentials
If your source files reside in Azure Blob Storage or Azure Data Lake
Storage or you are calling an Azure API, the username
is
your Azure storage account name and the password
is an Azure
storage account access key. See About Azure storage accounts.
Amazon S3-Compatible Credentials
Service | Credentials Information |
---|---|
Oracle Cloud Infrastructure (Customer Secret Keys) |
If your source files reside in Oracle Cloud Infrastructure, then you need to use Customer Secret Keys with S3-compatible URLs. See Working with Customer Secret Keys for more information. |
Google Cloud Storage |
If your source files reside in Google Cloud Storage or you are calling Google Cloud Storage APIs, then you need to set a default Google project and obtain an HMAC key to create credentials to supply with Google Cloud Storage S3-compatible URLs. Use the HMAC key id as the username, and the HMAC secret as the password. |
Wasabi Hot Cloud Storage |
If your source files reside in Wasabi Hot Cloud Storage or you are calling Wasabi Hot Cloud Storage APIs, then you need Access Keys to create credentials to supply with S3-compatible URLs. Use the Wasabi Hot Cloud Storage Access Key as the username, and the Wasabi Hot Cloud Storage Secret Key as the password. See Creating a Wasabi API Access Key Set for more information. |
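For example, a minimal sketch of creating an S3-compatible credential for Google Cloud Storage; the credential name and the HMAC key ID and secret values are hypothetical placeholders:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'GCS_CRED',                 -- hypothetical name
    username        => 'GOOG1EXAMPLEHMACKEYID',    -- hypothetical HMAC key ID
    password        => 'example_hmac_secret' );    -- hypothetical HMAC secret
END;
/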
AWS Amazon Resource Names (ARN) Credentials
If your source files reside in Amazon S3 or you
are calling an AWS API, use params
to specify the parameters for
the Amazon Resource Names (ARN).
Parameter | Value |
---|---|
aws_role_arn | Specifies the Amazon Resource Name (ARN) that identifies the AWS role. If this parameter is not supplied when creating the credential, ORA-20041 is raised. |
external_id_type | Optionally set the external_id_type to use the Autonomous Database compartment OCID, database OCID, or tenancy OCID by supplying one of: compartment_ocid, database_ocid, or tenant_ocid. If this parameter is not given when creating the credential, the default value is database_ocid. |
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'MY_CRED',
params => JSON_OBJECT(
'aws_role_arn' value 'arn:aws:iam::123456:role/AWS_ROLE_ARN',
'external_id_type' value 'database_ocid'));
END;
/
GitHub Personal Access Token
If your source files reside in a GitHub repository or you are calling a
GitHub API, the username
is your GitHub email and the
password
is your GitHub personal access token. See Creating a personal access
token for more information.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'MY_GITHUB_CRED',
username => 'user@example.com',
password => 'your_personal_access_token' );
END;
/
Google Analytics or Google BigQuery Credentials
If you are accessing Google Analytics or Google BigQuery, use the
params
parameter to specify the Google OAuth 2.0 credential
parameters.
Parameter | Value |
---|---|
gcp_oauth2 |
Specifies OAuth 2.0 access for Google Analytics or Google BigQuery with a JSON object that includes the following parameters and their values: client_id, client_secret, and refresh_token. See Using OAuth 2.0 to Access Google APIs for more information on Google OAuth credentials. |
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'GOOGLE_BIGQUERY_CRED',
params => JSON_OBJECT('gcp_oauth2' value
JSON_OBJECT(
'client_id' value 'client_id',
'client_secret' value 'client_secret',
'refresh_token' value 'refresh_token' )));
END;
/
Vault Secret Credentials with Oracle Cloud Infrastructure Vault
To create vault secret credentials with Oracle Cloud Infrastructure Vault, use the params parameter to specify the required parameters:

- username: Specifies the username of any type of username/password credential, such as the username of an OCI Swift password credential. For example, if you have a Swift credential with the username "scott" and the password "password", provide "scott" as the username parameter.
- secret_id: Is the vault secret ID. Specify the secret_id value as the vault secret OCID. See Overview of Vault for more information.
- region: Is an optional parameter that specifies the Oracle Cloud region identifier. The region, when specified, indicates the location where the Oracle Cloud Infrastructure Vault secret is located. By default, CREATE_CREDENTIAL uses the region mapped from the region key in the secret_id. An example of a region is us-ashburn-1. See Regions and Availability Domains for a complete list of regions.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'OCI_SECRET_CRED',
params => JSON_OBJECT(
'username' value 'scott',
'region' value 'us-ashburn-1',
'secret_id' value 'ocid1.vaultsecret.oc1.ap-mumbai-1.example..aaaaaaaauq5ok5nq3bf2vwetkpqsoa'));
END;
/
Notes for using an Oracle Cloud Infrastructure Vault secret to store vault secrets:

- When you use an Oracle Cloud Infrastructure Vault, on the Autonomous Database instance you must enable principal authentication with DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL.
- On Oracle Cloud Infrastructure you must specify a policy for the resource principal to access the secret.

To create a vault secret credential you must have EXECUTE privilege on the DBMS_CLOUD package.
Vault Secret Credentials with Azure Key Vault
To create Azure Key Vault credentials, use the params parameter to specify the required parameters:

- username: Specifies the username associated with the key.
- secret_id: Specifies the secret name.
- azure_vault_name: Specifies the name of the vault where the secret is located.
See Create a key vault for more information.
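For example, a minimal sketch, assuming a hypothetical secret named swift_secret stored in a hypothetical vault named my_key_vault:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'AZURE_SECRET_CRED',          -- hypothetical name
    params => JSON_OBJECT(
      'username'         value 'scott',              -- username paired with the secret
      'secret_id'        value 'swift_secret',       -- hypothetical secret name
      'azure_vault_name' value 'my_key_vault'));     -- hypothetical vault name
END;
/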
To create a vault secret credential you must have
EXECUTE
privilege on the DBMS_CLOUD
package.
Vault Secret Credentials with AWS Secrets Manager
To create vault secret credentials with AWS Secrets Manager, use the params parameter to specify the required parameters:

- username: Specifies the AWS Secrets Manager access key.
- secret_id: Is the AWS Secrets Manager AWS ARN.
- region: (Optional) Specifies the AWS service region where the vault and secret are located. An example of the AWS region is "us-east-2". The default region is the region specified with the ARN in the secret_id parameter. See Managing AWS Regions for more information.
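For example, a minimal sketch, assuming hypothetical access key and secret ARN values:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'AWS_SECRET_CRED',            -- hypothetical name
    params => JSON_OBJECT(
      'username'  value 'AKIAEXAMPLEACCESSKEY',      -- hypothetical access key
      'secret_id' value 'arn:aws:secretsmanager:us-east-2:123456789012:secret:MySecret'));  -- hypothetical ARN
END;
/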
To create a vault secret credential you must have EXECUTE privilege on the DBMS_CLOUD package.
Vault Secret Credentials with GCP Secret Manager
To create GCP Secret Manager credentials, use the params parameter to specify the required parameters:

- username: Specifies the username associated with the secret.
- secret_id: Is the secret name.
- gcp_project_id: Specifies the ID of the project where the secret is located.
See Secret Manager for more information.
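For example, a minimal sketch, assuming a hypothetical secret named my_secret in a project with the hypothetical ID my-sample-project-191923:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'GCP_SECRET_CRED',                  -- hypothetical name
    params => JSON_OBJECT(
      'username'       value 'scott',                      -- username paired with the secret
      'secret_id'      value 'my_secret',                  -- hypothetical secret name
      'gcp_project_id' value 'my-sample-project-191923')); -- hypothetical project ID
END;
/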
To create a vault secret credential you must have
EXECUTE
privilege on the DBMS_CLOUD
package.
Parent topic: DBMS_CLOUD for Access Management
DROP_CREDENTIAL Procedure
This procedure removes an existing credential from Autonomous Database.
Syntax
DBMS_CLOUD.DROP_CREDENTIAL
(
credential_name IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name | The name of the credential to be removed. |
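For example, a minimal sketch that removes the DEF_CRED_NAME credential created in the CREATE_CREDENTIAL example, assuming it exists:

BEGIN
  DBMS_CLOUD.DROP_CREDENTIAL('DEF_CRED_NAME');  -- removes the stored credential
END;
/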
Parent topic: DBMS_CLOUD for Access Management
REFRESH_VAULT_CREDENTIAL Procedure
This procedure refreshes the vault secret of a vault secret credential.
This procedure lets you immediately refresh the vault secret of a vault
secret credential to get the latest version of the vault secret for the specified
credential_name
.
Syntax
DBMS_CLOUD.REFRESH_VAULT_CREDENTIAL
(
credential_name IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name | The name of the credential to refresh. |
Usage Notes
- The ADMIN user can see all the credentials by querying the dba_credentials table.
- You can list credentials from the view ALL_CREDENTIALS. For example, run the following command to list credentials:

  SELECT credential_name, username, comments FROM all_credentials;
Example
BEGIN
DBMS_CLOUD.REFRESH_VAULT_CREDENTIAL
(
credential_name => 'AZURE_SECRET_CRED');
END;
/
Parent topic: DBMS_CLOUD for Access Management
UPDATE_CREDENTIAL Procedure
This
procedure updates an attribute with a new value for a specified
credential_name
.
Use stored credentials for data loading, for querying external data residing in the
Cloud, or wherever you use DBMS_CLOUD
procedures with a
credential_name
parameter.
Syntax
DBMS_CLOUD.UPDATE_CREDENTIAL
(
credential_name IN VARCHAR2,
attribute IN VARCHAR2,
value IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name | The name of the credential to be updated. |
attribute | Name of the attribute to update. For a username/password type credential, the valid attribute values are: USERNAME and PASSWORD. For a credential for an Amazon ARN, the valid attribute values are: aws_role_arn and external_id_type. For a credential for Google BigQuery or Google Analytics, the valid attribute values are: client_id, client_secret, and refresh_token. Depending on the vault you are using, for vault secret credentials the valid attribute values are: username, secret_id, region, azure_vault_name, and gcp_project_id. See CREATE_CREDENTIAL Procedure for more information. |
value | New value for the specified attribute. |
Usage Notes
- The username value is case sensitive. It cannot contain double quotes or spaces.
- The ADMIN user can see all the credentials by querying dba_credentials.
- You only need to create credentials once unless your cloud service credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.
- You can list credentials from the view ALL_CREDENTIALS. For example, run the following command to list credentials:

  SELECT credential_name, username, comments FROM all_credentials;
Examples
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL
(
credential_name => 'OBJ_STORE_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL
(
credential_name => 'ARN_CRED',
attribute => 'aws_role_arn',
value => 'NEW_AWS_ARN');
END;
/
Parent topic: DBMS_CLOUD for Access Management
DBMS_CLOUD for Objects and Files
The subprograms for object and file management within the DBMS_CLOUD package.
Subprogram | Description |
---|---|
COPY_COLLECTION Procedure | This procedure loads data into an existing SODA collection either from Cloud Object Storage or from files in a directory. |
COPY_DATA Procedure | This procedure loads data into existing Autonomous Database tables either from Cloud Object Storage or from files in a directory. |
COPY_DATA Procedure for Avro, ORC, or Parquet Files | This procedure with the format parameter type set to the value orc, parquet, or avro loads data into existing Autonomous Database tables from ORC, Parquet, or Avro files in the Cloud or from ORC, Parquet, or Avro files in a directory. Similar to text files, the data is copied from the source ORC, Parquet, or Avro file into the preexisting internal table. |
COPY_OBJECT Procedure | This procedure copies files from one Cloud Object Storage bucket to another. |
CREATE_EXTERNAL_TABLE Procedure | This procedure creates an external table on files in the Cloud or on files in a directory. This allows you to run queries on external data from Autonomous Database. |
CREATE_CLOUD_TABLE Procedure | This procedure creates a cloud table where all persistent data is stored in Oracle-Managed Object Storage. |
CREATE_EXTERNAL_TABLE Procedure for Apache Iceberg | This procedure creates external tables for Apache Iceberg tables in the supported configurations. |
CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files | This procedure with the format parameter type set to the value parquet, orc, or avro, creates an external table with either Parquet, ORC, or Avro format files in the Cloud or in a directory. This allows you to run queries on external data from Autonomous Database. |
CREATE_EXTERNAL_PART_TABLE Procedure | This procedure creates an external partitioned table on files in the Cloud. This allows you to run queries on external data from Autonomous Database. |
CREATE_EXTERNAL_TEXT_INDEX Procedure | This procedure creates a text index on the object store files. |
CREATE_HYBRID_PART_TABLE Procedure | This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid partitioned data from Autonomous Database. |
DELETE_ALL_OPERATIONS Procedure | This procedure clears either all data load operations logged in the user_load_operations table in your schema or clears all the data load operations of the specified type, as indicated with the type parameter. |
DELETE_FILE Procedure | This procedure removes the specified file from the specified directory on Autonomous Database. |
DELETE_OBJECT Procedure | This procedure deletes the specified object on object store. |
DELETE_OPERATION Procedure | This procedure accepts an operation_id and clears the data load entries for that operation logged in the user_load_operations or dba_load_operations tables in your schema. |
DROP_EXTERNAL_TEXT_INDEX Procedure | This procedure drops a text index on the object store files. |
EXPORT_DATA Procedure | This procedure exports data from Autonomous Database to files in the Cloud based on the result of a query. The overloaded form enables you to use the operation_id parameter. Depending on the format parameter type option specified, the procedure exports rows to the Cloud Object store as text with options of CSV, JSON, Parquet, or XML; or using the ORACLE_DATAPUMP access driver to write data to a dump file. |
GET_OBJECT Procedure and Function | This procedure is overloaded. The procedure form reads an object from Cloud Object Storage and copies it to Autonomous Database. The function form reads an object from Cloud Object Storage and returns a BLOB to Autonomous Database. |
LIST_FILES Function | This function lists the files in the specified directory. The results include the file names and additional metadata about the files such as file size in bytes, creation timestamp, and the last modification timestamp. |
LIST_OBJECTS Function | This function lists objects in the specified location on object store. The results include the object names and additional metadata about the objects such as size, checksum, creation timestamp, and the last modification timestamp. |
MOVE_OBJECT Procedure | This procedure moves an object from one Cloud Object Storage bucket to another one. |
PUT_OBJECT Procedure | This procedure is overloaded. In one form the procedure copies a file from Autonomous Database to the Cloud Object Storage. In another form the procedure copies a BLOB from Autonomous Database to the Cloud Object Storage. |
SYNC_EXTERNAL_PART_TABLE Procedure | This procedure simplifies updating an external partitioned table from files in the Cloud. Run this procedure whenever new partitions are added or when partitions are removed from the Object Store source for the external partitioned table. |
VALIDATE_EXTERNAL_TABLE Procedure | This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. |
VALIDATE_EXTERNAL_PART_TABLE Procedure | This procedure validates the source files for an external partitioned table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. |
VALIDATE_HYBRID_PART_TABLE Procedure | This procedure validates the source files for a hybrid partitioned table, generates log information, and stores the rows that do not match the format options specified for the hybrid table in a badfile table on Autonomous Database. |
- COPY_COLLECTION Procedure
  This procedure loads data into a SODA collection from Cloud Object Storage or from a directory. If the specified SODA collection does not exist, the procedure creates it. The overloaded form enables you to use the operation_id parameter.
- COPY_DATA Procedure
  This procedure loads data into existing Autonomous Database tables from files in the Cloud, or from files in a directory. The overloaded form enables you to use the operation_id parameter.
- COPY_DATA Procedure for Avro, ORC, or Parquet Files
  This procedure with the format parameter type set to the value avro, orc, or parquet loads data into existing Autonomous Database tables from Avro, ORC, or Parquet files in the Cloud or from files in a directory.
- COPY_OBJECT Procedure
  This procedure copies an object from one Cloud Object Storage bucket or folder to another.
- CREATE_CLOUD_TABLE Procedure
  This procedure creates a Cloud Table. All Cloud Table data is stored in Oracle managed Object Storage (Cloud Tables only store their metadata in the database).
- CREATE_EXTERNAL_PART_TABLE Procedure
  This procedure creates an external partitioned table on files in the Cloud, or from files in a directory. This allows you to run queries on external data from Autonomous Database.
- CREATE_EXTERNAL_TABLE Procedure
  This procedure creates an external table on files in the Cloud or from files in a directory. This allows you to run queries on external data from Autonomous Database.
- CREATE_EXTERNAL_TABLE Procedure for Apache Iceberg
  This procedure creates external tables for Apache Iceberg tables in the supported configurations.
- CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files
  This procedure with the format parameter type set to the value avro, orc, or parquet creates an external table with either Avro, ORC, or Parquet format files in the Cloud or in a directory.
- CREATE_EXTERNAL_TEXT_INDEX Procedure
  This procedure creates a text index on Object Storage files.
- CREATE_HYBRID_PART_TABLE Procedure
  This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid partitioned data from Autonomous Database using database objects and files in the Cloud, or database objects and files in a directory.
- DELETE_ALL_OPERATIONS Procedure
  This procedure clears either all data load operations logged in the user_load_operations table in your schema or clears all the data load operations of the specified type, as indicated with the type parameter.
- DELETE_FILE Procedure
  This procedure removes the specified file from the specified directory on Autonomous Database.
- DELETE_OBJECT Procedure
  This procedure deletes the specified object on object store.
- DELETE_OPERATION Procedure
  This procedure clears the data load entries for the specified operation ID logged in the user_load_operations or dba_load_operations tables in your schema.
- DROP_EXTERNAL_TEXT_INDEX Procedure
  This procedure drops a text index on the Object Storage files.
- EXPORT_DATA Procedure
  This procedure exports data from Autonomous Database based on the result of a query. This procedure is overloaded and supports writing files to the cloud or to a directory.
- GET_OBJECT Procedure and Function
  This procedure is overloaded. The procedure form reads an object from Cloud Object Storage and copies it to Autonomous Database. The function form reads an object from Cloud Object Storage and returns a BLOB to Autonomous Database.
- LIST_FILES Function
  This function lists the files in the specified directory. The results include the file names and additional metadata about the files such as file size in bytes, creation timestamp, and the last modification timestamp.
- LIST_OBJECTS Function
  This function lists objects in the specified location on object store. The results include the object names and additional metadata about the objects such as size, checksum, creation timestamp, and the last modification timestamp.
- MOVE_OBJECT Procedure
  This procedure moves an object from one Cloud Object Storage bucket or folder to another.
- PUT_OBJECT Procedure
  This procedure is overloaded. In one form the procedure copies a file from Autonomous Database to the Cloud Object Storage. In another form the procedure copies a BLOB from Autonomous Database to the Cloud Object Storage.
- SYNC_EXTERNAL_PART_TABLE Procedure
  This procedure simplifies updating an external partitioned table from files in the Cloud. Run this procedure whenever new partitions are added or when partitions are removed from the Object Store source for the external partitioned table.
- VALIDATE_EXTERNAL_PART_TABLE Procedure
  This procedure validates the source files for an external partitioned table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. The overloaded form enables you to use the operation_id parameter.
- VALIDATE_EXTERNAL_TABLE Procedure
  This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. The overloaded form enables you to use the operation_id parameter.
- VALIDATE_HYBRID_PART_TABLE Procedure
  This procedure validates the source files for a hybrid partitioned table, generates log information, and stores the rows that do not match the format options specified for the hybrid table in a badfile table on Autonomous Database. The overloaded form enables you to use the operation_id parameter.
Parent topic: DBMS_CLOUD Subprograms and REST APIs
COPY_COLLECTION Procedure
This
procedure loads data into a SODA collection from Cloud Object Storage or from a directory.
If the specified SODA collection does not exist, the procedure creates it. The overloaded
form enables you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.COPY_COLLECTION
(
collection_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.COPY_COLLECTION
(
collection_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
format IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER
);
Parameters
Parameter | Description |
---|---|
collection_name | The name of the SODA collection into which data will be loaded. If a collection with this name already exists, the specified data will be loaded, otherwise a new collection is created. |
credential_name | The name of the credential to access the Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. This parameter is not used when you specify a directory with file_uri_list. |
file_uri_list |
This parameter specifies either a comma-delimited list of source file URIs or one or more directories and source files. Cloud source file URIs You can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Regular expressions can only be used when the
The characters "*" and "?" are considered wildcard characters when the Regular expression patterns are only supported for the file name or subfolder path in your URIs and the pattern matching is identical to that performed by the For example:
The format of the URIs depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. See REGEXP_LIKE Condition for more information on Directory You can specify one directory and one or more file names or
use a comma separated list of directories and file names. The format to specify a
directory is: Regular expressions are not supported when specifying the file names in a directory. You can only use wildcards to specify file names in a directory. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character. For example: To specify multiple directories, use a comma separated list of
directories: For example: Use double quotes to specify a case-sensitive directory name.
For example: To include a quote character, use two quotes. For
example: |
format |
The options describing the format of the source files. These options are specified as a JSON string. Supported formats are: Apart from the mentioned formats for JSON data, Autonomous Database supports other formats too. For the list of format arguments supported by Autonomous Database, see DBMS_CLOUD Package Format Options. |
operation_id | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view. |
Example
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'OBJ_STORE_CRED',
username => 'user_name@oracle.com',
password => 'password'
);
DBMS_CLOUD.COPY_COLLECTION
(
collection_name => 'myCollection',
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adbexample/b/json/o/myCollection.json'
);
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
COPY_DATA Procedure
This
procedure loads data into existing Autonomous Database tables from files in the Cloud, or from files in a directory. The
overloaded form enables you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.COPY_DATA
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
schema_name IN VARCHAR2,
field_list IN CLOB,
format IN CLOB);
DBMS_CLOUD.COPY_DATA
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB DEFAULT NULL,
schema_name IN VARCHAR2 DEFAULT NULL,
field_list IN CLOB DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER);
Parameters
Parameter | Description |
---|---|
table_name | The name of the target table on the database. The target table needs to be created before you run DBMS_CLOUD.COPY_DATA. |
credential_name | The name of the credential to access the Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. This parameter is not used when you specify a directory with file_uri_list. |
file_uri_list |
You can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Cloud source file URIs This parameter specifies either a comma-delimited list of source file URIs or one or more directories and source files. Regular expressions can only be used when the
The characters "*" and "?" are considered wildcard characters when the Regular expression patterns are only supported for the file name or subfolder path in your URIs and the pattern matching is identical to that performed by the For example:
The format of the URIs depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. See REGEXP_LIKE Condition for more information on Directory You can specify one directory and one or more file names or
use a comma separated list of directories and file names. The format to specify a
directory is: Regular expressions are not supported when specifying the file names in a directory. You can only use wildcards to specify file names in a directory. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character. For example: To specify multiple directories, use a comma separated list of
directories: For example: Use double quotes to specify a case-sensitive directory name.
For example: To include a quote character, use two quotes. For
example: |
schema_name |
The name of the schema where the target table resides. The default value is NULL meaning the target table is in the same schema as the user running the procedure. |
field_list | Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the target table definition. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For an example using field_list, see CREATE_EXTERNAL_TABLE Procedure. |
format |
The options describing the format of the source, log, and bad files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options. For Avro, ORC, or Parquet file format options, see DBMS_CLOUD Package Format Options for Avro, ORC, or Parquet. |
operation_id | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view. |
Usage Note
The default record delimiter is detected newline. With detected newline, DBMS_CLOUD tries to automatically find the correct newline character to use as the record delimiter. DBMS_CLOUD first searches for the Windows newline character \r\n. If it finds the Windows newline character, this is used as the record delimiter for all files in the procedure. If a Windows newline character is not found, DBMS_CLOUD searches for the UNIX/Linux newline character \n, and if it finds one it uses \n as the record delimiter for all files in the procedure. If the source files use a combination of different record delimiters, you may encounter an error such as "KUP-04020: found record longer than buffer size supported". In this case, you need to either modify the source files to use the same record delimiter or only specify the source files that use the same record delimiter.

See DBMS_CLOUD Package Format Options for information on the recorddelimiter format option.
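For instance, to bypass delimiter detection you can set the recorddelimiter format option explicitly; a sketch reusing the CHANNELS table, credential, and file URI from the first example below:

BEGIN
  DBMS_CLOUD.COPY_DATA (
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format          => json_object('delimiter' value ',',
                                   'recorddelimiter' value 'newline')  -- force newline as the record delimiter
  );
END;
/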
Examples
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'DEF_CRED_NAME',
username => 'user_name@oracle.com',
password => 'password'
);
END;
/
BEGIN
DBMS_CLOUD.COPY_DATA
(
table_name =>'CHANNELS',
credential_name =>'DEF_CRED_NAME',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
format => json_object('delimiter' value ',')
);
END;
/
BEGIN
DBMS_CLOUD.COPY_DATA
(
table_name => 'ORDERS',
schema_name => 'TEST_SCHEMA',
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adbexample/b/json/o/orde[r]s.tbl.1',
format => json_object('ignoreblanklines' value TRUE,
'rejectlimit' value '0',
'dateformat' value 'yyyy-mm-dd',
'regexuri' value TRUE)
);
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
COPY_DATA Procedure for Avro, ORC, or Parquet Files
This procedure with the format parameter type set to the value avro, orc, or parquet loads data into existing Autonomous Database tables from Avro, ORC, or Parquet files in the Cloud or from files in a directory.
Similar to text files, the data is copied from the source Avro, ORC, or Parquet file into the preexisting internal table.
Syntax
DBMS_CLOUD.COPY_DATA
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
schema_name IN VARCHAR2 DEFAULT,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name | The name of the target table on the database. The target table needs to be created before you run DBMS_CLOUD.COPY_DATA. |
credential_name | The name of the credential to access the Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. This parameter is not used when you specify a directory with file_uri_list. |
file_uri_list |
This parameter specifies either a comma-delimited list of source file URIs or one or more directories and source files. Cloud source file URIs You can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Regular expressions can only be used when the
The characters "*" and "?" are considered wildcard characters when the Regular expression patterns are only supported for the file name or subfolder path in your URIs and the pattern matching is identical to that performed by the For example:
The format of the URIs depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. See REGEXP_LIKE Condition for more information on Directory You can specify one directory and one or more file names or
use a comma separated list of directories and file names. The format to specify a
directory is: Regular expressions are not supported when specifying the file names in a directory. You can only use wildcards to specify file names in a directory. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character. For example: To specify multiple directories, use a comma separated list of
directories: For example: Use double quotes to specify a case-sensitive directory name.
For example: To include a quote character, use two quotes. For
example: |
schema_name |
The name of the schema where the target table resides. The default value is NULL meaning the target table is in the same schema as the user running the procedure. |
field_list |
Ignored for Avro, ORC, or Parquet files. The fields in the source match the external table columns by name. Source data types are converted to the external table column data type. For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type Mapping. For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details on mapping. For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details on mapping. |
format |
The options describing the format of the source files. For Avro, ORC, or Parquet files, only two options are supported: see DBMS_CLOUD Package Format Options for Avro, ORC, or Parquet. |
Usage Notes
- As with other data files, Avro, ORC, and Parquet data loads generate logs that are viewable in the tables dba_load_operations and user_load_operations. Each load operation adds a record to dba[user]_load_operations that indicates the table containing the logs. The log table provides summary information about the load.
- For Avro, ORC, or Parquet, when the format parameter type is set to the value avro, orc, or parquet, the BADFILE_TABLE table is always empty.
  - For Parquet files, PRIMARY KEY constraint errors throw an ORA error.
  - If data for a column encounters a conversion error, for example, the target column is not large enough to hold the converted value, the value for the column is set to NULL. This does not produce a rejected record.
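For example, a minimal sketch of loading a Parquet file; the table name and file URI are hypothetical, and the target table must already exist with columns matching the Parquet schema:

BEGIN
  DBMS_CLOUD.COPY_DATA (
    table_name      => 'SALES_DATA',        -- hypothetical existing table
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales.parquet',  -- hypothetical URI
    format          => json_object('type' value 'parquet')  -- declare the source file format
  );
END;
/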
Parent topic: DBMS_CLOUD for Objects and Files
COPY_OBJECT Procedure
This procedure copies an object from one Cloud Object Storage bucket or folder to another.
The source and target bucket or folder can be in the same or a different Cloud Object Store provider.

When the source and target are in distinct Object Stores or have different accounts with the same cloud provider, you can give separate credential names for the source and target locations. By default, the source credential name is also used at the target location when a target credential name is not provided.
Syntax
DBMS_CLOUD.COPY_OBJECT
(
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_object_uri IN VARCHAR2,
target_object_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter | Description |
---|---|
source_credential_name | The name of the credential to access the source Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the source_credential_name when resource principal is enabled. If you do not supply a source_credential_name value, the default credential is used. |
source_object_uri | Specifies the URI that points to the source Object Storage bucket or folder location. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
target_object_uri | Specifies the URI for the target Object Store. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
target_credential_name | The name of the credential to access the target Cloud Object Storage location. If you do not supply a target_credential_name value, the source_credential_name is used at the target location. |
Example
BEGIN
DBMS_CLOUD.COPY_OBJECT
(
source_credential_name => 'OCI_CRED',
source_object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/bgfile.csv',
target_object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/myfile.csv'
);
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
CREATE_CLOUD_TABLE Procedure
This procedure creates a Cloud Table. All Cloud Table data is stored in Oracle managed Object Storage (Cloud Tables only store their metadata in the database).
Syntax
DBMS_CLOUD.CREATE_CLOUD_TABLE
(
table_name IN VARCHAR2,
column_list IN CLOB,
params IN CLOB);
Parameters
Parameter | Description |
---|---|
table_name | The name of the Cloud Table. |
column_list | Comma-delimited list of column names and data types for the Cloud Table. |
Usage Notes
- DEFAULT attributes: The column_list can include the DEFAULT clause, which functions like the DEFAULT clause in an ordinary CREATE TABLE. See CREATE TABLE for information on the behavior of the DEFAULT clause.
- Use DROP TABLE to drop a Cloud Table. Cloud Tables do not support the recycle bin. For example:

  DROP TABLE CLOUD_TAB1;
- You can grant SELECT, INSERT, and UPDATE privileges for a Cloud Table. No other privileges can be granted to a Cloud Table. See Configuring Privilege and Role Authorization for more information.
Examples
EXEC DBMS_CLOUD.CREATE_CLOUD_TABLE
( 'CLOUD_TAB1', 'I INTEGER, J INTEGER' );
BEGIN
DBMS_CLOUD.CREATE_CLOUD_TABLE
(
table_name => 'CLOUD_TABLE_WITH_DEFAULT',
column_list => 'I INTEGER,
A VARCHAR2(32) DEFAULT ''ABC''' );
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
CREATE_EXTERNAL_PART_TABLE Procedure
This procedure creates an external partitioned table on files in the Cloud, or from files in a directory. This allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
partitioning_clause IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN VARCHAR2,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name | The name of the external table. |
credential_name | The name of the credential to access the Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. |
partitioning_clause | Specifies the complete partitioning clause, including the location information for individual partitions. If you use the partitioning_clause parameter, then the file_uri_list parameter is not allowed. |
file_uri_list |
This parameter specifies either a comma-delimited list of source file URIs or one or more directories and source files. Cloud source file URIs You can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Regular expressions can only be used when the
The characters "*" and "?" are considered wildcard characters when the Regular expression patterns are only supported for the file name or subfolder path in your URIs and the pattern matching is identical to that performed by the This option is only supported with external tables that are created on a file in the Object Storage. For example:
If you use the parameter The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. See REGEXP_LIKE Condition for more information on |
column_list | Comma-delimited list of column names and data types for the external table. This parameter has requirements depending on the type of the data files specified with the file_uri_list parameter; see the usage notes for details. |
field_list | Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. |
format | The format option partition_columns specifies the column names and data types of partition columns when the partition columns are derived from the file path. For object names that are not based on hive format, the order of the partition_columns must match the order of the columns as they appear in the file name. To see all the format parameter options describing the format of the source files, see DBMS_CLOUD Package Format Options. |
Usage Notes
- You cannot call this procedure with both partitioning_clause and file_uri_list parameters.
- Specifying the column_list parameter is optional with structured data files, including Avro, Parquet, or ORC data files. If column_list is not specified, the format parameter partition_columns option must include both name and type.
- The column_list parameter is required with unstructured data files, such as CSV text files.
- The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned files in the supported cloud object storage services, including:
  - Oracle Cloud Infrastructure Object Storage
  - Azure Blob Storage or Azure Data Lake Storage
  - Amazon S3
  - Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage
  - GitHub Repository
  See DBMS_CLOUD URI Formats for more information.
- The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned files in directories, either in a local file system or in a network file system.
- When you call DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE with the file_uri_list parameter, the types for columns specified in the Cloud Object Store file name must be one of the following types:

  VARCHAR2(n) NUMBER(n) NUMBER(p,s) NUMBER DATE TIMESTAMP(9)

- The default record delimiter is detected newline. With detected newline, DBMS_CLOUD tries to automatically find the correct newline character to use as the record delimiter. DBMS_CLOUD first searches for the Windows newline character \r\n. If it finds the Windows newline character, this is used as the record delimiter for all files in the procedure. If a Windows newline character is not found, DBMS_CLOUD searches for the UNIX/Linux newline character \n, and if it finds one it uses \n as the record delimiter for all files in the procedure. If the source files use a combination of different record delimiters, you may encounter an error such as "KUP-04020: found record longer than buffer size supported". In this case, you need to either modify the source files to use the same record delimiter or only specify the source files that use the same record delimiter. See DBMS_CLOUD Package Format Options for information on the recorddelimiter format option.
- The external partitioned tables you create with DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE include two invisible columns file$path and file$name. These columns help identify which file a record is coming from, as shown in the query sketch after this list.
  - file$path: Specifies the file path text up to the beginning of the object name.
  - file$name: Specifies the object name, including all the text that follows the bucket name.
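For instance, a sketch that uses the invisible columns to see which source file each row came from, assuming the PET1 table created in the first example below:

SELECT col1, file$name
  FROM PET1
 WHERE col1 < 1000;  -- file$name shows the originating object for each row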
Examples
Example using the partitioning_clause
parameter:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name =>'PET1',
credential_name =>'OBJ_STORE_CRED',
format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),
column_list => 'col1 number, col2 number, col3 number',
partitioning_clause => 'partition by range (col1)
(partition p1 values less than (1000) location
( ''&base_URL/file_11.txt'')
,
partition p2 values less than (2000) location
( ''&base_URL/file_21.txt'')
,
partition p3 values less than (3000) location
( ''&base_URL/file_31.txt'')
)'
);
END;
/
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name => 'PET',
format => json_object('delimiter'value ','),
column_list => 'name varchar2(20), gender varchar2(10), salary number',
partitioning_clause => 'partition by range (salary)
( -- Use test1.csv in the DEFAULT DIRECTORY DATA_PUMP_DIR
partition p1 values less than (100) LOCATION (''test1.csv''),
-- Use test2.csv in a specified directory MY_DIR
partition p2 values less than (300) DEFAULT DIRECTORY MY_DIR LOCATION (''test2.csv'') )' );
END;
/
Example using the file_uri_list
and column_list
parameters with unstructured data files:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name => 'MYSALES',
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/*.csv',
column_list => 'product varchar2(100), units number, country varchar2(100), year number, month varchar2(2)',
field_list => 'product, units', --[Because country, year and month are not in the file, they are not listed in the field list]
format => '{"type":"csv", "partition_columns":["country","year","month"]}');
END;
/
Example using the file_uri_list
without the
column_list
parameter with structured data files:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name => 'MYSALES',
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/*.parquet',
format =>
json_object('type' value 'parquet', 'schema' value 'first',
'partition_columns' value
json_array(
json_object('name' value 'country', 'type' value 'varchar2(100)'),
json_object('name' value 'year', 'type' value 'number'),
json_object('name' value 'month', 'type' value 'varchar2(2)')
)
)
);
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
CREATE_EXTERNAL_TABLE Procedure
This procedure creates an external table on files in the Cloud or from files in a directory. This allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name | The name of the external table. |
credential_name | The name of the credential to access the Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. This parameter is not used when you specify a directory with file_uri_list. |
file_uri_list |
This parameter specifies either a comma-delimited list of source file URIs or one or more directories and source files. Cloud source file URIs You can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Regular expressions can only be used when the
The characters "*" and "?" are considered wildcard characters when the Regular expression patterns are only supported for the file name or subfolder path in your URIs and the pattern matching is identical to that performed by the This option is only supported with external tables that are created on a file in the Object Storage. For example:
The format of the URIs depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. See REGEXP_LIKE Condition for more information on You can specify one directory and one or more file names or
use a comma separated list of directories and file names. The format to specify a
directory is: Regular expressions are not supported when specifying the file names in a directory. You can only use wildcards to specify file names in a directory. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character. For example: To specify multiple directories, use a comma separated list of
directories: For example: Use double quotes to specify a case-sensitive directory name.
For example: To include a quote character, use two quotes. For
example: |
column_list | Comma-delimited list of column names and data types for the external table. |
field_list | Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. |
format |
The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options. For Avro, ORC, or Parquet format files, see CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files. |
Usage Notes
- The procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE supports external partitioned files in the supported cloud object storage services, including:
  - Oracle Cloud Infrastructure Object Storage
  - Azure Blob Storage or Azure Data Lake Storage
  - Amazon S3
  - Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage
  - GitHub Repository
  The credential is a table level property; therefore, the external files must be on the same object store. See DBMS_CLOUD URI Formats for more information.
- The default record delimiter is detected newline. With detected newline, DBMS_CLOUD tries to automatically find the correct newline character to use as the record delimiter. DBMS_CLOUD first searches for the Windows newline character \r\n. If it finds the Windows newline character, this is used as the record delimiter for all files in the procedure. If a Windows newline character is not found, DBMS_CLOUD searches for the UNIX/Linux newline character \n, and if it finds one it uses \n as the record delimiter for all files in the procedure. If the source files use a combination of different record delimiters, you may encounter an error such as "KUP-04020: found record longer than buffer size supported". In this case, you need to either modify the source files to use the same record delimiter or only specify the source files that use the same record delimiter. See DBMS_CLOUD Package Format Options for information on the recorddelimiter format option.
Example
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE
(
table_name =>'WEATHER_REPORT_DOUBLE_DATE',
credential_name =>'OBJ_STORE_CRED',
file_uri_list =>'&base_URL/Charlotte_NC_Weather_History_Double_Dates.csv',
format => json_object('type' value 'csv', 'skipheaders' value '1'),
field_list => 'REPORT_DATE DATE''mm/dd/yy'',
REPORT_DATE_COPY DATE ''yyyy-mm-dd'',
ACTUAL_MEAN_TEMP,
ACTUAL_MIN_TEMP,
ACTUAL_MAX_TEMP,
AVERAGE_MIN_TEMP,
AVERAGE_MAX_TEMP,
AVERAGE_PRECIPITATION',
column_list => 'REPORT_DATE DATE,
REPORT_DATE_COPY DATE,
ACTUAL_MEAN_TEMP NUMBER,
ACTUAL_MIN_TEMP NUMBER,
ACTUAL_MAX_TEMP NUMBER,
AVERAGE_MIN_TEMP NUMBER,
AVERAGE_MAX_TEMP NUMBER,
AVERAGE_PRECIPITATION NUMBER');
END;
/
SELECT * FROM WEATHER_REPORT_DOUBLE_DATE where
actual_mean_temp > 69 and actual_mean_temp < 74;
Parent topic: DBMS_CLOUD for Objects and Files
CREATE_EXTERNAL_TABLE Procedure for Apache Iceberg
This procedure creates external tables for Apache Iceberg tables in the supported configurations.
For a description of supported configurations, see About Querying Apache Iceberg Tables.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
column_list IN CLOB DEFAULT NULL,
field_list IN CLOB DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
Parameters
Parameter | Description |
---|---|
table_name | The name of the external table. |
credential_name |
The name of the credential used to access the data files, the metadata files and the Iceberg Catalog (if used). For AWS and OCI configurations, the credential should be created as described in CREATE_CREDENTIAL Procedure. AWS Amazon Resource Names (ARN) credentials are currently not supported. |
file_uri_list | Must be NULL if an Iceberg catalog is specified (see the format parameter below). If an Iceberg catalog is not used, then the file_uri_list must contain the URI to the Iceberg metadata file. |
column_list | Must be NULL, as the column names and types are automatically derived from Iceberg metadata. The column names match the names found in the underlying data files (Parquet, Avro, ORC). The Oracle data types are derived using the Parquet/Avro/ORC mappings between Iceberg and the Parquet, Avro, and ORC data types. Therefore users cannot specify the column_list. |
field_list |
Must be NULL, as column names and data types are automatically derived from the Iceberg metadata. |
format | The format parameter structure depends on the type of Iceberg table, as shown in the examples below. For examples and further information, see Iceberg Support on OCI Data Flow Samples and DBMS_CLOUD URI Formats. |
Example AWS Iceberg tables using an AWS Glue Catalog
The format
parameter when creating tables over an AWS Iceberg table
using an AWS Glue Catalog is as follows:
format => json_object('access_protocol' value
json_object('protocol_type' value 'iceberg',
'protocol_config' value
json_object('iceberg_catalog_type' value 'aws_glue',
'iceberg_glue_region' value 'glue region',
'iceberg_table_path' value 'database_name.table_name')));
The `access_protocol` parameter contains a JSON object with two elements as follows:

- `protocol_type`: Must be `'iceberg'`
- `protocol_config`: A nested JSON object specifying the Iceberg catalog details:
  - `iceberg_catalog_type`: Must be `'aws_glue'`
  - `iceberg_glue_region`: The catalog region, for example `'us-west-1'`
  - `iceberg_table_path`: A `glue database.glue table name` path.
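As an illustration, a complete call using the Glue catalog format above might look like the following sketch; the table name, credential name, region, and Glue path are placeholder assumptions:

BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
    table_name      => 'ICEBERG_SALES',   -- hypothetical external table name
    credential_name => 'AWS_CRED',        -- hypothetical AWS credential
    file_uri_list   => NULL,              -- must be NULL when an Iceberg catalog is used
    format          => json_object('access_protocol' value
                          json_object('protocol_type' value 'iceberg',
                                      'protocol_config' value
                                      json_object('iceberg_catalog_type' value 'aws_glue',
                                                  'iceberg_glue_region' value 'us-west-1',
                                                  'iceberg_table_path' value 'my_glue_db.sales')))
);
END;
/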
Example AWS Iceberg table using a metadata file URI
The `format` parameter when creating tables over an AWS Iceberg table using a metadata file URI is as follows:

format => json_object('access_protocol' value
    json_object('protocol_type' value 'iceberg'))
Example OCI Iceberg table using HadoopCatalog catalog
The `format` parameter when creating tables over an OCI Iceberg table created by OCI Data Flow using the HadoopCatalog catalog is as follows:

format => json_object('access_protocol' value
json_object('protocol_type' value 'iceberg',
'protocol_config' value
json_object('iceberg_catalog_type' value 'hadoop',
'iceberg_warehouse' value '<OCI folder URI>',
'iceberg_table_path' value 'database_name.table_name')));
The `access_protocol` parameter contains a JSON object with two elements as follows:

- `protocol_type`: Must be `'iceberg'`
- `protocol_config`: A nested JSON object specifying the Iceberg catalog details:
  - `iceberg_catalog_type`: Must be `'hadoop'`
  - `iceberg_warehouse`: The warehouse directory path used when generating the table, in native URI format.
  - `iceberg_table_path`: The `database_name.table name` path used when creating the table.
Example OCI Iceberg table using the URI of the metadata file
The `format` parameter when creating tables over an OCI Iceberg table using the URI of the metadata file is as follows:

format => json_object('access_protocol' value
    json_object('protocol_type' value 'iceberg'))
The `access_protocol` parameter contains a JSON object with one element as follows:

- `protocol_type`: Must be `'iceberg'`
Parent topic: DBMS_CLOUD for Objects and Files
CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files
This procedure with the `format` parameter `type` set to the value `avro`, `orc`, or `parquet` creates an external table with either Avro, ORC, or Parquet format files in the Cloud or in a directory.

This allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
`table_name` | The name of the external table. |
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. This parameter is not used when you specify a directory with `file_uri_list`. |
`file_uri_list` | This parameter specifies either a comma-delimited list of source file URIs or one or more directories and source files. Cloud source file URIs: you can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Regular expressions can only be used when the `regexuri` format option is set to `TRUE`; the characters "*" and "?" are considered wildcard characters when the `regexuri` format option is set to `FALSE`. Regular expression patterns are only supported for the file name or subfolder path in your URIs, and the pattern matching is identical to that performed by the `REGEXP_LIKE` function; this option is only supported with external tables that are created on a file in the Object Storage. The format of the URIs depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. See REGEXP_LIKE Condition for more information on `REGEXP_LIKE`. Directory: you can specify one directory and one or more file names, or use a comma separated list of directories and file names. The format to specify a directory is `'directory:directory_name'`. Regular expressions are not supported when specifying the file names in a directory; you can only use wildcards, where the character "*" matches multiple characters and the character "?" matches a single character. To specify multiple directories, use a comma separated list of directories. Use double quotes to specify a case-sensitive directory name. To include a quote character, use two quotes. |
`column_list` | (Optional) This field, when specified, overrides the `format` `schema` option, which specifies that the column names and data types are derived automatically from the file metadata. For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details. For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type Mapping for details. For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details. |
`field_list` | Ignored for Avro, ORC, or Parquet files. The fields in the source match the external table columns by name. Source data types are converted to the external table column data type. For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type Mapping. For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details. For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details. |
`format` | For Avro, ORC, or Parquet source files, see DBMS_CLOUD Package Format Options for Avro, ORC, or Parquet for details. |
Examples ORC
format => '{"type":"orc", "schema": "all"}'
format => json_object('type' value 'orc', 'schema' value 'first')
Examples Avro
format => '{"type":"avro", "schema": "all"}'
format => json_object('type' value 'avro', 'schema' value 'first')
Examples Parquet
format => '{"type":"parquet", "schema": "all"}'
format => json_object('type' value 'parquet', 'schema' value 'first')
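Putting it together, a complete call for a Parquet file might look like the following sketch; the table name, credential, and object URI are placeholder assumptions, and `schema` value `first` derives the column list from the first file:

BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
    table_name      => 'SALES_PARQUET',   -- hypothetical external table name
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales.parquet',
    format          => json_object('type' value 'parquet', 'schema' value 'first')
);
END;
/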
Avro, ORC, or Parquet Column Name Mapping to Oracle Column Names
See DBMS_CLOUD Package Avro, ORC, and Parquet to Oracle Column Name Mapping for information on column name mapping and column name conversion usage in Oracle SQL.
Parent topic: DBMS_CLOUD for Objects and Files
CREATE_EXTERNAL_TEXT_INDEX Procedure
This procedure creates a text index on Object Storage files.
The `CREATE_EXTERNAL_TEXT_INDEX` procedure creates a text index on the Object Storage files specified at the `location_uri` location. The index is refreshed at regular intervals to pick up any files that are added to or deleted from the location URI.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX
(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
index_name IN VARCHAR2,
format IN CLOB DEFAULT NULL
);
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage location. For public, pre-authenticated, or pre-signed bucket URIs, a NULL can be specified. See Configure Policies and Roles to Access Resources for more information. If you do not supply a `credential_name` value, the credential is set to `NULL`. |
`location_uri` | Specifies the Object Store bucket or folder URI. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`index_name` | Specifies the name of the index you are building on the files located at the `location_uri` location. This parameter is mandatory. |
`format` | Specifies additional configuration options. Options are specified as a JSON string. The supported format options include `refresh_rate`, which controls the interval in minutes at which the index is refreshed for file additions and deletions (the example below uses a value of 10), and a stop word list. You can specify stop words using the following methods: as a JSON array of stop words, or as the name of a table that holds the stop words. If you do not supply a `format` value, the default values are used. |
Example
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX
(
credential_name => 'DEFAULT_CREDENTIAL',
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/ts_data/',
index_name => 'EMP',
format => JSON_OBJECT ('refresh_rate' value 10)
);
END;
/
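If you also want to exclude common words from the index, a sketch using the stop word support described above; the inline word list is an assumption about your data, and the options are passed as a plain JSON string:

BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX (
    credential_name => 'DEFAULT_CREDENTIAL',
    location_uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/ts_data/',
    index_name      => 'EMP',
    format          => '{"refresh_rate": 10, "stop_words": ["the", "and", "of"]}'
);
END;
/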
Parent topic: DBMS_CLOUD for Objects and Files
CREATE_HYBRID_PART_TABLE Procedure
This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid partitioned data from Autonomous Database using database objects and files in the Cloud, or database objects and files in a directory.
Syntax
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
partitioning_clause IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
`table_name` | The name of the external table. |
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. |
`partitioning_clause` | Specifies the complete partitioning clause, including the location information for individual partitions. To use directories, the partitioning clause supports the `LOCATION` and `DEFAULT DIRECTORY` values. You can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Regular expressions can only be used when the `regexuri` format option is set to `TRUE`; the characters "*" and "?" are considered wildcard characters when the `regexuri` format option is set to `FALSE`. Regular expression patterns are only supported for the file name or subfolder path in your URIs, and the pattern matching is identical to that performed by the `REGEXP_LIKE` function. See REGEXP_LIKE Condition for more information on `REGEXP_LIKE`. |
`column_list` | Comma-delimited list of column names and data types for the external table. |
`field_list` | Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the `column_list` parameter. This argument's syntax is the same as the `field_list` clause in regular Oracle external tables. For more information, see Oracle Database Utilities. |
`format` | The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options. |
Usage Notes
- The procedure `DBMS_CLOUD.CREATE_HYBRID_PART_TABLE` supports external partitioned files in the supported cloud object storage services, including:
  - Oracle Cloud Infrastructure Object Storage
  - Azure Blob Storage or Azure Data Lake Storage
  - Amazon S3
  - Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage.
  - GitHub Repository

  The credential is a table level property; therefore, the external files must be on the same object store.

  See DBMS_CLOUD URI Formats for more information.
- The procedure `DBMS_CLOUD.CREATE_HYBRID_PART_TABLE` supports hybrid partitioned files in directories, either in a local file system or in a network file system.
- The external partitioned tables you create with `DBMS_CLOUD.CREATE_HYBRID_PART_TABLE` include two invisible columns `file$path` and `file$name`. These columns help identify which file a record is coming from.
  - `file$path`: Specifies the file path text up to the beginning of the object name.
  - `file$name`: Specifies the object name, including all the text that follows the bucket name.
Examples
BEGIN
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
(
table_name =>'HPT1',
credential_name =>'OBJ_STORE_CRED',
format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),
column_list => 'col1 number, col2 number, col3 number',
partitioning_clause => 'partition by range (col1)
(partition p1 values less than (1000) external location
( ''&base_URL/file_11.txt'')
,
partition p2 values less than (2000) external location
( ''&base_URL/file_21.txt'')
,
partition p3 values less than (3000)
)'
);
END;
/
BEGIN
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
(
table_name => 'HPT1',
format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline'),
column_list => 'NAME VARCHAR2(30), GENDER VARCHAR2(10), BALANCE number',
partitioning_clause => 'partition by range (BALANCE)
(partition p1 values less than (1000) external DEFAULT DIRECTORY DATA_PUMP_DIR LOCATION (''Scott_male_1000.csv''),
partition p2 values less than (2000) external DEFAULT DIRECTORY DATA_PUMP_DIR LOCATION (''Mary_female_3000.csv''),
partition p3 values less than (3000))' );
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
DELETE_ALL_OPERATIONS Procedure
This procedure clears either all data load operations logged in the
user_load_operations
table in your schema or clears all the data
load operations of the specified type, as indicated with the type
parameter.
Syntax
DBMS_CLOUD.DELETE_ALL_OPERATIONS
(
type IN VARCHAR DEFAULT NULL);
Parameters
Parameter | Description |
---|---|
`type` | Specifies the type of operation to delete. Type values can be found in the `TYPE` column of the `user_load_operations` table. If no `type` is specified, all entries are deleted. |
Usage Note
- `DBMS_CLOUD.DELETE_ALL_OPERATIONS` does not delete currently running operations (operations in a "Running" status).
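For example, the following sketch deletes all logged operations, or only those of one type; the `'COPY'` type value is an assumption to be checked against the `TYPE` column of your `user_load_operations` view:

BEGIN
DBMS_CLOUD.DELETE_ALL_OPERATIONS;                   -- delete all logged operations
DBMS_CLOUD.DELETE_ALL_OPERATIONS(type => 'COPY');   -- delete only operations of one type
END;
/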
Parent topic: DBMS_CLOUD for Objects and Files
DELETE_FILE Procedure
This procedure removes the specified file from the specified directory on Autonomous Database.
Syntax
DBMS_CLOUD.DELETE_FILE
(
directory_name IN VARCHAR2,
file_name IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE);
Parameters
Parameter | Description |
---|---|
`directory_name` | The name of the directory on the Autonomous Database instance. |
`file_name` | The name of the file to be removed. |
`force` | Ignore and do not report errors if the file does not exist. Valid values are: `TRUE` and `FALSE`. Default value: `FALSE`. |
To run
DBMS_CLOUD.DELETE_FILE
with a user other than ADMIN you need to grant write privileges on the directory that contains the file to that user. For example, run the following command as ADMIN to grant write privileges to adb_user
:GRANT WRITE ON DIRECTORY data_pump_dir TO adb_user;
Example
BEGIN
DBMS_CLOUD.DELETE_FILE
(
directory_name => 'DATA_PUMP_DIR',
file_name => 'exp1.dmp' );
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
DELETE_OBJECT Procedure
This procedure deletes the specified object on object store.
Syntax
DBMS_CLOUD.DELETE_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE);
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. |
`object_uri` | Object or file URI for the object to delete. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. |
`force` | Ignore and do not report errors if the object does not exist. Valid values are: `TRUE` and `FALSE`. Default value: `FALSE`. |
Example
BEGIN
DBMS_CLOUD.DELETE_OBJECT
(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp' );
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
DELETE_OPERATION Procedure
This
procedure clears the data load entries for the specified operation ID logged in the
user_load_operations
or dba_load_operations
tables in your schema.
Syntax
DBMS_CLOUD.DELETE_OPERATION
(
id IN NUMBER);
Parameters
Parameter | Description |
---|---|
`id` | Specifies the operation ID associated with the log file entries you want to delete. |
Example
SELECT id FROM user_load_operations WHERE type LIKE '%BAD%';
EXEC DBMS_CLOUD.DELETE_OPERATION(id);
Parent topic: DBMS_CLOUD for Objects and Files
DROP_EXTERNAL_TEXT_INDEX Procedure
This procedure drops a text index on the Object Storage files.
The DROP_EXTERNAL_TEXT_INDEX
procedure drops the specified index created with the CREATE_EXTERNAL_TEXT_INDEX
procedure.
Syntax
DBMS_CLOUD.DROP_EXTERNAL_TEXT_INDEX
(
index_name IN VARCHAR2
);
Parameters
Parameter | Description |
---|---|
index_name |
Specifies the name of the index you are dropping. The index name must match the name provided at the time of the index creation. This parameter is mandatory. |
Example
BEGIN
DBMS_CLOUD.DROP_EXTERNAL_TEXT_INDEX
(
index_name => 'EMP'
);
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
EXPORT_DATA Procedure
Based on the `format` `type` parameter, the procedure exports files to the Cloud or to a directory location as text files in CSV, JSON, Parquet, or XML format, or using the ORACLE_DATAPUMP access driver to write data to an Oracle Data Pump dump file.
Syntax
DBMS_CLOUD.EXPORT_DATA
(
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
format IN CLOB,
query IN CLOB);
DBMS_CLOUD.EXPORT_DATA
(
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB DEFAULT NULL,
format IN CLOB DEFAULT NULL,
query IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER);
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. When the credential parameter is not included, this specifies output to a directory. |
`file_uri_list` | There are different forms, depending on the value of the `format` parameter and depending on whether you include a credential parameter. The format of the URIs depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. |
`format` | A JSON string that provides export format options. The supported `type` option values are `csv`, `datapump`, `json`, `parquet`, and `xml`. See DBMS_CLOUD Package Format Options for EXPORT_DATA for details. |
`query` | Use this parameter to specify a `SELECT` statement so that only the required data is exported. For example: `SELECT warehouse_id, quantity FROM inventories`. When the `format` `type` value is `json`, you can produce one JSON document per row by wrapping the query, for example: `SELECT JSON_OBJECT(* RETURNING CLOB) FROM (SELECT warehouse_id, quantity FROM inventories)` |
`operation_id` | Use this parameter to track the progress and final status of the export operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
Usage Notes:
-
The
query
parameter value that you supply can be an advanced query, if required, such as a query that includes joins or subqueries. -
Depending on the format parameter specified,
DBMS_CLOUD.EXPORT_DATA
outputs the results of the specified query on the Cloud Object Store or to a directory location in one of these formats:-
CSV, JSON, Parquet, or XML files.
See Export Data to Object Store as Text and Export Data to a Directory for more information on using
DBMS_CLOUD.EXPORT_DATA
with CSV, JSON, Parquet, or XML output files. -
Using the ORACLE_DATAPUMP access driver to write data to a dump file.
-
-
For CSV, JSON, or XML output, by default when a generated file contains 10MB of data a new output file is created. However, if you have less than 10MB of result data you may have multiple output files, depending on the database service and the number of ECPUs (OCPUs if your database uses OCPUs) for the Autonomous Database instance.
See File Naming for Text Output (CSV, JSON, Parquet, or XML) for more information.
The default output file chunk size is 10MB for CSV, JSON, or XML. You can change this value with the
format
parametermaxfilesize
option. See DBMS_CLOUD Package Format Options for EXPORT_DATA for more information. -
For Parquet output, each generated file is less than 128MB and multiple output files may be generated. However, if you have less than 128MB of result data, you may have multiple output files depending on the database service and the number of ECPUs (OCPUs if your database uses OCPUs) for the Autonomous Database instance.
See File Naming for Text Output (CSV, JSON, Parquet, or XML) for more information.
Usage Notes for ORACLE_DATAPUMP Output (DBMS_CLOUD.EXPORT_DATA
with format
parameter type
option datapump
):
-
EXPORT_DATA
usesDATA_PUMP_DIR
as the default logging directory. So the write privilege onDATA_PUMP_DIR
is required when usingORACLE_DATAPUMP
output. -
Autonomous Database export using
DBMS_CLOUD.EXPORT_DATA
withformat
parametertype
optiondatapump
only supports Oracle Cloud Infrastructure Object Storage, Oracle Cloud Infrastructure Object Storage Classic object stores or directory output. -
When you specify
DBMS_CLOUD.EXPORT_DATA
with theformat
parametertype
optiondatapump
, thecredential_name
parameter value cannot be an OCI resource principal. -
Oracle Data Pump divides each dump file part into smaller chunks for faster uploads. The Oracle Cloud Infrastructure Object Storage console shows multiple files for each dump file part that you export. The size of the actual dump files will be displayed as zero (0) and its related file chunks as 10mb or less. For example:

exp01.dmp
exp01.dmp_aaaaaa
exp02.dmp
exp02.dmp_aaaaaa

Downloading the zero byte dump file from the Oracle Cloud Infrastructure console or using the Oracle Cloud Infrastructure CLI will not give you the full dump files. To download the full dump files from the Object Store, use a tool that supports Swift such as curl, and provide your user login and Swift auth token:

curl -O -v -X GET -u 'user1@example.com:auth_token' \
   https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/namespace-string/bucketname/exp01.dmp
If you import a file with the
DBMS_CLOUD
procedures that support theformat
parametertype
with the value 'datapump
', you only need to provide the primary file name. The procedures that support the 'datapump
' format type automatically discover and download the chunks.When you use
DBMS_CLOUD.DELETE_OBJECT
, the procedure automatically discovers and deletes the chunks when the procedure deletes the primary file. -
The
DBMS_CLOUD.EXPORT_DATA
procedure creates the dump file(s) from thefile_uri_list
values that you specify, as follows:-
As more files are needed, the procedure creates additional files from the
file_uri_list
. -
The procedure does not overwrite files. If a dump file in the
file_uri_list
exists,DBMS_CLOUD.EXPORT_DATA
reports an error. -
DBMS_CLOUD.EXPORT_DATA
does not create buckets.
-
-
The number of dump files that
DBMS_CLOUD.EXPORT_DATA
generates is determined when the procedure runs. The number of dump files that are generated depends on the number of file names you provide in thefile_uri_list
parameter, as well as on the number of Autonomous Database OCPUs available to the instance, the service level, and the size of the data.For example, if you use a 1 OCPU Autonomous Database instance or the
low
service, then a single dump file is exported with no parallelism, even if you provide multiple file names. If you use a 4 OCPU Autonomous Database instance with themedium
orhigh
service, then the jobs can run in parallel and multiple dump files are exported if you provide multiple file names. -
The dump files you create with
DBMS_CLOUD.EXPORT_DATA
cannot be imported using Oracle Data Pumpimpdp
. Depending on the database, you can use these files as follows:-
On an Autonomous Database, you can use the dump files with the
DBMS_CLOUD
procedures that support theformat
parametertype
with the value 'datapump
'. You can import the dump files usingDBMS_CLOUD.COPY_DATA
or you can callDBMS_CLOUD.CREATE_EXTERNAL_TABLE
to create an external table. -
On any other Oracle Database, such as Oracle Database 19c on-premise, you can import the dump files created with the procedure
DBMS_CLOUD.EXPORT_DATA
using theORACLE_DATAPUMP
access driver. See Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver for more information.
Usage Notes for DBMS_CLOUD.EXPORT_DATA
with Output to a Directory
-
The provided directory must exist and you must be logged in as the
ADMIN
user or haveWRITE
access to the directory. -
DBMS_CLOUD.EXPORT_DATA
does not create directories. -
The procedure does not overwrite files. For example, if a dump file in the
file_uri_list
exists,DBMS_CLOUD.EXPORT_DATA
reports an error such as:ORA-31641: unable to create dump file "/u02/exports/123.dmp" ORA-27038: created file already exists
Examples
The following example shows DBMS_CLOUD.EXPORT_DATA
with the format type
parameter with the value datapump
:
BEGIN
DBMS_CLOUD.EXPORT_DATA
(
credential_name =>'OBJ_STORE_CRED',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp',
format => json_object('type' value 'datapump', 'compression' value 'basic', 'version' value 'latest'),
query => 'SELECT warehouse_id, quantity FROM inventories'
);
END;
/
In this example, namespace-string
is the Oracle
Cloud Infrastructure object storage namespace and
bucketname
is the bucket
name. See Understanding
Object Storage Namespaces for more
information.
The following example shows DBMS_CLOUD.EXPORT_DATA
with the format type
parameter with the value json
:
BEGIN
DBMS_CLOUD.EXPORT_DATA
(
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.json',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'json', 'compression' value 'gzip')
);
END;
/
The following example shows DBMS_CLOUD.EXPORT_DATA
with the format type
parameter with the value xml
:
BEGIN
DBMS_CLOUD.EXPORT_DATA
(
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.xml',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'xml', 'compression' value 'gzip')
);
END;
/
The following example shows DBMS_CLOUD.EXPORT_DATA
with the format type
parameter with the value csv
:
BEGIN
DBMS_CLOUD.EXPORT_DATA
(
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp.csv',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'csv', 'delimiter' value '|', 'compression' value 'gzip', 'header' value true,
                      'encryption' value JSON_OBJECT('user_defined_function' value 'ADMIN.decryption_callback'))
);
END;
/
The following example shows DBMS_CLOUD.EXPORT_DATA
exporting data to a directory location with the type
parameter with the value datapump
:
BEGIN
DBMS_CLOUD.EXPORT_DATA
(
file_uri_list => 'export_dir:sales.dmp',
format => json_object('type' value 'datapump'),
query => 'SELECT * FROM sales'
);
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
GET_OBJECT Procedure and Function
This procedure is overloaded. The procedure form reads an object from
Cloud Object Storage and copies it to Autonomous Database. The function form reads an object from Cloud Object Storage and returns a
BLOB
to Autonomous Database.
Syntax
DBMS_CLOUD.GET_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2 DEFAULT NULL,
startoffset IN NUMBER DEFAULT 0,
endoffset IN NUMBER DEFAULT 0,
compression IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.GET_OBJECT
(
credential_name IN VARCHAR2 DEFAULT NULL,
object_uri IN VARCHAR2,
startoffset IN NUMBER DEFAULT 0,
endoffset IN NUMBER DEFAULT 0,
compression IN VARCHAR2 DEFAULT NULL)
RETURN BLOB;
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. |
`object_uri` | Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. |
`directory_name` | The name of the directory on the database. Foot 1 |
`file_name` | Specifies the name of the file to create. If the file name is not specified, the file name is taken from after the last slash in the `object_uri` parameter. |
`startoffset` | The offset, in bytes, from where the procedure starts reading. |
`endoffset` | The offset, in bytes, until where the procedure stops reading. |
`compression` | Specifies the compression used to store the object. Default value: `NULL`. |
Footnote 1
To run DBMS_CLOUD.GET_OBJECT
with
a user other than ADMIN you need to grant WRITE
privileges on the directory to that user. For example,
run the following command as ADMIN to grant write
privileges to adb_user
:
GRANT WRITE ON DIRECTORY data_pump_dir TO adb_user;
Return Values
The function form reads from Object Store and DBMS_CLOUD.GET_OBJECT
returns a
BLOB
.
Examples
BEGIN
DBMS_CLOUD.GET_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file.txt',
directory_name => 'DATA_PUMP_DIR');
END;
/
To read character data from a file in Object Store:
SELECT to_clob(
DBMS_CLOUD.GET_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file.txt'))
FROM DUAL;
To add an image stored on Object Store in a BLOB
in the
database:
DECLARE
l_blob BLOB := NULL;
BEGIN
l_blob := DBMS_CLOUD.GET_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/MyImage.gif' );
END;
/
In this example, namespace-string
is the Oracle
Cloud Infrastructure object storage namespace and
bucketname
is the bucket
name. See Understanding
Object Storage Namespaces for more
information.
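The `startoffset` and `endoffset` parameters let you read only part of an object. A minimal sketch that fetches the first kilobyte of the file used in the examples above (the byte range is illustrative):

DECLARE
l_chunk BLOB;
BEGIN
l_chunk := DBMS_CLOUD.GET_OBJECT (
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file.txt',
    startoffset     => 0,
    endoffset       => 1024 );
END;
/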
Parent topic: DBMS_CLOUD for Objects and Files
LIST_FILES Function
This function lists the files in the specified directory. The results include the file names and additional metadata about the files such as file size in bytes, creation timestamp, and the last modification timestamp.
Syntax
DBMS_CLOUD.LIST_FILES
(
directory_name IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter | Description |
---|---|
`directory_name` | The name of the directory on the database. |
Usage Notes
- To run `DBMS_CLOUD.LIST_FILES` with a user other than ADMIN you need to grant read privileges on the directory to that user. For example, run the following command as ADMIN to grant read privileges to `adb_user`:

  GRANT READ ON DIRECTORY data_pump_dir TO adb_user;
- This is a pipelined table function with return type as `DBMS_CLOUD_TYPES.list_object_ret_t`.
- `DBMS_CLOUD.LIST_FILES` does not obtain the checksum value and returns `NULL` for this field.
Example
This is a pipelined function that returns a row for each file. For example, use the following query to use this function:
SELECT * FROM DBMS_CLOUD.LIST_FILES
('DATA_PUMP_DIR');
OBJECT_NAME BYTES CHECKSUM CREATED LAST_MODIFIED
------------ ---------- ---------- --------------------- ---------------------
cwallet.sso 2965 2018-12-12T18:10:47Z 2019-11-23T06:36:54Z
Parent topic: DBMS_CLOUD for Objects and Files
LIST_OBJECTS Function
This function lists objects in the specified location on object store. The results include the object names and additional metadata about the objects such as size, checksum, creation timestamp, and the last modification timestamp.
Syntax
DBMS_CLOUD.LIST_OBJECTS
(
credential_name IN VARCHAR2,
location_uri IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. |
`location_uri` | Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. |
Usage Notes
- Depending on the capabilities of the object store, `DBMS_CLOUD.LIST_OBJECTS` does not return values for certain attributes, and the return value for the field is `NULL` in this case.

  All supported Object Stores return values for the `OBJECT_NAME`, `BYTES`, and `CHECKSUM` fields. The following table shows support for the fields `CREATED` and `LAST_MODIFIED` by Object Store:

  Object Store | CREATED | LAST_MODIFIED |
  ---|---|---|
  Oracle Cloud Infrastructure Native | Returns timestamp | Returns timestamp |
  Oracle Cloud Infrastructure Swift | Returns NULL | Returns timestamp |
  Oracle Cloud Infrastructure Classic | Returns NULL | Returns timestamp |
  Amazon S3 | Returns NULL | Returns timestamp |
  Amazon S3-Compatible | Returns NULL | Returns timestamp |
  Azure | Returns timestamp | Returns timestamp |
  GitHub Repository | | |
- The checksum value is the MD5 checksum. This is a 32-character hexadecimal number that is computed on the object contents. It is expected to have a different checksum value if the `OCI$RESOURCE_PRINCIPAL` credential is used.
- This is a pipelined table function with return type as `DBMS_CLOUD_TYPES.list_object_ret_t`.
Example
This is a pipelined function that returns a row for each object. For example, use the following query to use this function:
SELECT * FROM DBMS_CLOUD.LIST_OBJECTS
('OBJ_STORE_CRED',
'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/');
OBJECT_NAME BYTES CHECKSUM CREATED LAST_MODIFIED
------------ ---------- -------------------------------- --------------------- --------------------
cwallet.sso 2965 2339a2731ba24a837b26d344d643dc07 2019-11-23T06:36:54Z
In this example, namespace-string
is the Oracle
Cloud Infrastructure object storage namespace and
bucketname
is the bucket
name. See Understanding
Object Storage Namespaces for more
information.
Parent topic: DBMS_CLOUD for Objects and Files
MOVE_OBJECT Procedure
This procedure moves an object from one Cloud Object Storage bucket or folder to another.
The source and target bucket or folder can be in the same or different Cloud Object store provider.
When the source and target are in distinct Object Stores or have different accounts with the same cloud provider, you can give separate credential names for the source and target locations.
The source credential name is by default also used by the target location when target credential name is not provided.
Syntax
DBMS_CLOUD.MOVE_OBJECT
(
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_object_uri IN VARCHAR2,
target_object_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter | Description |
---|---|
`source_credential_name` | The name of the credential to access the source Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `source_credential_name` value, the credential is set to `NULL`. |
`source_object_uri` | Specifies the URI that points to the source Object Storage bucket or folder location. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`target_object_uri` | Specifies the URI for the target Object Storage bucket or folder, where the files need to be moved. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`target_credential_name` | The name of the credential to access the target Cloud Object Storage location. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `target_credential_name` value, the `source_credential_name` is also used for the target location. |
Example
BEGIN
DBMS_CLOUD.MOVE_OBJECT
(
source_credential_name => 'OCI_CRED',
source_object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/bgfile.csv',
target_object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/myfile.csv'
);
END;
/
Parent topic: DBMS_CLOUD for Objects and Files
PUT_OBJECT Procedure
This
procedure is overloaded. In one form the procedure copies a file from Autonomous Database to the Cloud Object Storage.
In another form the procedure copies a BLOB
from Autonomous Database to the Cloud Object
Storage.
Syntax
DBMS_CLOUD.PUT_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2,
compression IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.PUT_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
contents IN BLOB,
compression IN VARCHAR2 DEFAULT NULL);
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. |
`object_uri` | Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD URI Formats. |
`directory_name` | The name of the directory on the Autonomous Database. Foot 2 |
`contents` | Specifies the `BLOB` data to copy from Autonomous Database to the Cloud Object Storage. |
`file_name` | The name of the file in the specified directory. |
`compression` | Specifies the compression used to store the object. Default value: `NULL`. |
Footnote 2
To run DBMS_CLOUD.PUT_OBJECT
with
a user other than ADMIN you need to grant read privileges on the
directory to that user. For example, run the following
command as ADMIN to
grant read privileges to adb_user
:
GRANT READ ON DIRECTORY data_pump_dir TO adb_user;
Example
To handle BLOB
data after in-database processing and then store the
data directly into a file in the object store:
DECLARE
my_blob_data BLOB;
BEGIN
/* Some processing producing BLOB data and populating my_blob_data */
DBMS_CLOUD.PUT_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/my_new_file',
contents => my_blob_data);
END;
/
Usage Notes
Depending on your Cloud Object Storage, the size of the object you transfer is limited as follows:
Cloud Object Storage Service | Object Transfer Size Limit |
---|---|
Oracle Cloud Infrastructure Object Storage | 50 GB |
Amazon S3 | 5 GB |
Azure Blob Storage or Azure Data Lake Storage | 256 MB |
Amazon S3-Compatible | Set by the object store provider. For more information, refer to the provider's documentation. |
Oracle Cloud
Infrastructure object store does not allow writing files into a public bucket without supplying
credentials (Oracle Cloud
Infrastructure allows users to download objects from public buckets). Thus, you must supply a
credential name with valid credentials to store an object in an Oracle Cloud
Infrastructure public bucket using PUT_OBJECT
.
See DBMS_CLOUD URI Formats for more information.
Parent topic: DBMS_CLOUD for Objects and Files
SYNC_EXTERNAL_PART_TABLE Procedure
This procedure simplifies updating an external partitioned table from files in the Cloud. Run this procedure whenever new partitions are added or when partitions are removed from the Object Store source for the external partitioned table.
Syntax
DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
schema_name IN VARCHAR2 DEFAULT,
update_columns IN BOOLEAN DEFAULT);
Parameters
Parameter | Description |
---|---|
`table_name` | The name of the target table. The target table needs to be created before you run `DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE`. |
`schema_name` | The name of the schema where the target table resides. The default value is NULL meaning the target table is in the same schema as the user running the procedure. |
`update_columns` | The new files may introduce a change to the schema. Updates supported include: new columns and deleted columns. Updates to existing columns, for example a change in the data type, throw errors. Default value: `FALSE` |
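A minimal sketch of a call, assuming an external partitioned table named MYPART that was created earlier over Object Store files:

BEGIN
DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE (
    table_name => 'MYPART' );
END;
/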
Parent topic: DBMS_CLOUD for Objects and Files
VALIDATE_EXTERNAL_PART_TABLE Procedure
This procedure validates the source files for an external partitioned
table, generates log information, and stores the rows that do not match the format options
specified for the external table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
operation_id OUT NUMBER,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
Parameters
Parameter | Description |
---|---|
`table_name` | The name of the external table. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
`partition_name` | If defined, then only a specific partition is validated. If not specified, then read all partitions sequentially until `rowcount` is reached. |
`subpartition_name` | If defined, then only a specific subpartition is validated. If not specified, then read from all external partitions or subpartitions sequentially until `rowcount` is reached. |
`schema_name` | The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure. |
`rowcount` | Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned. |
`partition_key_validation` | For internal use only. Do not use this parameter. |
`stop_on_error` | Determines if the validate should stop when a row is rejected. The default value is `TRUE` meaning the validate stops at the first rejected row. |
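A minimal sketch of a call, assuming an external partitioned table MYPART with a partition P1; only the first 100 rows of that partition are validated:

BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
    table_name     => 'MYPART',
    partition_name => 'P1',
    rowcount       => 100 );
END;
/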
Parent topic: DBMS_CLOUD for Objects and Files
VALIDATE_EXTERNAL_TABLE Procedure
This procedure validates the source files for an external table,
generates log information, and stores the rows that do not match the format options
specified for the external table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
operation_id OUT NOCOPY NUMBER,
schema_name IN VARCHAR2 DEFAULT NULL,
rowcount IN NUMBER DEFAULT 0,
stop_on_error IN BOOLEAN DEFAULT TRUE);
Parameters
Parameter | Description |
---|---|
`table_name` | The name of the external table. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
`schema_name` | The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure. |
`rowcount` | Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned. |
`stop_on_error` | Determines if the validate should stop when a row is rejected. The default value is `TRUE` meaning the validate stops at the first rejected row. If the external table refers to Avro, ORC, or Parquet files, then the validate stops at the first rejected row. |
Usage Notes
- `DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE` works with both partitioned external tables and hybrid partitioned tables. This potentially reads data from all external partitions until `rowcount` is reached or `stop_on_error` applies. You do not have control over which partition, or parts of a partition, is read in which order.
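A minimal sketch of a call, reusing the WEATHER_REPORT_DOUBLE_DATE table from the CREATE_EXTERNAL_TABLE example; scanning is capped at 100 rows and continues past rejected rows:

BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
    table_name    => 'WEATHER_REPORT_DOUBLE_DATE',
    rowcount      => 100,
    stop_on_error => FALSE );
END;
/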
Parent topic: DBMS_CLOUD for Objects and Files
VALIDATE_HYBRID_PART_TABLE Procedure
This procedure validates the source files for a hybrid partitioned
table, generates log information, and stores the rows that do not match the format options
specified for the hybrid table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE
(
table_name IN VARCHAR2,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE
(
table_name IN VARCHAR2,
operation_id OUT NUMBER,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
Parameters
Parameter | Description |
---|---|
`table_name` | The name of the external table. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
`partition_name` | If defined, then only a specific partition is validated. If not specified, then read from all external partitions sequentially until `rowcount` is reached. |
`subpartition_name` | If defined, then only a specific subpartition is validated. If not specified, then read from all external partitions or subpartitions sequentially until `rowcount` is reached. |
`schema_name` | The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure. |
`rowcount` | Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned. |
`partition_key_validation` | For internal use only. Do not use this parameter. |
`stop_on_error` | Determines if the validate should stop when a row is rejected. The default value is `TRUE` meaning the validate stops at the first rejected row. |
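A minimal sketch of a call, reusing the HPT1 hybrid partitioned table from the CREATE_HYBRID_PART_TABLE examples:

BEGIN
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (
    table_name => 'HPT1',
    rowcount   => 100 );
END;
/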
Parent topic: DBMS_CLOUD for Objects and Files
DBMS_CLOUD for Bulk File Management
The subprograms for bulk file operations within the DBMS_CLOUD package.
Subprogram | Description |
---|---|
BULK_COPY Procedure | This procedure copies files from one Cloud Object Storage bucket to another. |
BULK_DELETE Procedure | This procedure deletes files from a Cloud Object Storage bucket or folder. |
BULK_DOWNLOAD Procedure | This procedure downloads files from the Cloud Object Store bucket to a directory in Autonomous Database. |
BULK_MOVE Procedure | This procedure moves files from one Cloud Object Storage bucket to another. |
BULK_UPLOAD Procedure | This procedure uploads files from a directory in Autonomous Database to the Cloud Object Storage. |
- BULK_COPY Procedure
This procedure bulk copies files from one Cloud Object Storage bucket to another. The overloaded form enables you to use theoperation_id
parameter. - BULK_DELETE Procedure
This procedure bulk deletes files from the Cloud Object Storage. The overloaded form enables you to use theoperation_id
parameter. You can filter the list of files to be deleted using a regular expression pattern compatible withREGEXP_LIKE
operator. - BULK_DOWNLOAD Procedure
This procedure downloads files into an Autonomous Database directory from Cloud Object Storage. The overloaded form enables you to use theoperation_id
parameter. You can filter the list of files to be downloaded using a regular expression pattern compatible withREGEXP_LIKE
operator. - BULK_MOVE Procedure
This procedure bulk moves files from one Cloud Object Storage bucket or folder to another. The overloaded form enables you to use theoperation_id
parameter. - BULK_UPLOAD Procedure
This procedure copies files into Cloud Object Storage from an Autonomous Database directory. The overloaded form enables you to use theoperation_id
parameter.
Parent topic: DBMS_CLOUD Subprograms and REST APIs
BULK_COPY Procedure
This
procedure bulk copies files from one Cloud Object Storage bucket to another. The overloaded
form enables you to use the operation_id
parameter.
You can filter the list of files to be copied using a regular expression pattern compatible with the `REGEXP_LIKE` operator.
The source and target bucket or folder can be in the same or different Cloud Object store provider.
When the source and target are in distinct Object Stores or have different accounts with the same cloud provider, you can give separate credential names for the source and target locations.
The source credential name is by default also used by the target location.
Syntax
DBMS_CLOUD.BULK_COPY
(
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_COPY
(
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter | Description |
---|---|
`source_credential_name` | The name of the credential to access the source Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `source_credential_name` value, the credential is set to `NULL`. |
`source_location_uri` | Specifies the URI that points to the source Object Storage bucket or folder location. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`target_location_uri` | Specifies the URI for the target Object Storage bucket or folder, where the files need to be copied. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`target_credential_name` | The name of the credential to access the target Cloud Object Storage location. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `target_credential_name` value, the `source_credential_name` is also used for the target location. |
`regex_filter` | Specifies the REGEX expression to filter files. The REGEX expression pattern must be compatible with the `REGEXP_LIKE` operator. If you do not supply a `regex_filter` value, all files in the source location are selected. See REGEXP_LIKE Condition for more information. |
`format` | Specifies the additional configuration options for the file operation. These options are specified as a JSON string. The supported format options include `logretention`, which controls how many days the operation's status table is retained, and `logprefix`, which sets the prefix of the operation's status table name; both appear in the example below. If you do not supply a `format` value, the default values are used. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
Usage Notes
-
An error is returned when the source and target URI point to the same Object Storage bucket or folder.
Example
BEGIN
DBMS_CLOUD.BULK_COPY
(
source_credential_name => 'OCI_CRED',
source_location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/o',
target_location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value 7, 'logprefix' value 'BULKOP')
);
END;
/
Parent topic: DBMS_CLOUD for Bulk File Management
BULK_DELETE Procedure
This
procedure bulk deletes files from the Cloud Object Storage. The overloaded form enables you
to use the operation_id
parameter. You can filter the list of files to be
deleted using a regular expression pattern compatible with REGEXP_LIKE
operator.
Syntax
DBMS_CLOUD.BULK_DELETE
(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_DELETE
(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `credential_name` value, the credential is set to `NULL`. |
`location_uri` | Specifies the URI that points to an Object Storage location. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`regex_filter` | Specifies the REGEX expression to filter files. The REGEX expression pattern must be compatible with the `REGEXP_LIKE` operator. If you do not supply a `regex_filter` value, all files in the location are selected. See REGEXP_LIKE Condition for more information. |
`format` | Specifies the additional configuration options for the file operation. These options are specified as a JSON string. The supported format options include `logretention`, which controls how many days the operation's status table is retained, and `logprefix`, which sets the prefix of the operation's status table name; both appear in the example below. If you do not supply a `format` value, the default values are used. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
Example
BEGIN
DBMS_CLOUD.BULK_DELETE
(
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
format => JSON_OBJECT ('logretention' value 5, 'logprefix' value 'BULKDEL')
);
END;
/
Parent topic: DBMS_CLOUD for Bulk File Management
BULK_DOWNLOAD Procedure
This
procedure downloads files into an Autonomous Database directory from Cloud Object Storage.
The overloaded form enables you to use the operation_id
parameter. You can
filter the list of files to be downloaded using a regular expression pattern compatible with
REGEXP_LIKE
operator.
Syntax
DBMS_CLOUD.BULK_DOWNLOAD
(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_DOWNLOAD
(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `credential_name` value, the credential is set to `NULL`. |
`location_uri` | Specifies the URI that points to the source Object Storage location. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`directory_name` | The name of the directory on the Autonomous Database where you want to download the files. This parameter is mandatory. |
`regex_filter` | Specifies the REGEX expression to filter files. The REGEX expression pattern must be compatible with the `REGEXP_LIKE` operator. If you do not supply a `regex_filter` value, all files in the location are selected. See REGEXP_LIKE Condition for more information. |
`format` | Specifies the additional configuration options for the file operation. These options are specified as a JSON string. The supported format options include `logretention`, which controls how many days the operation's status table is retained, and `logprefix`, which sets the prefix of the operation's status table name; both appear in the example below. If you do not supply a `format` value, the default values are used. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
Example
BEGIN
DBMS_CLOUD.BULK_DOWNLOAD
(
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
directory_name => 'BULK_TEST',
format => JSON_OBJECT ('logretention' value 7, 'logprefix' value 'BULKOP')
);
END;
/
Parent topic: DBMS_CLOUD for Bulk File Management
BULK_MOVE Procedure
This
procedure bulk moves files from one Cloud Object Storage bucket or folder to another. The
overloaded form enables you to use the operation_id
parameter.
You can filter the list of files to be moved using a regular expression pattern compatible with the `REGEXP_LIKE` operator.
The source and target bucket or folder can be in the same or different Cloud Object store provider.
When the source and target are in distinct Object Stores or have different accounts with the same cloud provider, you can give separate credential names for the source and target locations.
The source credential name is by default also used by the target location when target credential name is not provided.
Moving files is a two-step operation: they are first copied to the target location and then the source files are deleted once the copy succeeds. If the Object Store allows renaming operations between the source and target locations, the object is renamed rather than copied and deleted.
Syntax
DBMS_CLOUD.BULK_MOVE
(
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_MOVE
(
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter | Description |
---|---|
`source_credential_name` | The name of the credential to access the source Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `source_credential_name` value, the credential is set to `NULL`. |
`source_location_uri` | Specifies the URI that points to the source Object Storage bucket or folder location. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`target_location_uri` | Specifies the URI for the target Object Storage bucket or folder, where the files need to be moved. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`target_credential_name` | The name of the credential to access the target Cloud Object Storage location. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `target_credential_name` value, the `source_credential_name` is also used for the target location. |
`regex_filter` | Specifies the REGEX expression to filter files. The REGEX expression pattern must be compatible with the `REGEXP_LIKE` operator. If you do not supply a `regex_filter` value, all files in the source location are selected. See REGEXP_LIKE Condition for more information. |
`format` | Specifies the additional configuration options for the file operation. These options are specified as a JSON string. The supported format options include `logretention`, which controls how many days the operation's status table is retained, and `logprefix`, which sets the prefix of the operation's status table name; both appear in the example below. If you do not supply a `format` value, the default values are used. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
Example
BEGIN
DBMS_CLOUD.BULK_MOVE
(
source_credential_name => 'OCI_CRED',
source_location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/o',
target_location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value 7, 'logprefix' value 'BULKMOVE')
);
END;
/
An error is returned when the source and target URI point to the same Object Storage bucket or folder.
Parent topic: DBMS_CLOUD for Bulk File Management
BULK_UPLOAD Procedure
This
procedure copies files into Cloud Object Storage from an Autonomous Database directory. The
overloaded form enables you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.BULK_UPLOAD
(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_UPLOAD
(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter | Description |
---|---|
`credential_name` | The name of the credential to access the Cloud Object Storage. You can use `'OCI$RESOURCE_PRINCIPAL'` as the `credential_name` when resource principal is enabled. If you do not supply a `credential_name` value, the credential is set to `NULL`. |
`location_uri` | Specifies the URI that points to the Object Storage location where the files are uploaded. This parameter is mandatory. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD URI Formats for more information. |
`directory_name` | The name of the directory on the Autonomous Database from where you upload the files. This parameter is mandatory. |
`regex_filter` | Specifies the REGEX expression to filter files. The REGEX expression pattern must be compatible with the `REGEXP_LIKE` operator. If you do not supply a `regex_filter` value, all files in the directory are selected. See REGEXP_LIKE Condition for more information. |
`format` | Specifies the additional configuration options for the file operation. These options are specified as a JSON string. The supported format options include `logretention`, which controls how many days the operation's status table is retained, and `logprefix`, which sets the prefix of the operation's status table name; both appear in the example below. If you do not supply a `format` value, the default values are used. |
`operation_id` | Use this parameter to track the progress and final status of the load operation as the corresponding ID in the `USER_LOAD_OPERATIONS` view. |
Example
BEGIN
DBMS_CLOUD.BULK_UPLOAD
(
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
directory_name => 'BULK_TEST',
format => JSON_OBJECT ('logretention' value 5, 'logprefix' value 'BULKUPLOAD')
);
END;
/
Parent topic: DBMS_CLOUD for Bulk File Management
DBMS_CLOUD REST APIs
This section
covers the DBMS_CLOUD
REST APIs provided
with Autonomous Database.
REST API | Description |
---|---|
This function returns the HTTP response headers as JSON data in a JSON object in Autonomous Database. | |
This function returns the HTTP response in RAW format Autonomous Database. This is useful if the HTTP response is expected to be binary format. | |
This function returns the HTTP response status code as an integer in Autonomous Database. The status code helps to identify if the request is successful. | |
This function returns the HTTP response in TEXT format (VARCHAR2 or CLOB) in Autonomous Database. Most Cloud REST APIs return a JSON response in text format. This function is useful if you expect the HTTP response to be in text format.
|
|
This function returns the configured result cache size. |
|
This function begins an HTTP request, gets the response, and ends the response in Autonomous Database. This function provides a workflow for sending a Cloud REST API request with arguments and returns a response code and payload. | |
This procedure sets the maximum cache size for the current session. |
- DBMS_CLOUD REST API Overview
When you use PL/SQL in your application and you need to call Cloud REST APIs, you can use DBMS_CLOUD.SEND_REQUEST to send the REST API requests.
- DBMS_CLOUD REST API Constants
Describes the DBMS_CLOUD constants for making HTTP requests using DBMS_CLOUD.SEND_REQUEST.
- DBMS_CLOUD REST API Results Cache
You can save DBMS_CLOUD REST API results when you set the cache parameter to true with DBMS_CLOUD.SEND_REQUEST. The SESSION_CLOUD_API_RESULTS view describes the columns you can use when REST API results are saved.
- GET_RESPONSE_HEADERS Function
This function returns the HTTP response headers as JSON data in a JSON object.
- GET_RESPONSE_RAW Function
This function returns the HTTP response in RAW format. This is useful if the HTTP response is expected to be in binary format.
- GET_RESPONSE_STATUS_CODE Function
This function returns the HTTP response status code as an integer. The status code helps you determine whether the request succeeded.
- GET_RESPONSE_TEXT Function
This function returns the HTTP response in TEXT format (VARCHAR2 or CLOB). Most Cloud REST APIs return a JSON response in text format. This function is useful if you expect the HTTP response to be in text format.
- GET_API_RESULT_CACHE_SIZE Function
This function returns the configured result cache size. The cache size value only applies for the current session.
- SEND_REQUEST Function and Procedure
This function and procedure begin an HTTP request, get the response, and end the response. The function provides a workflow for sending a cloud REST API request with arguments and returns a response code and payload. If you use the procedure, you can view results and response details from the saved results with the SESSION_CLOUD_API_RESULTS view.
- SET_API_RESULT_CACHE_SIZE Procedure
This procedure sets the maximum cache size for the current session. The cache size value only applies for the current session.
- DBMS_CLOUD REST API Examples
Shows examples using DBMS_CLOUD.SEND_REQUEST to create and delete an Oracle Cloud Infrastructure Object Storage bucket, and an example to list all compartments in the tenancy.
Parent topic: DBMS_CLOUD Subprograms and REST APIs
DBMS_CLOUD REST API Overview
When you use PL/SQL in your application and you need to call Cloud REST APIs, you can use DBMS_CLOUD.SEND_REQUEST to send the REST API requests.
The DBMS_CLOUD
REST API functions allow you to make HTTP
requests using DBMS_CLOUD.SEND_REQUEST
and obtain and save results. These functions provide a generic API that lets you call any REST API with the following supported cloud services (a minimal request skeleton follows the list):
- Oracle Cloud
Infrastructure
See API Reference and Endpoints for information on Oracle Cloud Infrastructure REST APIs.
- Amazon Web Services (AWS)
See Guides and API References for information on Amazon Web Services REST APIs.
- Azure Cloud (see Footnote 3)
See Azure REST API Reference for information on Azure REST APIs.
- Oracle Cloud Infrastructure
Classic
See All REST Endpoints for information on Oracle Cloud Infrastructure Classic REST APIs.
- GitHub Repository
See GitHub REST API for more information.
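The following minimal skeleton shows the basic shape of a call; the credential name and URI are placeholders, and complete working examples appear in DBMS_CLOUD REST API Examples:
SET SERVEROUTPUT ON
DECLARE
    resp DBMS_CLOUD_TYPES.resp;
BEGIN
    -- Send a GET request to a supported cloud REST endpoint
    resp := DBMS_CLOUD.SEND_REQUEST(
        credential_name => 'MY_CRED',                    -- placeholder credential
        uri             => 'https://your-endpoint/path', -- placeholder URI
        method          => DBMS_CLOUD.METHOD_GET);
    -- Inspect the response
    DBMS_OUTPUT.PUT_LINE('Status: ' || DBMS_CLOUD.GET_RESPONSE_STATUS_CODE(resp));
END;
/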
Parent topic: DBMS_CLOUD REST APIs
DBMS_CLOUD REST API Constants
Describes the DBMS_CLOUD
constants for making HTTP
requests using
DBMS_CLOUD.SEND_REQUEST
.
DBMS_CLOUD supports GET, PUT, POST, HEAD and DELETE HTTP methods. The REST API method to be used for an HTTP request is typically documented in the Cloud REST API documentation.
Name | Type | Value |
---|---|---|
METHOD_DELETE |
VARCHAR2(6) |
'DELETE' |
METHOD_GET |
VARCHAR2(3) |
'GET' |
METHOD_HEAD |
VARCHAR2(4) |
'HEAD' |
METHOD_POST |
VARCHAR2(4) |
'POST' |
METHOD_PUT |
VARCHAR2(3) |
'PUT' |
Parent topic: DBMS_CLOUD REST APIs
DBMS_CLOUD REST API Results Cache
You can save DBMS_CLOUD
REST API results when you set the cache
parameter to true with DBMS_CLOUD.SEND_REQUEST
. The SESSION_CLOUD_API_RESULTS
view describes the columns you can use when REST API results are saved.
By default, DBMS_CLOUD
REST API calls do not save results for your session. In this case you use the DBMS_CLOUD.SEND_REQUEST
function to return
results.
When you use DBMS_CLOUD.SEND_REQUEST
and set the cache
parameter to TRUE
, results are saved and you can view past results in the
SESSION_CLOUD_API_RESULTS
view. Saving and querying historical results of
DBMS_CLOUD
REST API requests can help you when you need to work with your
previous results in your applications.
For example, to query recent DBMS_CLOUD
REST API results, use the view
SESSION_CLOUD_API_RESULTS
:
SELECT timestamp FROM SESSION_CLOUD_API_RESULTS;
When you save DBMS_CLOUD
REST API results with DBMS_CLOUD.SEND_REQUEST
the saved data is
only available within the same session (connection). After the session exits, the saved data
is no longer available.
Use DBMS_CLOUD.GET_API_RESULT_CACHE_SIZE
and DBMS_CLOUD.SET_API_RESULT_CACHE_SIZE
to view and set the
DBMS_CLOUD
REST API cache size, and to disable caching.
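For example, the following sketch raises the session cache size, issues a cached request, and then reviews the saved results; the credential name and URI are placeholders:
BEGIN
    -- Allow up to 100 saved results in this session
    DBMS_CLOUD.SET_API_RESULT_CACHE_SIZE(100);
END;
/
DECLARE
    resp DBMS_CLOUD_TYPES.resp;
BEGIN
    resp := DBMS_CLOUD.SEND_REQUEST(
        credential_name => 'MY_CRED',                    -- placeholder credential
        uri             => 'https://your-endpoint/path', -- placeholder URI
        method          => DBMS_CLOUD.METHOD_GET,
        cache           => TRUE);                        -- save this result for the session
END;
/
-- Review the saved requests and responses
SELECT uri, request_method, response_status_code, timestamp
  FROM SESSION_CLOUD_API_RESULTS;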
- DBMS_CLOUD REST API Results cache_scope Parameter
When you save DBMS_CLOUD REST API results with DBMS_CLOUD.SEND_REQUEST, access to the results in SESSION_CLOUD_API_RESULTS is provided based on the value of cache_scope.
- DBMS_CLOUD REST API SESSION_CLOUD_API_RESULTS View
You can save DBMS_CLOUD REST API results when you set the cache parameter to true with DBMS_CLOUD.SEND_REQUEST. The SESSION_CLOUD_API_RESULTS view describes the columns you can use when REST API results are saved.
Parent topic: DBMS_CLOUD REST APIs
DBMS_CLOUD REST API Results cache_scope Parameter
When you save DBMS_CLOUD
REST API results with DBMS_CLOUD.SEND_REQUEST
, access to the results
in SESSION_CLOUD_API_RESULTS
is provided based on the value of
cache_scope
.
By default cache_scope
is 'PRIVATE'
and only
the current user of the session can access the results. If you set the
cache_scope
to 'PUBLIC'
, then all session users can
access the results. The default value for cache_scope
specifies that each
user can only see DBMS_CLOUD.SEND_REQUEST
REST API results generated by the
procedures they invoke with invoker's rights. When you invoke DBMS_CLOUD.SEND_REQUEST
in a session, there are three possibilities that determine whether the current user can see results in the cache, based on the cache_scope value:
-
You directly execute DBMS_CLOUD.SEND_REQUEST as a top-level statement, and the call to DBMS_CLOUD.SEND_REQUEST and the REST API results are saved with the same username. In this case you have access to all results with the default value, 'PRIVATE', set for cache_scope.
-
You write a wrapper invoker's rights procedure and, as the current user, your call with DBMS_CLOUD.SEND_REQUEST calls the procedure, and the REST API results are saved with the same username. In this case you have access to all results with the default value, 'PRIVATE', set for cache_scope.
-
You write a wrapper definer's rights procedure and the procedure is owned by another user. When you call DBMS_CLOUD.SEND_REQUEST inside the procedure, the results are saved with the username of the procedure owner.
In this case, a different definer's rights user is invoking DBMS_CLOUD.SEND_REQUEST, and the REST API results are saved with that definer's procedure owner. By default, when cache_scope is 'PRIVATE', the invoker's session cannot see the results.
If the definer's procedure owner wants to make the results available to any invoking session user, then they must set cache_scope to 'PUBLIC' in the DBMS_CLOUD.SEND_REQUEST call, as sketched after this list.
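A minimal sketch of the third case, assuming a hypothetical wrapper procedure (call_rest_api) and credential (MY_CRED); neither name is part of the DBMS_CLOUD API:
-- Owned by another user; definer's rights (AUTHID DEFINER) is the PL/SQL default
CREATE OR REPLACE PROCEDURE call_rest_api (p_uri IN VARCHAR2)
AS
    resp DBMS_CLOUD_TYPES.resp;
BEGIN
    resp := DBMS_CLOUD.SEND_REQUEST(
        credential_name => 'MY_CRED',               -- placeholder credential
        uri             => p_uri,
        method          => DBMS_CLOUD.METHOD_GET,
        cache           => TRUE,
        cache_scope     => 'PUBLIC');  -- saved under the owner, visible to invoking session users
END call_rest_api;
/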
Parent topic: DBMS_CLOUD REST API Results Cache
DBMS_CLOUD REST API SESSION_CLOUD_API_RESULTS View
You can save DBMS_CLOUD
REST API results when you set the cache
parameter to true with DBMS_CLOUD.SEND_REQUEST
. The SESSION_CLOUD_API_RESULTS
view describes the columns you can use when REST API results are saved.
SESSION_CLOUD_API_RESULTS is the view created when you cache results with DBMS_CLOUD.SEND_REQUEST. You can query historical results that belong to your user session. When the session ends, the data in SESSION_CLOUD_API_RESULTS is purged.
Column | Description |
---|---|
URI |
The DBMS_CLOUD REST API request URL
|
TIMESTAMP |
The DBMS_CLOUD REST API response timestamp
|
CLOUD_TYPE |
The DBMS_CLOUD REST API cloud type, such as Oracle Cloud
Infrastructure, AMAZON_S3, and AZURE_BLOB
|
REQUEST_METHOD |
The DBMS_CLOUD REST API request method, such as
GET , PUT , HEAD |
REQUEST_HEADERS |
The DBMS_CLOUD REST API request headers
|
REQUEST_BODY_TEXT |
The DBMS_CLOUD REST API request body in
CLOB |
RESPONSE_STATUS_CODE |
The DBMS_CLOUD REST API response status code, such as
200(OK) , 404(Not Found) |
RESPONSE_HEADERS |
The DBMS_CLOUD REST API response headers
|
RESPONSE_BODY_TEXT |
The DBMS_CLOUD REST API response body in
CLOB |
SCOPE |
The cache_scope value set when the REST API result was saved. |
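For example, to list saved requests with their status codes, newest first:
SELECT timestamp, request_method, uri, response_status_code
  FROM SESSION_CLOUD_API_RESULTS
 ORDER BY timestamp DESC;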
Parent topic: DBMS_CLOUD REST API Results Cache
GET_RESPONSE_HEADERS Function
This function returns the HTTP response headers as JSON data in a JSON object.
Syntax
DBMS_CLOUD.GET_RESPONSE_HEADERS
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN JSON_OBJECT_T;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_HEADERS. |
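Because the return value is a JSON_OBJECT_T, you can read individual header values directly. A minimal sketch; the credential, URI, and Content-Type lookup are illustrative:
DECLARE
    resp    DBMS_CLOUD_TYPES.resp;
    headers JSON_OBJECT_T;
BEGIN
    resp := DBMS_CLOUD.SEND_REQUEST(
        credential_name => 'MY_CRED',                    -- placeholder credential
        uri             => 'https://your-endpoint/path', -- placeholder URI
        method          => DBMS_CLOUD.METHOD_GET);
    headers := DBMS_CLOUD.GET_RESPONSE_HEADERS(resp);
    -- Read a single header value, for example Content-Type
    DBMS_OUTPUT.PUT_LINE(headers.get_string('Content-Type'));
END;
/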
Parent topic: DBMS_CLOUD REST APIs
GET_RESPONSE_RAW Function
This function returns the HTTP response in RAW format. This is useful if the HTTP response is expected to be in binary format.
Syntax
DBMS_CLOUD.GET_RESPONSE_RAW
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN BLOB;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_RAW. |
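A minimal sketch that retrieves a binary payload and reports its size; the credential and URI are placeholders:
DECLARE
    resp    DBMS_CLOUD_TYPES.resp;
    payload BLOB;
BEGIN
    resp := DBMS_CLOUD.SEND_REQUEST(
        credential_name => 'MY_CRED',                             -- placeholder credential
        uri             => 'https://your-endpoint/binary-object', -- placeholder URI
        method          => DBMS_CLOUD.METHOD_GET);
    payload := DBMS_CLOUD.GET_RESPONSE_RAW(resp);
    DBMS_OUTPUT.PUT_LINE('Bytes received: ' || DBMS_LOB.GETLENGTH(payload));
END;
/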
Parent topic: DBMS_CLOUD REST APIs
GET_RESPONSE_STATUS_CODE Function
This function returns the HTTP response status code as an integer. The status code helps you determine whether the request succeeded.
Syntax
DBMS_CLOUD.GET_RESPONSE_STATUS_CODE
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN PLS_INTEGER;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_STATUS_CODE. |
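For example, a minimal check that branches on success, using a placeholder credential and URI:
DECLARE
    resp DBMS_CLOUD_TYPES.resp;
BEGIN
    resp := DBMS_CLOUD.SEND_REQUEST(
        credential_name => 'MY_CRED',                    -- placeholder credential
        uri             => 'https://your-endpoint/path', -- placeholder URI
        method          => DBMS_CLOUD.METHOD_GET);
    IF DBMS_CLOUD.GET_RESPONSE_STATUS_CODE(resp) = 200 THEN
        DBMS_OUTPUT.PUT_LINE('Request succeeded');
    ELSE
        DBMS_OUTPUT.PUT_LINE('Request failed with status ' ||
                             DBMS_CLOUD.GET_RESPONSE_STATUS_CODE(resp));
    END IF;
END;
/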
Parent topic: DBMS_CLOUD REST APIs
GET_RESPONSE_TEXT Function
This function returns the HTTP response in TEXT format (VARCHAR2 or CLOB). Most Cloud REST APIs return a JSON response in text format. This function is useful if you expect the HTTP response to be in text format.
Syntax
DBMS_CLOUD.GET_RESPONSE_TEXT
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN CLOB;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_TEXT. |
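Since most services return JSON text, a common pattern is to parse the returned CLOB with JSON_OBJECT_T. A minimal sketch; the credential, URI, and the name attribute are placeholders:
DECLARE
    resp DBMS_CLOUD_TYPES.resp;
    body JSON_OBJECT_T;
BEGIN
    resp := DBMS_CLOUD.SEND_REQUEST(
        credential_name => 'MY_CRED',                    -- placeholder credential
        uri             => 'https://your-endpoint/path', -- placeholder URI
        method          => DBMS_CLOUD.METHOD_GET);
    -- Parse the text response as a JSON object
    body := JSON_OBJECT_T.parse(DBMS_CLOUD.GET_RESPONSE_TEXT(resp));
    DBMS_OUTPUT.PUT_LINE(body.get_string('name'));  -- illustrative attribute
END;
/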
Parent topic: DBMS_CLOUD REST APIs
GET_API_RESULT_CACHE_SIZE Function
This function returns the configured result cache size. The cache size value only applies for the current session.
Syntax
DBMS_CLOUD.GET_API_RESULT_CACHE_SIZE
()
RETURN NUMBER;
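For example, to display the cache size configured for the current session:
SELECT DBMS_CLOUD.GET_API_RESULT_CACHE_SIZE() FROM DUAL;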
Parent topic: DBMS_CLOUD REST APIs
SEND_REQUEST Function and Procedure
This function and procedure begin an HTTP request, get the response, and end the response. The function provides a workflow for sending a cloud REST API request with arguments and returns a response code and payload. If you use the procedure, you can view results and response details from the saved results with the SESSION_CLOUD_API_RESULTS view.
Syntax
DBMS_CLOUD.SEND_REQUEST
(
credential_name IN VARCHAR2,
uri IN VARCHAR2,
method IN VARCHAR2,
headers IN CLOB DEFAULT NULL,
async_request_url IN VARCHAR2 DEFAULT NULL,
wait_for_states IN DBMS_CLOUD_TYPES.wait_for_states_t DEFAULT NULL,
timeout IN NUMBER DEFAULT 0,
	cache IN BOOLEAN DEFAULT FALSE,
cache_scope IN VARCHAR2 DEFAULT 'PRIVATE',
body IN BLOB DEFAULT NULL)
RETURN DBMS_CLOUD_TYPES.resp;
DBMS_CLOUD.SEND_REQUEST
(
credential_name IN VARCHAR2,
uri IN VARCHAR2,
method IN VARCHAR2,
headers IN CLOB DEFAULT NULL,
async_request_url IN VARCHAR2 DEFAULT NULL,
wait_for_states IN DBMS_CLOUD_TYPES.wait_for_states_t DEFAULT NULL,
timeout IN NUMBER DEFAULT 0,
	cache IN BOOLEAN DEFAULT FALSE,
cache_scope IN VARCHAR2 DEFAULT 'PRIVATE',
body IN BLOB DEFAULT NULL);
Parameters
Parameter | Description |
---|---|
|
The name of the credential for authenticating with the corresponding cloud native API. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled.
|
uri |
HTTP URI to make the request. |
method |
HTTP Request Method: See DBMS_CLOUD REST API Constants for more information. |
headers |
HTTP Request headers for the corresponding cloud native API in JSON format. The authentication headers are set automatically; pass only custom headers. |
|
An asynchronous request URL. To obtain the URL, select your request API from the list of APIs (see https://docs.cloud.oracle.com/en-us/iaas/api/). Then, navigate to find the API for your request in the left pane. For example, Database Services API → Autonomous Database → StopAutonomousDatabase. This page shows the API home (and the base endpoint). Then, append to the base endpoint the relative path obtained from the WorkRequest link of your work request. |
wait_for_states |
Wait for states is a status of type DBMS_CLOUD_TYPES.wait_for_states_t, such as 'SUCCEEDED' (see the asynchronous request example in DBMS_CLOUD REST API Examples).
Multiple states are allowed for wait_for_states. |
timeout |
Specifies the timeout, in seconds, for asynchronous requests made with the async_request_url and wait_for_states parameters.
The default value is 0, which means the request waits until a state specified in wait_for_states occurs, with no timeout. |
cache |
If TRUE, the request result is saved in the session's REST API results cache. The default value is FALSE, meaning REST API results are not saved. |
cache_scope |
Specifies whether everyone can have access to this request result cache. Valid values: 'PRIVATE' and 'PUBLIC'. The default value is 'PRIVATE'. |
body |
HTTP Request Body for PUT and POST requests. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_req_method |
ORA-20023 |
Request method passed to DBMS_CLOUD.SEND_REQUEST is invalid. |
invalid_req_header |
ORA-20024 |
Request headers passed to DBMS_CLOUD.SEND_REQUEST are not in valid JSON format. |
Usage Notes
-
If you are using Oracle Cloud Infrastructure, you must use a Signing Key based credential value for the credential_name. See CREATE_CREDENTIAL Procedure for more information.
-
The optional parameters async_request_url, wait_for_states, and timeout allow you to handle long-running requests. Using this asynchronous form of send_request, the function waits for the completion status specified in wait_for_states before returning. With these parameters you pass the expected return states in the wait_for_states parameter and use the async_request_url parameter to specify an associated work request; the request does not return immediately. Instead, the request probes the async_request_url until the return state is one of the expected states or the timeout is exceeded (timeout is optional). If no timeout is specified, the request waits until a state found in wait_for_states occurs.
Parent topic: DBMS_CLOUD REST APIs
SET_API_RESULT_CACHE_SIZE Procedure
This procedure sets the maximum cache size for the current session. The cache size value only applies for the current session.
Syntax
DBMS_CLOUD.SET_API_RESULT_CACHE_SIZE
(
cache_size IN NUMBER);
Parameters
Parameter | Description |
---|---|
cache_size |
Set the maximum cache size to the specified value.
If the cache size is set to 0, caching is disabled. The default cache size is 10. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid API result cache size |
ORA-20032 |
The minimum value is 0 and the maximum value is 10000. This exception is shown when the input value is less than 0 or is larger than 10000. |
Example
EXEC DBMS_CLOUD.SET_API_RESULT_CACHE_SIZE(101);
Parent topic: DBMS_CLOUD REST APIs
DBMS_CLOUD REST API Examples
Shows examples using DBMS_CLOUD.SEND_REQUEST
to create and delete an Oracle Cloud
Infrastructure Object Storage bucket, and an example to list all compartments in the tenancy.
These examples show Oracle Cloud
Infrastructure request APIs and require that you use a Signing Key based credential for the
credential_name
. Oracle Cloud
Infrastructure Signing Key based credentials include the private_key
and
fingerprint
arguments.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
       credential_name => 'OCI_KEY_CRED',
       user_ocid       => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
       tenancy_ocid    => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
       private_key     => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
       fingerprint     => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
See
CREATE_CREDENTIAL Procedure
for information on DBMS_CLOUD.CREATE_CREDENTIAL
.
Create Bucket Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with
HTTP POST
method to create an object store bucket
named bucketname
.
See CreateBucket for details on the Oracle Cloud Infrastructure Object Storage Service API for this example.
SET SERVEROUTPUT ON
DECLARE
resp DBMS_CLOUD_TYPES.resp;
BEGIN
-- Send request
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://objectstorage.region.oraclecloud.com/n/namespace-string/b/',
method => DBMS_CLOUD.METHOD_POST,
body => UTL_RAW.cast_to_raw(
JSON_OBJECT('name' value 'bucketname',
'compartmentId' value 'compartment_OCID'))
);
-- Response Body in TEXT format
dbms_output.put_line('Body: ' || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_text(resp) || CHR(10));
-- Response Headers in JSON format
dbms_output.put_line('Headers: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));
-- Response Status Code
dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_status_code(resp));
END;
/
Notes:
-
In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.
-
Where: region is an endpoint region. See Object Storage API reference in API Reference and Endpoints for more information. For example, region is us-phoenix-1.
Delete Bucket Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with
HTTP DELETE
method to delete an object store bucket
named bucketname
.
See DeleteBucket for details on the Oracle Cloud Infrastructure Object Storage Service API for this example.
SET SERVEROUTPUT ON
DECLARE
resp DBMS_CLOUD_TYPES.resp;
BEGIN
-- Send request
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://objectstorage.region.oraclecloud.com/n/namespace-string/b/bucketname',
method => DBMS_CLOUD.METHOD_DELETE
);
-- Response Body in TEXT format
dbms_output.put_line('Body: ' || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_text(resp) || CHR(10));
-- Response Headers in JSON format
dbms_output.put_line('Headers: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));
-- Response Status Code
dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_status_code(resp));
END;
/
Notes:
-
In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.
-
Where: region is an endpoint region. See Object Storage API reference in API Reference and Endpoints for more information. For example, region is us-phoenix-1.
List Compartments Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with HTTP GET
method to
list all compartments in the tenancy (root compartment). This example shows how to pass
request headers in the DBMS_CLOUD.SEND_REQUEST
.
See ListCompartments for details on the Oracle Cloud Infrastructure Identity and Access Management Service API for this example.
--
-- List compartments
--
DECLARE
resp DBMS_CLOUD_TYPES.resp;
root_compartment_ocid VARCHAR2(512) := '&1';
BEGIN
-- Send request
dbms_output.put_line('Send Request');
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://identity.region.oraclecloud.com/20160918/compartments?compartmentId=' || root_compartment_ocid,
method => DBMS_CLOUD.METHOD_GET,
headers => JSON_OBJECT('opc-request-id' value 'list-compartments')
);
dbms_output.put_line('Body: ' || '------------' || CHR(10) || DBMS_CLOUD.get_response_text(resp) || CHR(10));
dbms_output.put_line('Headers: ' || CHR(10) || '------------' || CHR(10) || DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));
dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) || DBMS_CLOUD.get_response_status_code(resp));
dbms_output.put_line(CHR(10));
END;
/
Where: region is an endpoint region. See Identity and Access Management (IAM) API reference in API Reference and Endpoints for more information. For example, region is uk-london-1.
Asynchronous Request Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with HTTP POST
method to
perform the Autonomous Database stop operation and wait for status. This example shows how
to use DBMS_CLOUD.SEND_REQUEST
with the
async_request_url
, wait_for_states
, and
timeout
parameters.
--
-- Send Work Request: Autonomous Database Stop Request with Wait for Status
DECLARE
l_resp DBMS_CLOUD_TYPES.resp;
l_resp_json JSON_OBJECT_T;
l_key_shape JSON_OBJECT_T;
l_body JSON_OBJECT_T;
status_array DBMS_CLOUD_TYPES.wait_for_states_t;
BEGIN
status_array := DBMS_CLOUD_TYPES.wait_for_states_t('SUCCEEDED');
l_body := JSON_OBJECT_T('{}');
l_body.put('autonomousDatabaseId', 'ocid');
-- Send request
dbms_output.put_line(l_body.to_clob);
dbms_output.put_line('Send Request');
l_resp := DBMS_CLOUD.send_request(
credential_name => 'NATIVE_CRED_OCI',
uri => 'https://database.region.oraclecloud.com/20160918/autonomousDatabases/ocid/actions/stop',
method => DBMS_CLOUD.METHOD_POST,
body => UTL_RAW.cast_to_raw(l_body.to_clob),
async_request_url => 'https://iaas.region.oraclecloud.com/20160918/workRequests',
wait_for_states => status_array,
timeout => 600
);
dbms_output.put_line('resp body: '||DBMS_CLOUD.get_response_text(l_resp));
dbms_output.put_line('resp headers: '||DBMS_CLOUD.get_response_headers(l_resp).to_clob);
END;
/
Where: region is an endpoint region. See Identity and Access Management (IAM) API reference in API Reference and Endpoints for more information. For example, region is uk-london-1.
The ocid
is the Oracle Cloud
Infrastructure resource identifier. See Resource Identifiers for more information.
Parent topic: DBMS_CLOUD REST APIs
Footnote Legend
Footnote 3: Support for Azure Cloud REST API calls is limited to the domain "blob.windows.net".