Creating and Saving a Model with the Console

Create a model in the Console and save it directly to the model catalog.

To document a model, prepare its metadata before you create and save the model.

This task involves creating a model, adding metadata, defining the training environment, specifying prediction schemas, and saving the model to the model catalog.
Important

  • We recommend that you create and save models to the model catalog programmatically instead, either using ADS or the OCI Python SDK.
  • You can use ADS to create large models. Large models support artifacts of up to 400 GB.
  • Models stored in the model catalog can also be deployed using model deployment.

If you're saving a model trained elsewhere or want to use the Console, use these steps to save a model:

  1. Create a model artifact zip archive on your local machine containing the score.py and runtime.yaml files (and any other files needed to run your model). On the Data Science models page, select Download sample artifact zip to get sample files that you can change to create your model artifact. If you need help finding the list of models, see Listing Models.
  2. On the Projects list page, select the project that contains the models that you want to work with. If you need help finding the list page or the project, see Listing Projects.
  3. On the project details page, select Models.
  4. On the Models list page, select Create model.
The Create model page opens.
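The artifact in step 1 must contain a score.py that loads the serialized model and returns predictions. The sketch below is a minimal, hypothetical version: the file name model.pkl and the exact return shape are assumptions for illustration, not requirements stated on this page.

```python
# score.py -- a minimal, hypothetical sketch of the artifact's scoring module.
# The model file name (model.pkl) and the return shape are assumptions.
import json
import os
import pickle

MODEL_FILE_NAME = "model.pkl"  # assumed serialized model inside the artifact zip

def load_model(model_dir=None):
    """Deserialize the model file shipped inside the artifact."""
    if model_dir is None:
        model_dir = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(model_dir, MODEL_FILE_NAME), "rb") as f:
        return pickle.load(f)

def predict(data, model=None):
    """Return predictions for a JSON string or a list of feature rows."""
    if model is None:
        model = load_model()
    rows = json.loads(data) if isinstance(data, str) else data
    return {"prediction": list(model.predict(rows))}
```

The runtime.yaml file in the same archive describes the conda environment used to run this code.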

1. Basic information

Upload or reference the model artifact and provide basic identifying information.

  • Compartment: Select the compartment to contain the model.
  • Name (Optional): Enter a unique name (limit of 255 characters). If you don't provide a name, a name is automatically generated. Example: model20200108222435
  • Description (Optional): Enter a description (limit of 400 characters) for the model.
  • Model artifact: Select the relevant option.
    • Upload Model Artifact: Upload the model artifact archive (a zip file) by dragging it into the box.
    • Model by Reference
      • Compartment
      • Bucket
      • Object name prefix (Optional): Enter an object name prefix. The prefix must refer to the root directory of the model artifacts and contain all files related to the model, with score.py and runtime.yaml at the first level inside the prefix.
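The prefix rule above can be illustrated with a quick check; the bucket contents and prefix name here are hypothetical:

```python
prefix = "models/churn-v1/"  # hypothetical object name prefix (the artifact root)

# Hypothetical objects stored under the prefix in the bucket
objects = [
    "models/churn-v1/score.py",
    "models/churn-v1/runtime.yaml",
    "models/churn-v1/model.pkl",
    "models/churn-v1/lib/helpers.py",
]

# score.py and runtime.yaml must sit at the first level inside the prefix
first_level = {o[len(prefix):] for o in objects if "/" not in o[len(prefix):]}
assert {"score.py", "runtime.yaml"} <= first_level
```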

2. Model version set

Either add the new model to an existing version set, or create a new version set for it.

  • Select from existing version sets
  • Create model in a new version set
    • Compartment: Select the compartment for the version set.
    • Version set name: Enter the name for the version set. The name must be unique within the compartment.
    • Description (Optional)
    • Advanced options (Optional)
      • Model name
      • Version label
      • Tags
    • Version label (Optional)

    See also Creating a Model Version Set.

3. Model provenance

  • Select model provenance: Select where the provenance documentation is stored.
    • Notebook session
    • Job run
  • Find a notebook session / Find a job run: Select the search option that you want to use, then select the notebook session or job run that the model was trained with.
    • Choose a project: Select the name of the project to use in the selected compartment.

      The selected compartment applies to both the project and the notebook session or job run, and both must be in the same compartment. If not, then search by OCID instead. You can change the compartment for both the project and notebook session or job run.

    • Search by OCID: If the notebook session or job run is in a different compartment than the project, then enter the notebook session or job run OCID that you trained the model in.
  • Training code (under Advanced options) (Optional): Identify Git and model training information.
    • Git repository URL: The URL of the remote Git repository.
    • Git branch: The name of the branch.
    • Git commit: The commit ID of the Git repository.
    • Local model directory: The directory path where the model artifact was temporarily stored. This could be a path in a notebook session or a local computer directory for example.
    • Model training script: The name of the Python script or notebook that the model was trained with.
    Tip

    You can also populate model provenance metadata when you save a model to the model catalog using the OCI SDKs or the CLI.
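The same provenance fields can be assembled programmatically before saving. The sketch below is a plain dictionary whose keys mirror the Console fields above; all values are hypothetical, and the exact parameter names in the SDK may differ:

```python
import json

# Hypothetical provenance record mirroring the Console fields above
provenance = {
    "repository_url": "https://github.com/example/churn-model.git",  # Git repository URL
    "git_branch": "main",                       # Git branch
    "git_commit": "0a1b2c3",                    # Git commit ID
    "script_dir": "/home/datascience/churn",    # local model directory
    "training_script": "train.py",              # model training script
}

payload = json.dumps(provenance)
```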

4. Model Taxonomy

Optionally document what the model does, the machine learning framework, and the hyperparameters, or create custom metadata to document the model.

Important

The maximum allowed size for all the model metadata is 32000 bytes. The size is a combination of the preset model taxonomy and the custom attributes.
  • Document model taxonomy (Optional)
    • Use case: The type of machine learning use case.
    • Artifact test results: The JSON output of the introspection test results run on the client side. These tests are included in the model artifact boilerplate code. You can optionally run them before saving the model in the model catalog.
    • Model framework: The Python library you used to train the model.
    • Model framework version: The version of the machine learning framework. This is a free text value. For example, the value could be 2.3.
    • Model algorithm or model estimator object: The algorithm used or model instance class. This is a free text value. For example, sklearn.ensemble.RandomForestRegressor could be the value.
    • Model hyperparameters: The hyperparameters of the model in JSON format.
  • Create custom label and value attribute pairs (Optional)
    • Label: The key label of the custom metadata.
    • Value: The value attached to the key.
    • Category (Optional): The category of the metadata. Choose from:
      • performance
      • training profile
      • training and validation datasets
      • training environment
      • other

      You can use the category to group and filter custom metadata displayed in the Console. This is useful when you have many custom metadata entries to track.

    • Description (Optional): Enter a description of the custom metadata.
    • Search keywords
  • Upload metadata artifact (Optional)
    Note

    You can only upload the artifact file when the model is created.
    • Metadata field name
    • Value
    • Search keywords (Optional): Enter search keywords to help find the artifact.
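The 32000-byte limit applies to the preset taxonomy and custom attributes combined, so it can help to size a candidate payload before saving. All keys and values below are illustrative assumptions, not the catalog's actual field names:

```python
import json

# Illustrative taxonomy values (key names are assumptions for this sketch)
taxonomy = {
    "use_case": "binary_classification",
    "framework": "scikit-learn",
    "framework_version": "1.3",
    "algorithm": "sklearn.ensemble.RandomForestRegressor",
    "hyperparameters": {"n_estimators": 100, "max_depth": 8},
}

# Illustrative custom label/value pairs
custom = [
    {"label": "training shape", "value": "(100000, 12)", "category": "training and validation datasets"},
]

# Combined size of all model metadata must stay within 32000 bytes
size = len(json.dumps(taxonomy).encode("utf-8")) + len(json.dumps(custom).encode("utf-8"))
assert size <= 32000, f"metadata too large: {size} bytes"
```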

5. Model input and output schema

Optionally document the model predictions. The input schema defines the features that the model requires to make a successful prediction, and the output schema describes the predictions returned by the model (defined in the score.py file by the predict() function).

Important

You can only document the input and output data schemas when you create the model. You can't edit the schemas after the model is created. The maximum allowed file size for the combined input and output schemas is 32000 bytes.
  • Upload an input schema file: Drag the input schema JSON file into the box.
  • Upload an output schema file: Drag the output schema JSON file into the box.
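A minimal input schema might look like the sketch below. The field names (name, dtype, required, description, order) are an assumed layout for illustration; validate them against a schema generated for your own model before uploading.

```python
import json

# Hypothetical input schema for a model with two features
input_schema = {
    "schema": [
        {"name": "age", "dtype": "int64", "required": True,
         "description": "customer age in years", "order": 0},
        {"name": "balance", "dtype": "float64", "required": True,
         "description": "account balance", "order": 1},
    ]
}

payload = json.dumps(input_schema, indent=2)

# The combined input and output schema files must stay within 32000 bytes
assert len(payload.encode("utf-8")) <= 32000
```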

6. Backup and retention

Optionally set up backup and retention.

  • Enable Backup
    • Region
    • Notifications
  • Enable model retention
    • Notifications
    • Archive Rule: Automatic retention period in days
    • Deletion Rule: Automatic deletion period in days after archival

Review and create

Review the configuration and then select Create.