Namespace Oci.GenerativeaiService.Models
Classes
AddArtifactDetails
The new artifact details.
ApiKey
ApiKeys are resources used to access GenAI models. You must be authorized through an IAM policy to use any API operations. If you're not authorized, contact an administrator who manages OCI resource access. See Getting Started with Policies and Getting Access to Generative AI Resources.
ApiKeyCollection
Results of an ApiKey search.
ApiKeyItem
The ApiKey item.
ApiKeySummary
Summary of the API key.
Artifact
Container/artifact configuration for the deployment.
ArtifactModelConverter
ChangeApiKeyCompartmentDetails
The details to move an API key to another compartment.
ChangeDedicatedAiClusterCompartmentDetails
The details to move a dedicated AI cluster to another compartment.
ChangeEndpointCompartmentDetails
The details to move an endpoint to another compartment.
ChangeGenerativeAiPrivateEndpointCompartmentDetails
The details required to change a private endpoint compartment.
ChangeGenerativeAiProjectCompartmentDetails
The details to move a GenerativeAiProject to another compartment.
ChangeHostedApplicationCompartmentDetails
The details to move a hosted application to another compartment.
ChangeHostedApplicationStorageCompartmentDetails
The details to move a hosted application storage to another compartment.
ChangeImportedModelCompartmentDetails
The details to move an imported model to another compartment.
ChangeModelCompartmentDetails
The details to move a custom model to another compartment.
ChangeSemanticStoreCompartmentDetails
The details to move a SemanticStore to another compartment.
ChatModelMetrics
The chat model metrics of the fine-tuning process.
CondenserConfig
Configuration for condensing conversation content.
ConnectorConfiguration
Datasource configuration for the connector.
ConnectorConfigurationModelConverter
ContentModerationConfig
The configuration details for whether to add the content moderation feature to the model. Content moderation removes toxic and biased content from responses.
ConversationConfig
Holds configuration related to conversation retention.
CreateApiKeyDetails
The data to create an API key.
CreateArtifactDetails
Artifact configuration input for the deployment.
CreateArtifactDetailsModelConverter
CreateDataSourceDatabaseToolsConnectionDetails
Defines the OCI Database Tools Connection data source that the semantic model connects to.
CreateDataSourceDetails
Defines the data source that the semantic model connects to.
CreateDataSourceDetailsModelConverter
CreateDedicatedAiClusterDetails
The data to create a dedicated AI cluster.
CreateEndpointDetails
The data to create an endpoint.
CreateGenerativeAiPrivateEndpointDetails
The details required to create a Generative AI private endpoint.
CreateGenerativeAiProjectDetails
The data to create a GenerativeAiProject.
CreateHostedApplicationDetails
The details required to create a hosted application.
CreateHostedApplicationStorageDetails
The data to create a hosted application storage.
CreateHostedDeploymentDetails
The data to create a hosted deployment.
CreateImportedModelDetails
The data to import a model.
CreateModelDetails
The data to create a custom model.
CreateSchemasDatabaseToolsConnectionDetails
Array of database schemas or database objects included in the enrichment pipeline for data sources connected via an OCI Database Tools connection.
CreateSchemasDetails
Array of database schemas or other database objects to include in the enrichment pipeline.
CreateSchemasDetailsModelConverter
CreateSemanticStoreDetails
The data to create a SemanticStore.
CreateSingleDockerArtifactDetails
CreateVectorStoreConnectorDetails
The data to create a VectorStoreConnector.
CreateVectorStoreConnectorFileSyncDetails
The data to create a VectorStoreConnectorFileSync.
DataSourceDatabaseToolsConnectionDetails
Defines the OCI Database Tools Connection data source that the semantic model connects to.
DataSourceDetails
Defines the data source that the semantic model connects to.
DataSourceDetailsModelConverter
DatabaseToolsConnection
Dataset
DatasetModelConverter
DedicatedAiCluster
Dedicated AI clusters are compute resources that you can use for fine-tuning custom models or for hosting endpoints for custom models. The clusters are dedicated to your models and not shared with users in other tenancies.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
DedicatedAiClusterCapacity
The total capacity for a dedicated AI cluster.
DedicatedAiClusterCapacityModelConverter
DedicatedAiClusterCollection
Results of a dedicated AI cluster search. Contains DedicatedAiClusterSummary items and other information such as metadata.
DedicatedAiClusterHostingCapacity
The capacity of a hosting type dedicated AI cluster.
DedicatedAiClusterSummary
Summary information about a dedicated AI cluster.
EmbeddingConfig
Configuration for generating embeddings from extracted information.
Endpoint
To host a custom model for inference, create an endpoint for that model on a dedicated AI cluster of type HOSTING.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
EndpointCollection
Results of an endpoint search. Contains EndpointSummary items and other information such as metadata.
EndpointSummary
Summary information for an endpoint resource.
EnvironmentVariable
The environment variables for the hosted application.
ExtractionConfig
Configuration for information extraction from conversation content.
FileSyncIngestionLogs
A log object that reports the ingestion status of a file from a data source read by a VectorStoreConnector.
FileSyncIngestionLogsCollection
Results of a VectorStoreConnector Ingestion Log search.
FileSyncStatistics
Synchronization statistics for a VectorStore file sync operation or for a VectorStoreConnector.
FineTuneDetails
Details about fine-tuning a custom model.
GenAiModelLlmSelection
LLM selection with specific Gen AI model.
GenerativeAiPrivateEndpoint
Generative AI private endpoint.
GenerativeAiPrivateEndpointCollection
Collection of GenerativeAiPrivateEndpointSummary items.
GenerativeAiPrivateEndpointSummary
Summary information for a Generative AI private endpoint.
GenerativeAiProject
A GenerativeAiProject is a logical container that stores conversations, files, and containers.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
GenerativeAiProjectCollection
Results of a GenerativeAiProject search. Contains GenerativeAiProjectSummary items and other information such as metadata.
GenerativeAiProjectSummary
Summary information for a GenerativeAiProject.
HostedApplication
A hosted application defines shared configurations that apply across multiple deployments of an Agent or MCP application.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
HostedApplicationCollection
Results of a hosted application search. Contains HostedApplicationSummary items and other information such as metadata.
HostedApplicationStorage
Defines a physical storage (database or cache) managed by the service. Each application can choose one or two storages for certain purposes, such as agent memory.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
HostedApplicationStorageCollection
Results of a hosted application storage search. Contains HostedApplicationStorageSummary items and other information such as metadata.
HostedApplicationStorageSummary
Summary information about a hosted application storage.
HostedApplicationSummary
Summary information about a hosted application.
HostedDeployment
Hosted deployment is designed to support the full spectrum of agent use cases from lightweight, employee-facing assistants and internal workflow automation, to enterprise-grade, large-scale customer-facing workloads.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
HostedDeploymentCollection
Results of a hosted deployment search. Contains HostedDeploymentSummary items and other information such as metadata.
HostedDeploymentSummary
Summary information about a hosted deployment.
HuggingFaceModel
Configuration for importing a model from Hugging Face. Requires the model ID and a reference to the token stored in a vault for authenticated access.
IdcsAuthConfig
Oracle Identity Cloud Service (IDCS) configuration used when inboundAuthConfigType is set to IDCS_AUTH_CONFIG. This object must be specified when inboundAuthConfigType is IDCS_AUTH_CONFIG.
ImportedModel
Represents a model imported into the system based on an external data source, such as Hugging Face or Object Storage.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
ImportedModelCollection
Represents the result of a list operation for imported models.
ImportedModelSummary
Summary of the imported model.
InboundAuthConfig
The client-side inbound authentication configuration for the Hosted Application. Defines the network access rules. When unspecified, the service applies the default inbound authentication configuration type.
InboundNetworkingConfig
Inbound Networking configuration.
KeyDetails
The data to create/renew an API key item.
LlmSelection
LLM selection configuration.
LlmSelectionModelConverter
LongTermMemoryConfig
Configuration settings for long-term memory behavior.
LoraTrainingConfig
The Lora training method hyperparameters.
Model
You can create a custom model by using your dataset to fine-tune an out-of-the-box text generation base model. Have your dataset ready before you create a custom model. See Training Data Requirements.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
ModelCollection
Results of a model search. Contains ModelSummary items and other information such as metadata.
ModelConfig
Base model configuration shared across GenAI Project memory.
ModelConfigModelConverter
ModelDataSource
Defines the source location and method used to import the model. Supports importing from Hugging Face, an Object Storage location, or by referencing an already imported model.
ModelDataSourceModelConverter
ModelMetrics
Model metrics during the creation of a new model.
ModelMetricsModelConverter
ModelSummary
Summary of the model.
NetworkingConfig
Networking configuration.
ObjectStorageConfig
OCI Object storage configuration details.
ObjectStorageDataset
The dataset is stored in an OCI Object Storage bucket.
ObjectStorageObject
Details about the object storage location.
OciObjectStorageConfiguration
The OCI Object Storage namespace and bucket details of the data source.
OutboundNetworkingConfig
Outbound Networking configuration.
PiiDetectionConfig
The configuration details for personally identifiable information (PII) detection, in prompts and responses.
PromptInjectionConfig
The configuration details for prompt injection (PI) detection. This is for input only.
RefreshScheduleDetails
Specifies a refresh schedule. Null represents no automated synchronization schedule.
RefreshScheduleDetailsModelConverter
RefreshScheduleIntervalDetails
Defines the refresh schedule by specifying the interval between each refresh.
RefreshScheduleNoneDetails
Allows the user to opt out of the automated synchronization schedule.
RefreshScheduleOnCreateDetails
Triggers the enrichment only at creation time.
RenewApiKeyDetails
The data to renew an API key item.
ScalingConfig
The auto scaling configuration for the Hosted Application. Defines the minimum and maximum number of replicas. When unspecified, the service applies service-defined default scaling values.
ScheduleConfig
The schedule configuration of a VectorStoreConnector to trigger a file sync operation.
ScheduleConfigModelConverter
ScheduleCronConfig
The scheduled UNIX cron definition.
ScheduleIntervalConfig
The interval schedule config.
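An interval schedule triggers a file sync at a fixed cadence from the last run. A minimal sketch of that arithmetic, assuming hypothetical `frequency` and `interval` fields (these names are illustrative, not the SDK's actual property names):

```python
from datetime import datetime, timedelta

# Hypothetical mapping from a frequency label to a time step; the real
# ScheduleIntervalConfig.FrequencyEnum values may differ.
FREQUENCY_DELTAS = {
    "HOURLY": timedelta(hours=1),
    "DAILY": timedelta(days=1),
    "WEEKLY": timedelta(weeks=1),
}

def next_sync(last_sync: datetime, frequency: str, interval: int = 1) -> datetime:
    """Compute the next file-sync trigger time from an interval schedule."""
    return last_sync + FREQUENCY_DELTAS[frequency] * interval

print(next_sync(datetime(2024, 1, 1), "DAILY", 3))  # 2024-01-04 00:00:00
```

A cron-based schedule (ScheduleCronConfig) expresses the same idea declaratively with a UNIX cron expression instead of a step size.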
SchemaItem
Represents a database schema identified by name. This is the simplest schema definition and currently includes only the schema name. Additional configuration options may be supported in extended forms later.
SchemasDatabaseToolsConnectionDetails
Array of database schemas or database objects included in the enrichment pipeline for data sources connected via an OCI Database Tools connection.
SchemasDetails
Array of database schemas or other database objects to include in the enrichment pipeline.
SchemasDetailsModelConverter
SemanticStore
A Semantic Store is a container resource of semantic records, with controllable enrichment refresh and synchronization policy.
To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, contact an administrator who manages OCI resource access. See
Getting Started with Policies and Getting Access to Generative AI Resources.
SemanticStoreCollection
Results of a SemanticStore list. Contains SemanticStoreSummary items and other information such as metadata.
SemanticStoreSummary
Summary information for a SemanticStore.
SetApiKeyStateDetails
The data to set the state of an API key item.
ShortTermMemoryOptimizationConfig
Configuration settings for short-term memory optimization.
SingleDockerArtifact
Container/artifact configuration for the deployment.
StandardLongTermMemoryStrategy
Standard strategy settings for long-term memory.
StorageConfig
The type of service-managed storage.
TFewTrainingConfig
The TFEW training method hyperparameters.
TextGenerationModelMetrics
The text generation model metrics of the fine-tuning process.
TrainingConfig
The fine-tuning method and hyperparameters used for fine-tuning a custom model.
TrainingConfigModelConverter
UpdateApiKeyDetails
The data to update an API key.
UpdateDedicatedAiClusterDetails
The data to update a dedicated AI cluster.
UpdateEndpointDetails
The data to update an endpoint.
UpdateGenerativeAiPrivateEndpointDetails
The details required to update a Generative AI private endpoint.
UpdateGenerativeAiProjectDetails
The data to update a GenerativeAiProject.
UpdateHostedApplicationDetails
The data to update a hosted application.
UpdateHostedDeploymentDetails
The data to update a hosted deployment.
UpdateImportedModelDetails
The data to update an imported model.
UpdateModelDetails
The data to update a custom model.
UpdateSemanticStoreDetails
The data to update a SemanticStore.
UpdateVectorStoreConnectorDetails
The data to update a VectorStoreConnector.
VanillaTrainingConfig
The Vanilla training method hyperparameters.
VectorStoreConnector
A VectorStore Connector offers a lightweight and configurable mechanism to continuously synchronize data from external systems into the VectorStore at scale. It captures the configuration of the datasource for data ingestion.
VectorStoreConnectorCollection
Results of a VectorStoreConnector search. Contains VectorStoreConnectorSummary items.
VectorStoreConnectorFileSync
The VectorStoreConnectorFileSync is an operation that carries out the data sync operation between the datasource and the VectorStore. The FileSync can be triggered either manually or at a scheduled interval by the VectorStoreConnector.
VectorStoreConnectorFileSyncCollection
Results of a VectorStoreConnectorFileSync search.
VectorStoreConnectorFileSyncSummary
Summary information for a VectorStoreConnectorFileSync.
VectorStoreConnectorIngestionLogs
A log object that reports the ingestion status of a file from a data source read by a VectorStoreConnector.
VectorStoreConnectorIngestionLogsCollection
Results of a VectorStoreConnector Ingestion Log search.
VectorStoreConnectorStats
File synchronization statistics for a VectorStoreConnector.
VectorStoreConnectorSummary
Summary information for a VectorStoreConnector.
WorkRequest
An asynchronous work request. When you start a long-running operation, the service creates a work request. Work requests help you monitor long-running operations.
A work request is an activity log that lets you track each step in the operation's progress. Each work request has an OCID that lets you interact with it programmatically and use it for automation.
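The usual pattern is to poll the work request by OCID until it reaches a terminal status. A generic sketch of that loop, where `get_status` stands in for whatever SDK or REST call returns the work request's current status string (the callable and the ID below are illustrative, not a real OCID):

```python
import time

# Terminal statuses for a long-running operation; intermediate statuses
# such as ACCEPTED or IN_PROGRESS keep the loop polling.
TERMINAL = {"SUCCEEDED", "FAILED", "CANCELED"}

def wait_for_work_request(get_status, work_request_id, poll_seconds=0, max_polls=100):
    """Poll until the work request reaches a terminal status, then return it."""
    for _ in range(max_polls):
        status = get_status(work_request_id)
        if status in TERMINAL:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"work request {work_request_id} did not finish")

# Usage with a stubbed status source that succeeds on the third poll:
responses = iter(["ACCEPTED", "IN_PROGRESS", "SUCCEEDED"])
print(wait_for_work_request(lambda _id: next(responses), "example-work-request-id"))
# SUCCEEDED
```

In production code you would back off between polls and inspect WorkRequestError entries when the terminal status is FAILED.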
WorkRequestError
An error encountered while performing an operation that is tracked by this work request.
WorkRequestErrorCollection
A list of work request errors. Can contain errors and other information such as metadata.
WorkRequestLogEntry
The log message from performing an operation that is tracked by this work request.
WorkRequestLogEntryCollection
A list of work request logs. Can contain logs and other information such as metadata.
WorkRequestResource
The resource created or operated on by a work request.
WorkRequestSummary
Summary information about an asynchronous work request.
WorkRequestSummaryCollection
A list of work requests. Can contain work requests and other information such as metadata.
Enums
ActionType
Possible types of actions.
ApiKey.LifecycleStateEnum
ApiKeyItem.StateEnum
Artifact.ArtifactTypeEnum
Artifact.StatusEnum
ConnectorConfiguration.TypeEnum
ContentModerationConfig.ModeEnum
CreateArtifactDetails.ArtifactTypeEnum
CreateDataSourceDetails.ConnectionTypeEnum
CreateSchemasDetails.ConnectionTypeEnum
DataSourceDetails.ConnectionTypeEnum
Dataset.DatasetTypeEnum
DedicatedAiCluster.LifecycleStateEnum
DedicatedAiCluster.TypeEnum
DedicatedAiCluster.UnitShapeEnum
DedicatedAiClusterCapacity.CapacityTypeEnum
Endpoint.LifecycleStateEnum
EnvironmentVariable.TypeEnum
FileSyncIngestionLogs.StatusEnum
GenerativeAiPrivateEndpoint.LifecycleStateEnum
GenerativeAiPrivateEndpoint.ResourceTypeEnum
GenerativeAiProject.LifecycleStateEnum
HostedApplication.LifecycleStateEnum
HostedApplicationStorage.LifecycleStateEnum
HostedApplicationStorage.StorageTypeEnum
HostedApplicationStorageSummary.StorageTypeEnum
HostedDeployment.LifecycleStateEnum
ImportedModel.LifecycleStateEnum
ImportedModelCapability
Specifies the intended use or supported capabilities of the imported model.
InboundAuthConfig.InboundAuthConfigTypeEnum
InboundNetworkingConfig.EndpointModeEnum
LlmSelection.LlmSelectionTypeEnum
Model.LifecycleStateEnum
Model.TypeEnum
ModelCapability
Describes what this model can be used for.
ModelConfig.ModelConfigTypeEnum
ModelDataSource.SourceTypeEnum
ModelMetrics.ModelMetricsTypeEnum
OperationStatus
The status of the work request.
OperationType
The type of asynchronous operation tracked by this work request.
OutboundNetworkingConfig.NetworkModeEnum
RefreshScheduleDetails.TypeEnum
ScalingConfig.ScalingTypeEnum
ScheduleConfig.ConfigTypeEnum
ScheduleConfig.StateEnum
ScheduleIntervalConfig.FrequencyEnum
SchemasDetails.ConnectionTypeEnum
SemanticStore.LifecycleStateEnum
SortOrder
The sort order to use, either ascending (ASC) or descending (DESC). The displayName sort order is case sensitive.
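Case-sensitive ordering means uppercase letters sort before lowercase ones, which can surprise clients expecting dictionary order. A small illustration of the difference (this shows plain lexicographic ordering, not the service's exact collation):

```python
names = ["beta", "Alpha", "alpha", "Beta"]

# Case-sensitive ASC, as the displayName sort behaves: 'A' < 'B' < 'a' < 'b'.
case_sensitive = sorted(names)

# Case-insensitive ordering, shown for contrast.
case_insensitive = sorted(names, key=str.lower)

print(case_sensitive)    # ['Alpha', 'Beta', 'alpha', 'beta']
print(case_insensitive)  # ['Alpha', 'alpha', 'beta', 'Beta']
```

For DESC, the same comparison applies in reverse (`sorted(names, reverse=True)`).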
TrainingConfig.TrainingConfigTypeEnum
VectorStoreConnector.LifecycleStateEnum
VectorStoreConnectorFileSync.LifecycleStateEnum
VectorStoreConnectorFileSync.TriggerTypeEnum
VectorStoreConnectorFileSyncSummary.TriggerTypeEnum
VectorStoreConnectorIngestionLogs.StatusEnum
WorkRequestResourceMetadataKey
Possible metadata keys for work request resource metadata.