Interface GenerativeAiInferenceAsync
-
- All Superinterfaces:
AutoCloseable
- All Known Implementing Classes:
GenerativeAiInferenceAsyncClient
@Generated(value="OracleSDKGenerator", comments="API Version: 20231130") public interface GenerativeAiInferenceAsync extends AutoCloseable
OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases for text generation, summarization, and text embeddings.
Use the Generative AI service inference API to access your custom model endpoints, or to try the out-of-the-box models to chat, generate text, summarize text, and embed text.
To use a Generative AI custom model for inference, you must first create an endpoint for that model. Use the Generative AI service management API to create a custom model by fine-tuning an out-of-the-box model, or a previous version of a custom model, using your own data. Fine-tune the custom model on a fine-tuning dedicated AI cluster. Then, create a hosting dedicated AI cluster with an endpoint to host your custom model. For resource management in the Generative AI service, use the Generative AI service management API.
To learn more about the service, see the [Generative AI documentation](https://docs.oracle.com/iaas/Content/generative-ai/home.htm).
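For orientation, here is a minimal, hedged sketch of constructing the async client with the SDK's standard builder pattern; the config-file profile and region used here are assumptions, and the later method sketches on this page assume a client built this way.

    import com.oracle.bmc.Region;
    import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
    import com.oracle.bmc.generativeaiinference.GenerativeAiInferenceAsyncClient;

    public class GenerativeAiInferenceAsyncExample {
        public static void main(String[] args) throws Exception {
            // Credentials from the default OCI config file (~/.oci/config), DEFAULT profile (assumption).
            ConfigFileAuthenticationDetailsProvider provider =
                    new ConfigFileAuthenticationDetailsProvider("DEFAULT");

            // Build the async inference client against a region where the service is available.
            GenerativeAiInferenceAsyncClient client =
                    GenerativeAiInferenceAsyncClient.builder()
                            .region(Region.US_PHOENIX_1)
                            .build(provider);

            // ... submit chat, embedText, generateText, rerankText, or summarizeText requests here ...

            // The interface extends AutoCloseable; close the client when finished.
            client.close();
        }
    }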
-
Method Summary
All methods are abstract instance methods.

Future<ChatResponse> chat(ChatRequest request, AsyncHandler<ChatRequest,ChatResponse> handler)
    Creates a response for the given conversation.

Future<EmbedTextResponse> embedText(EmbedTextRequest request, AsyncHandler<EmbedTextRequest,EmbedTextResponse> handler)
    Produces embeddings for the inputs.

Future<GenerateTextResponse> generateText(GenerateTextRequest request, AsyncHandler<GenerateTextRequest,GenerateTextResponse> handler)
    Generates a text response based on the user prompt.

String getEndpoint()
    Gets the endpoint set for REST calls (for example, https://www.example.com).

void refreshClient()
    Rebuilds the client from scratch.

Future<RerankTextResponse> rerankText(RerankTextRequest request, AsyncHandler<RerankTextRequest,RerankTextResponse> handler)
    Reranks the text responses based on the input documents and a prompt.

void setEndpoint(String endpoint)
    Sets the endpoint to call (for example, https://www.example.com).

void setRegion(Region region)
    Sets the region to call (for example, Region.US_PHOENIX_1).

void setRegion(String regionId)
    Sets the region to call (for example, 'us-phoenix-1').

Future<SummarizeTextResponse> summarizeText(SummarizeTextRequest request, AsyncHandler<SummarizeTextRequest,SummarizeTextResponse> handler)
    Summarizes the input text.

void useRealmSpecificEndpointTemplate(boolean realmSpecificEndpointTemplateEnabled)
    Determines whether the realm-specific endpoint template should be used.
-
Methods inherited from interface java.lang.AutoCloseable
close
-
Method Detail
-
refreshClient
void refreshClient()
Rebuilds the client from scratch. Useful to refresh certificates.
-
setEndpoint
void setEndpoint(String endpoint)
Sets the endpoint to call (for example, https://www.example.com).
Parameters:
endpoint - The endpoint of the service.
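A brief sketch; the URL below is a placeholder rather than a real service or dedicated endpoint, and client is the client from the overview sketch:

    // Override the endpoint resolved from the region with an explicit one (placeholder URL).
    client.setEndpoint("https://inference.generativeai.us-phoenix-1.oci.oraclecloud.com");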
-
getEndpoint
String getEndpoint()
Gets the endpoint set for REST calls (for example, https://www.example.com).
-
setRegion
void setRegion(Region region)
Sets the region to call (for example, Region.US_PHOENIX_1). Note that this will call setEndpoint after resolving the endpoint. If the service is not available in this region, an IllegalArgumentException will be raised.
Parameters:
region - The region of the service.
-
setRegion
void setRegion(String regionId)
Sets the region to call (for example, 'us-phoenix-1'). Note that this will first try to map the region ID to a known Region and call setRegion(Region). If no known Region could be determined, it will create an endpoint based on the default endpoint format (Region.formatDefaultRegionEndpoint(Service, String)) and then call setEndpoint.
Parameters:
regionId - The public region ID.
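A brief sketch of the two overloads, reusing the client from the overview sketch:

    // Resolve the endpoint from a typed Region constant ...
    client.setRegion(Region.US_PHOENIX_1);

    // ... or from a public region ID string; an unknown ID falls back to the
    // default endpoint format instead of throwing.
    client.setRegion("us-phoenix-1");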
-
useRealmSpecificEndpointTemplate
void useRealmSpecificEndpointTemplate(boolean realmSpecificEndpointTemplateEnabled)
Determines whether the realm-specific endpoint template should be used. Set realmSpecificEndpointTemplateEnabled to true to enable use of the realm-specific endpoint template; otherwise set it to false.
Parameters:
realmSpecificEndpointTemplateEnabled - Flag to enable the use of the realm-specific endpoint template.
-
chat
Future<ChatResponse> chat(ChatRequest request, AsyncHandler<ChatRequest,ChatResponse> handler)
Creates a response for the given conversation.
Parameters:
request - The request object containing the details to send.
handler - The request handler to invoke upon completion; may be null.
Returns:
A Future that can be used to get the response if no AsyncHandler was provided. Note that if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be readable in both places, as the underlying stream may only be consumed once.
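As an illustrative sketch only (the compartment OCID and model ID are placeholders, the CohereChatRequest shape is an assumption drawn from the SDK's model package, and client is the client from the overview sketch), an async chat call with an AsyncHandler might look like this:

    import java.util.concurrent.Future;
    import com.oracle.bmc.generativeaiinference.model.ChatDetails;
    import com.oracle.bmc.generativeaiinference.model.CohereChatRequest;
    import com.oracle.bmc.generativeaiinference.model.OnDemandServingMode;
    import com.oracle.bmc.generativeaiinference.requests.ChatRequest;
    import com.oracle.bmc.generativeaiinference.responses.ChatResponse;
    import com.oracle.bmc.responses.AsyncHandler;

    ChatRequest chatRequest = ChatRequest.builder()
            .chatDetails(ChatDetails.builder()
                    .compartmentId("ocid1.compartment.oc1..exampleuniqueID")   // placeholder OCID
                    .servingMode(OnDemandServingMode.builder()
                            .modelId("cohere.command-r-plus")                  // placeholder model ID
                            .build())
                    .chatRequest(CohereChatRequest.builder()
                            .message("Tell me something about Oracle Cloud.")
                            .build())
                    .build())
            .build();

    // The handler is invoked when the call completes; it may also be null,
    // in which case only the returned Future carries the result.
    Future<ChatResponse> future = client.chat(chatRequest,
            new AsyncHandler<ChatRequest, ChatResponse>() {
                @Override
                public void onSuccess(ChatRequest request, ChatResponse response) {
                    System.out.println("Chat completed: " + response.getChatResult());
                }

                @Override
                public void onError(ChatRequest request, Throwable error) {
                    error.printStackTrace();
                }
            });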
-
embedText
Future<EmbedTextResponse> embedText(EmbedTextRequest request, AsyncHandler<EmbedTextRequest,EmbedTextResponse> handler)
Produces embeddings for the inputs. An embedding is a numeric representation of a piece of text. This text can be a phrase, a sentence, or one or more paragraphs. The Generative AI embedding model transforms each phrase, sentence, or paragraph that you input into an array of 1024 numbers. You can use these embeddings to find similarity in your input text, such as finding phrases that are similar in context or category. Embeddings are mostly used for semantic searches, where the search function focuses on the meaning of the text that it's searching through rather than finding results based on keywords.
Parameters:
request - The request object containing the details to send.
handler - The request handler to invoke upon completion; may be null.
Returns:
A Future that can be used to get the response if no AsyncHandler was provided. Note that if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be readable in both places, as the underlying stream may only be consumed once.
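A hedged sketch along the same lines; the model ID and compartment OCID are placeholders, and the result accessors are assumptions based on the SDK's generated model classes:

    import java.util.Arrays;
    import java.util.concurrent.Future;
    import com.oracle.bmc.generativeaiinference.model.EmbedTextDetails;
    import com.oracle.bmc.generativeaiinference.model.OnDemandServingMode;
    import com.oracle.bmc.generativeaiinference.requests.EmbedTextRequest;
    import com.oracle.bmc.generativeaiinference.responses.EmbedTextResponse;

    EmbedTextRequest embedRequest = EmbedTextRequest.builder()
            .embedTextDetails(EmbedTextDetails.builder()
                    .compartmentId("ocid1.compartment.oc1..exampleuniqueID")   // placeholder OCID
                    .servingMode(OnDemandServingMode.builder()
                            .modelId("cohere.embed-english-v3.0")              // placeholder model ID
                            .build())
                    .inputs(Arrays.asList("hello world", "goodbye world"))
                    .build())
            .build();

    // With a null handler, the Future is the only way to obtain the response.
    Future<EmbedTextResponse> embedFuture = client.embedText(embedRequest, null);
    EmbedTextResponse embedResponse = embedFuture.get();   // blocks; throws if the call failed

    // Each input string maps to one embedding vector.
    embedResponse.getEmbedTextResult().getEmbeddings()
            .forEach(vector -> System.out.println("dimensions: " + vector.size()));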
-
generateText
Future<GenerateTextResponse> generateText(GenerateTextRequest request, AsyncHandler<GenerateTextRequest,GenerateTextResponse> handler)
Generates a text response based on the user prompt.
Parameters:
request - The request object containing the details to send.
handler - The request handler to invoke upon completion; may be null.
Returns:
A Future that can be used to get the response if no AsyncHandler was provided. Note that if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be readable in both places, as the underlying stream may only be consumed once.
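A hedged sketch; CohereLlmInferenceRequest is an assumption about which inference-request variant applies to the chosen model, and the IDs are placeholders:

    import java.util.concurrent.Future;
    import com.oracle.bmc.generativeaiinference.model.CohereLlmInferenceRequest;
    import com.oracle.bmc.generativeaiinference.model.GenerateTextDetails;
    import com.oracle.bmc.generativeaiinference.model.OnDemandServingMode;
    import com.oracle.bmc.generativeaiinference.requests.GenerateTextRequest;
    import com.oracle.bmc.generativeaiinference.responses.GenerateTextResponse;

    GenerateTextRequest generateRequest = GenerateTextRequest.builder()
            .generateTextDetails(GenerateTextDetails.builder()
                    .compartmentId("ocid1.compartment.oc1..exampleuniqueID")   // placeholder OCID
                    .servingMode(OnDemandServingMode.builder()
                            .modelId("cohere.command")                         // placeholder model ID
                            .build())
                    .inferenceRequest(CohereLlmInferenceRequest.builder()
                            .prompt("Write a one-sentence tagline for a cloud service.")
                            .maxTokens(50)
                            .build())
                    .build())
            .build();

    Future<GenerateTextResponse> generateFuture = client.generateText(generateRequest, null);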
-
rerankText
Future<RerankTextResponse> rerankText(RerankTextRequest request, AsyncHandler<RerankTextRequest,RerankTextResponse> handler)
Reranks the text responses based on the input documents and a prompt. Rerank assigns an index and a relevance score to each document, indicating which document is most related to the prompt.
Parameters:
request - The request object containing the details to send.
handler - The request handler to invoke upon completion; may be null.
Returns:
A Future that can be used to get the response if no AsyncHandler was provided. Note that if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be readable in both places, as the underlying stream may only be consumed once.
-
summarizeText
Future<SummarizeTextResponse> summarizeText(SummarizeTextRequest request, AsyncHandler<SummarizeTextRequest,SummarizeTextResponse> handler)
Summarizes the input text.
Parameters:
request - The request object containing the details to send.
handler - The request handler to invoke upon completion; may be null.
Returns:
A Future that can be used to get the response if no AsyncHandler was provided. Note that if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be readable in both places, as the underlying stream may only be consumed once.
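A hedged sketch with placeholder IDs; the SummarizeTextDetails field names are assumptions based on the SDK's model package:

    import java.util.concurrent.Future;
    import com.oracle.bmc.generativeaiinference.model.OnDemandServingMode;
    import com.oracle.bmc.generativeaiinference.model.SummarizeTextDetails;
    import com.oracle.bmc.generativeaiinference.requests.SummarizeTextRequest;
    import com.oracle.bmc.generativeaiinference.responses.SummarizeTextResponse;

    SummarizeTextRequest summarizeRequest = SummarizeTextRequest.builder()
            .summarizeTextDetails(SummarizeTextDetails.builder()
                    .compartmentId("ocid1.compartment.oc1..exampleuniqueID")   // placeholder OCID
                    .servingMode(OnDemandServingMode.builder()
                            .modelId("cohere.command")                         // placeholder model ID
                            .build())
                    .input("<the long text to summarize>")
                    .build())
            .build();

    Future<SummarizeTextResponse> summarizeFuture = client.summarizeText(summarizeRequest, null);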
-