GenerativeModel


class GenerativeModel : AutoCloseable


A facilitator for a given system model.

Summary

Public constructors

GenerativeModel(
    generationConfig: GenerationConfig,
    downloadConfig: DownloadConfig
)

Public functions

open Unit
close()

Closes the client and releases its resources.

suspend GenerateContentResponse
generateContent(vararg prompt: Content)

Generates a response from the system model with the provided Contents.

suspend GenerateContentResponse
generateContent(prompt: String)

Generates a response from the system model with the provided text, represented as a single Content.

Flow<GenerateContentResponse>
generateContentStream(vararg prompt: Content)

Generates a streaming response from the system model with the provided Contents.

Flow<GenerateContentResponse>
generateContentStream(prompt: String)

Generates a streaming response from the system model with the provided text, represented as a single Content.

suspend Unit
prepareInferenceEngine()

Prepares the inference engine in advance, moving start-up overhead out of inference.

Public properties

DownloadConfig
downloadConfig

The config for system model downloading.

GenerationConfig
generationConfig

Configuration parameters to use for content generation.

Public constructors

GenerativeModel

GenerativeModel(
    generationConfig: GenerationConfig,
    downloadConfig: DownloadConfig = DownloadConfig()
)
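
A minimal construction sketch. It assumes a GenerationConfig value (here myGenerationConfig) has already been built with the SDK's configuration builder; the downloadConfig argument can be omitted to accept the DownloadConfig() default shown above.

val model = GenerativeModel(
    generationConfig = myGenerationConfig, // assumed to be built elsewhere with the SDK's GenerationConfig builder
    downloadConfig = DownloadConfig()      // optional; defaults to DownloadConfig()
)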

Public functions

close

open fun close(): Unit

Closes the client and releases its resources.
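
Because GenerativeModel implements AutoCloseable, a simple pattern is to release the client in a finally block once inference is done (a sketch; model is assumed to be an existing GenerativeModel instance):

try {
    // ... run generateContent / generateContentStream calls with model ...
} finally {
    model.close() // releases the underlying client resources
}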

generateContent

suspend fun generateContent(vararg prompt: Content): GenerateContentResponse

Generates a response from the system model with the provided Contents.

Parameters
vararg prompt: Content

A group of Contents to send to the model.

Returns
GenerateContentResponse

A GenerateContentResponse after some delay. This function should be called from a suspend context to properly manage concurrency.
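
A hedged sketch of calling this overload from a coroutine. The content { text(...) } builder and the response.text accessor are assumptions about the surrounding SDK; substitute the Content factory and response accessors your version actually provides. SDK imports are omitted.

suspend fun summarize(model: GenerativeModel, paragraph: String): String? {
    val prompt = content { text("Summarize in one sentence: $paragraph") } // assumed Content builder
    val response = model.generateContent(prompt)                           // suspends until the response is ready
    return response.text                                                   // assumed convenience accessor
}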

generateContent

suspend fun generateContent(prompt: String): GenerateContentResponse

Generates a response from the system model with the provided text, represented as a single Content.

Parameters
prompt: String

The text to be converted into a single piece of Content to send to the model.

Returns
GenerateContentResponse

A GenerateContentResponse after some delay. This function should be called from a suspend context to properly manage concurrency.
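
The text overload skips manual Content construction; a minimal sketch, assuming it is called from an existing suspend context:

suspend fun generateHaiku(model: GenerativeModel): GenerateContentResponse =
    model.generateContent("Write a haiku about rain.") // the String is wrapped into a single Content for the model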

generateContentStream

fun generateContentStream(vararg prompt: Content): Flow<GenerateContentResponse>

Generates a streaming response from the system model with the provided Contents.

Parameters
vararg prompt: Content

A group of Contents to send to the model.

Returns
Flow<GenerateContentResponse>

A Flow which will emit responses as they are returned from the model.
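
A sketch of consuming the returned Flow; collection must happen in a suspend context, and each emission is a partial GenerateContentResponse (imports from kotlinx.coroutines.flow and the SDK are omitted):

suspend fun streamFromContents(model: GenerativeModel, vararg prompt: Content) {
    model.generateContentStream(*prompt).collect { chunk ->
        println(chunk) // handle each partial response as it arrives
    }
}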

generateContentStream

fun generateContentStream(prompt: String): Flow<GenerateContentResponse>

Generates a streaming response from the system model with the provided text, represented as a single Content.

Parameters
prompt: String

The text to be converted into a single piece of Content to send to the model.

Returns
Flow<GenerateContentResponse>

A Flow which will emit responses as they are returned from the model.
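
The same collection pattern works for the text overload; a sketch, again assuming a suspend context:

suspend fun streamHaiku(model: GenerativeModel) {
    model.generateContentStream("Write a haiku about rain.")
        .collect { chunk -> println(chunk) } // emissions arrive as the model produces them
}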

prepareInferenceEngine

suspend fun prepareInferenceEngine(): Unit

Prepares the inference engine in advance, moving start-up overhead out of inference. Calling this method is strictly optional, but calling it well before the first inference is recommended to minimize the latency of the first inference call.
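
A typical pattern is to warm the engine up during app or screen initialization, well before the first prompt; a sketch:

suspend fun warmUp(model: GenerativeModel) {
    model.prepareInferenceEngine() // optional; moves engine start-up cost out of the first generateContent call
}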

Public properties

downloadConfig

val downloadConfig: DownloadConfig

The config for system model downloading.

generationConfig

val generationConfig: GenerationConfig

Configuration parameters to use for content generation.