public static interface RagFileParsingConfig.LlmParserOrBuilder
extends com.google.protobuf.MessageOrBuilder
| Modifier and Type | Method and Description |
|---|---|
| `String` | `getCustomParsingPrompt()`<br>The prompt to use for parsing. |
| `com.google.protobuf.ByteString` | `getCustomParsingPromptBytes()`<br>The prompt to use for parsing. |
| `int` | `getMaxParsingRequestsPerMin()`<br>The maximum number of requests the job is allowed to make to the LLM model per minute. |
| `String` | `getModelName()`<br>The name of an LLM model used for parsing. |
| `com.google.protobuf.ByteString` | `getModelNameBytes()`<br>The name of an LLM model used for parsing. |
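As a sketch of how these accessors are typically used: the `LlmParser` message also has the standard protobuf-generated builder with matching setters (`setModelName`, `setMaxParsingRequestsPerMin`, `setCustomParsingPrompt`). The package (`com.google.cloud.aiplatform.v1beta1` vs. `v1`), the project ID, and the model resource name below are placeholder assumptions, not values from this page.

```java
import com.google.cloud.aiplatform.v1beta1.RagFileParsingConfig;

public class LlmParserExample {
    public static void main(String[] args) {
        // Build an LlmParser config; the setters are the protobuf-generated
        // counterparts of the getters documented on this page.
        RagFileParsingConfig.LlmParser parser =
            RagFileParsingConfig.LlmParser.newBuilder()
                // Placeholder resource name following the documented format:
                // projects/{project_id}/locations/{location}/publishers/{publisher}/models/{model}
                .setModelName(
                    "projects/my-project/locations/us-central1/publishers/google/models/my-model")
                // Stay under your quota; defaults to 5000 QPM if unspecified.
                .setMaxParsingRequestsPerMin(1000)
                // Optional; a default prompt is used when this is not set.
                .setCustomParsingPrompt("Extract all tables and headings as Markdown.")
                .build();

        // Read the values back through the OrBuilder accessors.
        System.out.println(parser.getModelName());
        System.out.println(parser.getMaxParsingRequestsPerMin());
    }
}
```

The bytes variants (`getModelNameBytes()`, `getCustomParsingPromptBytes()`) return the same values as UTF-8 `ByteString`s, which is the usual protobuf convention for `string` fields.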
Methods inherited from interface `com.google.protobuf.MessageOrBuilder`:
`findInitializationErrors`, `getAllFields`, `getDefaultInstanceForType`, `getDescriptorForType`, `getField`, `getInitializationErrorString`, `getOneofFieldDescriptor`, `getRepeatedField`, `getRepeatedFieldCount`, `getUnknownFields`, `hasField`, `hasOneof`

`String getModelName()`

The name of an LLM model used for parsing.
Format:
* `projects/{project_id}/locations/{location}/publishers/{publisher}/models/{model}`

`string model_name = 1;`

`com.google.protobuf.ByteString getModelNameBytes()`

The name of an LLM model used for parsing.
Format:
* `projects/{project_id}/locations/{location}/publishers/{publisher}/models/{model}`

`string model_name = 1;`

`int getMaxParsingRequestsPerMin()`

The maximum number of requests the job is allowed to make to the LLM model per minute. Consult https://cloud.google.com/vertex-ai/generative-ai/docs/quotas and your document size to set an appropriate value here. If unspecified, a default value of 5000 QPM will be used.

`int32 max_parsing_requests_per_min = 2;`

`String getCustomParsingPrompt()`

The prompt to use for parsing. If not specified, a default prompt will be used.

`string custom_parsing_prompt = 3;`

`com.google.protobuf.ByteString getCustomParsingPromptBytes()`

The prompt to use for parsing. If not specified, a default prompt will be used.

`string custom_parsing_prompt = 3;`

Copyright © 2025 Google LLC. All rights reserved.