public interface ImportRagFilesConfigOrBuilder
extends com.google.protobuf.MessageOrBuilder
| Modifier and Type | Method and Description |
|---|---|
| GcsSource | getGcsSource() Google Cloud Storage location. |
| GcsSourceOrBuilder | getGcsSourceOrBuilder() Google Cloud Storage location. |
| GoogleDriveSource | getGoogleDriveSource() Google Drive location. |
| GoogleDriveSourceOrBuilder | getGoogleDriveSourceOrBuilder() Google Drive location. |
| BigQueryDestination | getImportResultBigquerySink() The BigQuery destination to write the import result to. |
| BigQueryDestinationOrBuilder | getImportResultBigquerySinkOrBuilder() The BigQuery destination to write the import result to. |
| GcsDestination | getImportResultGcsSink() The Cloud Storage path to write the import result to. |
| GcsDestinationOrBuilder | getImportResultGcsSinkOrBuilder() The Cloud Storage path to write the import result to. |
| ImportRagFilesConfig.ImportResultSinkCase | getImportResultSinkCase() |
| ImportRagFilesConfig.ImportSourceCase | getImportSourceCase() |
| JiraSource | getJiraSource() Jira queries with their corresponding authentication. |
| JiraSourceOrBuilder | getJiraSourceOrBuilder() Jira queries with their corresponding authentication. |
| int | getMaxEmbeddingRequestsPerMin() Optional. |
| BigQueryDestination | getPartialFailureBigquerySink() Deprecated. google.cloud.aiplatform.v1.ImportRagFilesConfig.partial_failure_bigquery_sink is deprecated. See google/cloud/aiplatform/v1/vertex_rag_data.proto;l=422 |
| BigQueryDestinationOrBuilder | getPartialFailureBigquerySinkOrBuilder() Deprecated. |
| GcsDestination | getPartialFailureGcsSink() Deprecated. google.cloud.aiplatform.v1.ImportRagFilesConfig.partial_failure_gcs_sink is deprecated. See google/cloud/aiplatform/v1/vertex_rag_data.proto;l=413 |
| GcsDestinationOrBuilder | getPartialFailureGcsSinkOrBuilder() Deprecated. |
| ImportRagFilesConfig.PartialFailureSinkCase | getPartialFailureSinkCase() |
| RagFileParsingConfig | getRagFileParsingConfig() Optional. |
| RagFileParsingConfigOrBuilder | getRagFileParsingConfigOrBuilder() Optional. |
| RagFileTransformationConfig | getRagFileTransformationConfig() Specifies the transformation config for RagFiles. |
| RagFileTransformationConfigOrBuilder | getRagFileTransformationConfigOrBuilder() Specifies the transformation config for RagFiles. |
| SharePointSources | getSharePointSources() SharePoint sources. |
| SharePointSourcesOrBuilder | getSharePointSourcesOrBuilder() SharePoint sources. |
| SlackSource | getSlackSource() Slack channels with their corresponding access tokens. |
| SlackSourceOrBuilder | getSlackSourceOrBuilder() Slack channels with their corresponding access tokens. |
| boolean | hasGcsSource() Google Cloud Storage location. |
| boolean | hasGoogleDriveSource() Google Drive location. |
| boolean | hasImportResultBigquerySink() The BigQuery destination to write the import result to. |
| boolean | hasImportResultGcsSink() The Cloud Storage path to write the import result to. |
| boolean | hasJiraSource() Jira queries with their corresponding authentication. |
| boolean | hasPartialFailureBigquerySink() Deprecated. google.cloud.aiplatform.v1.ImportRagFilesConfig.partial_failure_bigquery_sink is deprecated. See google/cloud/aiplatform/v1/vertex_rag_data.proto;l=422 |
| boolean | hasPartialFailureGcsSink() Deprecated. google.cloud.aiplatform.v1.ImportRagFilesConfig.partial_failure_gcs_sink is deprecated. See google/cloud/aiplatform/v1/vertex_rag_data.proto;l=413 |
| boolean | hasRagFileParsingConfig() Optional. |
| boolean | hasRagFileTransformationConfig() Specifies the transformation config for RagFiles. |
| boolean | hasSharePointSources() SharePoint sources. |
| boolean | hasSlackSource() Slack channels with their corresponding access tokens. |
Methods inherited from interface com.google.protobuf.MessageOrBuilder: findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof

boolean hasGcsSource()

Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats: - `gs://bucket_name/my_directory/object_name/my_file.txt` - `gs://bucket_name/my_directory`

.google.cloud.aiplatform.v1.GcsSource gcs_source = 2;

GcsSource getGcsSource()

Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats: - `gs://bucket_name/my_directory/object_name/my_file.txt` - `gs://bucket_name/my_directory`

.google.cloud.aiplatform.v1.GcsSource gcs_source = 2;

GcsSourceOrBuilder getGcsSourceOrBuilder()

Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats: - `gs://bucket_name/my_directory/object_name/my_file.txt` - `gs://bucket_name/my_directory`

.google.cloud.aiplatform.v1.GcsSource gcs_source = 2;

boolean hasGoogleDriveSource()
Google Drive location. Supports importing individual files as well as Google Drive folders.

.google.cloud.aiplatform.v1.GoogleDriveSource google_drive_source = 3;

GoogleDriveSource getGoogleDriveSource()

Google Drive location. Supports importing individual files as well as Google Drive folders.

.google.cloud.aiplatform.v1.GoogleDriveSource google_drive_source = 3;

GoogleDriveSourceOrBuilder getGoogleDriveSourceOrBuilder()

Google Drive location. Supports importing individual files as well as Google Drive folders.

.google.cloud.aiplatform.v1.GoogleDriveSource google_drive_source = 3;

boolean hasSlackSource()

Slack channels with their corresponding access tokens.

.google.cloud.aiplatform.v1.SlackSource slack_source = 6;

SlackSource getSlackSource()

Slack channels with their corresponding access tokens.

.google.cloud.aiplatform.v1.SlackSource slack_source = 6;

SlackSourceOrBuilder getSlackSourceOrBuilder()

Slack channels with their corresponding access tokens.

.google.cloud.aiplatform.v1.SlackSource slack_source = 6;

boolean hasJiraSource()

Jira queries with their corresponding authentication.

.google.cloud.aiplatform.v1.JiraSource jira_source = 7;

JiraSource getJiraSource()

Jira queries with their corresponding authentication.

.google.cloud.aiplatform.v1.JiraSource jira_source = 7;

JiraSourceOrBuilder getJiraSourceOrBuilder()

Jira queries with their corresponding authentication.

.google.cloud.aiplatform.v1.JiraSource jira_source = 7;

boolean hasSharePointSources()

SharePoint sources.

.google.cloud.aiplatform.v1.SharePointSources share_point_sources = 13;

SharePointSources getSharePointSources()

SharePoint sources.

.google.cloud.aiplatform.v1.SharePointSources share_point_sources = 13;

SharePointSourcesOrBuilder getSharePointSourcesOrBuilder()

SharePoint sources.

.google.cloud.aiplatform.v1.SharePointSources share_point_sources = 13;

@Deprecated boolean hasPartialFailureGcsSink()
The Cloud Storage path to write partial failures to. Deprecated. Prefer to use `import_result_gcs_sink`.
.google.cloud.aiplatform.v1.GcsDestination partial_failure_gcs_sink = 11 [deprecated = true];
@Deprecated GcsDestination getPartialFailureGcsSink()
The Cloud Storage path to write partial failures to. Deprecated. Prefer to use `import_result_gcs_sink`.
.google.cloud.aiplatform.v1.GcsDestination partial_failure_gcs_sink = 11 [deprecated = true];
@Deprecated GcsDestinationOrBuilder getPartialFailureGcsSinkOrBuilder()
The Cloud Storage path to write partial failures to. Deprecated. Prefer to use `import_result_gcs_sink`.
.google.cloud.aiplatform.v1.GcsDestination partial_failure_gcs_sink = 11 [deprecated = true];
@Deprecated boolean hasPartialFailureBigquerySink()

The BigQuery destination to write partial failures to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table. Deprecated. Prefer to use `import_result_bq_sink`.

.google.cloud.aiplatform.v1.BigQueryDestination partial_failure_bigquery_sink = 12 [deprecated = true];

@Deprecated BigQueryDestination getPartialFailureBigquerySink()

The BigQuery destination to write partial failures to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table. Deprecated. Prefer to use `import_result_bq_sink`.

.google.cloud.aiplatform.v1.BigQueryDestination partial_failure_bigquery_sink = 12 [deprecated = true];

@Deprecated BigQueryDestinationOrBuilder getPartialFailureBigquerySinkOrBuilder()

The BigQuery destination to write partial failures to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table. Deprecated. Prefer to use `import_result_bq_sink`.

.google.cloud.aiplatform.v1.BigQueryDestination partial_failure_bigquery_sink = 12 [deprecated = true];
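The import sources and result sinks are branches of oneofs on the enclosing ImportRagFilesConfig message, and new code should prefer the import result sinks over the deprecated partial-failure sinks. As a minimal sketch of assembling such a config, assuming the standard protobuf-generated Builder API from com.google.cloud.aiplatform.v1 (the bucket and prefix names are placeholders, not values from this reference):

```java
import com.google.cloud.aiplatform.v1.GcsDestination;
import com.google.cloud.aiplatform.v1.GcsSource;
import com.google.cloud.aiplatform.v1.ImportRagFilesConfig;

public class ImportConfigSketch {

  // Builds a config that imports an entire Cloud Storage directory and
  // writes the import result under a Cloud Storage prefix.
  public static ImportRagFilesConfig build() {
    return ImportRagFilesConfig.newBuilder()
        // import_source oneof: the Cloud Storage branch (gcs_source = 2)
        .setGcsSource(GcsSource.newBuilder().addUris("gs://my-bucket/my_directory"))
        // import_result_sink oneof: the Cloud Storage branch (import_result_gcs_sink = 14)
        .setImportResultGcsSink(
            GcsDestination.newBuilder().setOutputUriPrefix("gs://my-bucket/import_results/"))
        .build();
  }

  public static void main(String[] args) {
    ImportRagFilesConfig config = build();
    System.out.println(config.hasGcsSource());           // true
    System.out.println(config.hasImportResultGcsSink()); // true
  }
}
```

Because the sources form a oneof, setting one branch clears any previously set branch: calling setGoogleDriveSource on the same builder would replace the GcsSource.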
boolean hasImportResultGcsSink()

The Cloud Storage path to write the import result to.

.google.cloud.aiplatform.v1.GcsDestination import_result_gcs_sink = 14;

GcsDestination getImportResultGcsSink()

The Cloud Storage path to write the import result to.

.google.cloud.aiplatform.v1.GcsDestination import_result_gcs_sink = 14;

GcsDestinationOrBuilder getImportResultGcsSinkOrBuilder()

The Cloud Storage path to write the import result to.

.google.cloud.aiplatform.v1.GcsDestination import_result_gcs_sink = 14;

boolean hasImportResultBigquerySink()

The BigQuery destination to write the import result to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table.

.google.cloud.aiplatform.v1.BigQueryDestination import_result_bigquery_sink = 15;

BigQueryDestination getImportResultBigquerySink()

The BigQuery destination to write the import result to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table.

.google.cloud.aiplatform.v1.BigQueryDestination import_result_bigquery_sink = 15;

BigQueryDestinationOrBuilder getImportResultBigquerySinkOrBuilder()

The BigQuery destination to write the import result to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table.

.google.cloud.aiplatform.v1.BigQueryDestination import_result_bigquery_sink = 15;

boolean hasRagFileTransformationConfig()
Specifies the transformation config for RagFiles.
.google.cloud.aiplatform.v1.RagFileTransformationConfig rag_file_transformation_config = 16;
RagFileTransformationConfig getRagFileTransformationConfig()
Specifies the transformation config for RagFiles.
.google.cloud.aiplatform.v1.RagFileTransformationConfig rag_file_transformation_config = 16;
RagFileTransformationConfigOrBuilder getRagFileTransformationConfigOrBuilder()
Specifies the transformation config for RagFiles.
.google.cloud.aiplatform.v1.RagFileTransformationConfig rag_file_transformation_config = 16;
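A sketch of populating the transformation config above, assuming the v1 RagFileChunkingConfig message and its FixedLengthChunking option; the chunk size and overlap values are illustrative, not recommendations from this reference:

```java
import com.google.cloud.aiplatform.v1.RagFileChunkingConfig;
import com.google.cloud.aiplatform.v1.RagFileTransformationConfig;

public class TransformationConfigSketch {

  // Builds a transformation config that chunks each RagFile into
  // 512-token pieces with a 64-token overlap between neighbors.
  public static RagFileTransformationConfig build() {
    return RagFileTransformationConfig.newBuilder()
        .setRagFileChunkingConfig(
            RagFileChunkingConfig.newBuilder()
                .setFixedLengthChunking(
                    RagFileChunkingConfig.FixedLengthChunking.newBuilder()
                        .setChunkSize(512)
                        .setChunkOverlap(64)))
        .build();
  }

  public static void main(String[] args) {
    RagFileTransformationConfig t = build();
    System.out.println(
        t.getRagFileChunkingConfig().getFixedLengthChunking().getChunkSize()); // 512
  }
}
```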
boolean hasRagFileParsingConfig()
Optional. Specifies the parsing config for RagFiles. RAG will use the default parser if this field is not set.
.google.cloud.aiplatform.v1.RagFileParsingConfig rag_file_parsing_config = 8 [(.google.api.field_behavior) = OPTIONAL];
RagFileParsingConfig getRagFileParsingConfig()
Optional. Specifies the parsing config for RagFiles. RAG will use the default parser if this field is not set.
.google.cloud.aiplatform.v1.RagFileParsingConfig rag_file_parsing_config = 8 [(.google.api.field_behavior) = OPTIONAL];
RagFileParsingConfigOrBuilder getRagFileParsingConfigOrBuilder()
Optional. Specifies the parsing config for RagFiles. RAG will use the default parser if this field is not set.
.google.cloud.aiplatform.v1.RagFileParsingConfig rag_file_parsing_config = 8 [(.google.api.field_behavior) = OPTIONAL];
int getMaxEmbeddingRequestsPerMin()

Optional. The maximum number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and is not shared across other import jobs. Consult the Quotas page of the project to set an appropriate value. If unspecified, a default value of 1,000 QPM is used.

int32 max_embedding_requests_per_min = 5 [(.google.api.field_behavior) = OPTIONAL];
ImportRagFilesConfig.ImportSourceCase getImportSourceCase()

Returns which branch of the import_source oneof is set.

ImportRagFilesConfig.PartialFailureSinkCase getPartialFailureSinkCase()

Returns which branch of the partial_failure_sink oneof is set.

ImportRagFilesConfig.ImportResultSinkCase getImportResultSinkCase()

Returns which branch of the import_result_sink oneof is set.
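The case getters let callers dispatch on the set branch of a oneof without probing every has* method. A sketch for the import source, assuming the standard constants of the protobuf-generated ImportSourceCase enum:

```java
import com.google.cloud.aiplatform.v1.GcsSource;
import com.google.cloud.aiplatform.v1.ImportRagFilesConfig;

public class SourceCaseSketch {

  // Maps the set import_source branch to a human-readable label.
  static String describe(ImportRagFilesConfig config) {
    switch (config.getImportSourceCase()) {
      case GCS_SOURCE:          return "Cloud Storage";
      case GOOGLE_DRIVE_SOURCE: return "Google Drive";
      case SLACK_SOURCE:        return "Slack";
      case JIRA_SOURCE:         return "Jira";
      case SHARE_POINT_SOURCES: return "SharePoint";
      case IMPORTSOURCE_NOT_SET:
      default:                  return "no source set";
    }
  }

  public static void main(String[] args) {
    ImportRagFilesConfig config = ImportRagFilesConfig.newBuilder()
        .setGcsSource(GcsSource.newBuilder().addUris("gs://bucket/dir"))
        .build();
    System.out.println(describe(config)); // Cloud Storage
  }
}
```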
Copyright © 2025 Google LLC. All rights reserved.