Class BigQueryIO.Write<T>

  • All Implemented Interfaces:
    java.io.Serializable, org.apache.beam.sdk.transforms.display.HasDisplayData
    Enclosing class:
    BigQueryIO

    public abstract static class BigQueryIO.Write<T>
    extends org.apache.beam.sdk.transforms.PTransform<org.apache.beam.sdk.values.PCollection<T>,​WriteResult>
    Implementation of BigQueryIO.write().
    See Also:
    Serialized Form
    • Constructor Detail

      • Write

        public Write()
    • Method Detail

      • to

        public BigQueryIO.Write<T> to​(com.google.api.services.bigquery.model.TableReference table)
        Writes to the given table, specified as a TableReference.
      • to

        public BigQueryIO.Write<T> to​(org.apache.beam.sdk.options.ValueProvider<java.lang.String> tableSpec)
        Same as to(String), but with a ValueProvider.
      • to

        public BigQueryIO.Write<T> to​(org.apache.beam.sdk.transforms.SerializableFunction<org.apache.beam.sdk.values.ValueInSingleWindow<T>,​TableDestination> tableFunction)
        Writes to the table computed by the given table function. The table is a function of ValueInSingleWindow, so it can be determined by the element's value or by its window.

        If the function produces destinations configured with clustering fields, ensure that withClustering() is also set so that the clustering configurations get properly encoded and decoded.
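        A minimal sketch of a per-window table function. The project, dataset, and table names below are placeholders, not taken from this page:

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;

// Route each element to a daily table derived from its window.
// "my_project:my_dataset" is a placeholder tablespec.
BigQueryIO.Write<TableRow> write =
    BigQueryIO.<TableRow>write()
        .to(value -> {
          // maxTimestamp() yields an ISO-8601 instant; keep the date part
          // and strip dashes, since BigQuery table names disallow them.
          String day = value.getWindow().maxTimestamp()
              .toString().substring(0, 10).replace("-", "");
          return new TableDestination(
              "my_project:my_dataset.events_" + day,  // daily table
              "Events for " + day);                   // table description
        });
```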

      • withFormatFunction

        public BigQueryIO.Write<T> withFormatFunction​(org.apache.beam.sdk.transforms.SerializableFunction<T,​com.google.api.services.bigquery.model.TableRow> formatFunction)
        Formats the user's type into a TableRow to be written to BigQuery.
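        A sketch assuming a hypothetical user type Quote with getSource() and getQuote() accessors:

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

// Convert each Quote (a hypothetical user type) into a TableRow whose
// field names match the destination table's schema.
BigQueryIO.Write<Quote> write =
    BigQueryIO.<Quote>write()
        .to("my_project:my_dataset.quotes")  // placeholder tablespec
        .withFormatFunction(q -> new TableRow()
            .set("source", q.getSource())
            .set("quote", q.getQuote()));
```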
      • withAvroSchemaFactory

        public BigQueryIO.Write<T> withAvroSchemaFactory​(org.apache.beam.sdk.transforms.SerializableFunction<@Nullable com.google.api.services.bigquery.model.TableSchema,​org.apache.avro.Schema> avroSchemaFactory)
        Uses the specified function to convert a TableSchema to a Schema.

        If not specified, the TableSchema will automatically be converted to an Avro schema.

      • withSchema

        public BigQueryIO.Write<T> withSchema​(org.apache.beam.sdk.options.ValueProvider<com.google.api.services.bigquery.model.TableSchema> schema)
        Same as withSchema(TableSchema) but using a deferred ValueProvider.
      • withJsonSchema

        public BigQueryIO.Write<T> withJsonSchema​(org.apache.beam.sdk.options.ValueProvider<java.lang.String> jsonSchema)
        Same as withJsonSchema(String) but using a deferred ValueProvider.
      • withSchemaFromView

        public BigQueryIO.Write<T> withSchemaFromView​(org.apache.beam.sdk.values.PCollectionView<java.util.Map<java.lang.String,​java.lang.String>> view)
        Allows the schemas for each table to be computed within the pipeline itself.

        The input is a map-valued PCollectionView mapping string tablespecs to JSON-formatted TableSchemas. Tablespecs must be in the same format as taken by to(String).

      • withTimePartitioning

        public BigQueryIO.Write<T> withTimePartitioning​(com.google.api.services.bigquery.model.TimePartitioning partitioning)
        Allows newly created tables to include a TimePartitioning configuration. Can only be used when writing to a single table. If to(SerializableFunction) or to(DynamicDestinations) is used to write to dynamic tables, time partitioning can be set directly on the returned TableDestination.
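        A sketch of a single-table write with daily partitioning. The tablespec and the "event_ts" column are placeholders:

```java
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TimePartitioning;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

// Partition newly created tables by day on a hypothetical
// TIMESTAMP column named "event_ts".
BigQueryIO.Write<TableRow> write =
    BigQueryIO.<TableRow>write()
        .to("my_project:my_dataset.events")
        .withTimePartitioning(new TimePartitioning()
            .setType("DAY")
            .setField("event_ts"));
```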
      • withJsonClustering

        public BigQueryIO.Write<T> withJsonClustering​(org.apache.beam.sdk.options.ValueProvider<java.lang.String> jsonClustering)
        The same as withClustering(Clustering), but takes a JSON-serialized Clustering object in a deferred ValueProvider. For example: `{"fields": ["column1", "column2", "column3"]}`
      • withClustering

        public BigQueryIO.Write<T> withClustering()
        Allows writing to clustered tables when to(SerializableFunction) or to(DynamicDestinations) is used. The returned TableDestination objects should specify the clustering fields per table. If writing to a single table, use withClustering(Clustering) instead to pass a Clustering instance that specifies the static clustering fields to use.

        Setting this option enables use of TableDestinationCoderV3 which encodes clustering information. The updated coder is compatible with non-clustered tables, so can be freely set for newly deployed pipelines, but note that pipelines using an older coder must be drained before setting this option, since TableDestinationCoderV3 will not be able to read state written with a previous version.
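        A sketch of both cases described above; table and field names are placeholders:

```java
import com.google.api.services.bigquery.model.Clustering;
import com.google.api.services.bigquery.model.TableRow;
import java.util.Arrays;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;

// Single table: pass the static clustering fields directly.
BigQueryIO.Write<TableRow> singleTable =
    BigQueryIO.<TableRow>write()
        .to("my_project:my_dataset.events")
        .withClustering(new Clustering()
            .setFields(Arrays.asList("customer_id", "country")));

// Dynamic destinations: specify clustering per TableDestination and
// call withClustering() so the clustering-aware coder is used.
BigQueryIO.Write<TableRow> dynamic =
    BigQueryIO.<TableRow>write()
        .to(v -> new TableDestination(
            "my_project:my_dataset.events",
            null,  // no table description
            null,  // no time partitioning
            new Clustering().setFields(Arrays.asList("customer_id"))))
        .withClustering();
```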

      • withTableDescription

        public BigQueryIO.Write<T> withTableDescription​(java.lang.String tableDescription)
        Specifies the table description.
      • withBigLakeConfiguration

        public BigQueryIO.Write<T> withBigLakeConfiguration​(java.util.Map<java.lang.String,​java.lang.String> bigLakeConfiguration)
        Specifies a configuration to create BigLake tables. The following options are available:
        • connectionId (REQUIRED): the name of your cloud resource connection.
        • storageUri (REQUIRED): the path to the GCS folder where data will be written. This sink will create sub-folders for each project, dataset, and table destination. For example, with a storageUri of "gs://foo/bar" and a destination table "my_project.my_dataset.my_table", data will be written under "gs://foo/bar/my_project/my_dataset/my_table/"
        • fileFormat (OPTIONAL): defaults to "parquet"
        • tableFormat (OPTIONAL): defaults to "iceberg"

        NOTE: This is only supported with the Storage Write API methods.

        See Also:
        BigQuery Tables for Apache Iceberg documentation
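        A sketch combining the required options above. The connection name and bucket path are placeholders:

```java
import com.google.api.services.bigquery.model.TableRow;
import com.google.common.collect.ImmutableMap;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

// BigLake tables require a Storage Write API method.
BigQueryIO.Write<TableRow> write =
    BigQueryIO.<TableRow>write()
        .to("my_project.my_dataset.my_table")
        .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
        .withBigLakeConfiguration(ImmutableMap.of(
            "connectionId", "my-project.us.my-connection",  // placeholder
            "storageUri", "gs://foo/bar"));                 // placeholder
```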
      • withFailedInsertRetryPolicy

        public BigQueryIO.Write<T> withFailedInsertRetryPolicy​(InsertRetryPolicy retryPolicy)
        Specifies a policy for handling failed inserts.

        Currently this is only allowed when writing an unbounded collection to BigQuery. Bounded collections are written using batch load jobs, which do not produce per-element failures; unbounded collections are written using streaming inserts, which provide per-element insert results.
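        A sketch for an unbounded collection written via streaming inserts, retrying only failures that BigQuery marks as transient. The tablespec is a placeholder:

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;

// Retry transient insert errors; permanently failing rows are
// reported via WriteResult rather than retried forever.
BigQueryIO.Write<TableRow> write =
    BigQueryIO.<TableRow>write()
        .to("my_project:my_dataset.events")
        .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
        .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors());
```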

      • withoutValidation

        public BigQueryIO.Write<T> withoutValidation()
        Disables BigQuery table validation.
      • withDirectWriteProtos

        public BigQueryIO.Write<T> withDirectWriteProtos​(boolean directWriteProtos)
      • withLoadJobProjectId

        public BigQueryIO.Write<T> withLoadJobProjectId​(java.lang.String loadJobProjectId)
        Set the project the BigQuery load job will be initiated from. This is only applicable when the write method is set to BigQueryIO.Write.Method.FILE_LOADS. If omitted, the project of the destination table is used.
      • withLoadJobProjectId

        public BigQueryIO.Write<T> withLoadJobProjectId​(org.apache.beam.sdk.options.ValueProvider<java.lang.String> loadJobProjectId)
      • withTriggeringFrequency

        public BigQueryIO.Write<T> withTriggeringFrequency​(org.joda.time.Duration triggeringFrequency)
        Choose the frequency at which file writes are triggered.

        This is only applicable when the write method is set to BigQueryIO.Write.Method.FILE_LOADS or BigQueryIO.Write.Method.STORAGE_WRITE_API, and only when writing an unbounded PCollection.

        Every triggeringFrequency duration, a BigQuery load job will be generated for all the data written since the last load job. BigQuery has limits on how many load jobs can be triggered per day, so be careful not to set this duration too low, or you may exceed daily quota. Often this is set to 5 or 10 minutes to ensure that the project stays well under the BigQuery quota. See Quota Policy for more information about BigQuery quotas.
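        A sketch for an unbounded input written with FILE_LOADS, issuing one load job roughly every ten minutes to stay well under the daily load-job quota. The tablespec is a placeholder:

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.joda.time.Duration;

// One load job is generated per ten-minute interval for all data
// written since the previous load job.
BigQueryIO.Write<TableRow> write =
    BigQueryIO.<TableRow>write()
        .to("my_project:my_dataset.events")
        .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
        .withTriggeringFrequency(Duration.standardMinutes(10));
```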

      • withNumStorageWriteApiStreams

        public BigQueryIO.Write<T> withNumStorageWriteApiStreams​(int numStorageWriteApiStreams)
        Control how many parallel streams are used when using Storage API writes.

        For streaming pipelines, this option takes effect only when withTriggeringFrequency(org.joda.time.Duration) is also set. To let the runner determine the sharding at runtime, set this to zero or use withAutoSharding() instead.

        For batch pipelines, this option inserts a redistribute step. To avoid reshuffling and keep the pipeline parallelism as is, set this to zero.

      • withPropagateSuccessfulStorageApiWrites

        public BigQueryIO.Write<T> withPropagateSuccessfulStorageApiWrites​(java.util.function.Predicate<java.lang.String> columnsToPropagate)
        If called, all successful writes are propagated to WriteResult and accessible via the WriteResult.getSuccessfulStorageApiInserts() method. The predicate filters which columns appear in the resulting PCollection; its argument is the name of the field to potentially be included in the output. Nested fields are presented using dot notation, e.g. a.b.c. To include a nested field, the predicate must return true for every parent field as well: to include "a.b.c", it must return true for "a", "a.b", and "a.b.c".

        The predicate will be invoked repeatedly for every field in every message, so it is recommended that it be as lightweight as possible, e.g. looking up fields in a hash table instead of searching a list of field names.
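        The parent-field requirement can be met with a precomputed allow-list; a minimal runnable sketch using hypothetical field names:

```java
import java.util.Set;
import java.util.function.Predicate;

public class ColumnFilterSketch {
  public static void main(String[] args) {
    // To include "a.b.c", the allow-list must also contain every
    // parent field ("a" and "a.b"). Set membership is a constant-time
    // lookup, keeping the per-field cost low as recommended above.
    Set<String> allowed = Set.of("a", "a.b", "a.b.c");
    Predicate<String> columnsToPropagate = allowed::contains;

    System.out.println(columnsToPropagate.test("a.b.c")); // true
    System.out.println(columnsToPropagate.test("a.d"));   // false
  }
}
```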

      • withCustomGcsTempLocation

        public BigQueryIO.Write<T> withCustomGcsTempLocation​(org.apache.beam.sdk.options.ValueProvider<java.lang.String> customGcsTempLocation)
        Provides a custom location on GCS for storing temporary files to be loaded via BigQuery batch load jobs. See "Usage with templates" in BigQueryIO documentation for discussion.
      • skipInvalidRows

        public BigQueryIO.Write<T> skipInvalidRows()
        Insert all valid rows of a request, even if invalid rows exist. This is only applicable when the write method is set to BigQueryIO.Write.Method.STREAMING_INSERTS. The default value is false, which causes the entire request to fail if any invalid rows exist.
      • ignoreUnknownValues

        public BigQueryIO.Write<T> ignoreUnknownValues()
        Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.
      • useAvroLogicalTypes

        public BigQueryIO.Write<T> useAvroLogicalTypes()
        Enables interpreting Avro logical types as their corresponding BigQuery types (e.g. TIMESTAMP) instead of only their raw types (e.g. LONG).
      • ignoreInsertIds

        public BigQueryIO.Write<T> ignoreInsertIds()
        Setting this option to true disables insertId based data deduplication offered by BigQuery. For more information, please see https://cloud.google.com/bigquery/streaming-data-into-bigquery#disabling_best_effort_de-duplication.
      • withPrimaryKey

        public BigQueryIO.Write<T> withPrimaryKey​(java.util.List<java.lang.String> primaryKey)
      • withDefaultMissingValueInterpretation

        public BigQueryIO.Write<T> withDefaultMissingValueInterpretation​(com.google.cloud.bigquery.storage.v1.AppendRowsRequest.MissingValueInterpretation missingValueInterpretation)
        Specify how missing values should be interpreted when there is a default value in the schema. Options are to take the default value or to write an explicit null (not an option if the field is also REQUIRED). Note: this is only used with one of the Storage Write API insert methods.
      • optimizedWrites

        public BigQueryIO.Write<T> optimizedWrites()
        If true, enables new code paths that are expected to use fewer resources while writing to BigQuery. Not enabled by default in order to maintain backwards compatibility.
      • useBeamSchema

        public BigQueryIO.Write<T> useBeamSchema()
        If true, then the BigQuery schema will be inferred from the input schema. If no formatFunction is set, then BigQueryIO will automatically turn the input records into TableRows that match the schema.
      • withMaxRetryJobs

        public BigQueryIO.Write<T> withMaxRetryJobs​(int maxRetryJobs)
        If set, this specifies the maximum number of retries for batch load jobs.
      • withSuccessfulInsertsPropagation

        public BigQueryIO.Write<T> withSuccessfulInsertsPropagation​(boolean propagateSuccessful)
        If true, enables propagation of the successfully inserted TableRows on BigQuery as part of the WriteResult object when using BigQueryIO.Write.Method.STREAMING_INSERTS. By default this property is set to true. If a pipeline will not make use of the insert results, set this property to false so that the pipeline can discard the inserted TableRows and reclaim worker resources.
      • withAutoSchemaUpdate

        public BigQueryIO.Write<T> withAutoSchemaUpdate​(boolean autoSchemaUpdate)
        If true, enables automatically detecting BigQuery table schema updates. Table schema updates are usually noticed within several minutes. Only supported when using one of the STORAGE_API insert methods.
      • withDeterministicRecordIdFn

        public BigQueryIO.Write<T> withDeterministicRecordIdFn​(org.apache.beam.sdk.transforms.SerializableFunction<T,​java.lang.String> toUniqueIdFunction)
      • withMaxFilesPerBundle

        public BigQueryIO.Write<T> withMaxFilesPerBundle​(int maxFilesPerBundle)
        Control how many files will be written concurrently by a single worker when using BigQuery load jobs before spilling to a shuffle. When data comes into this transform, it is written to one file per destination per worker. When there are more files than maxFilesPerBundle (default: 20), the data is shuffled (i.e. grouped by destination) and written one file at a time per worker. This flag sets the maximum number of files that a single worker can write concurrently before shuffling the data. Use this flag with caution: a high value can increase memory pressure on workers, while a low value can slow the pipeline down (due to the need to shuffle data).
      • withMaxFileSize

        public BigQueryIO.Write<T> withMaxFileSize​(long maxFileSize)
        Controls the maximum byte size per file to be loaded into BigQuery. If the amount of data written to one file reaches this threshold, we will close that file and continue writing in a new file.

        The default value (4 TiB) respects BigQuery's maximum number of source URIs per job configuration.

        See Also:
        BigQuery Load Job Limits
      • withMaxFilesPerPartition

        public BigQueryIO.Write<T> withMaxFilesPerPartition​(int maxFilesPerPartition)
        Controls how many files will be assigned to a single BigQuery load job. If the number of files increases past this threshold, we will spill it over into multiple load jobs as necessary.

        The default value (10,000 files) respects BigQuery's maximum number of source URIs per job configuration.

        See Also:
        BigQuery Load Job Limits
      • withMaxBytesPerPartition

        public BigQueryIO.Write<T> withMaxBytesPerPartition​(long maxBytesPerPartition)
        Control how much data will be assigned to a single BigQuery load job. If the amount of data flowing into one BatchLoads partition exceeds this value, that partition will be handled via multiple load jobs.

        The default value (11 TiB) respects BigQuery's maximum size per load job limit and is appropriate for most use cases. Reducing the value of this parameter can improve stability when loading to tables with complex schemas containing thousands of fields.

        See Also:
        BigQuery Load Job Limits
      • withWriteTempDataset

        public BigQueryIO.Write<T> withWriteTempDataset​(java.lang.String writeTempDataset)
        Temporary dataset. When writing to BigQuery with large file loads, BigQueryIO.write() creates temporary tables in a dataset to store staging data from partitions. With this option, you can specify an existing dataset in which to create those temporary tables. BigQueryIO will create temporary tables in that dataset and remove them once they are no longer needed. No other tables in the dataset will be modified. The dataset must already exist, and your job needs permissions to create and remove tables inside that dataset.
      • withErrorHandler

        public BigQueryIO.Write<T> withErrorHandler​(org.apache.beam.sdk.transforms.errorhandling.ErrorHandler<org.apache.beam.sdk.transforms.errorhandling.BadRecord,​?> errorHandler)
      • validate

        public void validate​(org.apache.beam.sdk.options.PipelineOptions pipelineOptions)
        Overrides:
        validate in class org.apache.beam.sdk.transforms.PTransform<org.apache.beam.sdk.values.PCollection<T>,​WriteResult>
      • expand

        public WriteResult expand​(org.apache.beam.sdk.values.PCollection<T> input)
        Specified by:
        expand in class org.apache.beam.sdk.transforms.PTransform<org.apache.beam.sdk.values.PCollection<T>,​WriteResult>
      • populateDisplayData

        public void populateDisplayData​(org.apache.beam.sdk.transforms.display.DisplayData.Builder builder)
        Specified by:
        populateDisplayData in interface org.apache.beam.sdk.transforms.display.HasDisplayData
        Overrides:
        populateDisplayData in class org.apache.beam.sdk.transforms.PTransform<org.apache.beam.sdk.values.PCollection<T>,​WriteResult>
      • getTable

        public @Nullable org.apache.beam.sdk.options.ValueProvider<com.google.api.services.bigquery.model.TableReference> getTable()
        Returns the table reference, or null.