All Classes
| Class | Description |
| ActionFactory |
Factory class for creating instances that will handle different functions of DoFns.
|
| ActionFactory |
Factory class for creating instances that will handle each type of record within a change stream
query.
|
| AddUuidsTransform |
A transform to add UUIDs to each message to be written to Pub/Sub Lite.
|
| AsyncWatermarkCache |
Asynchronously computes the earliest partition watermark and stores it in memory.
|
| AvroGenericRecordToStorageApiProto |
Utility methods for converting Avro GenericRecord objects to dynamic protocol messages,
for use with the Storage Write API.
|
| AvroWriteRequest<T> |
|
| BeamRowToBigtableMutation |
|
| BeamRowToBigtableMutation.ToBigtableRowFn |
|
| BeamRowToStorageApiProto |
Utility methods for converting Beam Row objects to dynamic protocol messages, for use with
the Storage Write API.
|
| BigqueryClient |
A wrapper class for making BigQuery API calls.
|
| BigQueryCoderProviderRegistrar |
A CoderProviderRegistrar for standard types used with BigQueryIO.
|
| BigQueryDirectReadSchemaTransformProvider |
|
| BigQueryDirectReadSchemaTransformProvider.BigQueryDirectReadSchemaTransform |
|
| BigQueryDirectReadSchemaTransformProvider.BigQueryDirectReadSchemaTransformConfiguration |
Configuration for reading from BigQuery with the Storage Read API.
|
| BigQueryDirectReadSchemaTransformProvider.BigQueryDirectReadSchemaTransformConfiguration.Builder |
|
| BigQueryDlqProvider |
|
| BigQueryExportReadSchemaTransformConfiguration |
Configuration for reading from BigQuery.
|
| BigQueryExportReadSchemaTransformConfiguration.Builder |
|
| BigQueryExportReadSchemaTransformProvider |
|
| BigQueryExportReadSchemaTransformProvider.BigQueryExportSchemaTransform |
|
| BigQueryFileLoadsSchemaTransformProvider |
|
| BigQueryFileLoadsSchemaTransformProvider.BigQueryFileLoadsSchemaTransform |
|
| BigQueryHelpers |
A set of helper functions and classes used by BigQueryIO.
|
| BigQueryInsertError |
Model definition for BigQueryInsertError.
|
| BigQueryInsertErrorCoder |
|
| BigQueryIO |
PTransforms for reading and writing BigQuery tables.
|
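As a rough sketch of typical BigQueryIO usage (not part of this index; project and table names are placeholders, and the snippet assumes the Beam Java SDK with the GCP IO module on the classpath plus a configured runner):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;
import com.google.api.services.bigquery.model.TableRow;

public class BigQueryExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Read rows from a source table (placeholder table spec).
    PCollection<TableRow> rows = p.apply(
        BigQueryIO.readTableRows().from("my-project:my_dataset.src_table"));

    // Append them to an existing destination table; CREATE_NEVER avoids
    // having to supply a schema with withSchema(...).
    rows.apply(BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.dst_table")
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    p.run().waitUntilFinish();
  }
}
```

The create/write dispositions shown correspond to the BigQueryIO.Write.CreateDisposition and BigQueryIO.Write.WriteDisposition enums listed below.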
| BigQueryIO.Read |
|
| BigQueryIO.TypedRead<T> |
|
| BigQueryIO.TypedRead.Method |
Determines the method used to read data from BigQuery.
|
| BigQueryIO.TypedRead.QueryPriority |
An enumeration type for the priority of a query.
|
| BigQueryIO.Write<T> |
|
| BigQueryIO.Write.CreateDisposition |
An enumeration type for the BigQuery create disposition strings.
|
| BigQueryIO.Write.Method |
Determines the method used to insert data in BigQuery.
|
| BigQueryIO.Write.SchemaUpdateOption |
An enumeration type for the BigQuery schema update options strings.
|
| BigQueryIO.Write.WriteDisposition |
An enumeration type for the BigQuery write disposition strings.
|
| BigQueryIOTranslation |
|
| BigQueryIOTranslation.ReadRegistrar |
|
| BigQueryIOTranslation.WriteRegistrar |
|
| BigqueryMatcher |
A matcher to verify data in BigQuery by processing given query and comparing with content's
checksum.
|
| BigqueryMatcher.TableAndQuery |
|
| BigQueryOptions |
Properties needed when using Google BigQuery with the Apache Beam SDK.
|
| BigQuerySchemaIOProvider |
An implementation of SchemaIOProvider for reading and writing to BigQuery with BigQueryIO.
|
| BigQuerySchemaRetrievalException |
Exception to signal that BigQuery schema retrieval failed.
|
| BigQuerySchemaTransformTranslation |
|
| BigQuerySchemaTransformTranslation.BigQueryStorageReadSchemaTransformTranslator |
|
| BigQuerySchemaTransformTranslation.BigQueryWriteSchemaTransformTranslator |
|
| BigQuerySchemaTransformTranslation.ReadWriteRegistrar |
|
| BigQueryServices |
An interface for real, mock, or fake implementations of Cloud BigQuery services.
|
| BigQueryServices.BigQueryServerStream<T> |
Container for reading data from streaming endpoints.
|
| BigQueryServices.DatasetService |
An interface to get, create and delete Cloud BigQuery datasets and tables.
|
| BigQueryServices.DatasetService.TableMetadataView |
|
| BigQueryServices.JobService |
An interface for the Cloud BigQuery load service.
|
| BigQueryServices.StorageClient |
An interface representing a client object for making calls to the BigQuery Storage API.
|
| BigQueryServices.StreamAppendClient |
An interface for appending records to a Storage API write stream.
|
| BigQueryServices.WriteStreamService |
An interface to get, create, and flush Cloud BigQuery Storage API write streams.
|
| BigQueryServicesImpl |
An implementation of BigQueryServices that actually communicates with the Cloud BigQuery
service.
|
| BigQueryServicesImpl.DatasetServiceImpl |
|
| BigQueryServicesImpl.WriteStreamServiceImpl |
|
| BigQuerySinkMetrics |
Helper class to create per-worker metrics for BigQuery Sink stages.
|
| BigQuerySinkMetrics.RpcMethod |
|
| BigQueryStorageApiInsertError |
|
| BigQueryStorageApiInsertErrorCoder |
|
| BigQueryStorageTableSource<T> |
A Source representing reading from a table.
|
| BigQueryStorageWriteApiSchemaTransformProvider |
|
| BigQueryStorageWriteApiSchemaTransformProvider.BigQueryStorageWriteApiSchemaTransform |
|
| BigQueryUtils |
Utility methods for BigQuery related operations.
|
| BigQueryUtils.ConversionOptions |
Options for how to convert BigQuery data to Beam data.
|
| BigQueryUtils.ConversionOptions.Builder |
|
| BigQueryUtils.ConversionOptions.TruncateTimestamps |
Controls whether to lossily truncate timestamps to millisecond precision, or to fail when
truncation would occur.
|
| BigQueryUtils.SchemaConversionOptions |
Options for how to convert BigQuery schemas to Beam schemas.
|
| BigQueryUtils.SchemaConversionOptions.Builder |
|
| BigQueryWriteConfiguration |
Configuration for writing to BigQuery with SchemaTransforms.
|
| BigQueryWriteConfiguration.Builder |
|
| BigQueryWriteConfiguration.ErrorHandling |
|
| BigQueryWriteConfiguration.ErrorHandling.Builder |
|
| BigQueryWriteSchemaTransformProvider |
|
| BigQueryWriteSchemaTransformProvider.BigQueryWriteSchemaTransform |
|
| BigtableChangeStreamAccessor |
This is probably a temporary solution to what is a bigger migration from
cloud-bigtable-client-core to java-bigtable.
|
| BigtableChangeStreamTestOptions |
|
| BigtableClientOverride |
Overrides the configuration of the Cloud Bigtable data and admin clients.
|
| BigtableConfig |
Configuration for a Cloud Bigtable client.
|
| BigtableIO |
Transforms for reading from and writing to Google Cloud Bigtable.
|
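A minimal read sketch for BigtableIO (illustrative only; the project, instance, and table ids are placeholders, and the snippet assumes the Beam Java SDK with the GCP IO module on the classpath):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;
import com.google.bigtable.v2.Row;

public class BigtableReadExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Read every row of a table (placeholder project/instance/table ids).
    PCollection<Row> rows = p.apply(BigtableIO.read()
        .withProjectId("my-project")
        .withInstanceId("my-instance")
        .withTableId("my-table"));

    p.run().waitUntilFinish();
  }
}
```

BigtableIO.write() follows the same builder pattern and consumes key/mutation pairs, as described in the BigtableIO.Write entry below.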
| BigtableIO.ExistingPipelineOptions |
Options that determine what to do if a change stream name is being reused and metadata of
the same change stream name already exists.
|
| BigtableIO.Read |
A PTransform that reads from Google Cloud Bigtable.
|
| BigtableIO.ReadChangeStream |
|
| BigtableIO.Write |
A PTransform that writes to Google Cloud Bigtable.
|
| BigtableIO.WriteWithResults |
A PTransform that writes to Google Cloud Bigtable and emits a BigtableWriteResult for each batch written.
|
| BigtableReadSchemaTransformProvider |
|
| BigtableReadSchemaTransformProvider.BigtableReadSchemaTransformConfiguration |
Configuration for reading from Bigtable.
|
| BigtableReadSchemaTransformProvider.BigtableReadSchemaTransformConfiguration.Builder |
|
| BigtableRowToBeamRow |
|
| BigtableRowToBeamRowFlat |
|
| BigtableUtils |
|
| BigtableWriteResult |
The result of writing a batch of rows to Bigtable.
|
| BigtableWriteResultCoder |
|
| BigtableWriteSchemaTransformProvider |
|
| BigtableWriteSchemaTransformProvider.BigtableWriteSchemaTransformConfiguration |
Configuration for writing to Bigtable.
|
| BigtableWriteSchemaTransformProvider.BigtableWriteSchemaTransformConfiguration.Builder |
|
| BigtableWriteSchemaTransformProvider.GetMutationsFromBeamRow |
|
| BlockingCommitterImpl |
|
| BytesThroughputEstimator<T> |
An estimator to provide an estimate of the byte throughput of the output elements.
|
| BytesThroughputEstimator<T> |
An estimator to provide an estimate of the throughput of the output elements.
|
| ByteStringRangeHelper |
Helper functions to evaluate the completeness of a collection of ByteStringRanges.
|
| CacheFactory |
|
| ChangeStreamAction |
This class is responsible for processing individual ChangeStreamRecords.
|
| ChangeStreamContinuationTokenHelper |
|
| ChangeStreamDao |
Data access object to list and read stream partitions of a table.
|
| ChangeStreamDao |
Responsible for making change stream queries for a given partition.
|
| ChangeStreamMetrics |
Class to aggregate metrics-related functionality.
|
| ChangeStreamMetrics |
Class to aggregate metrics-related functionality.
|
| ChangeStreamRecord |
Represents a Spanner Change Stream Record.
|
| ChangeStreamRecordMapper |
This class is responsible for transforming a Struct to a List of ChangeStreamRecord models.
|
| ChangeStreamRecordMetadata |
|
| ChangeStreamRecordMetadata.Builder |
|
| ChangeStreamResultSet |
Decorator class over a ResultSet that provides telemetry for the streamed records.
|
| ChangeStreamResultSetMetadata |
Represents telemetry metadata gathered during the consumption of a change stream query.
|
| ChangeStreamsConstants |
Single place for defining the constants used in the Spanner.readChangeStreams()
connector.
|
| CheckpointMarkImpl |
|
| ChildPartition |
A child partition represents a new partition that should be queried.
|
| ChildPartitionsRecord |
Represents a ChildPartitionsRecord.
|
| ChildPartitionsRecordAction |
|
| CivilTimeEncoder |
Encoder for TIME and DATETIME values, according to civil_time encoding.
|
| CleanUpReadChangeStreamDoFn |
|
| CloudPubsubTransforms |
A class providing transforms between Cloud Pub/Sub and Pub/Sub Lite message types.
|
| CoderSizeEstimator<T> |
This class is used to estimate the size in bytes of a given element.
|
| ColumnType |
Defines a column type from a Cloud Spanner table with the following information: column name,
column type, a flag indicating whether the column is a primary key, and the column's position
in the table.
|
| CombineAsIterable<T> |
|
| CreateTableHelpers |
|
| CreateTables<DestinationT,ElementT> |
Creates any tables needed before performing streaming writes to the tables.
|
| DaoFactory |
|
| DaoFactory |
Factory class to create data access objects to perform change stream queries and access the
metadata tables.
|
| DataChangeRecord |
A data change record encodes modifications to Cloud Spanner rows.
|
| DataChangeRecordAction |
|
| DatastoreIO |
|
| DatastoreV1 |
|
| DatastoreV1.DeleteEntity |
A PTransform that deletes Entities from Cloud Datastore.
|
| DatastoreV1.DeleteEntityWithSummary |
|
| DatastoreV1.DeleteKey |
A PTransform that deletes Entities associated with the given Keys from Cloud Datastore.
|
| DatastoreV1.DeleteKeyWithSummary |
A PTransform that deletes Entities associated with the given Keys from Cloud Datastore and returns DatastoreV1.WriteSuccessSummary for each successful delete.
|
| DatastoreV1.Read |
A PTransform that reads the result rows of a Cloud Datastore query as Entity
objects.
|
| DatastoreV1.Write |
A PTransform that writes Entity objects to Cloud Datastore.
|
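A sketch of a DatastoreV1 read/write round trip (illustrative only; the project id and entity kind are placeholders, and the snippet assumes the Beam Java SDK with the GCP IO module on the classpath):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.datastore.DatastoreIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;
import com.google.datastore.v1.Entity;
import com.google.datastore.v1.KindExpression;
import com.google.datastore.v1.Query;

public class DatastoreExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Query all entities of a placeholder kind.
    Query query = Query.newBuilder()
        .addKind(KindExpression.newBuilder().setName("MyKind"))
        .build();

    PCollection<Entity> entities = p.apply(DatastoreIO.v1().read()
        .withProjectId("my-project")
        .withQuery(query));

    // Upsert the same entities back; writes require complete keys.
    entities.apply(DatastoreIO.v1().write().withProjectId("my-project"));

    p.run().waitUntilFinish();
  }
}
```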
| DatastoreV1.WriteSuccessSummary |
Summary object produced when a number of writes are successfully written to Datastore in a
single Mutation.
|
| DatastoreV1.WriteWithSummary |
|
| DataStoreV1SchemaIOProvider |
An implementation of SchemaIOProvider for reading and writing payloads with DatastoreIO.
|
| DataStoreV1SchemaIOProvider.DataStoreV1SchemaIO |
An abstraction to create schema aware IOs.
|
| DetectNewPartitionsAction |
|
| DetectNewPartitionsAction |
This class is responsible for scheduling partitions.
|
| DetectNewPartitionsDoFn |
|
| DetectNewPartitionsDoFn |
A SplittableDoFn (SDF) that is responsible for scheduling partitions to be queried.
|
| DetectNewPartitionsRangeTracker |
|
| DetectNewPartitionsState |
|
| DetectNewPartitionsTracker |
|
| DicomIO |
The DicomIO connector allows Beam pipelines to make calls to the Dicom API of the Google Cloud
Healthcare API (https://cloud.google.com/healthcare/docs/how-tos#dicom-guide).
|
| DicomIO.ReadStudyMetadata |
This class makes a call to the retrieve metadata endpoint
(https://cloud.google.com/healthcare/docs/how-tos/dicomweb#retrieving_metadata).
|
| DicomIO.ReadStudyMetadata.Result |
|
| DlqProvider |
|
| DynamicDestinations<T,DestinationT> |
This class provides the most general way of specifying dynamic BigQuery table destinations.
|
| EncodingException |
Represents an error during encoding (serializing) a class.
|
| EncodingException |
Represents an error during encoding (serializing) a class.
|
| EntityToRow |
A PTransform to perform a conversion of Entity to Row.
|
| ErrorContainer<T> |
ErrorContainer interface.
|
| ExternalRead |
Exposes PubsubIO.Read as an external transform for cross-language usage.
|
| ExternalRead.Configuration |
Parameters class to expose the transform to an external SDK.
|
| ExternalRead.ReadBuilder |
|
| ExternalTransformRegistrarImpl |
|
| ExternalWrite |
Exposes PubsubIO.Write as an external transform for cross-language usage.
|
| ExternalWrite.Configuration |
Parameters class to expose the transform to an external SDK.
|
| ExternalWrite.ParsePubsubMessageProtoAsPayloadFromWindowedValue |
|
| ExternalWrite.WriteBuilder |
|
| FakeBigQueryServices |
A fake implementation of BigQuery's query service.
|
| FakeBigQueryServices.FakeBigQueryServerStream<T> |
|
| FakeDatasetService |
A fake dataset service that can be serialized, for use in testReadFromTable.
|
| FakeJobService |
A fake implementation of BigQuery's job service.
|
| FhirBundleParameter |
FhirBundleParameter represents a FHIR bundle in JSON format to be executed on a FHIR store.
|
| FhirBundleResponse |
|
| FhirIO |
|
| FhirIO.Deidentify |
Deidentify FHIR resources from a FHIR store to a destination FHIR store.
|
| FhirIO.Deidentify.DeidentifyFn |
A function that schedules a deidentify operation and monitors the status.
|
| FhirIO.ExecuteBundles |
The type Execute bundles.
|
| FhirIO.ExecuteBundlesResult |
ExecuteBundlesResult contains both successfully executed bundles and information to help debug
failed executions (e.g., metadata and error messages).
|
| FhirIO.Export |
Export FHIR resources from a FHIR store to newline-delimited JSON files on GCS or BigQuery.
|
| FhirIO.Export.ExportResourcesFn |
A function that schedules an export operation and monitors the status.
|
| FhirIO.Import |
Writes each bundle of elements to a newline-delimited JSON file on GCS and issues a
fhirStores.import request for that file.
|
| FhirIO.Import.ContentStructure |
The enum Content structure.
|
| FhirIO.Read |
The type Read.
|
| FhirIO.Read.Result |
The type Result.
|
| FhirIO.Search<T> |
The type Search.
|
| FhirIO.Search.Result |
|
| FhirIO.Write |
The type Write.
|
| FhirIO.Write.AbstractResult |
|
| FhirIO.Write.Result |
The type Result.
|
| FhirIO.Write.WriteMethod |
The enum Write method.
|
| FhirIOPatientEverything |
The type FhirIOPatientEverything for querying a FHIR Patient resource's compartment.
|
| FhirIOPatientEverything.PatientEverythingParameter |
PatientEverythingParameter defines required attributes for a FHIR GetPatientEverything request
in FhirIOPatientEverything.
|
| FhirIOPatientEverything.Result |
|
| FhirSearchParameter<T> |
FhirSearchParameter represents the query parameters for a FHIR search request, used as a
parameter for FhirIO.Search.
|
| FhirSearchParameterCoder<T> |
|
| FilterForMutationDoFn |
|
| FirestoreIO |
|
| FirestoreOptions |
|
| FirestoreV1 |
|
| FirestoreV1.BatchGetDocuments |
Concrete class representing a PTransform<PCollection<BatchGetDocumentsRequest>, PCollection<BatchGetDocumentsResponse>> which will read from Firestore.
|
| FirestoreV1.BatchGetDocuments.Builder |
|
| FirestoreV1.BatchWriteWithDeadLetterQueue |
Concrete class representing a PTransform<PCollection<Write>, PCollection<FirestoreV1.WriteFailure>>
which will write to Firestore.
|
| FirestoreV1.BatchWriteWithDeadLetterQueue.Builder |
|
| FirestoreV1.BatchWriteWithSummary |
Concrete class representing a PTransform<PCollection<Write>, PDone> which will write to Firestore.
|
| FirestoreV1.BatchWriteWithSummary.Builder |
|
| FirestoreV1.FailedWritesException |
Exception that is thrown if one or more Writes are unsuccessful
with a non-retryable status code.
|
| FirestoreV1.ListCollectionIds |
Concrete class representing a PTransform<PCollection<ListCollectionIdsRequest>, PCollection<ListCollectionIdsResponse>> which will read from Firestore.
|
| FirestoreV1.ListCollectionIds.Builder |
|
| FirestoreV1.ListDocuments |
Concrete class representing a PTransform<PCollection<ListDocumentsRequest>, PCollection<ListDocumentsResponse>>
which will read from Firestore.
|
| FirestoreV1.ListDocuments.Builder |
|
| FirestoreV1.PartitionQuery |
Concrete class representing a PTransform<PCollection<PartitionQueryRequest>, PCollection<RunQueryRequest>>
which will read from Firestore.
|
| FirestoreV1.PartitionQuery.Builder |
|
| FirestoreV1.Read |
Type safe builder factory for read operations.
|
| FirestoreV1.RunQuery |
Concrete class representing a PTransform<PCollection<RunQueryRequest>, PCollection<RunQueryResponse>>
which will read from Firestore.
|
| FirestoreV1.RunQuery.Builder |
|
| FirestoreV1.Write |
Type safe builder factory for write operations.
|
| FirestoreV1.WriteFailure |
Failure details for an attempted Write.
|
| FirestoreV1.WriteSuccessSummary |
Summary object produced when a number of writes are successfully written to Firestore in a
single BatchWrite.
|
| GcpIoPipelineOptionsRegistrar |
A registrar containing the default GCP options.
|
| GenerateInitialPartitionsAction |
|
| HealthcareApiClient |
Defines a client to communicate with the GCP HCLS API (version v1).
|
| HealthcareIOError<T> |
Class for capturing errors on IO operations on Google Cloud Healthcare APIs resources.
|
| HealthcareIOErrorCoder<T> |
|
| HealthcareIOErrorToTableRow<T> |
|
| HeartbeatRecord |
A heartbeat record serves as a notification that the change stream query has returned all changes
for the partition with timestamps less than or equal to the record timestamp.
|
| HeartbeatRecordAction |
|
| HL7v2IO |
|
| HL7v2IO.HL7v2Read |
The type Read that reads HL7v2 message contents given a PCollection of HL7v2ReadParameter.
|
| HL7v2IO.HL7v2Read.FetchHL7v2Message |
PTransform to fetch a message from a Google Cloud Healthcare HL7v2 store based on
msgID.
|
| HL7v2IO.HL7v2Read.FetchHL7v2Message.HL7v2MessageGetFn |
DoFn for fetching messages from the HL7v2 store with error handling.
|
| HL7v2IO.HL7v2Read.Result |
|
| HL7v2IO.ListHL7v2Messages |
List HL7v2 messages in HL7v2 stores with an optional filter.
|
| HL7v2IO.Read |
The type Read that reads HL7v2 message contents given a PCollection of message IDs strings.
|
| HL7v2IO.Read.FetchHL7v2Message |
PTransform to fetch a message from a Google Cloud Healthcare HL7v2 store based on
msgID.
|
| HL7v2IO.Read.FetchHL7v2Message.HL7v2MessageGetFn |
DoFn for fetching messages from the HL7v2 store with error handling.
|
| HL7v2IO.Read.Result |
The type Result includes PCollection of HL7v2Message objects for successfully
read results and PCollection of HealthcareIOError objects for failed reads.
|
| HL7v2IO.Write |
The type Write that writes the given PCollection of HL7v2 messages.
|
| HL7v2IO.Write.Result |
|
| HL7v2IO.Write.WriteMethod |
The enum Write method.
|
| HL7v2Message |
The type HL7v2 message to wrap the Message model.
|
| HL7v2MessageCoder |
|
| HL7v2ReadParameter |
HL7v2ReadParameter represents the read parameters for an HL7v2 read request, used as the input
type for HL7v2IO.HL7v2Read.
|
| HL7v2ReadResponse |
HL7v2ReadResponse represents the response format for an HL7v2 read request, used as the output
type of HL7v2IO.HL7v2Read.
|
| HL7v2ReadResponseCoder |
|
| HttpHealthcareApiClient |
A client that talks to the Cloud Healthcare API through HTTP requests.
|
| HttpHealthcareApiClient.AuthenticatedRetryInitializer |
|
| HttpHealthcareApiClient.FhirResourcePagesIterator |
The type FhirResourcePagesIterator for methods which return paged output.
|
| HttpHealthcareApiClient.FhirResourcePagesIterator.FhirMethod |
|
| HttpHealthcareApiClient.HealthcareHttpException |
Wraps HttpResponse in an exception with a statusCode field for use with HealthcareIOError.
|
| HttpHealthcareApiClient.HL7v2MessagePages |
|
| HttpHealthcareApiClient.HL7v2MessagePages.HL7v2MessagePagesIterator |
The type Hl7v2 message id pages iterator.
|
| InitializeDoFn |
A DoFn responsible for initializing the metadata table and preparing it for managing the state
of the pipeline.
|
| InitializeDoFn |
A DoFn responsible for initializing the change stream Connector.
|
| InitialPartition |
Utility class providing initial partition constants and helper methods.
|
| InitialPipelineState |
|
| InsertRetryPolicy |
A retry policy for streaming BigQuery inserts.
|
| InsertRetryPolicy.Context |
Contains information about a failed insert.
|
| JsonArrayCoder |
|
| ManagedFactory<T extends java.lang.AutoCloseable> |
A ManagedFactory produces instances and tears down any produced instances when it is itself
closed.
|
| ManagedFactoryImpl<T extends java.lang.AutoCloseable> |
|
| MapperFactory |
Factory class for creating instances that will map a struct to a connector model.
|
| MetadataSpannerConfigFactory |
This class generates a SpannerConfig for the change stream metadata database by copying only the
necessary fields from the SpannerConfig of the primary database.
|
| MetadataTableAdminDao |
Data access object for creating and dropping the metadata table.
|
| MetadataTableDao |
Data access object for managing the state of the metadata Bigtable table.
|
| MetadataTableEncoder |
Helper methods that simplify some conversion and extraction of metadata table content.
|
| Mod |
|
| ModType |
|
| MutationGroup |
A bundle of mutations that must be submitted atomically.
|
| NewPartition |
Represents a new partition resulting from splits and merges.
|
| NoOpWatermarkCache |
Synchronously compute the earliest partition watermark, by delegating the call to PartitionMetadataDao#getUnfinishedMinWatermark().
|
| NullSizeEstimator<T> |
NoOp implementation of a size estimator.
|
| NullThroughputEstimator<T> |
NoOp implementation of a throughput estimator.
|
| OffsetByteRangeCoder |
|
| OrphanedMetadataCleaner |
|
| PartitionEndRecord |
A partition end record serves as a notification that the client should stop reading the
partition.
|
| PartitionEndRecordAction |
|
| PartitionEventRecord |
A partition event record describes key range changes for a change stream partition.
|
| PartitionEventRecordAction |
|
| PartitionMetadata |
Model for the partition metadata database table used in the Connector.
|
| PartitionMetadata.Builder |
Partition metadata builder for better user experience.
|
| PartitionMetadata.State |
The states a partition can be in within the system, e.g.
CREATED: the partition has been created, but no query has been run against it yet.
|
| PartitionMetadataAdminDao |
Data access object for creating and dropping the partition metadata table.
|
| PartitionMetadataDao |
Data access object for the Connector metadata tables.
|
| PartitionMetadataDao.InTransactionContext |
Represents the execution of a read / write transaction in Cloud Spanner.
|
| PartitionMetadataDao.TransactionResult<T> |
Represents a result from executing a Cloud Spanner read / write transaction.
|
| PartitionMetadataMapper |
|
| PartitionMetadataTableNames |
Configuration for a partition metadata table.
|
| PartitionReconciler |
There can be a race when many splits and merges happen to a single partition in quick succession.
|
| PartitionRecord |
|
| PartitionStartRecord |
A partition start record serves as a notification that the client should schedule the partitions
to be queried.
|
| PartitionStartRecordAction |
This class is part of the process for the
org.apache.beam.sdk.io.gcp.spanner.changestreams.dofn.ReadChangeStreamPartitionDoFn SDF.
|
| PortableBigQueryDestinations |
|
| PostProcessingMetricsDoFn |
|
| PreparePubsubWriteDoFn<InputT> |
|
| PrepareWrite<InputT,DestinationT extends @NonNull java.lang.Object,OutputT> |
Prepare an input PCollection for writing to BigQuery.
|
| ProcessNewPartitionsAction |
|
| ProtoFromBytes<T extends com.google.protobuf.Message> |
|
| ProtoToBytes<T extends com.google.protobuf.Message> |
|
| PublisherOptions |
Options needed for a Pub/Sub Lite Publisher.
|
| PublisherOptions.Builder |
|
| PubsubClient |
An (abstract) helper class for talking to Pubsub via an underlying transport.
|
| PubsubClient.IncomingMessage |
A message received from Pubsub.
|
| PubsubClient.OutgoingMessage |
A message to be sent to Pubsub.
|
| PubsubClient.ProjectPath |
Path representing a cloud project id.
|
| PubsubClient.PubsubClientFactory |
Factory for creating clients.
|
| PubsubClient.SchemaPath |
Path representing a Pubsub schema.
|
| PubsubClient.SubscriptionPath |
Path representing a Pubsub subscription.
|
| PubsubClient.TopicPath |
Path representing a Pubsub topic.
|
| PubsubCoderProviderRegistrar |
A CoderProviderRegistrar for standard types used with PubsubIO.
|
| PubsubDlqProvider |
|
| PubsubGrpcClient |
A helper class for talking to Pubsub via gRPC.
|
| PubsubIO |
Read and Write PTransforms for Cloud Pub/Sub streams.
|
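A minimal sketch of PubsubIO usage (illustrative only; the subscription and topic paths are placeholders, and the snippet assumes the Beam Java SDK with the GCP IO module on the classpath plus a streaming-capable runner):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

public class PubsubExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Stream message payloads from a subscription (placeholder path).
    PCollection<String> messages = p.apply(PubsubIO.readStrings()
        .fromSubscription("projects/my-project/subscriptions/my-sub"));

    // Republish them to another topic (placeholder path).
    messages.apply(PubsubIO.writeStrings()
        .to("projects/my-project/topics/my-topic"));

    p.run().waitUntilFinish();
  }
}
```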
| PubsubIO.PubsubSubscription |
Class representing a Cloud Pub/Sub Subscription.
|
| PubsubIO.PubsubTopic |
Class representing a Cloud Pub/Sub Topic.
|
| PubsubIO.Read<T> |
Implementation of read methods.
|
| PubsubIO.Write<T> |
Implementation of write methods.
|
| PubsubJsonClient |
A Pubsub client using JSON transport.
|
| PubsubLiteIO |
I/O transforms for reading from Google Pub/Sub Lite.
|
| PubsubLiteReadSchemaTransformProvider |
|
| PubsubLiteReadSchemaTransformProvider.ErrorFn |
|
| PubsubLiteReadSchemaTransformProvider.PubsubLiteReadSchemaTransformConfiguration |
|
| PubsubLiteReadSchemaTransformProvider.PubsubLiteReadSchemaTransformConfiguration.Builder |
|
| PubsubLiteSink |
A sink which publishes messages to Pub/Sub Lite.
|
| PubsubLiteWriteSchemaTransformProvider |
|
| PubsubLiteWriteSchemaTransformProvider.ErrorCounterFn |
|
| PubsubLiteWriteSchemaTransformProvider.PubsubLiteWriteSchemaTransformConfiguration |
|
| PubsubLiteWriteSchemaTransformProvider.PubsubLiteWriteSchemaTransformConfiguration.Builder |
|
| PubsubLiteWriteSchemaTransformProvider.SetUuidFromPubSubMessage |
|
| PubsubLiteWriteSchemaTransformProvider.SetUuidFromPubSubMessage.SetUuidFn |
|
| PubsubMessage |
Class representing a Pub/Sub message.
|
| PubsubMessagePayloadOnlyCoder |
A coder for PubsubMessage treating the raw bytes being decoded as the message's payload.
|
| PubsubMessages |
Common util functions for converting between PubsubMessage proto and PubsubMessage.
|
| PubsubMessages.DeserializeBytesIntoPubsubMessagePayloadOnly |
|
| PubsubMessages.ParsePayloadAsPubsubMessageProto |
|
| PubsubMessages.ParsePubsubMessageProtoAsPayload |
|
| PubsubMessageSchemaCoder |
Provides a SchemaCoder for PubsubMessage, including the topic and all fields of a
PubSub message from the server.
|
| PubsubMessageWithAttributesAndMessageIdAndOrderingKeyCoder |
A coder for PubsubMessage including all fields of a PubSub message from the server.
|
| PubsubMessageWithAttributesAndMessageIdCoder |
A coder for PubsubMessage including attributes and the message id from the PubSub server.
|
| PubsubMessageWithAttributesCoder |
A coder for PubsubMessage including attributes.
|
| PubsubMessageWithMessageIdCoder |
A coder for PubsubMessage treating the raw bytes being decoded as the message's payload, with the
message id from the PubSub server.
|
| PubsubMessageWithTopicCoder |
A coder for PubsubMessage including the topic from the PubSub server.
|
| PubsubOptions |
Properties that can be set when using Google Cloud Pub/Sub with the Apache Beam SDK.
|
| PubSubPayloadTranslation |
|
| PubSubPayloadTranslation.ReadRegistrar |
|
| PubSubPayloadTranslation.WriteRegistrar |
|
| PubsubReadSchemaTransformConfiguration |
Configuration for reading from Pub/Sub.
|
| PubsubReadSchemaTransformConfiguration.Builder |
|
| PubsubReadSchemaTransformConfiguration.ErrorHandling |
|
| PubsubReadSchemaTransformConfiguration.ErrorHandling.Builder |
|
| PubsubReadSchemaTransformProvider |
|
| PubsubSchemaIOProvider |
An implementation of SchemaIOProvider for reading and writing JSON/AVRO payloads with
PubsubIO.
|
| PubsubTestClient |
A (partial) implementation of PubsubClient for use by unit tests.
|
| PubsubTestClient.PubsubTestClientFactory |
Closing the factory will validate all expected messages were processed.
|
| PubsubUnboundedSink |
A PTransform which streams messages to Pubsub.
|
| PubsubUnboundedSource |
Users should use PubsubIO#read instead.
|
| PubsubWriteSchemaTransformConfiguration |
Configuration for writing to Pub/Sub.
|
| PubsubWriteSchemaTransformConfiguration.Builder |
|
| PubsubWriteSchemaTransformConfiguration.ErrorHandling |
|
| PubsubWriteSchemaTransformConfiguration.ErrorHandling.Builder |
|
| PubsubWriteSchemaTransformProvider |
|
| PubsubWriteSchemaTransformProvider.ErrorFn |
|
| QueryChangeStreamAction |
Main action class for querying a partition change stream.
|
| RampupThrottlingFn<T> |
An implementation of a client-side throttler that enforces a gradual ramp-up, broadly in line
with Datastore best practices.
|
| ReadChangeStreamPartitionAction |
|
| ReadChangeStreamPartitionDoFn |
|
| ReadChangeStreamPartitionDoFn |
An SDF (Splittable DoFn) class which is responsible for performing a change stream query for a
given partition.
|
| ReadChangeStreamPartitionProgressTracker |
RestrictionTracker used by ReadChangeStreamPartitionDoFn to keep
track of the progress of the stream and to split the restriction for runner initiated
checkpoints.
|
| ReadChangeStreamPartitionRangeTracker |
|
| ReadOperation |
Encapsulates a Spanner read operation.
|
| ReadSpannerSchema |
This DoFn reads Cloud Spanner 'information_schema.*' tables to build the SpannerSchema.
|
| ReifyAsIterable<T> |
This transform turns a side input into a singleton PCollection that can be used as the main
input for another transform.
|
| RestrictionInterrupter<T> |
An interrupter for restriction tracker of type T.
|
| ResumeFromPreviousPipelineAction |
|
| RowMutation |
|
| RowMutation.RowMutationCoder |
|
| RowMutationInformation |
This class indicates how to apply a row update to BigQuery.
|
| RowMutationInformation.MutationType |
|
| RowToEntity |
A PTransform to perform a conversion of Row to Entity.
|
| RowUtils |
|
| RpcQosOptions |
Quality of Service manager options for Firestore RPCs.
|
| RpcQosOptions.Builder |
|
| SchemaAndRecord |
A wrapper for a GenericRecord and the TableSchema representing the schema of the
table (or query) it was generated from.
|
| SizeEstimator<T> |
|
| SizeEstimator<T> |
This class is used to estimate the size in bytes of a given element.
|
| SpannerAccessor |
Manages lifecycle of DatabaseClient and Spanner instances.
|
| SpannerChangestreamsReadSchemaTransformProvider |
|
| SpannerChangestreamsReadSchemaTransformProvider.DataChangeRecordToRow |
|
| SpannerChangestreamsReadSchemaTransformProvider.SpannerChangestreamsReadConfiguration |
|
| SpannerChangestreamsReadSchemaTransformProvider.SpannerChangestreamsReadConfiguration.Builder |
|
| SpannerConfig |
Configuration for a Cloud Spanner client.
|
| SpannerConfig.Builder |
|
| SpannerIO |
PTransforms for reading from and writing to Google Cloud Spanner.
|
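A minimal read sketch for SpannerIO (illustrative only; the instance/database ids and query are placeholders, and the snippet assumes the Beam Java SDK with the GCP IO module on the classpath):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;
import com.google.cloud.spanner.Struct;

public class SpannerReadExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Run a SQL query against a database (placeholder ids and query).
    PCollection<Struct> results = p.apply(SpannerIO.read()
        .withInstanceId("my-instance")
        .withDatabaseId("my-database")
        .withQuery("SELECT user_id, name FROM users"));

    p.run().waitUntilFinish();
  }
}
```

SpannerIO.write() accepts a PCollection of Mutation objects and uses the same instance/database builder methods; see the SpannerIO.Write entry below.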
| SpannerIO.CreateTransaction |
A PTransform that creates a transaction.
|
| SpannerIO.CreateTransaction.Builder |
|
| SpannerIO.FailureMode |
A failure handling strategy.
|
| SpannerIO.Read |
|
| SpannerIO.ReadAll |
|
| SpannerIO.ReadChangeStream |
|
| SpannerIO.SpannerChangeStreamOptions |
Interface to display the name of the metadata table on Dataflow UI.
|
| SpannerIO.Write |
A PTransform that writes Mutation objects to Google Cloud Spanner.
|
| SpannerIO.WriteGrouped |
|
| SpannerReadSchemaTransformProvider |
A provider for reading from Cloud Spanner using a Schema Transform Provider.
|
| SpannerReadSchemaTransformProvider.ErrorFn |
|
| SpannerReadSchemaTransformProvider.SpannerReadSchemaTransformConfiguration |
|
| SpannerReadSchemaTransformProvider.SpannerReadSchemaTransformConfiguration.Builder |
|
| SpannerSchema |
Encapsulates Cloud Spanner Schema.
|
| SpannerSchema.Column |
|
| SpannerSchema.KeyPart |
|
| SpannerSchemaRetrievalException |
Exception to signal that Spanner schema retrieval failed.
|
| SpannerTransformRegistrar |
Exposes SpannerIO.WriteRows, SpannerIO.ReadRows and SpannerIO.ChangeStreamRead as an external transform for cross-language usage.
|
| SpannerTransformRegistrar.ChangeStreamReaderBuilder |
|
| SpannerTransformRegistrar.ChangeStreamReaderBuilder.Configuration |
|
| SpannerTransformRegistrar.CrossLanguageConfiguration |
|
| SpannerTransformRegistrar.DeleteBuilder |
|
| SpannerTransformRegistrar.InsertBuilder |
|
| SpannerTransformRegistrar.InsertOrUpdateBuilder |
|
| SpannerTransformRegistrar.ReadBuilder |
|
| SpannerTransformRegistrar.ReadBuilder.Configuration |
|
| SpannerTransformRegistrar.ReplaceBuilder |
|
| SpannerTransformRegistrar.UpdateBuilder |
|
| SpannerWriteResult |
|
| SpannerWriteSchemaTransformProvider |
|
| SpannerWriteSchemaTransformProvider.SpannerWriteSchemaTransformConfiguration |
|
| SpannerWriteSchemaTransformProvider.SpannerWriteSchemaTransformConfiguration.Builder |
|
| StorageApiCDC |
Constants and variables for CDC support.
|
| StorageApiConvertMessages<DestinationT,ElementT> |
A transform that converts messages to protocol buffers in preparation for writing to BigQuery.
|
| StorageApiConvertMessages.ConvertMessagesDoFn<DestinationT extends @NonNull java.lang.Object,ElementT> |
|
| StorageApiDynamicDestinationsTableRow<T,DestinationT extends @NonNull java.lang.Object> |
|
| StorageApiFlushAndFinalizeDoFn |
This DoFn flushes and optionally (if requested) finalizes Storage API streams.
|
| StorageApiLoads<DestinationT,ElementT> |
This PTransform manages loads into BigQuery using the Storage API.
|
| StorageApiWritePayload |
Class used to wrap elements being sent to the Storage API sinks.
|
| StorageApiWritePayload.Builder |
|
| StorageApiWriteRecordsInconsistent<DestinationT,ElementT> |
A transform to write sharded records to BigQuery using the Storage API.
|
| StorageApiWritesShardedRecords<DestinationT extends @NonNull java.lang.Object,ElementT> |
A transform to write sharded records to BigQuery using the Storage API (Streaming).
|
| StorageApiWriteUnshardedRecords<DestinationT,ElementT> |
Write records to the Storage API using a standard batch approach.
|
| StreamingInserts<DestinationT,ElementT> |
PTransform that performs streaming writes to BigQuery.
|
| StreamingInsertsMetrics |
Stores and exports metrics for a batch of Streaming Inserts RPCs.
|
| StreamingInsertsMetrics.NoOpStreamingInsertsMetrics |
No-op implementation of StreamingInsertsMetrics.
|
| StreamingInsertsMetrics.StreamingInsertsMetricsImpl |
Metrics of a batch of InsertAll RPCs.
|
| StreamingWriteTables<ElementT> |
This transform takes key-value pairs of TableRow entries and the TableDestination they should be written to.
|
| StreamPartitionWithWatermark |
|
| StreamProgress |
|
| SubscriberOptions |
|
| SubscriberOptions.Builder |
|
| SubscribeTransform |
|
| SubscriptionPartition |
|
| SubscriptionPartitionCoder |
|
| TableDestination |
Encapsulates a BigQuery table destination.
|
| TableDestinationCoder |
|
| TableDestinationCoderV2 |
|
| TableDestinationCoderV3 |
A Coder for TableDestination that includes time partitioning and clustering
information.
|
| TableRowJsonCoder |
A Coder that encodes BigQuery TableRow objects in their native JSON format.
|
| TableRowToStorageApiProto |
Utility methods for converting JSON TableRow objects to dynamic protocol message, for use
with the Storage write API.
|
| TableRowToStorageApiProto.SchemaDoesntMatchException |
|
| TableRowToStorageApiProto.SchemaTooNarrowException |
|
| TableRowToStorageApiProto.SingleValueConversionException |
|
| TableSchemaCache |
An updatable cache for table schemas.
|
| TableSchemaUpdateUtils |
Helper utilities for handling schema-update responses.
|
| TestBigQuery |
Test rule which creates a new table with a specified schema and a randomized name, and exposes a few
APIs to work with it.
|
| TestBigQuery.PollingAssertion |
Interface to implement a polling assertion.
|
| TestBigQueryOptions |
|
| TestPubsub |
Test rule which creates a new topic and subscription with randomized names and exposes the APIs
to work with them.
|
| TestPubsub.PollingAssertion |
|
| TestPubsubOptions |
|
| TestPubsubSignal |
Test rule which observes elements of the PCollection and checks whether they match the
success criteria.
|
| ThroughputEstimator<T> |
An estimator to calculate the throughput of the elements output by a DoFn.
|
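The core of such an estimator can be sketched in plain Java (this is not the Beam class, just an illustration of the idea): accumulate the bytes of outputted elements and divide by the elapsed window to get bytes per second.

```java
// Illustrative sketch only: estimates throughput as total bytes seen
// divided by the elapsed time since the window started.
class ThroughputEstimatorSketch {
    private long totalBytes = 0;
    private final long windowStartMillis;

    ThroughputEstimatorSketch(long nowMillis) {
        this.windowStartMillis = nowMillis;
    }

    /** Records that an element of the given encoded size was output. */
    void update(long elementBytes) {
        totalBytes += elementBytes;
    }

    /** Returns the estimated throughput in bytes per second. */
    double get(long nowMillis) {
        long elapsedMillis = Math.max(1, nowMillis - windowStartMillis);
        return totalBytes * 1000.0 / elapsedMillis;
    }
}
```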
| TimestampConverter |
Convert between different Timestamp and Instant classes.
|
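The conversion this class performs can be sketched with the standard library (this is not the Beam API): a timestamp held as a (seconds, nanos) pair, the same shape the TimestampEncoding entry below describes, maps to and from java.time.Instant.

```java
import java.time.Instant;

// Illustrative sketch only: converts between a (seconds, nanos)
// timestamp pair and java.time.Instant, losslessly in both directions.
class TimestampConverterSketch {
    /** Builds an Instant from an epoch-seconds / nanos pair. */
    static Instant toInstant(long seconds, int nanos) {
        return Instant.ofEpochSecond(seconds, nanos);
    }

    /** Splits an Instant back into its epoch-seconds / nanos pair. */
    static long[] fromInstant(Instant instant) {
        return new long[] { instant.getEpochSecond(), instant.getNano() };
    }
}
```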
| TimestampEncoding |
This encoder/decoder writes a com.google.cloud.Timestamp object to Avro as a pair of long and int,
and reads a Timestamp object back from the same pair.
|
| TimestampRange |
A restriction represented by a range of timestamps [from, to).
|
| TimestampRangeTracker |
A RestrictionTracker for claiming positions in a TimestampRange in a
monotonically increasing fashion.
|
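The two entries above describe a half-open range [from, to) whose positions may only be claimed in increasing order. A minimal sketch of that claiming rule, in plain Java rather than Beam's RestrictionTracker API:

```java
import java.time.Instant;

// Illustrative sketch only: a half-open timestamp range [from, to)
// where positions can only be claimed in monotonically increasing order.
class TimestampRangeSketch {
    private final Instant from;
    private final Instant to;     // exclusive upper bound
    private Instant lastClaimed;  // null until the first claim

    TimestampRangeSketch(Instant from, Instant to) {
        this.from = from;
        this.to = to;
    }

    /** Claims a position; fails if out of range or not strictly increasing. */
    boolean tryClaim(Instant position) {
        if (position.isBefore(from) || !position.isBefore(to)) {
            return false; // outside [from, to)
        }
        if (lastClaimed != null && !position.isAfter(lastClaimed)) {
            return false; // must be monotonically increasing
        }
        lastClaimed = position;
        return true;
    }
}
```

Note that `to` is exclusive: claiming exactly `to` fails, which is what lets adjacent ranges tile a timeline without overlap.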
| TimestampUtils |
Provides methods to convert timestamps to a nanosecond representation and back.
|
| TrackerWithProgress |
|
| Transaction |
A transaction object.
|
| TypeCode |
Represents a type of a column within Cloud Spanner.
|
| UnboundedReaderImpl |
|
| UnboundedSourceImpl |
|
| UniqueIdGenerator |
Generates unique IDs that can be used to differentiate different jobs and partitions.
|
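One common way to get such IDs (a sketch, not the Beam implementation) is to combine a random per-job prefix with a per-partition token, so IDs from different jobs and different partitions never collide:

```java
import java.util.UUID;

// Illustrative sketch only: derives IDs unique per job and per partition
// by joining a random job prefix with the partition's token.
class UniqueIdGeneratorSketch {
    /** Returns a fresh random prefix, generated once per job. */
    static String generateJobPrefix() {
        return UUID.randomUUID().toString();
    }

    /** Combines the job prefix and a partition token into one ID. */
    static String idFor(String jobPrefix, String partitionToken) {
        return jobPrefix + "_" + partitionToken;
    }
}
```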
| UpdateSchemaDestination<DestinationT> |
Update destination schema based on data that is about to be copied into it.
|
| Uuid |
A Uuid storable in a Pub/Sub Lite attribute.
|
| UuidCoder |
A coder for a Uuid.
|
| UuidDeduplicationOptions |
Options for deduplicating Pub/Sub Lite messages based on the UUID they were published with.
|
| UuidDeduplicationOptions.Builder |
|
| UuidDeduplicationTransform |
A transform for deduplicating Pub/Sub Lite messages based on the UUID they were published with.
|
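The deduplication idea behind the Uuid entries above can be sketched in plain Java (not the Beam transform, which keeps bounded, windowed state in a streaming pipeline rather than an unbounded in-memory set): keep a message only if its UUID has not been seen before.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: drops messages whose UUID has already been
// seen, keeping the first occurrence. A real streaming deduplicator
// would expire entries after a TTL to bound its state.
class UuidDeduplicatorSketch {
    private final Set<String> seen = new HashSet<>();

    /** Returns true if the message should be kept (first occurrence). */
    boolean accept(String uuid) {
        return seen.add(uuid); // Set.add is false on duplicates
    }
}
```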
| ValueCaptureType |
Represents the capture type of a change stream.
|
| WatermarkCache |
|
| WebPathParser |
|
| WebPathParser.DicomWebPath |
|
| WriteResult |
|