@Stability(value=Stable) public static final class CfnDeliveryStream.ParquetSerDeProperty.Builder extends Object implements software.amazon.jsii.Builder<CfnDeliveryStream.ParquetSerDeProperty>
| Constructor and Description |
|---|
| Builder() |
@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty.Builder blockSizeBytes(Number blockSizeBytes)
Sets the value of CfnDeliveryStream.ParquetSerDeProperty.getBlockSizeBytes()
Parameters: blockSizeBytes - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
Returns: this

@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty.Builder compression(String compression)
Sets the value of CfnDeliveryStream.ParquetSerDeProperty.getCompression()
Parameters: compression - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Returns: this

@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty.Builder enableDictionaryCompression(Boolean enableDictionaryCompression)
Sets the value of CfnDeliveryStream.ParquetSerDeProperty.getEnableDictionaryCompression()
Parameters: enableDictionaryCompression - Indicates whether to enable dictionary compression.
Returns: this

@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty.Builder enableDictionaryCompression(IResolvable enableDictionaryCompression)
Sets the value of CfnDeliveryStream.ParquetSerDeProperty.getEnableDictionaryCompression()
Parameters: enableDictionaryCompression - Indicates whether to enable dictionary compression.
Returns: this

@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty.Builder maxPaddingBytes(Number maxPaddingBytes)
Sets the value of CfnDeliveryStream.ParquetSerDeProperty.getMaxPaddingBytes()
Parameters: maxPaddingBytes - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
Returns: this

@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty.Builder pageSizeBytes(Number pageSizeBytes)
Sets the value of CfnDeliveryStream.ParquetSerDeProperty.getPageSizeBytes()
Parameters: pageSizeBytes - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
Returns: this

@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty.Builder writerVersion(String writerVersion)
Sets the value of CfnDeliveryStream.ParquetSerDeProperty.getWriterVersion()
Parameters: writerVersion - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
Returns: this

@Stability(value=Stable) public CfnDeliveryStream.ParquetSerDeProperty build()
Builds the configured instance.
Specified by: build in interface software.amazon.jsii.Builder<CfnDeliveryStream.ParquetSerDeProperty>
Returns: a new instance of CfnDeliveryStream.ParquetSerDeProperty
Throws: NullPointerException - if any required attribute was not provided

Copyright © 2022. All rights reserved.
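Since every setter above returns `this`, the calls can be chained and finished with `build()`. A minimal sketch of typical usage (the numeric values are illustrative defaults from the parameter docs, not requirements, and the import path assumes the `software.amazon.awscdk.services.kinesisfirehose` package):

```java
import software.amazon.awscdk.services.kinesisfirehose.CfnDeliveryStream;

public class ParquetSerDeExample {
    public static CfnDeliveryStream.ParquetSerDeProperty buildParquetSerDe() {
        // Each setter returns the builder itself, so calls chain fluently.
        return new CfnDeliveryStream.ParquetSerDeProperty.Builder()
                .blockSizeBytes(268435456)   // 256 MiB HDFS block size (the documented default)
                .compression("SNAPPY")       // UNCOMPRESSED | SNAPPY | GZIP
                .enableDictionaryCompression(true)
                .maxPaddingBytes(0)          // documented default
                .pageSizeBytes(1048576)      // 1 MiB page size (the documented default)
                .writerVersion("V1")         // V1 | V2
                .build();                    // NullPointerException if a required attribute is missing
    }
}
```

All properties here are optional, so a bare `new CfnDeliveryStream.ParquetSerDeProperty.Builder().build()` is also valid and falls back to the service defaults.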