String activeSpeakerPosition
The position of the ActiveSpeakerOnly video tile.
String languageCode
The language code in the configuration.
String vocabularyName
Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.
If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.
For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.
Length Constraints: Minimum length of 1. Maximum length of 200.
String vocabularyFilterName
Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.
If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.
For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.
Length Constraints: Minimum length of 1. Maximum length of 200.
String vocabularyFilterMethod
Specifies how to apply a vocabulary filter to a transcript.
To replace words with ***, choose mask.
To delete words, choose remove.
To flag words without changing them, choose tag.
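The three filter methods above can be sketched as a small Python helper. This is purely illustrative and not part of any AWS SDK; the function name is an assumption, and the inline `[FILTERED]` tag stands in for the metadata the service actually attaches:

```python
def apply_vocabulary_filter(words, filtered_words, method):
    """Simulate vocabularyFilterMethod: 'mask', 'remove', or 'tag'."""
    out = []
    for word in words:
        if word.lower() in filtered_words:
            if method == "mask":
                out.append("***")                 # replace the word with ***
            elif method == "remove":
                continue                          # delete the word entirely
            elif method == "tag":
                # flag the word; the real service attaches metadata rather
                # than altering the transcript text (shown inline here)
                out.append(word + "[FILTERED]")
            else:
                raise ValueError("method must be mask, remove, or tag")
        else:
            out.append(word)
    return out
```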
String languageModelName
Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.
The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.
For more information, see Custom language models in the Amazon Transcribe Developer Guide.
Boolean enablePartialResultsStabilization
Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.
String partialResultsStability
Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).
Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.
For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.
String contentIdentificationType
Labels all personally identifiable information (PII) identified in your transcript.
Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged
upon complete transcription of an audio segment.
You can’t set ContentIdentificationType and ContentRedactionType in the same request.
If you do, your request returns a BadRequestException.
For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.
String contentRedactionType
Redacts all personally identifiable information (PII) identified in your transcript.
Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted
upon complete transcription of an audio segment.
You can’t set ContentRedactionType and ContentIdentificationType in the same request.
If you do, your request returns a BadRequestException.
For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.
String piiEntityTypes
Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as
many types as you'd like, or you can select ALL.
To include PiiEntityTypes in your Call Analytics request, you must also include
ContentIdentificationType or ContentRedactionType, but you can't include both.
Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER,
BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY,
CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN,
SSN, or ALL.
Length Constraints: Minimum length of 1. Maximum length of 300.
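The constraints above — the allowed values, the 1-300 length limit, and the rule that exactly one of ContentIdentificationType or ContentRedactionType must accompany PiiEntityTypes — can be checked client-side before sending a request. A minimal sketch; the function name and the dict-based request shape are assumptions, not part of the SDK:

```python
ALLOWED_PII_TYPES = {
    "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV",
    "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME",
    "PHONE", "PIN", "SSN", "ALL",
}

def validate_pii_settings(request):
    """Validate PiiEntityTypes and its companion settings in a request dict."""
    pii = request.get("PiiEntityTypes")
    has_ident = "ContentIdentificationType" in request
    has_redact = "ContentRedactionType" in request
    if has_ident and has_redact:
        # the service would return a BadRequestException here
        raise ValueError("cannot set both ContentIdentificationType "
                         "and ContentRedactionType")
    if pii is not None:
        if not (has_ident or has_redact):
            raise ValueError("PiiEntityTypes requires ContentIdentificationType "
                             "or ContentRedactionType")
        if not 1 <= len(pii) <= 300:
            raise ValueError("PiiEntityTypes must be 1-300 characters")
        unknown = [t.strip() for t in pii.split(",")
                   if t.strip() not in ALLOWED_PII_TYPES]
        if unknown:
            raise ValueError(f"unsupported PII entity types: {unknown}")
    return True
```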
Boolean filterPartialResults
If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.
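The filtering behavior described above can be sketched as follows. This is illustrative only; the event-dict shape with an IsPartial key is an assumption based on the description:

```python
def filter_partial_results(utterance_events, filter_partials):
    """Drop events marked IsPartial when filterPartialResults is true."""
    if not filter_partials:
        return list(utterance_events)
    return [e for e in utterance_events if not e.get("IsPartial", False)]
```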
PostCallAnalyticsSettings postCallAnalyticsSettings
The settings for a post-call analysis task in an analytics configuration.
List<E> callAnalyticsStreamCategories
By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only
included categories are sent to the insights target.
String languageCode
The language code that represents the language spoken in your audio.
If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable
automatic language identification.
For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.
String vocabularyName
The name of the custom vocabulary that you specified in your Call Analytics request.
Length Constraints: Minimum length of 1. Maximum length of 200.
String vocabularyFilterName
The name of the custom vocabulary filter that you specified in your Call Analytics request.
Length Constraints: Minimum length of 1. Maximum length of 200.
String vocabularyFilterMethod
The vocabulary filtering method used in your Call Analytics transcription.
Boolean showSpeakerLabel
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.
Boolean enablePartialResultsStabilization
Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.
For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.
String partialResultsStability
The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).
Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.
For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.
String contentIdentificationType
Labels all personally identifiable information (PII) identified in your transcript.
Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged
upon complete transcription of an audio segment.
You can’t set ContentIdentificationType and ContentRedactionType in the same request.
If you set both, your request returns a BadRequestException.
For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.
String contentRedactionType
Redacts all personally identifiable information (PII) identified in your transcript.
Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.
You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your
request returns a BadRequestException.
For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.
String piiEntityTypes
The types of personally identifiable information (PII) to redact from a transcript. You can include as many types
as you'd like, or you can select ALL.
To include PiiEntityTypes in your Call Analytics request, you must also include
ContentIdentificationType or ContentRedactionType, but you can't include both.
Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER,
BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY,
CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN,
SSN, or ALL.
If you leave this parameter empty, the default behavior is equivalent to ALL.
String languageModelName
The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.
The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.
For more information, see Custom language models in the Amazon Transcribe Developer Guide.
Boolean filterPartialResults
If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.
Boolean identifyLanguage
Turns language identification on or off.
String languageOptions
The language codes that might be present in your media, used when automatic language identification is enabled.
String preferredLanguage
The preferred language for the transcription.
String vocabularyNames
The names of the custom vocabulary or vocabularies used during transcription.
String vocabularyFilterNames
The names of the custom vocabulary filter or filters used during transcription.
AudioConcatenationConfiguration audio
The configuration for the audio artifacts concatenation.
VideoConcatenationConfiguration video
The configuration for the video artifacts concatenation.
ContentConcatenationConfiguration content
The configuration for the content artifacts concatenation.
DataChannelConcatenationConfiguration dataChannel
The configuration for the data channel artifacts concatenation.
TranscriptionMessagesConcatenationConfiguration transcriptionMessages
The configuration for the transcription messages artifacts concatenation.
MeetingEventsConcatenationConfiguration meetingEvents
The configuration for the meeting events artifacts concatenation.
CompositedVideoConcatenationConfiguration compositedVideo
The configuration for the composited video artifacts concatenation.
AudioArtifactsConfiguration audio
The configuration for the audio artifacts.
VideoArtifactsConfiguration video
The configuration for the video artifacts.
ContentArtifactsConfiguration content
The configuration for the content artifacts.
CompositedVideoArtifactsConfiguration compositedVideo
Enables video compositing.
String muxType
The MUX type of the audio artifact configuration object.
String state
Enables or disables the configuration object.
ArtifactsConcatenationConfiguration artifactsConfiguration
The configuration for the artifacts in an Amazon Chime SDK meeting concatenation.
SourceConfiguration sourceConfiguration
The source configuration for a specified media pipeline.
ArtifactsConfiguration artifactsConfiguration
The configuration for the artifacts in an Amazon Chime SDK meeting.
String arn
The configuration object's Chime SDK meeting ARN.
String muxType
The configuration object's multiplex type.
CompositedVideoArtifactsConfiguration compositedVideo
The media pipeline's composited video.
SourceConfiguration sourceConfiguration
The source configuration settings of the media pipeline's configuration object.
String layout
The layout setting, such as GridView in the configuration object.
String resolution
The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.
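The two documented resolutions map to pixel dimensions as follows. A trivial lookup helper; the dictionary and function names are illustrative, not part of the SDK:

```python
RESOLUTION_PIXELS = {
    "HD": (1280, 720),    # the default
    "FHD": (1920, 1080),
}

def resolution_to_pixels(resolution="HD"):
    """Return (width, height) for a configuration resolution setting."""
    return RESOLUTION_PIXELS[resolution]
```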
GridViewConfiguration gridViewConfiguration
The GridView configuration setting.
String state
Enables or disables the configuration object.
String type
The type of data sink in the configuration object.
S3BucketSinkConfiguration s3BucketSinkConfiguration
The configuration settings for an Amazon S3 bucket sink.
String type
The type of concatenation source in a configuration object.
MediaCapturePipelineSourceConfiguration mediaCapturePipelineSourceConfiguration
The concatenation settings for the media pipeline in a configuration object.
String state
Enables or disables the configuration object.
String sourceType
The source type from which the media artifacts are captured. A Chime SDK meeting is the only supported source.
String sourceArn
The ARN of the source from which the media artifacts are captured.
String sinkType
The destination type to which the media artifacts are saved. You must use an S3 bucket.
String sinkArn
The ARN of the sink type.
String clientRequestToken
The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media pipeline request.
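A common way to produce such an idempotency token is a UUID, as in this illustrative helper (many AWS SDKs can also generate the token for you if you omit it):

```python
import uuid

def new_client_request_token():
    """Generate a unique idempotency token for a media pipeline request."""
    return str(uuid.uuid4())
```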
ChimeSdkMeetingConfiguration chimeSdkMeetingConfiguration
The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting.
List<E> tags
The tag key-value pairs.
MediaCapturePipeline mediaCapturePipeline
A media pipeline object that contains the ID, source type, source ARN, sink type, and sink ARN of the media pipeline.
List<E> sources
An object that specifies the sources for the media concatenation pipeline.
List<E> sinks
An object that specifies the data sinks for the media concatenation pipeline.
String clientRequestToken
The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media concatenation pipeline request.
List<E> tags
The tags associated with the media concatenation pipeline.
MediaConcatenationPipeline mediaConcatenationPipeline
A media concatenation pipeline object that contains the ID, source type, MediaPipelineARN, and sink of the media concatenation pipeline.
String mediaInsightsPipelineConfigurationName
The name of the media insights pipeline configuration.
String resourceAccessRoleArn
The ARN of the role used by the service to access Amazon Web Services resources, including
Transcribe and Transcribe Call Analytics, on the caller’s behalf.
RealTimeAlertConfiguration realTimeAlertConfiguration
The configuration settings for the real-time alerts in a media insights pipeline configuration.
List<E> elements
The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream.
List<E> tags
The tags assigned to the media insights pipeline configuration.
String clientRequestToken
The unique identifier for the media insights pipeline configuration request.
MediaInsightsPipelineConfiguration mediaInsightsPipelineConfiguration
The configuration settings for the media insights pipeline.
String mediaInsightsPipelineConfigurationArn
The ARN of the pipeline's configuration.
KinesisVideoStreamSourceRuntimeConfiguration kinesisVideoStreamSourceRuntimeConfiguration
The runtime configuration for the Kinesis video stream source of the media insights pipeline.
Map<K,V> mediaInsightsRuntimeMetadata
The runtime metadata for the media insights pipeline. Consists of a key-value map of strings.
KinesisVideoStreamRecordingSourceRuntimeConfiguration kinesisVideoStreamRecordingSourceRuntimeConfiguration
The runtime configuration for the Kinesis video recording stream source.
S3RecordingSinkRuntimeConfiguration s3RecordingSinkRuntimeConfiguration
The runtime configuration for the S3 recording sink. If specified, the settings in this structure override any
settings in S3RecordingSinkConfiguration.
List<E> tags
The tags assigned to the media insights pipeline.
String clientRequestToken
The unique identifier for the media insights pipeline request.
MediaInsightsPipeline mediaInsightsPipeline
The media insights pipeline object.
List<E> sources
The media live connector pipeline's data sources.
List<E> sinks
The media live connector pipeline's data sinks.
String clientRequestToken
The token assigned to the client making the request.
List<E> tags
The tags associated with the media live connector pipeline.
MediaLiveConnectorPipeline mediaLiveConnectorPipeline
The new media live connector pipeline.
String state
Enables or disables the configuration object.
String mediaPipelineId
The ID of the media pipeline being deleted.
String identifier
The unique identifier of the resource to be deleted. Valid values include the name and ARN of the media insights pipeline configuration.
String mediaPipelineId
The ID of the media pipeline to delete.
String fragmentSelectorType
The origin of the timestamps to use, Server or Producer. For more information, see StartSelectorType
in the Amazon Kinesis Video Streams Developer Guide.
TimestampRange timestampRange
The range of timestamps to return.
String mediaPipelineId
The ID of the pipeline that you want to get.
MediaCapturePipeline mediaCapturePipeline
The media pipeline object.
String identifier
The unique identifier of the requested resource. Valid values include the name and ARN of the media insights pipeline configuration.
MediaInsightsPipelineConfiguration mediaInsightsPipelineConfiguration
The requested media insights pipeline configuration.
String mediaPipelineId
The ID of the pipeline that you want to get.
MediaPipeline mediaPipeline
The media pipeline object.
SpeakerSearchTask speakerSearchTask
The details of the speaker search task.
VoiceToneAnalysisTask voiceToneAnalysisTask
The details of the voice tone analysis task.
String contentShareLayout
Defines the layout of the video tiles when content sharing is enabled.
PresenterOnlyConfiguration presenterOnlyConfiguration
Defines the configuration options for a presenter only video tile.
ActiveSpeakerOnlyConfiguration activeSpeakerOnlyConfiguration
The configuration settings for an ActiveSpeakerOnly video tile.
HorizontalLayoutConfiguration horizontalLayoutConfiguration
The configuration settings for a horizontal layout.
VerticalLayoutConfiguration verticalLayoutConfiguration
The configuration settings for a vertical layout.
VideoAttribute videoAttribute
The attribute settings for the video tiles.
String canvasOrientation
The orientation setting, horizontal or vertical.
String tileOrder
Sets the automatic ordering of the video tiles.
String tilePosition
Sets the position of horizontal tiles.
Integer tileCount
The maximum number of video tiles to display.
String tileAspectRatio
Sets the aspect ratio of the video tiles, such as 16:9.
String ruleName
The name of the issue detection rule.
String ruleName
The name of the keyword match rule.
List<E> keywords
The keywords or phrases that you want to match.
Boolean negate
Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all
the specified keywords or phrases are absent. Default: FALSE.
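The negate semantics can be sketched as a small helper. Illustrative only; the function name and plain substring matching on a transcript string are assumptions:

```python
def keyword_rule_fires(transcript, keywords, negate=False):
    """Evaluate a keyword match rule against a transcript string.

    negate=False: the rule fires when any keyword or phrase is present.
    negate=True:  the rule fires when all keywords or phrases are absent.
    """
    text = transcript.lower()
    present = any(k.lower() in text for k in keywords)
    return not present if negate else present
```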
String insightsTarget
The ARN of the sink.
List<E> streams
The stream or streams to be recorded.
FragmentSelector fragmentSelector
Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream.
List<E> streams
The streams in the source runtime configuration of a Kinesis video stream.
String mediaEncoding
Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV).
For more information, see Media formats in the Amazon Transcribe Developer Guide.
Integer mediaSampleRate
The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
Valid Range: Minimum value of 8000. Maximum value of 48000.
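These two constraints can be checked before starting a stream. A client-side sketch; the function name is an assumption:

```python
def validate_stream_audio(media_encoding, sample_rate_hz):
    """Check audio settings against the documented constraints."""
    if media_encoding.lower() != "pcm":
        raise ValueError("only PCM (signed 16-bit little-endian) is supported")
    if not 8000 <= sample_rate_hz <= 48000:
        raise ValueError("sample rate must be between 8000 and 48000 Hz")
    return True
```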
String insightsTarget
The ARN of the sink.
String resourceARN
The ARN of the media pipeline associated with any tags. The ARN consists of the pipeline's region, resource ID, and pipeline ID.
String sinkType
The sink configuration's sink type.
LiveConnectorRTMPConfiguration rTMPConfiguration
The sink configuration's RTMP configuration settings.
String sourceType
The source configuration's media source type.
ChimeSdkMeetingLiveConnectorConfiguration chimeSdkMeetingLiveConnectorConfiguration
The configuration settings of the connector pipeline.
String mediaPipelineId
The ID of a media pipeline.
String mediaPipelineArn
The ARN of the media capture pipeline.
String sourceType
The source type from which the media artifacts are saved. You must use ChimeSdkMeeting.
String sourceArn
The ARN of the source from which the media artifacts are saved.
String status
The status of the media pipeline.
String sinkType
The destination type to which the media artifacts are saved. You must use an S3 bucket.
String sinkArn
The ARN of the destination to which the media artifacts are saved.
Date createdTimestamp
The time at which the pipeline was created, in ISO 8601 format.
Date updatedTimestamp
The time at which the pipeline was updated, in ISO 8601 format.
ChimeSdkMeetingConfiguration chimeSdkMeetingConfiguration
The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting.
String mediaPipelineArn
The media pipeline ARN in the configuration object of a media capture pipeline.
ChimeSdkMeetingConcatenationConfiguration chimeSdkMeetingConfiguration
The meeting configuration settings in a media capture pipeline configuration object.
String mediaPipelineId
The ID of the media pipeline being concatenated.
String mediaPipelineArn
The ARN of the media pipeline that you specify in the SourceConfiguration object.
List<E> sources
The data sources being concatenated.
List<E> sinks
The data sinks of the concatenation pipeline.
String status
The status of the concatenation pipeline.
Date createdTimestamp
The time at which the concatenation pipeline was created.
Date updatedTimestamp
The time at which the concatenation pipeline was last updated.
String mediaPipelineId
The ID of a media insights pipeline.
String mediaPipelineArn
The ARN of a media insights pipeline.
String mediaInsightsPipelineConfigurationArn
The ARN of a media insights pipeline's configuration settings.
String status
The status of a media insights pipeline.
KinesisVideoStreamSourceRuntimeConfiguration kinesisVideoStreamSourceRuntimeConfiguration
The configuration settings for a Kinesis runtime video stream in a media insights pipeline.
Map<K,V> mediaInsightsRuntimeMetadata
The runtime metadata of a media insights pipeline.
KinesisVideoStreamRecordingSourceRuntimeConfiguration kinesisVideoStreamRecordingSourceRuntimeConfiguration
The runtime configuration settings for a Kinesis recording video stream in a media insights pipeline.
S3RecordingSinkRuntimeConfiguration s3RecordingSinkRuntimeConfiguration
The runtime configuration of the Amazon S3 bucket that stores recordings in a media insights pipeline.
Date createdTimestamp
The time at which the media insights pipeline was created.
List<E> elementStatuses
The statuses that the elements in a media insights pipeline can have during data processing.
String mediaInsightsPipelineConfigurationName
The name of the configuration.
String mediaInsightsPipelineConfigurationArn
The ARN of the configuration.
String resourceAccessRoleArn
The ARN of the role used by the service to access Amazon Web Services resources.
RealTimeAlertConfiguration realTimeAlertConfiguration
Lists the rules that trigger a real-time alert.
List<E> elements
The elements in the configuration.
String mediaInsightsPipelineConfigurationId
The ID of the configuration.
Date createdTimestamp
The time at which the configuration was created.
Date updatedTimestamp
The time at which the configuration was last updated.
String type
The element type.
AmazonTranscribeCallAnalyticsProcessorConfiguration amazonTranscribeCallAnalyticsProcessorConfiguration
The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.
AmazonTranscribeProcessorConfiguration amazonTranscribeProcessorConfiguration
The transcription processor configuration settings in a media insights pipeline configuration element.
KinesisDataStreamSinkConfiguration kinesisDataStreamSinkConfiguration
The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.
S3RecordingSinkConfiguration s3RecordingSinkConfiguration
The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.
VoiceAnalyticsProcessorConfiguration voiceAnalyticsProcessorConfiguration
The voice analytics configuration settings in a media insights pipeline configuration element.
LambdaFunctionSinkConfiguration lambdaFunctionSinkConfiguration
The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.
SqsQueueSinkConfiguration sqsQueueSinkConfiguration
The configuration settings for an SQS queue sink in a media insights pipeline configuration element.
SnsTopicSinkConfiguration snsTopicSinkConfiguration
The configuration settings for an SNS topic sink in a media insights pipeline configuration element.
VoiceEnhancementSinkConfiguration voiceEnhancementSinkConfiguration
The configuration settings for voice enhancement sink in a media insights pipeline configuration element.
String mediaInsightsPipelineConfigurationName
The name of the media insights pipeline configuration.
String mediaInsightsPipelineConfigurationId
The ID of the media insights pipeline configuration.
String mediaInsightsPipelineConfigurationArn
The ARN of the media insights pipeline configuration.
List<E> sources
The connector pipeline's data sources.
List<E> sinks
The connector pipeline's data sinks.
String mediaPipelineId
The connector pipeline's ID.
String mediaPipelineArn
The connector pipeline's ARN.
String status
The connector pipeline's status.
Date createdTimestamp
The time at which the connector pipeline was created.
Date updatedTimestamp
The time at which the connector pipeline was last updated.
MediaCapturePipeline mediaCapturePipeline
A pipeline that enables users to capture audio and video.
MediaLiveConnectorPipeline mediaLiveConnectorPipeline
The connector pipeline of the media pipeline.
MediaConcatenationPipeline mediaConcatenationPipeline
The media concatenation pipeline in a media pipeline.
MediaInsightsPipeline mediaInsightsPipeline
The media insights pipeline of a media pipeline.
String state
Enables or disables the configuration object.
String outputLocation
The URL of the Amazon S3 bucket that contains the post-call data.
String dataAccessRoleArn
The ARN of the role used by Amazon Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.
String contentRedactionOutput
The content redaction output settings for a post-call analysis task.
String outputEncryptionKMSKeyId
The ID of the KMS (Key Management Service) key used to encrypt the output.
String presenterPosition
Defines the position of the presenter video tile. Default: TopRight.
String type
The type of alert rule.
KeywordMatchConfiguration keywordMatchConfiguration
Specifies the settings for matching the keywords in a real-time alert rule.
SentimentConfiguration sentimentConfiguration
Specifies the settings for predicting sentiment in a real-time alert rule.
IssueDetectionConfiguration issueDetectionConfiguration
Specifies the issue detection settings for a real-time alert rule.
String streamArn
The ARN of the recording stream.
String destination
The destination URL of the S3 bucket.
String insightsTarget
The ARN of the SNS sink.
SelectedVideoStreams selectedVideoStreams
The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.
String speakerSearchTaskId
The speaker search task ID.
String speakerSearchTaskStatus
The status of the speaker search task.
Date createdTimestamp
The time at which a speaker search task was created.
Date updatedTimestamp
The time at which a speaker search task was updated.
String insightsTarget
The ARN of the SQS sink.
String identifier
The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
String voiceProfileDomainArn
The ARN of the voice profile domain that will store the voice profile.
KinesisVideoStreamSourceTaskConfiguration kinesisVideoStreamSourceTaskConfiguration
The task configuration for the Kinesis video stream source of the media insights pipeline.
String clientRequestToken
The unique identifier for the client request. Use a different token for different speaker search tasks.
SpeakerSearchTask speakerSearchTask
The details of the speaker search task.
String identifier
The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
String languageCode
The language code.
KinesisVideoStreamSourceTaskConfiguration kinesisVideoStreamSourceTaskConfiguration
The task configuration for the Kinesis video stream source of the media insights pipeline.
String clientRequestToken
The unique identifier for the client request. Use a different token for different voice tone analysis tasks.
VoiceToneAnalysisTask voiceToneAnalysisTask
The details of the voice tone analysis task.
String streamArn
The ARN of the stream.
String fragmentNumber
The unique identifier of the fragment to begin processing.
StreamChannelDefinition streamChannelDefinition
The streaming channel definition in the stream configuration.
String state
Enables or disables the configuration object.
String identifier
The unique identifier for the resource to be updated. Valid values include the name and ARN of the media insights pipeline configuration.
String resourceAccessRoleArn
The ARN of the role used by the service to access Amazon Web Services resources.
RealTimeAlertConfiguration realTimeAlertConfiguration
The configuration settings for real-time alerts for the media insights pipeline.
List<E> elements
The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream.
MediaInsightsPipelineConfiguration mediaInsightsPipelineConfiguration
The updated configuration settings.
String tileOrder
Sets the automatic ordering of the video tiles.
String tilePosition
Sets the position of vertical tiles.
Integer tileCount
The maximum number of tiles to display.
String tileAspectRatio
Sets the aspect ratio of the video tiles, such as 16:9.
Integer cornerRadius
Sets the corner radius of all video tiles.
String borderColor
Defines the border color of all video tiles.
String highlightColor
Defines the highlight color for the active video tile.
Integer borderThickness
Defines the border thickness for all video tiles.
String state
Enables or disables the configuration object.
Boolean disabled
Disables the VoiceEnhancementSinkConfiguration element.
String voiceToneAnalysisTaskId
The ID of the voice tone analysis task.
String voiceToneAnalysisTaskStatus
The status of a voice tone analysis task.
Date createdTimestamp
The time at which a voice tone analysis task was created.
Date updatedTimestamp
The time at which a voice tone analysis task was updated.