public static interface ExplanationMetadata.InputMetadata.VisualizationOrBuilder
extends com.google.protobuf.MessageOrBuilder
| Modifier and Type | Method and Description |
|---|---|
| float | getClipPercentLowerbound() Excludes attributions below the specified percentile from the highlighted areas. |
| float | getClipPercentUpperbound() Excludes attributions above the specified percentile from the highlighted areas. |
| ExplanationMetadata.InputMetadata.Visualization.ColorMap | getColorMap() The color scheme used for the highlighted areas. |
| int | getColorMapValue() The color scheme used for the highlighted areas. |
| ExplanationMetadata.InputMetadata.Visualization.OverlayType | getOverlayType() How the original image is displayed in the visualization. |
| int | getOverlayTypeValue() How the original image is displayed in the visualization. |
| ExplanationMetadata.InputMetadata.Visualization.Polarity | getPolarity() Whether to highlight only pixels with positive contributions, only negative ones, or both. |
| int | getPolarityValue() Whether to highlight only pixels with positive contributions, only negative ones, or both. |
| ExplanationMetadata.InputMetadata.Visualization.Type | getType() Type of the image visualization. |
| int | getTypeValue() Type of the image visualization. |
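Each enum field above comes with a paired accessor: the enum-returning getter (for example getColorMap()) resolves the stored integer to the generated enum, falling back to UNRECOGNIZED for wire values this client version does not know, while the Value getter (getColorMapValue()) returns the raw integer unchanged. The following self-contained sketch illustrates that pairing with a hypothetical stand-in enum, not the generated protobuf class:

```java
// Stand-in illustrating the paired enum accessors protobuf generates for
// enum fields (e.g. getColorMap() / getColorMapValue()). ColorMap here is a
// hypothetical mini enum for illustration, NOT the generated class.
public class EnumAccessorSketch {
    enum ColorMap {
        COLOR_MAP_UNSPECIFIED(0), PINK_GREEN(1), VIRIDIS(2), UNRECOGNIZED(-1);

        private final int number;
        ColorMap(int number) { this.number = number; }

        // Mirrors the forNumber(int) lookup on generated protobuf enums:
        // unknown wire values resolve to null there.
        static ColorMap forNumber(int value) {
            for (ColorMap c : values()) {
                if (c != UNRECOGNIZED && c.number == value) return c;
            }
            return null;
        }
    }

    private final int colorMapValue; // raw wire integer stored on the message

    EnumAccessorSketch(int colorMapValue) { this.colorMapValue = colorMapValue; }

    // getColorMapValue(): returns the raw integer, even if this client
    // version has no enum constant for it.
    int getColorMapValue() { return colorMapValue; }

    // getColorMap(): resolves the integer to the enum, falling back to
    // UNRECOGNIZED when a newer server sent a value this client doesn't know.
    ColorMap getColorMap() {
        ColorMap c = ColorMap.forNumber(colorMapValue);
        return c == null ? ColorMap.UNRECOGNIZED : c;
    }

    public static void main(String[] args) {
        System.out.println(new EnumAccessorSketch(2).getColorMap());   // VIRIDIS
        System.out.println(new EnumAccessorSketch(42).getColorMap());  // UNRECOGNIZED
        System.out.println(new EnumAccessorSketch(42).getColorMapValue()); // 42
    }
}
```

Checking for UNRECOGNIZED before branching on the enum getter keeps client code robust against server-side additions to the enum.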
Methods inherited from interface com.google.protobuf.MessageOrBuilder:
findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
int getTypeValue()
Type of the image visualization. Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution]. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.Type type = 1;
ExplanationMetadata.InputMetadata.Visualization.Type getType()
Type of the image visualization. Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution]. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.Type type = 1;
int getPolarityValue()
Whether to highlight only pixels with positive contributions, only negative ones, or both. Defaults to POSITIVE.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.Polarity polarity = 2;
ExplanationMetadata.InputMetadata.Visualization.Polarity getPolarity()
Whether to highlight only pixels with positive contributions, only negative ones, or both. Defaults to POSITIVE.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.Polarity polarity = 2;
int getColorMapValue()
The color scheme used for the highlighted areas. Defaults to PINK_GREEN for [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution], which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for [XRAI attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.xrai_attribution], which highlights the most influential regions in yellow and the least influential in blue.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.ColorMap color_map = 3;
ExplanationMetadata.InputMetadata.Visualization.ColorMap getColorMap()
The color scheme used for the highlighted areas. Defaults to PINK_GREEN for [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution], which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for [XRAI attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.xrai_attribution], which highlights the most influential regions in yellow and the least influential in blue.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.ColorMap color_map = 3;
float getClipPercentUpperbound()
Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
float clip_percent_upperbound = 4;
float getClipPercentLowerbound()
Excludes attributions below the specified percentile from the highlighted areas. Defaults to 62.
float clip_percent_lowerbound = 5;
int getOverlayTypeValue()
How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.OverlayType overlay_type = 6;
ExplanationMetadata.InputMetadata.Visualization.OverlayType getOverlayType()
How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
.google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.Visualization.OverlayType overlay_type = 6;
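Together, clip_percent_lowerbound (default 62) and clip_percent_upperbound (default 99.9) keep only attributions whose values fall between the two percentiles, filtering out noise so areas of strong attribution stand out. The sketch below illustrates that documented filtering in plain Java, using a nearest-rank percentile; it is an assumption-laden illustration of the semantics, not the service's implementation:

```java
import java.util.Arrays;

// Illustrates the clipping described for clip_percent_lowerbound /
// clip_percent_upperbound: only attributions between the two percentiles
// remain highlighted. Sketch of the documented behavior, not the service code.
public class ClipPercentSketch {

    // Nearest-rank percentile of `values` at the given percent (0..100).
    static float percentile(float[] values, double percent) {
        float[] sorted = values.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(percent / 100.0 * sorted.length);
        rank = Math.min(Math.max(rank, 1), sorted.length);
        return sorted[rank - 1];
    }

    // Mask of which attributions stay highlighted after clipping.
    static boolean[] highlightMask(float[] attributions, double lower, double upper) {
        float lo = percentile(attributions, lower);
        float hi = percentile(attributions, upper);
        boolean[] mask = new boolean[attributions.length];
        for (int i = 0; i < attributions.length; i++) {
            mask[i] = attributions[i] >= lo && attributions[i] <= hi;
        }
        return mask;
    }

    public static void main(String[] args) {
        float[] attributions = {0.01f, 0.05f, 0.20f, 0.40f, 0.60f, 0.80f, 0.95f, 9.99f};
        // Defaults from the field docs: lower = 62, upper = 99.9.
        System.out.println(Arrays.toString(highlightMask(attributions, 62, 99.9)));
    }
}
```

Tightening the upper bound trims extreme outlier attributions from the overlay, while raising the lower bound hides weakly attributed regions.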
Copyright © 2024 Google LLC. All rights reserved.