| Modifier and Type | Method and Description |
|---|---|
| `Table` | `addColumns(Expression... fields)` Adds additional columns. |
| `Table` | `addColumns(String fields)` Adds additional columns. |
| `Table` | `addOrReplaceColumns(Expression... fields)` Adds additional columns; existing fields with the same name are replaced. |
| `Table` | `addOrReplaceColumns(String fields)` Adds additional columns; existing fields with the same name are replaced. |
| `AggregatedTable` | `aggregate(Expression aggregateFunction)` Performs a global aggregate operation with an aggregate function. |
| `AggregatedTable` | `aggregate(String aggregateFunction)` Performs a global aggregate operation with an aggregate function. |
| `Table` | `as(Expression... fields)` Renames the fields of the expression result. |
| `Table` | `as(String fields)` Renames the fields of the expression result. |
| `static TableImpl` | `createTable(TableEnvironment tableEnvironment, QueryOperation operationTree, OperationTreeBuilder operationTreeBuilder, FunctionLookup functionLookup)` |
| `TemporalTableFunction` | `createTemporalTableFunction(Expression timeAttribute, Expression primaryKey)` Creates a TemporalTableFunction backed by this table as a history table. |
| `TemporalTableFunction` | `createTemporalTableFunction(String timeAttribute, String primaryKey)` Creates a TemporalTableFunction backed by this table as a history table. |
| `Table` | `distinct()` Removes duplicate values and returns only distinct (different) values. |
| `Table` | `dropColumns(Expression... fields)` Drops existing columns. |
| `Table` | `dropColumns(String fields)` Drops existing columns. |
| `Table` | `fetch(int fetch)` Limits a sorted result to the first n rows. |
| `Table` | `filter(Expression predicate)` Filters out elements that don't pass the filter predicate. |
| `Table` | `filter(String predicate)` Filters out elements that don't pass the filter predicate. |
| `FlatAggregateTable` | `flatAggregate(Expression tableAggregateFunction)` Performs a global flatAggregate without groupBy. |
| `FlatAggregateTable` | `flatAggregate(String tableAggregateFunction)` Performs a global flatAggregate without groupBy. |
| `Table` | `flatMap(Expression tableFunction)` Performs a flatMap operation with a user-defined or built-in table function. |
| `Table` | `flatMap(String tableFunction)` Performs a flatMap operation with a user-defined or built-in table function. |
| `Table` | `fullOuterJoin(Table right, Expression joinPredicate)` Joins two Tables. |
| `Table` | `fullOuterJoin(Table right, String joinPredicate)` Joins two Tables. |
| `QueryOperation` | `getQueryOperation()` Returns the underlying logical representation of this table. |
| `TableSchema` | `getSchema()` Returns the schema of this table. |
| `TableEnvironment` | `getTableEnvironment()` |
| `GroupedTable` | `groupBy(Expression... fields)` Groups the elements on some grouping keys. |
| `GroupedTable` | `groupBy(String fields)` Groups the elements on some grouping keys. |
| `void` | `insertInto(QueryConfig conf, String tablePath, String... tablePathContinued)` Writes the Table to a TableSink that was registered under the specified path. |
| `void` | `insertInto(String tablePath)` Writes the Table to a TableSink that was registered under the specified path. |
| `void` | `insertInto(String tableName, QueryConfig conf)` Writes the Table to a TableSink that was registered under the specified name in the built-in catalog. |
| `Table` | `intersect(Table right)` Intersects two Tables with duplicate records removed. |
| `Table` | `intersectAll(Table right)` Intersects two Tables. |
| `Table` | `join(Table right)` Joins two Tables. |
| `Table` | `join(Table right, Expression joinPredicate)` Joins two Tables. |
| `Table` | `join(Table right, String joinPredicate)` Joins two Tables. |
| `Table` | `joinLateral(Expression tableFunctionCall)` Joins this Table with a user-defined TableFunction. |
| `Table` | `joinLateral(Expression tableFunctionCall, Expression joinPredicate)` Joins this Table with a user-defined TableFunction. |
| `Table` | `joinLateral(String tableFunctionCall)` Joins this Table with a user-defined TableFunction. |
| `Table` | `joinLateral(String tableFunctionCall, String joinPredicate)` Joins this Table with a user-defined TableFunction. |
| `Table` | `leftOuterJoin(Table right)` Joins two Tables. |
| `Table` | `leftOuterJoin(Table right, Expression joinPredicate)` Joins two Tables. |
| `Table` | `leftOuterJoin(Table right, String joinPredicate)` Joins two Tables. |
| `Table` | `leftOuterJoinLateral(Expression tableFunctionCall)` Joins this Table with a user-defined TableFunction. |
| `Table` | `leftOuterJoinLateral(Expression tableFunctionCall, Expression joinPredicate)` Joins this Table with a user-defined TableFunction. |
| `Table` | `leftOuterJoinLateral(String tableFunctionCall)` Joins this Table with a user-defined TableFunction. |
| `Table` | `leftOuterJoinLateral(String tableFunctionCall, String joinPredicate)` Joins this Table with a user-defined TableFunction. |
| `Table` | `map(Expression mapFunction)` Performs a map operation with a user-defined or built-in scalar function. |
| `Table` | `map(String mapFunction)` Performs a map operation with a user-defined or built-in scalar function. |
| `Table` | `minus(Table right)` Minus of two Tables with duplicate records removed. |
| `Table` | `minusAll(Table right)` Minus of two Tables. |
| `Table` | `offset(int offset)` Limits a sorted result from an offset position. |
| `Table` | `orderBy(Expression... fields)` Sorts the given Table. |
| `Table` | `orderBy(String fields)` Sorts the given Table. |
| `void` | `printSchema()` Prints the schema of this table to the console in a tree format. |
| `Table` | `renameColumns(Expression... fields)` Renames existing columns. |
| `Table` | `renameColumns(String fields)` Renames existing columns. |
| `Table` | `rightOuterJoin(Table right, Expression joinPredicate)` Joins two Tables. |
| `Table` | `rightOuterJoin(Table right, String joinPredicate)` Joins two Tables. |
| `Table` | `select(Expression... fields)` Performs a selection operation. |
| `Table` | `select(String fields)` Performs a selection operation. |
| `String` | `toString()` |
| `Table` | `union(Table right)` Unions two Tables with duplicate records removed. |
| `Table` | `unionAll(Table right)` Unions two Tables. |
| `Table` | `where(Expression predicate)` Filters out elements that don't pass the filter predicate. |
| `Table` | `where(String predicate)` Filters out elements that don't pass the filter predicate. |
| `GroupWindowedTable` | `window(GroupWindow groupWindow)` Groups the records of a table by assigning them to windows defined by a time or row interval. |
| `OverWindowedTable` | `window(OverWindow... overWindows)` Defines over-windows on the records of a table. |

In this summary, Expression abbreviates org.apache.flink.table.expressions.Expression, TemporalTableFunction abbreviates org.apache.flink.table.functions.TemporalTableFunction, and TableSchema abbreviates org.apache.flink.table.api.TableSchema; the method detail sections below use the fully qualified names.
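Read together, these methods form a fluent API: each call returns a new Table (or a specialized intermediate such as GroupedTable), so operations chain into relational pipelines. A minimal hedged sketch in the string-expression style used throughout this page, assuming a TableEnvironment with a registered table "Orders" whose columns a and b are illustrative:

```java
Table orders = tableEnv.scan("Orders");   // assumes "Orders" was registered beforehand
Table result = orders
    .filter("a > 10")                     // keep rows where a > 10
    .groupBy("b")                         // group on key b
    .select("b, a.sum as total");         // one aggregated row per group
result.printSchema();                     // print the derived schema as a tree
```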
public TableEnvironment getTableEnvironment()
public static TableImpl createTable(TableEnvironment tableEnvironment, QueryOperation operationTree, OperationTreeBuilder operationTreeBuilder, FunctionLookup functionLookup)
public org.apache.flink.table.api.TableSchema getSchema()
Returns the schema of this table.
Specified by: getSchema in interface Table
public void printSchema()
Prints the schema of this table to the console in a tree format.
Specified by: printSchema in interface Table
public QueryOperation getQueryOperation()
Returns the underlying logical representation of this table.
Specified by: getQueryOperation in interface Table
public Table select(String fields)
Example:
tab.select("key, value.avg + ' The average' as average")
public Table select(org.apache.flink.table.expressions.Expression... fields)
Scala Example:
tab.select('key, 'value.avg + " The average" as 'average)
public org.apache.flink.table.functions.TemporalTableFunction createTemporalTableFunction(String timeAttribute, String primaryKey)
Creates a TemporalTableFunction backed by this table as a history table.
Temporal Tables represent a concept of a table that changes over time and for which
Flink keeps track of those changes. A TemporalTableFunction provides a way to
access this data.
For more information, please check Flink's documentation on Temporal Tables.
Currently, TemporalTableFunctions are only supported in streaming mode.
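For illustration, a hedged Java sketch of deriving and registering such a function (table and column names are hypothetical; registering the function assumes a StreamTableEnvironment is in scope):

```java
// ratesHistory: a Table with time attribute "r_proctime" and primary key "r_currency"
TemporalTableFunction rates =
    ratesHistory.createTemporalTableFunction("r_proctime", "r_currency");
tableEnv.registerFunction("Rates", rates);  // usable afterwards in temporal joins
```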
Specified by: createTemporalTableFunction in interface Table
Parameters:
timeAttribute - Must point to a time attribute. Provides a way to compare which records are newer or older versions.
primaryKey - Defines the primary key. With a primary key it is possible to update or delete a row.
Returns:
A TemporalTableFunction, which is an instance of TableFunction. It takes a single argument, the timeAttribute, for which it returns the matching version of the Table from which the TemporalTableFunction was created.
public org.apache.flink.table.functions.TemporalTableFunction createTemporalTableFunction(org.apache.flink.table.expressions.Expression timeAttribute,
org.apache.flink.table.expressions.Expression primaryKey)
Creates a TemporalTableFunction backed by this table as a history table.
Temporal Tables represent a concept of a table that changes over time and for which
Flink keeps track of those changes. A TemporalTableFunction provides a way to
access this data.
For more information, please check Flink's documentation on Temporal Tables.
Currently, TemporalTableFunctions are only supported in streaming mode.
Specified by: createTemporalTableFunction in interface Table
Parameters:
timeAttribute - Must point to a time indicator. Provides a way to compare which records are newer or older versions.
primaryKey - Defines the primary key. With a primary key it is possible to update or delete a row.
Returns:
A TemporalTableFunction, which is an instance of TableFunction. It takes a single argument, the timeAttribute, for which it returns the matching version of the Table from which the TemporalTableFunction was created.
public Table as(String fields)
Example:
tab.as("a, b")
public Table as(org.apache.flink.table.expressions.Expression... fields)
Scala Example:
tab.as('a, 'b)
public Table filter(String predicate)
Example:
tab.filter("name = 'Fred'")
public Table filter(org.apache.flink.table.expressions.Expression predicate)
Scala Example:
tab.filter('name === "Fred")
public Table where(String predicate)
Example:
tab.where("name = 'Fred'")
public Table where(org.apache.flink.table.expressions.Expression predicate)
Scala Example:
tab.where('name === "Fred")
public GroupedTable groupBy(String fields)
Example:
tab.groupBy("key").select("key, value.avg")
public GroupedTable groupBy(org.apache.flink.table.expressions.Expression... fields)
Scala Example:
tab.groupBy('key).select('key, 'value.avg)
public Table distinct()
Example:
tab.select("key, value").distinct()
public Table join(Table right)
Joins two Tables. Similar to a SQL join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary. You can use
where and select clauses after a join to further specify the behaviour of the join.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.join(right).where("a = b && c > 3").select("a, b, d")
public Table join(Table right, String joinPredicate)
Joins two Tables. Similar to a SQL join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.join(right, "a = b")
public Table join(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Joins two Tables. Similar to a SQL join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment.
Scala Example:
left.join(right, 'a === 'b).select('a, 'b, 'd)
public Table leftOuterJoin(Table right)
Joins two Tables. Similar to a SQL left outer join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its
TableConfig must have null check enabled (default).
Example:
left.leftOuterJoin(right).select("a, b, d")
Specified by: leftOuterJoin in interface Table
public Table leftOuterJoin(Table right, String joinPredicate)
Joins two Tables. Similar to a SQL left outer join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its
TableConfig must have null check enabled (default).
Example:
left.leftOuterJoin(right, "a = b").select("a, b, d")
Specified by: leftOuterJoin in interface Table
public Table leftOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Joins two Tables. Similar to a SQL left outer join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its
TableConfig must have null check enabled (default).
Scala Example:
left.leftOuterJoin(right, 'a === 'b).select('a, 'b, 'd)
Specified by: leftOuterJoin in interface Table
public Table rightOuterJoin(Table right, String joinPredicate)
Joins two Tables. Similar to a SQL right outer join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its
TableConfig must have null check enabled (default).
Example:
left.rightOuterJoin(right, "a = b").select("a, b, d")
Specified by: rightOuterJoin in interface Table
public Table rightOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Joins two Tables. Similar to a SQL right outer join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its
TableConfig must have null check enabled (default).
Scala Example:
left.rightOuterJoin(right, 'a === 'b).select('a, 'b, 'd)
Specified by: rightOuterJoin in interface Table
public Table fullOuterJoin(Table right, String joinPredicate)
Joins two Tables. Similar to a SQL full outer join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its
TableConfig must have null check enabled (default).
Example:
left.fullOuterJoin(right, "a = b").select("a, b, d")
Specified by: fullOuterJoin in interface Table
public Table fullOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Joins two Tables. Similar to a SQL full outer join. The fields of the two joined
operations must not overlap, use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its
TableConfig must have null check enabled (default).
Scala Example:
left.fullOuterJoin(right, 'a === 'b).select('a, 'b, 'd)
Specified by: fullOuterJoin in interface Table
public Table joinLateral(String tableFunctionCall)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL inner join with ON TRUE predicate but works with a table function. Each row of the
table is joined with all rows produced by the table function.
Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        // String[] has no forEach(); iterate over the split result explicitly
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}
TableFunction<String> split = new MySplitUDTF();
tableEnv.registerFunction("split", split);
table.joinLateral("split(c) as (s)").select("a, b, c, s");
Specified by: joinLateral in interface Table
public Table joinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL inner join with ON TRUE predicate but works with a table function. Each row of the
table is joined with all rows produced by the table function.
Scala Example:
class MySplitUDTF extends TableFunction[String] {
  def eval(str: String): Unit = {
    str.split("#").foreach(collect)
  }
}
val split = new MySplitUDTF()
table.joinLateral(split('c) as ('s)).select('a, 'b, 'c, 's)
Specified by: joinLateral in interface Table
public Table joinLateral(String tableFunctionCall, String joinPredicate)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL inner join but works with a table function. Each row of the table is joined with all
rows produced by the table function.
Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        // String[] has no forEach(); iterate over the split result explicitly
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}
TableFunction<String> split = new MySplitUDTF();
tableEnv.registerFunction("split", split);
table.joinLateral("split(c) as (s)", "a = s").select("a, b, c, s");
Specified by: joinLateral in interface Table
public Table joinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall, org.apache.flink.table.expressions.Expression joinPredicate)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL inner join but works with a table function. Each row of the table is joined with all
rows produced by the table function.
Scala Example:
class MySplitUDTF extends TableFunction[String] {
  def eval(str: String): Unit = {
    str.split("#").foreach(collect)
  }
}
val split = new MySplitUDTF()
table.joinLateral(split('c) as ('s), 'a === 's).select('a, 'b, 'c, 's)
Specified by: joinLateral in interface Table
public Table leftOuterJoinLateral(String tableFunctionCall)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL left outer join with ON TRUE predicate but works with a table function. Each row of
the table is joined with all rows produced by the table function. If the table function does
not produce any row, the outer row is padded with nulls.
Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        // String[] has no forEach(); iterate over the split result explicitly
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}
TableFunction<String> split = new MySplitUDTF();
tableEnv.registerFunction("split", split);
table.leftOuterJoinLateral("split(c) as (s)").select("a, b, c, s");
Specified by: leftOuterJoinLateral in interface Table
public Table leftOuterJoinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL left outer join with ON TRUE predicate but works with a table function. Each row of
the table is joined with all rows produced by the table function. If the table function does
not produce any row, the outer row is padded with nulls.
Scala Example:
class MySplitUDTF extends TableFunction[String] {
  def eval(str: String): Unit = {
    str.split("#").foreach(collect)
  }
}
val split = new MySplitUDTF()
table.leftOuterJoinLateral(split('c) as ('s)).select('a, 'b, 'c, 's)
Specified by: leftOuterJoinLateral in interface Table
public Table leftOuterJoinLateral(String tableFunctionCall, String joinPredicate)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL left outer join with ON TRUE predicate but works with a table function. Each row of
the table is joined with all rows produced by the table function. If the table function does
not produce any row, the outer row is padded with nulls.
Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        // String[] has no forEach(); iterate over the split result explicitly
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}
TableFunction<String> split = new MySplitUDTF();
tableEnv.registerFunction("split", split);
table.leftOuterJoinLateral("split(c) as (s)", "a = s").select("a, b, c, s");
Specified by: leftOuterJoinLateral in interface Table
public Table leftOuterJoinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall, org.apache.flink.table.expressions.Expression joinPredicate)
Joins this Table with a user-defined TableFunction. This join is similar to
a SQL left outer join with ON TRUE predicate but works with a table function. Each row of
the table is joined with all rows produced by the table function. If the table function does
not produce any row, the outer row is padded with nulls.
Scala Example:
class MySplitUDTF extends TableFunction[String] {
  def eval(str: String): Unit = {
    str.split("#").foreach(collect)
  }
}
val split = new MySplitUDTF()
table.leftOuterJoinLateral(split('c) as ('s), 'a === 's).select('a, 'b, 'c, 's)
Specified by: leftOuterJoinLateral in interface Table
public Table minus(Table right)
Minus of two Tables with duplicate records removed.
Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not
exist in the right table. Duplicate records in the left table are returned
exactly once, i.e., duplicates are removed. Both tables must have identical field types.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.minus(right)
public Table minusAll(Table right)
Minus of two Tables. Similar to a SQL EXCEPT ALL clause. MinusAll returns the records that do not exist in
the right table. A record that is present n times in the left table and m times
in the right table is returned (n - m) times, i.e., as many duplicates as are present
in the right table are removed. Both tables must have identical field types.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.minusAll(right)
public Table union(Table right)
Unions two Tables with duplicate records removed.
Similar to a SQL UNION. The fields of the two union operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.union(right)
public Table unionAll(Table right)
Unions two Tables. Similar to a SQL UNION ALL. The fields of the two union
operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.unionAll(right)
public Table intersect(Table right)
Intersects two Tables with duplicate records removed. Intersect returns records that
exist in both tables. If a record is present in one or both tables more than once, it is
returned just once, i.e., the resulting table has no duplicate records. Similar to a
SQL INTERSECT. The fields of the two intersect operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.intersect(right)
public Table intersectAll(Table right)
Intersects two Tables. IntersectAll returns records that exist in both tables.
If a record is present in both tables more than once, it is returned as many times as it
is present in both tables, i.e., the resulting table might have duplicate records. Similar
to an SQL INTERSECT ALL. The fields of the two intersect operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.intersectAll(right)
Specified by: intersectAll in interface Table
public Table orderBy(String fields)
Sorts the given Table. Similar to SQL ORDER BY.
The resulting Table is globally sorted across all parallel partitions.
Example:
tab.orderBy("name.desc")
public Table orderBy(org.apache.flink.table.expressions.Expression... fields)
Sorts the given Table. Similar to SQL ORDER BY.
The resulting Table is globally sorted across all parallel partitions.
Scala Example:
tab.orderBy('name.desc)
public Table offset(int offset)
Limits a sorted result from an offset position. Table.offset(int offset) can be combined with a subsequent
Table.fetch(int fetch) call to return n rows after skipping the first o rows.
// skips the first 3 rows and returns all following rows.
tab.orderBy("name.desc").offset(3)
// skips the first 10 rows and returns the next 5 rows.
tab.orderBy("name.desc").offset(10).fetch(5)
public Table fetch(int fetch)
Limits a sorted result to the first n rows. Table.fetch(int fetch) can be combined with a preceding
Table.offset(int offset) call to return n rows after skipping the first o rows.
// returns the first 3 records.
tab.orderBy("name.desc").fetch(3)
// skips the first 10 rows and returns the next 5 rows.
tab.orderBy("name.desc").offset(10).fetch(5)
public void insertInto(String tablePath)
Writes the Table to a TableSink that was registered under the specified path.
For the path resolution algorithm see TableEnvironment.useDatabase(String).
A batch Table can only be written to a
org.apache.flink.table.sinks.BatchTableSink, a streaming Table requires a
org.apache.flink.table.sinks.AppendStreamTableSink, a
org.apache.flink.table.sinks.RetractStreamTableSink, or an
org.apache.flink.table.sinks.UpsertStreamTableSink.
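For example, assuming a sink was registered earlier under the hypothetical name "csvSink":

```java
// tableEnv.registerTableSink("csvSink", ...) is assumed to have run already
result.insertInto("csvSink");
```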
Specified by: insertInto in interface Table
Parameters:
tablePath - The path of the registered TableSink to which the Table is written.
public void insertInto(String tableName, QueryConfig conf)
Writes the Table to a TableSink that was registered under the specified name
in the built-in catalog.
A batch Table can only be written to a
org.apache.flink.table.sinks.BatchTableSink, a streaming Table requires a
org.apache.flink.table.sinks.AppendStreamTableSink, a
org.apache.flink.table.sinks.RetractStreamTableSink, or an
org.apache.flink.table.sinks.UpsertStreamTableSink.
Specified by: insertInto in interface Table
Parameters:
tableName - The name of the TableSink to which the Table is written.
conf - The QueryConfig to use.
public void insertInto(QueryConfig conf, String tablePath, String... tablePathContinued)
Writes the Table to a TableSink that was registered under the specified path.
For the path resolution algorithm see TableEnvironment.useDatabase(String).
A batch Table can only be written to a
org.apache.flink.table.sinks.BatchTableSink, a streaming Table requires a
org.apache.flink.table.sinks.AppendStreamTableSink, a
org.apache.flink.table.sinks.RetractStreamTableSink, or an
org.apache.flink.table.sinks.UpsertStreamTableSink.
Specified by: insertInto in interface Table
Parameters:
conf - The QueryConfig to use.
tablePath - The first part of the path of the registered TableSink to which the Table is written. This is to ensure at least the name of the TableSink is provided.
tablePathContinued - The remaining part of the path of the registered TableSink to which the Table is written.
public GroupWindowedTable window(GroupWindow groupWindow)
Groups the records of a table by assigning them to windows defined by a time or row interval.
For streaming tables of infinite size, grouping into windows is required to define finite groups on which group-based aggregates can be computed.
For batch tables of finite size, windowing essentially provides shortcuts for time-based groupBy.
Note: Computing windowed aggregates on a streaming table is only a parallel operation
if additional grouping attributes are added to the groupBy(...) clause.
If the groupBy(...) only references a GroupWindow alias, the streamed table will be
processed by a single task, i.e., with parallelism 1.
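A hedged sketch of a tumbling group window in the Java string-expression style (the column names rowtime and key are illustrative; Tumble is the built-in tumbling-window builder):

```java
tab
    .window(Tumble.over("10.minutes").on("rowtime").as("w"))  // 10-minute tumbling windows
    .groupBy("w, key")          // window alias plus a key, so the operation stays parallel
    .select("key, value.avg");  // aggregate per key and window
```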
public OverWindowedTable window(OverWindow... overWindows)
Defines over-windows on the records of a table. An over-window defines for each record an interval of records over which aggregation functions can be computed.
Example:
table
.window(Over partitionBy 'c orderBy 'rowTime preceding 10.seconds as 'ow)
.select('c, 'b.count over 'ow, 'e.sum over 'ow)
Note: Computing over window aggregates on a streaming table is only a parallel operation if the window is partitioned. Otherwise, the whole stream will be processed by a single task, i.e., with parallelism 1.
Note: Over-windows for batch tables are currently not supported.
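For comparison, the Scala over-window example above might look like this in the Java string-based style (hedged; column names as in the example):

```java
table
    .window(Over.partitionBy("c").orderBy("rowTime").preceding("10.seconds").as("ow"))
    .select("c, b.count over ow, e.sum over ow");
```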
public Table addColumns(String fields)
Example:
tab.addColumns("a + 1 as a1, concat(b, 'sunny') as b1")
Specified by: addColumns in interface Table
public Table addColumns(org.apache.flink.table.expressions.Expression... fields)
Scala Example:
tab.addColumns('a + 1 as 'a1, concat('b, "sunny") as 'b1)
Specified by: addColumns in interface Table
public Table addOrReplaceColumns(String fields)
Example:
tab.addOrReplaceColumns("a + 1 as a1, concat(b, 'sunny') as b1")
Specified by: addOrReplaceColumns in interface Table
public Table addOrReplaceColumns(org.apache.flink.table.expressions.Expression... fields)
Scala Example:
tab.addOrReplaceColumns('a + 1 as 'a1, concat('b, "sunny") as 'b1)
Specified by: addOrReplaceColumns in interface Table
public Table renameColumns(String fields)
Example:
tab.renameColumns("a as a1, b as b1")
Specified by: renameColumns in interface Table
public Table renameColumns(org.apache.flink.table.expressions.Expression... fields)
Scala Example:
tab.renameColumns('a as 'a1, 'b as 'b1)
Specified by: renameColumns in interface Table
public Table dropColumns(String fields)
Example:
tab.dropColumns("a, b")
Specified by: dropColumns in interface Table
public Table dropColumns(org.apache.flink.table.expressions.Expression... fields)
Scala Example:
tab.dropColumns('a, 'b)
Specified by: dropColumns in interface Table
public Table map(String mapFunction)
Example:
ScalarFunction func = new MyMapFunction();
tableEnv.registerFunction("func", func);
tab.map("func(c)");
public Table map(org.apache.flink.table.expressions.Expression mapFunction)
Scala Example:
val func = new MyMapFunction()
tab.map(func('c))
public Table flatMap(String tableFunction)
Example:
TableFunction func = new MyFlatMapFunction();
tableEnv.registerFunction("func", func);
table.flatMap("func(c)");
public Table flatMap(org.apache.flink.table.expressions.Expression tableFunction)
Scala Example:
val func = new MyFlatMapFunction
table.flatMap(func('c))
public AggregatedTable aggregate(String aggregateFunction)
Performs a global aggregate operation with an aggregate function. You have to close the Table.aggregate(String) call with a select statement. The output will be flattened if the
output type is a composite type.
Example:
AggregateFunction aggFunc = new MyAggregateFunction();
tableEnv.registerFunction("aggFunc", aggFunc);
table.aggregate("aggFunc(a, b) as (f0, f1, f2)")
    .select("f0, f1");
public AggregatedTable aggregate(org.apache.flink.table.expressions.Expression aggregateFunction)
Performs a global aggregate operation with an aggregate function. You have to close the Table.aggregate(Expression) call with a select statement. The output will be flattened if the
output type is a composite type.
Scala Example:
val aggFunc = new MyAggregateFunction
table.aggregate(aggFunc('a, 'b) as ('f0, 'f1, 'f2))
.select('f0, 'f1)
public FlatAggregateTable flatAggregate(String tableAggregateFunction)
Example:
TableAggregateFunction tableAggFunc = new MyTableAggregateFunction();
tableEnv.registerFunction("tableAggFunc", tableAggFunc);
tab.flatAggregate("tableAggFunc(a, b) as (x, y, z)")
    .select("x, y, z");
Specified by: flatAggregate in interface Table
public FlatAggregateTable flatAggregate(org.apache.flink.table.expressions.Expression tableAggregateFunction)
Scala Example:
val tableAggFunc = new MyTableAggregateFunction
tab.flatAggregate(tableAggFunc('a, 'b) as ('x, 'y, 'z))
.select('x, 'y, 'z)
Specified by: flatAggregate in interface Table
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.