== Physical Plan ==
* HashAggregate (71)
+- * ColumnarToRow (70)
   +- CometColumnarExchange (69)
      +- RowToColumnar (68)
         +- * HashAggregate (67)
            +- Union (66)
               :- * Project (47)
               :  +- * BroadcastHashJoin Inner BuildRight (46)
               :     :- * Project (44)
               :     :  +- * SortMergeJoin LeftSemi (43)
               :     :     :- * ColumnarToRow (25)
               :     :     :  +- CometSort (24)
               :     :     :     +- CometColumnarExchange (23)
               :     :     :        +- CometProject (22)
               :     :     :           +- CometBroadcastHashJoin (21)
               :     :     :              :- CometScan parquet spark_catalog.default.catalog_sales (1)
               :     :     :              +- CometBroadcastExchange (20)
               :     :     :                 +- CometProject (19)
               :     :     :                    +- CometFilter (18)
               :     :     :                       +- CometHashAggregate (17)
               :     :     :                          +- CometColumnarExchange (16)
               :     :     :                             +- CometHashAggregate (15)
               :     :     :                                +- CometProject (14)
               :     :     :                                   +- CometBroadcastHashJoin (13)
               :     :     :                                      :- CometProject (9)
               :     :     :                                      :  +- CometBroadcastHashJoin (8)
               :     :     :                                      :     :- CometFilter (3)
               :     :     :                                      :     :  +- CometScan parquet spark_catalog.default.store_sales (2)
               :     :     :                                      :     +- CometBroadcastExchange (7)
               :     :     :                                      :        +- CometProject (6)
               :     :     :                                      :           +- CometFilter (5)
               :     :     :                                      :              +- CometScan parquet spark_catalog.default.date_dim (4)
               :     :     :                                      +- CometBroadcastExchange (12)
               :     :     :                                         +- CometFilter (11)
               :     :     :                                            +- CometScan parquet spark_catalog.default.item (10)
               :     :     +- * Sort (42)
               :     :        +- * Project (41)
               :     :           +- * Filter (40)
               :     :              +- * HashAggregate (39)
               :     :                 +- * ColumnarToRow (38)
               :     :                    +- CometColumnarExchange (37)
               :     :                       +- RowToColumnar (36)
               :     :                          +- * HashAggregate (35)
               :     :                             +- * ColumnarToRow (34)
               :     :                                +- CometProject (33)
               :     :                                   +- CometBroadcastHashJoin (32)
               :     :                                      :- CometProject (28)
               :     :                                      :  +- CometFilter (27)
               :     :                                      :     +- CometScan parquet spark_catalog.default.store_sales (26)
               :     :                                      +- CometBroadcastExchange (31)
               :     :                                         +- CometFilter (30)
               :     :                                            +- CometScan parquet spark_catalog.default.customer (29)
               :     +- ReusedExchange (45)
               +- * Project (65)
                  +- * BroadcastHashJoin Inner BuildRight (64)
                     :- * Project (62)
                     :  +- * SortMergeJoin LeftSemi (61)
                     :     :- * ColumnarToRow (54)
                     :     :  +- CometSort (53)
                     :     :     +- CometColumnarExchange (52)
                     :     :        +- CometProject (51)
                     :     :           +- CometBroadcastHashJoin (50)
                     :     :              :- CometScan parquet spark_catalog.default.web_sales (48)
                     :     :              +- ReusedExchange (49)
                     :     +- * Sort (60)
                     :        +- * Project (59)
                     :           +- * Filter (58)
                     :              +- * HashAggregate (57)
                     :                 +- * ColumnarToRow (56)
                     :                    +- ReusedExchange (55)
                     +- ReusedExchange (63)


(1) Scan parquet spark_catalog.default.catalog_sales
Output [5]: [cs_bill_customer_sk#1, cs_item_sk#2, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(cs_sold_date_sk#5), dynamicpruningexpression(cs_sold_date_sk#5 IN dynamicpruning#6)]
ReadSchema: struct<cs_bill_customer_sk:int,cs_item_sk:int,cs_quantity:int,cs_list_price:decimal(7,2)>

(2) Scan parquet spark_catalog.default.store_sales
Output [2]: [ss_item_sk#7, ss_sold_date_sk#8]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(ss_sold_date_sk#8), dynamicpruningexpression(ss_sold_date_sk#8 IN dynamicpruning#9)]
PushedFilters: [IsNotNull(ss_item_sk)]
ReadSchema: struct<ss_item_sk:int>

(3) CometFilter
Input [2]: [ss_item_sk#7, ss_sold_date_sk#8]
Condition : isnotnull(ss_item_sk#7)

(4) Scan parquet spark_catalog.default.date_dim
Output [3]: [d_date_sk#10, d_date#11, d_year#12]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [In(d_year, [2000,2001,2002,2003]), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_date:date,d_year:int>

(5) CometFilter
Input [3]: [d_date_sk#10, d_date#11, d_year#12]
Condition : (d_year#12 IN (2000,2001,2002,2003) AND isnotnull(d_date_sk#10))

(6) CometProject
Input [3]: [d_date_sk#10, d_date#11, d_year#12]
Arguments: [d_date_sk#10, d_date#11], [d_date_sk#10, d_date#11]

(7) CometBroadcastExchange
Input [2]: [d_date_sk#10, d_date#11]
Arguments: [d_date_sk#10, d_date#11]

(8) CometBroadcastHashJoin
Left output [2]: [ss_item_sk#7, ss_sold_date_sk#8]
Right output [2]: [d_date_sk#10, d_date#11]
Arguments: [ss_sold_date_sk#8], [d_date_sk#10], Inner, BuildRight

(9) CometProject
Input [4]: [ss_item_sk#7, ss_sold_date_sk#8, d_date_sk#10, d_date#11]
Arguments: [ss_item_sk#7, d_date#11], [ss_item_sk#7, d_date#11]

(10) Scan parquet spark_catalog.default.item
Output [2]: [i_item_sk#13, i_item_desc#14]
Batched: true
Location [not included in comparison]/{warehouse_dir}/item]
PushedFilters: [IsNotNull(i_item_sk)]
ReadSchema: struct<i_item_sk:int,i_item_desc:string>

(11) CometFilter
Input [2]: [i_item_sk#13, i_item_desc#14]
Condition : isnotnull(i_item_sk#13)

(12) CometBroadcastExchange
Input [2]: [i_item_sk#13, i_item_desc#14]
Arguments: [i_item_sk#13, i_item_desc#14]

(13) CometBroadcastHashJoin
Left output [2]: [ss_item_sk#7, d_date#11]
Right output [2]: [i_item_sk#13, i_item_desc#14]
Arguments: [ss_item_sk#7], [i_item_sk#13], Inner, BuildRight

(14) CometProject
Input [4]: [ss_item_sk#7, d_date#11, i_item_sk#13, i_item_desc#14]
Arguments: [d_date#11, i_item_sk#13, _groupingexpression#15], [d_date#11, i_item_sk#13, substr(i_item_desc#14, 1, 30) AS _groupingexpression#15]

(15) CometHashAggregate
Input [3]: [d_date#11, i_item_sk#13, _groupingexpression#15]
Keys [3]: [_groupingexpression#15, i_item_sk#13, d_date#11]
Functions [1]: [partial_count(1)]

(16) CometColumnarExchange
Input [4]: [_groupingexpression#15, i_item_sk#13, d_date#11, count#16]
Arguments: hashpartitioning(_groupingexpression#15, i_item_sk#13, d_date#11, 5), ENSURE_REQUIREMENTS, CometColumnarShuffle, [plan_id=1]

(17) CometHashAggregate
Input [4]: [_groupingexpression#15, i_item_sk#13, d_date#11, count#16]
Keys [3]: [_groupingexpression#15, i_item_sk#13, d_date#11]
Functions [1]: [count(1)]

(18) CometFilter
Input [2]: [item_sk#17, cnt#18]
Condition : (cnt#18 > 4)

(19) CometProject
Input [2]: [item_sk#17, cnt#18]
Arguments: [item_sk#17], [item_sk#17]

(20) CometBroadcastExchange
Input [1]: [item_sk#17]
Arguments: [item_sk#17]

(21) CometBroadcastHashJoin
Left output [5]: [cs_bill_customer_sk#1, cs_item_sk#2, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]
Right output [1]: [item_sk#17]
Arguments: [cs_item_sk#2], [item_sk#17], LeftSemi, BuildRight

(22) CometProject
Input [5]: [cs_bill_customer_sk#1, cs_item_sk#2, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]
Arguments: [cs_bill_customer_sk#1, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5], [cs_bill_customer_sk#1, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]

(23) CometColumnarExchange
Input [4]: [cs_bill_customer_sk#1, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]
Arguments: hashpartitioning(cs_bill_customer_sk#1, 5), ENSURE_REQUIREMENTS, CometColumnarShuffle, [plan_id=2]

(24) CometSort
Input [4]: [cs_bill_customer_sk#1, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]
Arguments: [cs_bill_customer_sk#1, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5], [cs_bill_customer_sk#1 ASC NULLS FIRST]

(25) ColumnarToRow [codegen id : 1]
Input [4]: [cs_bill_customer_sk#1, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]

(26) Scan parquet spark_catalog.default.store_sales
Output [4]: [ss_customer_sk#19, ss_quantity#20, ss_sales_price#21, ss_sold_date_sk#22]
Batched: true
Location [not included in comparison]/{warehouse_dir}/store_sales]
PushedFilters: [IsNotNull(ss_customer_sk)]
ReadSchema: struct<ss_customer_sk:int,ss_quantity:int,ss_sales_price:decimal(7,2)>

(27) CometFilter
Input [4]: [ss_customer_sk#19, ss_quantity#20, ss_sales_price#21, ss_sold_date_sk#22]
Condition : isnotnull(ss_customer_sk#19)

(28) CometProject
Input [4]: [ss_customer_sk#19, ss_quantity#20, ss_sales_price#21, ss_sold_date_sk#22]
Arguments: [ss_customer_sk#19, ss_quantity#20, ss_sales_price#21], [ss_customer_sk#19, ss_quantity#20, ss_sales_price#21]

(29) Scan parquet spark_catalog.default.customer
Output [1]: [c_customer_sk#23]
Batched: true
Location [not included in comparison]/{warehouse_dir}/customer]
PushedFilters: [IsNotNull(c_customer_sk)]
ReadSchema: struct<c_customer_sk:int>

(30) CometFilter
Input [1]: [c_customer_sk#23]
Condition : isnotnull(c_customer_sk#23)

(31) CometBroadcastExchange
Input [1]: [c_customer_sk#23]
Arguments: [c_customer_sk#23]

(32) CometBroadcastHashJoin
Left output [3]: [ss_customer_sk#19, ss_quantity#20, ss_sales_price#21]
Right output [1]: [c_customer_sk#23]
Arguments: [ss_customer_sk#19], [c_customer_sk#23], Inner, BuildRight

(33) CometProject
Input [4]: [ss_customer_sk#19, ss_quantity#20, ss_sales_price#21, c_customer_sk#23]
Arguments: [ss_quantity#20, ss_sales_price#21, c_customer_sk#23], [ss_quantity#20, ss_sales_price#21, c_customer_sk#23]

(34) ColumnarToRow [codegen id : 2]
Input [3]: [ss_quantity#20, ss_sales_price#21, c_customer_sk#23]

(35) HashAggregate [codegen id : 2]
Input [3]: [ss_quantity#20, ss_sales_price#21, c_customer_sk#23]
Keys [1]: [c_customer_sk#23]
Functions [1]: [partial_sum((cast(ss_quantity#20 as decimal(10,0)) * ss_sales_price#21))]
Aggregate Attributes [2]: [sum#24, isEmpty#25]
Results [3]: [c_customer_sk#23, sum#26, isEmpty#27]

(36) RowToColumnar
Input [3]: [c_customer_sk#23, sum#26, isEmpty#27]

(37) CometColumnarExchange
Input [3]: [c_customer_sk#23, sum#26, isEmpty#27]
Arguments: hashpartitioning(c_customer_sk#23, 5), ENSURE_REQUIREMENTS, CometColumnarShuffle, [plan_id=3]

(38) ColumnarToRow [codegen id : 3]
Input [3]: [c_customer_sk#23, sum#26, isEmpty#27]

(39) HashAggregate [codegen id : 3]
Input [3]: [c_customer_sk#23, sum#26, isEmpty#27]
Keys [1]: [c_customer_sk#23]
Functions [1]: [sum((cast(ss_quantity#20 as decimal(10,0)) * ss_sales_price#21))]
Aggregate Attributes [1]: [sum((cast(ss_quantity#20 as decimal(10,0)) * ss_sales_price#21))#28]
Results [2]: [c_customer_sk#23, sum((cast(ss_quantity#20 as decimal(10,0)) * ss_sales_price#21))#28 AS ssales#29]

(40) Filter [codegen id : 3]
Input [2]: [c_customer_sk#23, ssales#29]
Condition : (isnotnull(ssales#29) AND (cast(ssales#29 as decimal(38,8)) > (0.500000 * Subquery scalar-subquery#30, [id=#31])))

(41) Project [codegen id : 3]
Output [1]: [c_customer_sk#23]
Input [2]: [c_customer_sk#23, ssales#29]

(42) Sort [codegen id : 3]
Input [1]: [c_customer_sk#23]
Arguments: [c_customer_sk#23 ASC NULLS FIRST], false, 0

(43) SortMergeJoin [codegen id : 5]
Left keys [1]: [cs_bill_customer_sk#1]
Right keys [1]: [c_customer_sk#23]
Join type: LeftSemi
Join condition: None

(44) Project [codegen id : 5]
Output [3]: [cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]
Input [4]: [cs_bill_customer_sk#1, cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5]

(45) ReusedExchange [Reuses operator id: 76]
Output [1]: [d_date_sk#32]

(46) BroadcastHashJoin [codegen id : 5]
Left keys [1]: [cs_sold_date_sk#5]
Right keys [1]: [d_date_sk#32]
Join type: Inner
Join condition: None

(47) Project [codegen id : 5]
Output [1]: [(cast(cs_quantity#3 as decimal(10,0)) * cs_list_price#4) AS sales#33]
Input [4]: [cs_quantity#3, cs_list_price#4, cs_sold_date_sk#5, d_date_sk#32]

(48) Scan parquet spark_catalog.default.web_sales
Output [5]: [ws_item_sk#34, ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(ws_sold_date_sk#38), dynamicpruningexpression(ws_sold_date_sk#38 IN dynamicpruning#39)]
ReadSchema: struct<ws_item_sk:int,ws_bill_customer_sk:int,ws_quantity:int,ws_list_price:decimal(7,2)>

(49) ReusedExchange [Reuses operator id: 20]
Output [1]: [item_sk#40]

(50) CometBroadcastHashJoin
Left output [5]: [ws_item_sk#34, ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]
Right output [1]: [item_sk#40]
Arguments: [ws_item_sk#34], [item_sk#40], LeftSemi, BuildRight

(51) CometProject
Input [5]: [ws_item_sk#34, ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]
Arguments: [ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38], [ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]

(52) CometColumnarExchange
Input [4]: [ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]
Arguments: hashpartitioning(ws_bill_customer_sk#35, 5), ENSURE_REQUIREMENTS, CometColumnarShuffle, [plan_id=4]

(53) CometSort
Input [4]: [ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]
Arguments: [ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38], [ws_bill_customer_sk#35 ASC NULLS FIRST]

(54) ColumnarToRow [codegen id : 6]
Input [4]: [ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]

(55) ReusedExchange [Reuses operator id: 37]
Output [3]: [c_customer_sk#41, sum#42, isEmpty#43]

(56) ColumnarToRow [codegen id : 8]
Input [3]: [c_customer_sk#41, sum#42, isEmpty#43]

(57) HashAggregate [codegen id : 8]
Input [3]: [c_customer_sk#41, sum#42, isEmpty#43]
Keys [1]: [c_customer_sk#41]
Functions [1]: [sum((cast(ss_quantity#44 as decimal(10,0)) * ss_sales_price#45))]
Aggregate Attributes [1]: [sum((cast(ss_quantity#44 as decimal(10,0)) * ss_sales_price#45))#28]
Results [2]: [c_customer_sk#41, sum((cast(ss_quantity#44 as decimal(10,0)) * ss_sales_price#45))#28 AS ssales#46]

(58) Filter [codegen id : 8]
Input [2]: [c_customer_sk#41, ssales#46]
Condition : (isnotnull(ssales#46) AND (cast(ssales#46 as decimal(38,8)) > (0.500000 * ReusedSubquery Subquery scalar-subquery#30, [id=#31])))

(59) Project [codegen id : 8]
Output [1]: [c_customer_sk#41]
Input [2]: [c_customer_sk#41, ssales#46]

(60) Sort [codegen id : 8]
Input [1]: [c_customer_sk#41]
Arguments: [c_customer_sk#41 ASC NULLS FIRST], false, 0

(61) SortMergeJoin [codegen id : 10]
Left keys [1]: [ws_bill_customer_sk#35]
Right keys [1]: [c_customer_sk#41]
Join type: LeftSemi
Join condition: None

(62) Project [codegen id : 10]
Output [3]: [ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]
Input [4]: [ws_bill_customer_sk#35, ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38]

(63) ReusedExchange [Reuses operator id: 76]
Output [1]: [d_date_sk#47]

(64) BroadcastHashJoin [codegen id : 10]
Left keys [1]: [ws_sold_date_sk#38]
Right keys [1]: [d_date_sk#47]
Join type: Inner
Join condition: None

(65) Project [codegen id : 10]
Output [1]: [(cast(ws_quantity#36 as decimal(10,0)) * ws_list_price#37) AS sales#48]
Input [4]: [ws_quantity#36, ws_list_price#37, ws_sold_date_sk#38, d_date_sk#47]

(66) Union

(67) HashAggregate [codegen id : 11]
Input [1]: [sales#33]
Keys: []
Functions [1]: [partial_sum(sales#33)]
Aggregate Attributes [2]: [sum#49, isEmpty#50]
Results [2]: [sum#51, isEmpty#52]

(68) RowToColumnar
Input [2]: [sum#51, isEmpty#52]

(69) CometColumnarExchange
Input [2]: [sum#51, isEmpty#52]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, CometColumnarShuffle, [plan_id=5]

(70) ColumnarToRow [codegen id : 12]
Input [2]: [sum#51, isEmpty#52]

(71) HashAggregate [codegen id : 12]
Input [2]: [sum#51, isEmpty#52]
Keys: []
Functions [1]: [sum(sales#33)]
Aggregate Attributes [1]: [sum(sales#33)#53]
Results [1]: [sum(sales#33)#53 AS sum(sales)#54]
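The partial/final aggregation pattern above — `partial_sum` in (35) and (67), a hash-partitioned exchange in (37) and (69), then the merging `sum` in (39) and (71) — can be sketched in pure Python. This is a hypothetical stand-in (toy rows, `Decimal` arithmetic), not Spark's actual implementation:

```python
from decimal import Decimal

# Hypothetical rows: (customer_sk, quantity, sales_price) — stand-ins for
# ss_customer_sk / ss_quantity / ss_sales_price; not taken from the plan.
rows = [(1, 2, Decimal("9.99")), (1, 3, Decimal("1.50")), (2, 5, Decimal("4.00"))]

def partial_sum(partition):
    # Mirrors HashAggregate (35): per-partition partial_sum(quantity * price),
    # keyed by customer, before any shuffle.
    acc = {}
    for cust, qty, price in partition:
        acc[cust] = acc.get(cust, Decimal(0)) + Decimal(qty) * price
    return acc

def final_sum(partials):
    # Mirrors HashAggregate (39): merge the partial sums that the
    # hash-partitioned exchange (37) co-located by customer key.
    merged = {}
    for part in partials:
        for cust, s in part.items():
            merged[cust] = merged.get(cust, Decimal(0)) + s
    return merged

# Two "partitions" feeding the exchange:
p1, p2 = rows[:2], rows[2:]
ssales = final_sum([partial_sum(p1), partial_sum(p2)])
```

Splitting the sum into a partial and a final phase lets each partition pre-aggregate locally, so the shuffle moves one row per (partition, key) instead of every input row.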

===== Subqueries =====

Subquery:1 Hosting operator id = 1 Hosting Expression = cs_sold_date_sk#5 IN dynamicpruning#6
BroadcastExchange (76)
+- * ColumnarToRow (75)
   +- CometProject (74)
      +- CometFilter (73)
         +- CometScan parquet spark_catalog.default.date_dim (72)


(72) Scan parquet spark_catalog.default.date_dim
Output [3]: [d_date_sk#32, d_year#55, d_moy#56]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [IsNotNull(d_year), IsNotNull(d_moy), EqualTo(d_year,2000), EqualTo(d_moy,2), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_year:int,d_moy:int>

(73) CometFilter
Input [3]: [d_date_sk#32, d_year#55, d_moy#56]
Condition : ((((isnotnull(d_year#55) AND isnotnull(d_moy#56)) AND (d_year#55 = 2000)) AND (d_moy#56 = 2)) AND isnotnull(d_date_sk#32))

(74) CometProject
Input [3]: [d_date_sk#32, d_year#55, d_moy#56]
Arguments: [d_date_sk#32], [d_date_sk#32]

(75) ColumnarToRow [codegen id : 1]
Input [1]: [d_date_sk#32]

(76) BroadcastExchange
Input [1]: [d_date_sk#32]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [plan_id=6]

Subquery:2 Hosting operator id = 2 Hosting Expression = ss_sold_date_sk#8 IN dynamicpruning#9
BroadcastExchange (81)
+- * ColumnarToRow (80)
   +- CometProject (79)
      +- CometFilter (78)
         +- CometScan parquet spark_catalog.default.date_dim (77)


(77) Scan parquet spark_catalog.default.date_dim
Output [3]: [d_date_sk#10, d_date#11, d_year#12]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [In(d_year, [2000,2001,2002,2003]), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_date:date,d_year:int>

(78) CometFilter
Input [3]: [d_date_sk#10, d_date#11, d_year#12]
Condition : (d_year#12 IN (2000,2001,2002,2003) AND isnotnull(d_date_sk#10))

(79) CometProject
Input [3]: [d_date_sk#10, d_date#11, d_year#12]
Arguments: [d_date_sk#10, d_date#11], [d_date_sk#10, d_date#11]

(80) ColumnarToRow [codegen id : 1]
Input [2]: [d_date_sk#10, d_date#11]

(81) BroadcastExchange
Input [2]: [d_date_sk#10, d_date#11]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [plan_id=7]

Subquery:3 Hosting operator id = 40 Hosting Expression = Subquery scalar-subquery#30, [id=#31]
* HashAggregate (103)
+- * ColumnarToRow (102)
   +- CometColumnarExchange (101)
      +- RowToColumnar (100)
         +- * HashAggregate (99)
            +- * HashAggregate (98)
               +- * ColumnarToRow (97)
                  +- CometColumnarExchange (96)
                     +- RowToColumnar (95)
                        +- * HashAggregate (94)
                           +- * ColumnarToRow (93)
                              +- CometProject (92)
                                 +- CometBroadcastHashJoin (91)
                                    :- CometProject (86)
                                    :  +- CometBroadcastHashJoin (85)
                                    :     :- CometFilter (83)
                                    :     :  +- CometScan parquet spark_catalog.default.store_sales (82)
                                    :     +- ReusedExchange (84)
                                    +- CometBroadcastExchange (90)
                                       +- CometProject (89)
                                          +- CometFilter (88)
                                             +- CometScan parquet spark_catalog.default.date_dim (87)


(82) Scan parquet spark_catalog.default.store_sales
Output [4]: [ss_customer_sk#57, ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(ss_sold_date_sk#60), dynamicpruningexpression(ss_sold_date_sk#60 IN dynamicpruning#61)]
PushedFilters: [IsNotNull(ss_customer_sk)]
ReadSchema: struct<ss_customer_sk:int,ss_quantity:int,ss_sales_price:decimal(7,2)>

(83) CometFilter
Input [4]: [ss_customer_sk#57, ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60]
Condition : isnotnull(ss_customer_sk#57)

(84) ReusedExchange [Reuses operator id: 31]
Output [1]: [c_customer_sk#62]

(85) CometBroadcastHashJoin
Left output [4]: [ss_customer_sk#57, ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60]
Right output [1]: [c_customer_sk#62]
Arguments: [ss_customer_sk#57], [c_customer_sk#62], Inner, BuildRight

(86) CometProject
Input [5]: [ss_customer_sk#57, ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60, c_customer_sk#62]
Arguments: [ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60, c_customer_sk#62], [ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60, c_customer_sk#62]

(87) Scan parquet spark_catalog.default.date_dim
Output [2]: [d_date_sk#63, d_year#64]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [In(d_year, [2000,2001,2002,2003]), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_year:int>

(88) CometFilter
Input [2]: [d_date_sk#63, d_year#64]
Condition : (d_year#64 IN (2000,2001,2002,2003) AND isnotnull(d_date_sk#63))

(89) CometProject
Input [2]: [d_date_sk#63, d_year#64]
Arguments: [d_date_sk#63], [d_date_sk#63]

(90) CometBroadcastExchange
Input [1]: [d_date_sk#63]
Arguments: [d_date_sk#63]

(91) CometBroadcastHashJoin
Left output [4]: [ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60, c_customer_sk#62]
Right output [1]: [d_date_sk#63]
Arguments: [ss_sold_date_sk#60], [d_date_sk#63], Inner, BuildRight

(92) CometProject
Input [5]: [ss_quantity#58, ss_sales_price#59, ss_sold_date_sk#60, c_customer_sk#62, d_date_sk#63]
Arguments: [ss_quantity#58, ss_sales_price#59, c_customer_sk#62], [ss_quantity#58, ss_sales_price#59, c_customer_sk#62]

(93) ColumnarToRow [codegen id : 1]
Input [3]: [ss_quantity#58, ss_sales_price#59, c_customer_sk#62]

(94) HashAggregate [codegen id : 1]
Input [3]: [ss_quantity#58, ss_sales_price#59, c_customer_sk#62]
Keys [1]: [c_customer_sk#62]
Functions [1]: [partial_sum((cast(ss_quantity#58 as decimal(10,0)) * ss_sales_price#59))]
Aggregate Attributes [2]: [sum#65, isEmpty#66]
Results [3]: [c_customer_sk#62, sum#67, isEmpty#68]

(95) RowToColumnar
Input [3]: [c_customer_sk#62, sum#67, isEmpty#68]

(96) CometColumnarExchange
Input [3]: [c_customer_sk#62, sum#67, isEmpty#68]
Arguments: hashpartitioning(c_customer_sk#62, 5), ENSURE_REQUIREMENTS, CometColumnarShuffle, [plan_id=8]

(97) ColumnarToRow [codegen id : 2]
Input [3]: [c_customer_sk#62, sum#67, isEmpty#68]

(98) HashAggregate [codegen id : 2]
Input [3]: [c_customer_sk#62, sum#67, isEmpty#68]
Keys [1]: [c_customer_sk#62]
Functions [1]: [sum((cast(ss_quantity#58 as decimal(10,0)) * ss_sales_price#59))]
Aggregate Attributes [1]: [sum((cast(ss_quantity#58 as decimal(10,0)) * ss_sales_price#59))#69]
Results [1]: [sum((cast(ss_quantity#58 as decimal(10,0)) * ss_sales_price#59))#69 AS csales#70]

(99) HashAggregate [codegen id : 2]
Input [1]: [csales#70]
Keys: []
Functions [1]: [partial_max(csales#70)]
Aggregate Attributes [1]: [max#71]
Results [1]: [max#72]

(100) RowToColumnar
Input [1]: [max#72]

(101) CometColumnarExchange
Input [1]: [max#72]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, CometColumnarShuffle, [plan_id=9]

(102) ColumnarToRow [codegen id : 3]
Input [1]: [max#72]

(103) HashAggregate [codegen id : 3]
Input [1]: [max#72]
Keys: []
Functions [1]: [max(csales#70)]
Aggregate Attributes [1]: [max(csales#70)#73]
Results [1]: [max(csales#70)#73 AS tpcds_cmax#74]
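The scalar subquery's result `tpcds_cmax` (103) feeds the threshold test in Filter (40) and its reuse in (58): keep only customers whose `ssales` exceed 50% of the maximum per-customer total. A minimal sketch under hypothetical data (the totals below are invented, not from the plan):

```python
from decimal import Decimal

# Hypothetical per-customer store-sales totals (csales / ssales stand-ins);
# not derived from the plan above.
csales = {1: Decimal("120.00"), 2: Decimal("300.00"), 3: Decimal("80.00")}

# HashAggregate (103): tpcds_cmax = max(csales) over all customers.
tpcds_cmax = max(csales.values())

# Filter (40)/(58): keep customers whose total exceeds 0.500000 * tpcds_cmax.
threshold = Decimal("0.500000") * tpcds_cmax
best_customers = [c for c, s in csales.items() if s > threshold]
```

Because the subquery is deterministic, the plan computes it once and reuses it in the web_sales branch via `ReusedSubquery` (Subquery:6) rather than re-running the aggregation.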

Subquery:4 Hosting operator id = 82 Hosting Expression = ss_sold_date_sk#60 IN dynamicpruning#61
BroadcastExchange (108)
+- * ColumnarToRow (107)
   +- CometProject (106)
      +- CometFilter (105)
         +- CometScan parquet spark_catalog.default.date_dim (104)


(104) Scan parquet spark_catalog.default.date_dim
Output [2]: [d_date_sk#63, d_year#64]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [In(d_year, [2000,2001,2002,2003]), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_year:int>

(105) CometFilter
Input [2]: [d_date_sk#63, d_year#64]
Condition : (d_year#64 IN (2000,2001,2002,2003) AND isnotnull(d_date_sk#63))

(106) CometProject
Input [2]: [d_date_sk#63, d_year#64]
Arguments: [d_date_sk#63], [d_date_sk#63]

(107) ColumnarToRow [codegen id : 1]
Input [1]: [d_date_sk#63]

(108) BroadcastExchange
Input [1]: [d_date_sk#63]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [plan_id=10]

Subquery:5 Hosting operator id = 48 Hosting Expression = ws_sold_date_sk#38 IN dynamicpruning#6

Subquery:6 Hosting operator id = 58 Hosting Expression = ReusedSubquery Subquery scalar-subquery#30, [id=#31]


