Wednesday, May 9, 2018

Spark Tips series - turning null into 0 after a left join


When you use Spark's left outer join, rows that have no matching data on the right side usually come back as NULL, as shown below:


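A rough sketch of the kind of join involved, assuming two existing Datasets named impression and view (the same names as in the fix further down), joined on the "rid" column that appears in the error message. Every impression row is kept, and rows with no matching view record end up with viewCount = null:

import static org.apache.spark.sql.functions.broadcast;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// impression: siteId, rid, impressionCount, clickCount, ...
// view:       rid, viewCount
Dataset<Row> joined = impression.join(broadcast(view),
        impression.col("rid").equalTo(view.col("rid")),
        "left_outer");
joined.show();   // unmatched rows display viewCount as null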
Now, what happens if I try to compute CTR = click / impression on this result? Spark throws an error straight away, and the message doesn't make it obvious what went wrong...
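The offending call looks roughly like this (the column names come straight from the error below; the exact expression is an assumption):

import static org.apache.spark.sql.functions.col;

// Try to derive CTR directly on the joined Dataset
Dataset<Row> withCtr = joined.withColumn("CTR",
        col("clickCount").divide(col("impressionCount")));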

org.apache.spark.sql.AnalysisException: Resolved attribute(s) 'impressionCount,'clickCount missing from viewCount#965L,(impressionCount / total)#582,dsType#1189,rid#14,impressionCount#534L,recommendCount#1198L,clickCount#1441L,siteId#16 in operator 'Project [siteId#16, rid#14, impressionCount#534L, (impressionCount / total)#582, viewCount#965L, dsType#1189, recommendCount#1198L, clickCount#1441L, ('clickCount / 'impressionCount) AS CTR#2022]. Attribute(s) with the same name appear in the operation: impressionCount,clickCount. Please check if the right attribute(s) are used.;;
'Project [siteId#16, rid#14, impressionCount#534L, (impressionCount / total)#582, viewCount#965L, dsType#1189, recommendCount#1198L, clickCount#1441L, ('clickCount / 'impressionCount) AS CTR#2022]
+- AnalysisBarrier
      +- LogicalRDD [siteId#16, rid#14, impressionCount#534L, (impressionCount / total)#582, viewCount#965L, dsType#1189, recommendCount#1198L, clickCount#1441L], false


    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:41)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:91)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:289)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:80)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:80)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:91)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:104)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3295)
    at org.apache.spark.sql.Dataset.select(Dataset.scala:1307)
    at org.apache.spark.sql.Dataset.withColumns(Dataset.scala:2192)
    at org.apache.spark.sql.Dataset.withColumn(Dataset.scala:2159)


The real cause is the NULL values. All you need to do is fill them with zero, using the trick below.

// col is the join condition; na().fill(0, ...) replaces null viewCount values with 0
Dataset<Row> join1 = impression.join(broadcast(view), col, LeftOuter.toString())
                               .na()
                               .fill(0, new String[] {"viewCount"});

What this does is replace the nulls in the listed columns with whatever value you want (0 here), and the problem is solved~
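With the nulls filled, the CTR calculation from before can be run on join1; a minimal follow-up sketch:

import static org.apache.spark.sql.functions.col;

// viewCount nulls in join1 are now 0, so downstream arithmetic no longer sees NULL there
Dataset<Row> withCtr = join1.withColumn("CTR",
        col("clickCount").divide(col("impressionCount")));
withCtr.show();

If clickCount or impressionCount can also be null, they can simply be added to the same String[] passed to fill.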

