Thursday, May 10, 2018

Spark Tips Series - How to handle java.lang.NegativeArraySizeException when a file is too large to read?


We all know that a single file should be neither too large nor too small, but sometimes we have no say in the matter. When a single file is too large, the system may throw the following error:


[WARN] BlockManager   : Putting block rdd_12_0 failed due to exception java.lang.NegativeArraySizeException.
[WARN] BlockManager   : Block rdd_12_0 could not be removed as it was not found on disk or in memory
[ERROR] Executor          : Exception in task 0.0 in stage 3.0 (TID 259)
java.lang.NegativeArraySizeException
        at org.apache.spark.unsafe.types.UTF8String.concatWs(UTF8String.java:901)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
        at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$generateResultProjection$1.apply(AggregationIterator.scala:234)
        at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$generateResultProjection$1.apply(AggregationIterator.scala:223)
        at org.apache.spark.sql.execution.aggregate.ObjectAggregationIterator.next(ObjectAggregationIterator.scala:86)
        at org.apache.spark.sql.execution.aggregate.ObjectAggregationIterator.next(ObjectAggregationIterator.scala:33)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
        at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1.hasNext(InMemoryRelation.scala:139)
        at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:378)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1109)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1083)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1018)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
        at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:809)
        at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748) 




So what can we do? I found a StackOverflow post, Spark 2.0.1 java.lang.NegativeArraySizeException, that closely matches my case. It points out that Kryo can run into trouble when serializing very large objects - see the Kryo documentation on Very large object graphs:

Stack size

The serializers Kryo provides use the call stack when serializing nested objects. Kryo does minimize stack calls, but for extremely deep object graphs, a stack overflow can occur. This is a common issue for most serialization libraries, including the built-in Java serialization. The stack size can be increased using -Xss, but note that this is for all threads. Large stack sizes in a JVM with many threads may use a large amount of memory.
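If the stack-size route is what you need, note that -Xss must be set before each JVM starts, so in Spark it is passed via the extraJavaOptions settings at submit time rather than in application code. A minimal sketch (the 16m value, class name, and jar name are placeholders, not from the original post):

```shell
# Raise the thread stack size on both the driver and executor JVMs.
# -Xss applies to every thread in that JVM, so keep the value modest.
spark-submit \
  --class com.example.MyJob \
  --conf "spark.driver.extraJavaOptions=-Xss16m" \
  --conf "spark.executor.extraJavaOptions=-Xss16m" \
  my-job.jar
```

In client mode, spark.driver.extraJavaOptions cannot be set through SparkConf inside the application, because the driver JVM has already started by then - which is why the flag goes on the spark-submit command line.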

Reference limits

Kryo stores references in a map that is based on an int array. Since Java array indices are limited to Integer.MAX_VALUE, serializing large (> 1 billion) objects may result in a java.lang.NegativeArraySizeException.
A workaround for this issue is disabling Kryo's reference tracking as indicated below:
    Kryo kryo = new Kryo();
    kryo.setReferences(false);
 
 
Alternatively, the same fix can be applied by adding the following parameter to the Spark config:
 
spark.kryo.referenceTracking=false
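As a sketch, the flag can be passed at submit time together with the Kryo serializer setting it affects (the class and jar names here are placeholders):

```shell
# Disable Kryo reference tracking for the whole job.
# Only relevant when KryoSerializer is actually in use.
spark-submit \
  --class com.example.MyJob \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.kryo.referenceTracking=false \
  my-job.jar
```

One trade-off to keep in mind: with reference tracking disabled, Kryo can no longer serialize object graphs that contain circular references, so make sure your data does not need them.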
 

