PySpark job submission failed: concurrent.TimeoutException: Futures timed out after [100000 milliseconds]

图数据库猫 posted on 12/02 16:28

 scheduler.DAGScheduler: Job 91 finished: runJob at SparkHadoopWriter.scala:78, took 0.200772 s
19/12/02 16:26:47 INFO io.SparkHadoopWriter: Job job_20191202162647_0306 committed.
19/12/02 16:28:09 ERROR yarn.ApplicationMaster: Uncaught exception: 
java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:447)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:275)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:805)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:804)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:804)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
19/12/02 16:28:09 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 13, (reason: Uncaught exception: java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:447)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:275)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:805)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:804)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:804)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
)
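
For reference, the stack trace above points at ApplicationMaster.runDriver waiting for SparkContext initialization: in yarn-cluster mode the ApplicationMaster gives the user program spark.yarn.am.waitTime (100 s by default, which is the 100000 ms in the message) to bring up a YARN-backed SparkContext. The usual triggers are creating the context too late in the script, or hard-coding a different master (for example local) in the code, in which case the jobs that finished above would have run inside the AM container while the AM kept waiting. Below is a minimal driver sketch laid out to avoid both; the script name my_job.py, the app name and the 300s value are placeholder assumptions, not taken from this post.

# Hypothetical yarn-cluster PySpark driver (names are placeholders).
# 1) No .master() in code: let spark-submit --master yarn --deploy-mode cluster
#    decide, so the ApplicationMaster sees the SparkContext it is waiting for.
# 2) Build the SparkSession before any heavy imports or preprocessing, so it
#    exists well within spark.yarn.am.waitTime (default 100s = 100000 ms).
#    If startup genuinely needs longer, raise the wait at submit time:
#    spark-submit --conf spark.yarn.am.waitTime=300s ... my_job.py
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("my_job")  # placeholder app name
    .getOrCreate()      # master comes from spark-submit, not from code
)

# Heavy work only after the session exists.
rdd = spark.sparkContext.parallelize(range(100))
print(rdd.sum())

spark.stop()

If the master really is hard-coded in the script, removing that is the actual fix; raising spark.yarn.am.waitTime only helps when the context simply needs more than 100 s to initialize.
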
19/12/02 16:28:09 INFO spark.SparkContext: Invoking stop() from shutdown hook
19/12/02 16:28:09 INFO server.AbstractConnector: Stopped Spark@71c64aea{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
19/12/02 16:28:09 INFO ui.SparkUI: Stopped Spark web UI at http://slave3:40706
19/12/02 16:28:09 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/12/02 16:28:09 INFO memory.MemoryStore: MemoryStore cleared
19/12/02 16:28:09 INFO storage.BlockManager: BlockManager stopped
19/12/02 16:28:09 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/12/02 16:28:09 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/12/02 16:28:09 INFO spark.SparkContext: Successfully stopped SparkContext
19/12/02 16:28:09 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://rongan/user/hbase/.sparkStaging/application_1575274779153_0001
19/12/02 16:28:09 INFO util.ShutdownHookManager: Shutdown hook called
19/12/02 16:28:09 INFO util.ShutdownHookManager: Deleting directory /data/yarn/nm2/usercache/hbase/appcache/application_1575274779153_0001/spark-b0810b14-8a22-45b9-9402-9991baed11fa
19/12/02 16:28:09 INFO util.ShutdownHookManager: Deleting directory /data/yarn/nm2/usercache/hbase/appcache/application_1575274779153_0001/spark-b0810b14-8a22-45b9-9402-9991baed11fa/pyspark-9d090c5e-a11b-4e64-b874-7c7c507a24cc


For more detailed output, check the application tracking page: http://master:8088/cluster/app/application_1575274779153_0001 Then click on links to logs of each attempt.
. Failing the application.
Exception in thread "main" org.apache.spark.SparkException: Application application_1575274779153_0001 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1158)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1606)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:926)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/12/02 16:28:10 INFO util.ShutdownHookManager: Shutdown hook called
19/12/02 16:28:10 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-be8c7c37-ef48-4bcb-8c83-a9f6a4fdf270
19/12/02 16:28:10 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-eb54063e-70be-45ff-b8d4-86ac60329439