A collection of problems when running Spark on a YARN cluster

Environment: Spark 2.1.1, Hadoop 2.6.0

1. When submitting to a YARN cluster, conf.setMaster("spark://master:7077") is not required; the master is supplied on the spark-submit command line instead.
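The WordCount class itself is not shown in this post; purely as an illustration of this point, here is a minimal sketch of what such a job could look like (only the package and class name are taken from the submit commands below, the body is an assumption). No master is hard-coded, so --master yarn on the spark-submit command line decides where it runs:

package com.tgh.spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

// Sketch only. Note there is no conf.setMaster(...): the master ("yarn",
// "local[*]", or a standalone URL) is supplied by spark-submit instead.
public class WordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCount");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile(args[0]); // e.g. hdfs://hadoop0:9000/test/wordcount.txt
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);      // shuffle happens here

        // collect() pulls the shuffled result back to the driver.
        for (Tuple2<String, Integer> t : counts.collect()) {
            System.out.println(t._1() + ": " + t._2());
        }
        sc.stop();
    }
}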

2. Spark submit commands for YARN. Many of the commands found online are actually for standalone or local mode; see the official docs at http://spark.apache.org/docs/2.1.1/running-on-yarn.html for details.

Cluster mode:
spark-submit --class com.tgh.spark.WordCount --master yarn --deploy-mode cluster --driver-memory 2g --executor-memory 1g --executor-cores 1 hdfs://hadoop0:9000/tmp/sparktest-1.0-SNAPSHOT.jar hdfs://hadoop0:9000/test/wordcount.txt

spark-submit --class com.tgh.spark.WordCount --master yarn --deploy-mode cluster --driver-memory 2g --executor-memory 1g --executor-cores 1 /Users/tamir/Desktop/workspace/SparkTest/target/sparktest-1.0-SNAPSHOT.jar hdfs://hadoop0:9000/test/wordcount.txt

Client mode:
spark-submit --class com.tgh.spark.WordCount --master yarn --deploy-mode client --driver-memory 2g --executor-memory 1g --executor-cores 1 /Users/tamir/Desktop/workspace/SparkTest/target/sparktest-1.0-SNAPSHOT.jar hdfs://hadoop0:9000/test/wordcount.txt

3. In Spark standalone mode, the Spark web UI is at http://master:28080/. The port is set in SPARK_HOME/conf/spark-defaults.conf via

"spark.ui.port 28080", or in spark-env.sh via "export SPARK_MASTER_WEBUI_PORT=28080". If both are set, SPARK_MASTER_WEBUI_PORT takes precedence.

4. YARN cluster jobs can be viewed at http://localhost:8088/, configured in HADOOP_HOME/etc/hadoop/yarn-site.xml. Here is some of my local configuration:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>localhost</value>
</property>
<property>
  <description>The address of the applications manager interface in the RM.</description>
  <name>yarn.resourcemanager.address</name>
  <value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
  <description>The address of the scheduler interface.</description>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
  <description>The http address of the RM web application.</description>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
  <description>The https address of the RM web application.</description>
  <name>yarn.resourcemanager.webapp.https.address</name>
  <value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
  <description>The address of the RM admin interface.</description>
  <name>yarn.resourcemanager.admin.address</name>
  <value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>spark_shuffle,mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

The HDFS cluster web address here defaults to http://localhost:8100/dfshealth.html#tab-overview and can be set in hdfs-site.xml.
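The post does not say which property controls this; under the assumption that it is the NameNode web UI, the relevant hdfs-site.xml entry would look something like:

<property>
  <name>dfs.namenode.http-address</name>
  <value>localhost:8100</value>
</property>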

The HDFS filesystem address is configured in HADOOP_HOME/etc/hadoop/core-site.xml, for example:

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop0:9000</value>
</property>
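As a side note, fs.default.name is the deprecated Hadoop 1.x key; on Hadoop 2.x the preferred equivalent is:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop0:9000</value>
</property>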

5. ClassCastException. Online posts claim this error is caused by having set the serializer to Kryo beforehand in spark-defaults.conf:

spark.serializer org.apache.spark.serializer.KryoSerializer. In my own testing that is not the cause, and the fixes proposed there do not work either, at least not on 2.1.1.

java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1999)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)

Troubleshooting steps:

a. My first instinct was a serialization problem, so I removed KryoSerializer as suggested online. That failed.

b. Next, I set spark.kryo.registrator to register the list types, with the following code:

conf.set("spark.kryo.registrator", new KryoRegistrator(){

@Override
public void registerClasses(Kryo kryo)
{
List supportClass = new ArrayList();
supportClass.add(List.class);
supportClass.add(ArrayList.class);
for (Class cls : supportClass){
kryo.register(cls, new FieldSerializer(kryo, cls));
}
}
}.getClass().getName());
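Because Spark instantiates the registrator reflectively from this class name, the anonymous-class trick above (whose synthetic name and constructor are not guaranteed to be accessible) can be fragile. A minimal sketch of the same registrations as a named class, with a hypothetical class name:

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.serializers.FieldSerializer;
import org.apache.spark.serializer.KryoRegistrator;
import java.util.ArrayList;
import java.util.List;

// Hypothetical named registrator; Spark creates it by name, so it needs
// to be public with a no-arg constructor.
public class ListKryoRegistrator implements KryoRegistrator {
    @Override
    public void registerClasses(Kryo kryo) {
        kryo.register(List.class, new FieldSerializer<>(kryo, List.class));
        kryo.register(ArrayList.class, new FieldSerializer<>(kryo, ArrayList.class));
    }
}

// Usage:
// conf.set("spark.kryo.registrator", ListKryoRegistrator.class.getName());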

c. Once that was set, the ClassCastException no longer appeared, but it brought in a spark_shuffle error instead.

org.apache.spark.SparkException: Exception while starting container container_1532789919735_0003_01_000002 on host 192.168.0.101
at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:127)
at org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:67)
at org.apache.spark.deploy.yarn.YarnAllocator$$anon$1.run(YarnAllocator.scala:520)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:spark_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.yarn.client.api.impl.NMClientImpl.startContainer(NMClientImpl.java:206)
at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:124)

d. How to fix "The auxService:spark_shuffle does not exist":

1). Add the following configuration to yarn-site.xml:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

2). Add the dependency jar
Copy "{SPARK_HOME}/lib/spark-1.3.0-yarn-shuffle.jar" into the "{HADOOP_HOME}/share/hadoop/yarn/lib/" directory (see the sketch after this list).

3). If the "yarn.nodemanager.aux-services" property from step 1) already exists, add "spark_shuffle" to its value, separated from the other values by a comma.
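To make step 2) concrete, a sketch of the copy, with the caveat that the shuffle-service jar is versioned: the spark-1.3.0-yarn-shuffle.jar name above comes from an older guide, and a Spark 2.1.1 binary distribution typically ships it as spark-2.1.1-yarn-shuffle.jar under SPARK_HOME/yarn/ rather than lib/, so check your own installation first:

# Hypothetical paths; verify the actual jar name under your SPARK_HOME.
cp $SPARK_HOME/yarn/spark-2.1.1-yarn-shuffle.jar $HADOOP_HOME/share/hadoop/yarn/lib/

For step 3), a merged value with both shuffle services looks like:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>

The auxiliary service is loaded when a NodeManager starts, so the NodeManagers have to be restarted after the configuration change and the jar copy.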

e. After fixing that, I wondered whether the ClassCastException had actually been caused by spark_shuffle all along. I removed the Kryo registration of ArrayList shown above, and sure enough the error did not come back, so the conclusion is that the cast error was caused by the shuffle; the failed cast was only a symptom. The error message points at org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD, and reading the RDD collect() source makes it clear the failure occurred during the shuffle triggered by collect(). Combined with the troubleshooting above, the ClassCastException was really caused by missing spark_shuffle support at collect time. With that, the Spark demo runs cleanly on the cluster.
