9: "Operation category READ/WRITE is not supported in state standby" troubleshooting case
1: Symptom:
The program was written in IDEA, packaged, and uploaded to hadoop-001; when it was executed there, the job failed with "Operation category READ/WRITE is not supported in state standby".

2: Troubleshooting process
2.1 Look up the explanation that matches the error message:
In an HA-enabled cluster, the DFS client cannot know in advance which NameNode is active at the moment it issues an operation. When the client first contacts a NameNode that happens to be in standby state, the READ/WRITE operation is rejected and this message is logged; the client then fails over to the other NameNode and retries the operation there. As long as the cluster has one active and one standby NameNode, the message is expected behaviour and can safely be ignored [1] (note: JIRA HDFS-3447 tracks lowering this and similar messages from WARN to DEBUG to reduce log noise; it was still unresolved as of July 2015).
If, however, an application is configured to communicate with only a single NameNode at all times, the message means the application cannot perform any read or write operations, and its configuration needs to be changed to use the HA settings so that it can fail over.
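For reference, these client-side HA settings normally come from the cluster's hdfs-site.xml and core-site.xml; if an application builds its own Configuration instead of loading those files, the same properties can be set in code. A minimal sketch, assuming the nameservice is weizhonggui with NameNodes nn1 and nn2 on hadoop001/hadoop002 (names and ports are taken from the rest of this case and may differ on another cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Logical HA nameservice instead of a single NameNode host
        conf.set("fs.defaultFS", "hdfs://weizhonggui");
        conf.set("dfs.nameservices", "weizhonggui");
        conf.set("dfs.ha.namenodes.weizhonggui", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.weizhonggui.nn1", "hadoop001:8020");
        conf.set("dfs.namenode.rpc-address.weizhonggui.nn2", "hadoop002:8020");
        // The failover proxy provider is what lets the client retry the other
        // NameNode after the standby rejects an operation with this message.
        conf.set("dfs.client.failover.proxy.provider.weizhonggui",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // Paths without scheme/authority resolve against fs.defaultFS, so this
        // works no matter which NameNode is currently active.
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/logs/input")));
    }
}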
2.2: Check the state of the two NameNodes:
[hadoop@hadoop001 shell]$ hdfs haadmin -getServiceState nn1
standby
[hadoop@hadoop001 shell]$ hdfs haadmin -getServiceState nn2
active
This confirmed that hadoop001 (nn1) really is in standby state, so the shell script and the jar were moved to hadoop002 and executed there;
the problem persisted.
2.3: Inspect the shell script carefully: the HDFS paths in it were written against hadoop001, the standby NameNode:
hdfs://hadoop-1:8020/logs/input/ and hdfs://hadoop-1:8020/logs/output/
After pointing them at the active NameNode (hadoop-2) and running again:
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://hadoop-2:8020/logs/output already exists
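This exception only means that the job's output directory is left over from an earlier run. Instead of deleting it by hand each time, the driver can remove a stale output directory before the job is submitted; a minimal sketch (the class and method names are illustrative, not from the original program):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutputDir {
    // Delete a stale output directory so that
    // org.apache.hadoop.mapred.FileAlreadyExistsException cannot occur.
    public static void deleteIfExists(Configuration conf, String dir) throws Exception {
        Path output = new Path(dir);
        FileSystem fs = output.getFileSystem(conf);
        if (fs.exists(output)) {
            fs.delete(output, true); // true = recursive
        }
    }

    public static void main(String[] args) throws Exception {
        // /logs/output is the directory from this case; it resolves against fs.defaultFS.
        deleteIfExists(new Configuration(), "/logs/output");
        // ...then submit the job, e.g. FileOutputFormat.setOutputPath(job, new Path("/logs/output"));
    }
}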
2.4: So the problem is simply that the output directory already exists. After deleting it (hadoop fs -rm -r /logs/output) or handling it in the driver as sketched above, the job was run again and a different error appeared:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://hadoop002:8020/logs/output, expected: hdfs://weizhong***
The root cause lies in the fs.defaultFS configuration in core-site.xml:
fs.defaultFS is set to hdfs://weizhonggui (the HA nameservice), so the HDFS URIs used by the job must refer to that filesystem rather than to a single NameNode address. After fixing the mismatch, everything ran normally:
[hadoop@hadoop002 shell]$ hadoop fs -ls /logs/output
Found 3 items
-rw-r--r--   3 hadoop hadoop          0 2018-12-18 17:40 /logs/output/_SUCCESS
-rw-r--r--   3 hadoop hadoop        199 2018-12-18 17:40 /logs/output/part-00000
-rw-r--r--   3 hadoop hadoop        279 2018-12-18 17:40 /logs/output/part-00001
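As a closing note, the Wrong FS error means a fully qualified path did not match fs.defaultFS. A quick way to see which filesystem the client will actually use, and to build paths that always match it, is sketched below (assuming the cluster configuration files are on the classpath):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckDefaultFs {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // With this cluster's core-site.xml on the classpath this prints
        // hdfs://weizhonggui, the HA nameservice.
        URI defaultFs = FileSystem.getDefaultUri(conf);
        System.out.println("fs.defaultFS = " + defaultFs);

        // Scheme-less paths resolve against fs.defaultFS, so they can never
        // trigger the Wrong FS exception; avoid pinning hdfs://hadoop002:8020/...
        Path output = new Path("/logs/output");
        System.out.println("output path  = " + output);
    }
}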