jobtracker.info could only be replicated to 0 nodes, instead of 1


Hadoop encounters an error when starting. The following is the JobTracker log:

2015-06-03 09:38:26,106 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/hadoop-hadooptest/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2091)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:795)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3779)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3639)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2842)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3082)

The log continues:

2015-06-03 09:38:26,107 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
2015-06-03 09:38:26,107 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/hadoop-hadooptest/mapred/system/jobtracker.info" - Aborting...
2015-06-03 09:38:26,107 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://172.18.11.9:9000/tmp/hadoop-hadooptest/mapred/system/jobtracker.info failed!
2015-06-03 09:38:26,107 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2015-06-03 09:38:26,130 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
There is enough disk space, and I have disabled the firewall; here is the result:

[hadooptest@hw009 logs]$ chkconfig iptables --list
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
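
Since the firewall is off and there is enough disk space, it is also worth confirming that the DataNodes have actually registered with the NameNode, because "could only be replicated to 0 nodes" usually means the NameNode sees no live DataNodes at the moment of the write. A rough check, assuming a Hadoop 1.x cluster run from $HADOOP_HOME (the prompt below is only illustrative):

[hadooptest@hw009 ~]$ jps                                # the NameNode/DataNode/JobTracker processes should be listed on their nodes
[hadooptest@hw009 ~]$ bin/hadoop dfsadmin -report        # "Datanodes available" should be greater than 0
[hadooptest@hw009 ~]$ bin/hadoop dfsadmin -safemode get  # shows whether the NameNode is still in safe mode

If the report shows 0 available DataNodes, or safe mode is still ON, then HDFS is simply not ready to accept the jobtracker.info write yet.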

Why is this happening, and how can I resolve the problem? Thanks a lot.

These are the methods I have already tried, but they did not resolve the problem:

link 1

link 2

link 3

I have resolved the problem. I had been using the command bin/start-all.sh to start the Hadoop cluster; instead, I now run the command bin/start-dfs.sh first, and then run the command bin/start-mapred.sh about 5 minutes later.

I think the server may simply be old, so starting HDFS takes a long time. Once HDFS is fully up and running, starting the MapReduce system does not hit this problem. A sketch of this startup sequence is shown below.
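
A minimal sketch of that startup sequence, assuming Hadoop 1.x scripts under $HADOOP_HOME/bin (the dfsadmin -safemode wait step is my own substitution for the fixed 5-minute delay; it blocks until the NameNode reports that safe mode is off):

# start the HDFS daemons only (NameNode, DataNodes, SecondaryNameNode)
bin/start-dfs.sh
# wait until the NameNode has left safe mode, i.e. HDFS is ready to accept writes
bin/hadoop dfsadmin -safemode wait
# only now start the MapReduce daemons (JobTracker, TaskTrackers)
bin/start-mapred.sh

Waiting on safe mode is usually more robust than a fixed sleep, since the time HDFS needs to start depends on how many blocks the DataNodes have to report.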

