Common Hadoop Problems and Solutions (3)
Problem: Storage directory does not exist
2010-02-09 21:37:49,890 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = yijian/192.168.0.13
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.1
STARTUP_MSG: build = http://svn.apache.org/repos/asf/ … /release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep 1 20:55:56 UTC 2009
************************************************************/
2010-02-09 21:37:52,093 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8888
2010-02-09 21:37:52,125 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: 127.0.0.1/127.0.0.1:8888
2010-02-09 21:37:52,140 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2010-02-09 21:37:52,156 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:37:53,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=jian,None,root,Administrators,Users
2010-02-09 21:37:53,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2010-02-09 21:37:53,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2010-02-09 21:37:53,031 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:37:53,046 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2010-02-09 21:37:53,203 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory D:\hadoop\run\dfs_name_dir does not exist.
2010-02-09 21:37:53,203 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\hadoop\run\dfs_name_dir is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:37:53,234 INFO org.apache.hadoop.ipc.Server: Stopping server on 8888
2010-02-09 21:37:53,234 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\hadoop\run\dfs_name_dir is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:37:53,250 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yijian/192.168.0.13
************************************************************/
Solution: the storage directory D:\hadoop\run\dfs_name_dir does not exist, so simply create this directory manually.
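A minimal sketch of the fix, assuming the mangled path in the log corresponds to D:\hadoop\run\dfs_name_dir (the location configured as dfs.name.dir in hdfs-site.xml) and that Hadoop 0.20.x is being run from a Cygwin shell on Windows:

$ mkdir -p /cygdrive/d/hadoop/run/dfs_name_dir   # create the missing dfs.name.dir location
$ bin/hadoop namenode -format                    # only on a fresh install; otherwise see the next problem
$ bin/start-dfs.sh                               # restart HDFS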
Problem: NameNode is not formatted
2010-02-09 21:52:49,343 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = yijian/192.168.0.13
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.1
STARTUP_MSG: build = http://svn.apache.org/repos/asf/ … /release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep 1 20:55:56 UTC 2009
************************************************************/
2010-02-09 21:52:49,531 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8888
2010-02-09 21:52:49,531 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: 127.0.0.1/127.0.0.1:8888
2010-02-09 21:52:49,546 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2010-02-09 21:52:49,546 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:52:50,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=jian,None,root,Administrators,Users
2010-02-09 21:52:50,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2010-02-09 21:52:50,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2010-02-09 21:52:50,265 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:52:50,265 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2010-02-09 21:52:50,359 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:52:50,359 INFO org.apache.hadoop.ipc.Server: Stopping server on 8888
2010-02-09 21:52:50,359 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:52:50,359 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yijian/192.168.0.13
************************************************************/
Solution: HDFS has not been formatted yet; just run hadoop namenode -format once and then start the NameNode again.
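For reference, a typical sequence on a fresh install, run from the Hadoop installation directory (note that formatting wipes any existing NameNode metadata under dfs.name.dir):

$ bin/hadoop namenode -format   # writes a fresh fsimage under dfs.name.dir
$ bin/start-dfs.sh              # start the NameNode and DataNodes
$ bin/start-mapred.sh           # optionally start the JobTracker/TaskTrackers as well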
Problem: running bin/hadoop jps reports the following exception:
Exception in thread "main" java.lang.NullPointerException
at sun.jvmstat.perfdata.monitor.protocol.local.LocalVmManager.activeVms(LocalVmManager.java:127)
at sun.jvmstat.perfdata.monitor.protocol.local.MonitoredHostProvider.activeVms(MonitoredHostProvider.java:133)
at sun.tools.jps.Jps.main(Jps.java:45)
Cause:
The /tmp directory under the system root has been deleted. Recreating the /tmp directory fixes it.
The "unable to create log directory /tmp/…" error seen when running bin/hive may also be caused by the same missing /tmp directory; the commands below show one way to recreate it.
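A minimal sketch of the fix, assuming a Linux host (on Cygwin, omit sudo). jps keeps its per-user hsperfdata files under /tmp, which is why it fails with a NullPointerException when /tmp is missing; 1777 (world-writable with the sticky bit) is the conventional mode for /tmp:

$ sudo mkdir /tmp
$ sudo chmod 1777 /tmp   # world-writable plus sticky bit, the normal permissions for /tmp
$ jps                    # should now list the running Java/Hadoop processes instead of throwing the NPE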