hadoop总结.docx — DOCX, 34 pages, uploaded 2023-02-02.
Hadoop Summary
Contents
1. Prepare the machines
2. Rename each host and edit the /etc/hosts file
3. Synchronize all machines' clocks with NTP
4. Configure passwordless SSH login from master to all machines
5. Download and install jdk-7u2-linux-i586.rpm from the official site
6. Set environment variables
7. Create the installation directories
8. Install Hadoop
9. Edit the configuration files
10. Run Hadoop
11. Deploy the Hadoop NameNode and SecondaryNameNode on separate servers
12. Hadoop NameNode NFS backup and recovery
13. Install, configure, and uninstall MySQL from RPM packages
14. Use MySQL as the Hive metastore
15. Install Hive
16. Install HBase
17. Add new nodes to Hadoop
18. Basic Hadoop commands
19. Hadoop shell commands
1. Prepare the machines
This guide uses Hadoop 1.0.0 on Red Hat Enterprise Linux 5.0.
Prepare four Linux machines.
Start the four machines; assume their IPs are 192.168.1.201, 192.168.1.202, 192.168.1.203, and 192.168.1.204, with 201 as master, 202 as slave01, 203 as slave02, and 204 as secondary.
Disable the firewall on every machine, and set SELINUX to disabled.
master is the NameNode, slave0x are the DataNodes, and secondary is the SecondaryNameNode.
2. Rename each host and edit the /etc/hosts file
Set the hostname, for example on the master:
hostname master
Then add these entries to /etc/hosts:
192.168.1.201 master
192.168.1.202 slave01
192.168.1.203 slave02
192.168.1.204 secondary
Make the same change on every machine.
cat /etc/hosts should then show:
127.0.0.1 localhost
192.168.1.201 master
192.168.1.202 slave01
192.168.1.203 slave02
192.168.1.204 secondary
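The hosts entries above can be staged once and then copied to every node. A minimal sketch, assuming the node list from this guide; the temp-file path is illustrative only — on a real node you would append these lines to /etc/hosts itself:

```shell
# Stage the cluster's /etc/hosts entries in a scratch file so the same
# content can be pushed to every node (path is illustrative).
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1 localhost
192.168.1.201 master
192.168.1.202 slave01
192.168.1.203 slave02
192.168.1.204 secondary
EOF
# Show the staged entries
cat "$hosts_file"
```

Keeping one canonical copy avoids the per-node typos that make `ssh slave01` resolve differently on different machines.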
3. Synchronize all machines' clocks with NTP
Use the master as the time server (or dedicate a separate machine to that role), and have every other machine synchronize against it.
See the accompanying .txt for the exact usage.
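A typical arrangement, sketched here as an assumption (the accompanying .txt is not reproduced in this document, and the crontab schedule below is illustrative): run ntpd on master, and point the other nodes at it either through ntp.conf or a periodic ntpdate:

```
# /etc/ntp.conf on slave01/slave02/secondary: use master as the source
server master

# alternatively, without ntpd, a root crontab entry resyncing every 10 min
*/10 * * * * /usr/sbin/ntpdate master && /sbin/hwclock -w
```

Consistent clocks matter later: HBase in particular refuses to start region servers whose clock skew is too large.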
4. Configure passwordless SSH login from master to all machines (including master itself)
[hadoop@master ~]$ mkdir .ssh
[hadoop@master ~]$ chmod 755 .ssh
[hadoop@master ~]$ ssh-keygen -t rsa
---- run the above on every machine
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
10:45:c4:2b:24:fe:95:16:33:25:5d:c3:07:20:6a:29 hadoop@master
[hadoop@master ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@master ~]$ cd /home/hadoop/.ssh
[hadoop@master .ssh]$ chmod 644 authorized_keys
[hadoop@master .ssh]$ scp authorized_keys 192.168.1.203:/home/hadoop/.ssh/
The authenticity of host '192.168.1.203 (192.168.1.203)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.203' (RSA) to the list of known hosts.
hadoop@192.168.1.203's password:
authorized_keys                               100%  395     0.4KB/s   00:00
[hadoop@master .ssh]$ ssh slave02 date
The authenticity of host 'slave02 (192.168.1.203)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave02' (RSA) to the list of known hosts.
Mon Jan 30 20:36:58 CST 2012
[hadoop@master .ssh]$ ssh slave02 date
Mon Jan 30 20:37:00 CST 2012
[hadoop@master .ssh]$ scp authorized_keys 192.168.1.202:/home/hadoop/.ssh/
The authenticity of host '192.168.1.202 (192.168.1.202)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.202' (RSA) to the list of known hosts.
hadoop@192.168.1.202's password:
authorized_keys                               100%  395     0.4KB/s   00:00
[hadoop@master .ssh]$ ssh slave01 date
The authenticity of host 'slave01 (192.168.1.202)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave01' (RSA) to the list of known hosts.
Mon Jan 30 20:36:48 CST 2012
[hadoop@master .ssh]$ ssh slave01 date
Mon Jan 30 20:36:49 CST 2012
----- run the above on the master machine
[hadoop@slave01 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@slave01 ~]$ cd /home/hadoop/.ssh
[hadoop@slave01 .ssh]$ chmod 644 authorized_keys
[hadoop@slave01 .ssh]$ scp authorized_keys 192.168.1.203:/home/hadoop/.ssh/
hadoop@192.168.1.203's password:
authorized_keys                               100%  791     0.8KB/s   00:00
[hadoop@slave01 .ssh]$ ssh slave02 date
Mon Jan 30 20:40:35 CST 2012
[hadoop@slave01 .ssh]$ scp authorized_keys 192.168.1.201:/home/hadoop/.ssh/
The authenticity of host '192.168.1.201 (192.168.1.201)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.201' (RSA) to the list of known hosts.
hadoop@192.168.1.201's password:
authorized_keys                               100%  791     0.8KB/s   00:00
[hadoop@slave01 .ssh]$ ssh master date
The authenticity of host 'master (192.168.1.201)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (RSA) to the list of known hosts.
Mon Jan 30 20:43:09 CST 2012
[hadoop@slave01 .ssh]$ ssh master date
Mon Jan 30 20:43:11 CST 2012
-------- run the above on the slave01 machine
[hadoop@slave02 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@slave02 ~]$ cd /home/hadoop/.ssh
[hadoop@slave02 .ssh]$ chmod 644 authorized_keys
[hadoop@slave02 .ssh]$ scp authorized_keys 192.168.1.202:/home/hadoop/.ssh/
hadoop@192.168.1.202's password:
authorized_keys                               100%  791     0.8KB/s   00:00
[hadoop@slave02 .ssh]$ ssh slave01 date
Mon Jan 30 20:40:35 CST 2012
[hadoop@slave02 .ssh]$ scp authorized_keys 192.168.1.201:/home/hadoop/.ssh/
The authenticity of host '192.168.1.201 (192.168.1.201)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.201' (RSA) to the list of known hosts.
hadoop@192.168.1.201's password:
authorized_keys                               100%  791     0.8KB/s   00:00
[hadoop@slave02 .ssh]$ ssh master date
The authenticity of host 'master (192.168.1.201)' can't be established.
RSA key fingerprint is ad:32:f3:b6:c9:bf:49:57:f6:ea:dc:37:5d:99:d4:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (RSA) to the list of known hosts.
Mon Jan 30 20:43:09 CST 2012
[hadoop@slave02 .ssh]$ ssh master date
Mon Jan 30 20:43:11 CST 2012
------- run the above on the slave02 machine
Repeat the same steps on the secondary machine.
The test results are as follows:
[hadoop@master .ssh]$ ssh master date
Mon Jan 30 20:45:49 CST 2012
[hadoop@master .ssh]$ ssh slave01 date
Mon Jan 30 20:43:41 CST 2012
[hadoop@master .ssh]$ ssh slave02 date
Mon Jan 30 20:44:18 CST 2012
[hadoop@master .ssh]$ ssh secondary date
Mon Jan 30 20:44:18 CST 2012
[hadoop@slave01 .ssh]$ ssh slave01 date
Mon Jan 30 20:43:54 CST 2012
[hadoop@slave01 .ssh]$ ssh slave02 date
Mon Jan 30 20:44:29 CST 2012
[hadoop@slave01 .ssh]$ ssh master date
Mon Jan 30 20:46:10 CST 2012
[hadoop@slave01 .ssh]$ ssh secondary date
Mon Jan 30 20:44:18 CST 2012
[hadoop@slave02 .ssh]$ ssh slave02 date
Mon Jan 30 20:44:45 CST 2012
[hadoop@slave02 .ssh]$ ssh slave01 date
Mon Jan 30 20:44:13 CST 2012
[hadoop@slave02 .ssh]$ ssh master date
Mon Jan 30 20:46:28 CST 2012
[hadoop@slave02 .ssh]$ ssh secondary date
Mon Jan 30 20:44:18 CST 2012
[hadoop@secondary .ssh]$ ssh slave02 date
Mon Jan 30 20:44:45 CST 2012
[hadoop@secondary .ssh]$ ssh slave01 date
Mon Jan 30 20:44:13 CST 2012
[hadoop@secondary .ssh]$ ssh master date
Mon Jan 30 20:46:28 CST 2012
[hadoop@secondary .ssh]$ ssh secondary date
Mon Jan 30 20:44:18 CST 2012
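If any of the tests above still prompts for a password, the usual culprit is file permissions: sshd ignores authorized_keys when ~/.ssh or the file itself is group- or world-writable. A local sanity check (the temp directory is illustrative — on a real node inspect /home/hadoop/.ssh directly):

```shell
# Reproduce the permissions this guide sets and print them in octal so
# they can be compared against sshd's expectations (755 / 644 or tighter).
home=$(mktemp -d)                       # stand-in for /home/hadoop
mkdir "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 755 "$home/.ssh"                  # as set earlier in this section
chmod 644 "$home/.ssh/authorized_keys"
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```

Anything looser than this (e.g. 775 on the home or .ssh directory) makes sshd silently fall back to password authentication.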
5. Download jdk-7u2-linux-i586.rpm from the official site and install it
As root, install the package on every machine:
rpm -ivh jdk-7u2-linux-i586.rpm
6. Set environment variables
Set the environment variables on master:
dev@master:~$ vi .bashrc
export JAVA_HOME=/usr/java/jdk1.7.0_02
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/u01/app/hadoop-1.0.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/share/hadoop/hadoop-core-1.0.0.jar
dev@master:~$ source .bashrc
Copy master's .bashrc to the other machines and source it there:
dev@master:~$ scp .bashrc 192.168.1.202:~
dev@master:~$ scp .bashrc 192.168.1.203:~
dev@master:~$ scp .bashrc 192.168.1.204:~
dev@slave01:~$ source .bashrc
dev@slave02:~$ source .bashrc
dev@secondary:~$ source .bashrc
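A quick way to confirm the exports took effect on each node (the paths are the ones used in this guide — adjust them to your own install locations):

```shell
# Re-create the guide's exports and verify both bin directories ended up
# on PATH; on a configured node you would just source ~/.bashrc instead.
export JAVA_HOME=/usr/java/jdk1.7.0_02
export HADOOP_HOME=/u01/app/hadoop-1.0.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
# The two appended entries should be the last components of PATH
echo "$PATH" | tr ':' '\n' | tail -n 2
```

If `hadoop` or `java` is still not found after sourcing, the usual cause is a typo in one of these directory names rather than in PATH syntax.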
7. Create the installation directories
[root@master /]# mkdir -p /u01/app
[root@master /]# mkdir -p /u01/data
Install all Hadoop software under /u01/app and keep all data under /u01/data; adjust the paths to suit your environment.
8. Install Hadoop
Download hadoop-1.0.0-bin.tar.gz from hadoop.apache.org, upload it to master under /u01/app, extract it, then copy it to the same directory on the other machines and extract it there as well.
[root@master app]# tar -zxvf hadoop-1.0.0-bin.tar.gz
[root@master app]# scp hadoop-1.0.0-bin.tar.gz 192.168.1.202:~
[root@master app]# scp hadoop-1.0.0-bin.tar.gz 192.168.1.203:~
[root@master app]# scp hadoop-1.0.0-bin.tar.gz 192.168.1.204:~
Run the same tar command on the other machines to extract the archive.
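The copy-then-extract steps above can be collapsed into one loop over the other nodes. Sketched here as a dry run that only prints the commands (remove the `echo`s to actually execute them); it assumes the passwordless SSH configured in section 4 is working:

```shell
# Build the distribution commands for every non-master node and print
# them; each iteration would scp the tarball and extract it remotely.
cmds=$(for ip in 192.168.1.202 192.168.1.203 192.168.1.204; do
  echo "scp hadoop-1.0.0-bin.tar.gz $ip:/u01/app/"
  echo "ssh $ip 'cd /u01/app && tar -zxf hadoop-1.0.0-bin.tar.gz'"
done)
echo "$cmds"
```

Driving the copies from a loop keeps the per-node directory layout identical, which the later configuration steps rely on.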
9. Edit the configuration files
[root@master app]# cd hadoop-1.0.0/etc/hadoop
[root@master hadoop]# vi hadoop-env.sh
Only JAVA_HOME needs to be set: append export JAVA_HOME=/usr/java/jdk1.7.0_02 at the end of hadoop-env.sh, then save and exit.
Edit core-site.xml:
[root@master hadoop]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.201:9000</value>
  </property>
</configuration>
Also set fs.trash.interval here if you want the trash feature: it is the number of minutes between trash checkpoints, and if zero, the trash feature is disabled.
Edit hdfs-site.xml:
[root@master hadoop]# cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>