Hadoop Installation Manual
Contents
1 CentOS
1.1 Installation environment
1.2 Initial environment setup
1.2.1 Enable networking at boot on CentOS
1.2.2 Disable the firewall
1.2.3 Disable SELinux
1.2.4 Edit the hosts file
1.3 Install the JDK
1.3.1 Installation
1.3.2 Set environment variables
1.4 Add a user
1.5 Clone the virtual machine
1 Hadoop 1.1.2 Installation
1.1 Installation environment
Operating system: CentOS 6.5, 64-bit
Hadoop: hadoop-1.1.2
JDK: jdk-7u51
1.2 Initial environment setup
1.2.1 Enable networking at boot on CentOS
[root@localhost network-scripts]# cd /etc/sysconfig/network-scripts
[root@localhost network-scripts]# ls
ifcfg-eth0  ifdown-bnep  ifdown-ipv6  ifdown-ppp     ifdown-tunnel  ifup-bnep  ifup-ipv6  ifup-plusb  ifup-routes  ifup-wireless     network-functions
ifcfg-lo    ifdown-eth   ifdown-isdn  ifdown-routes  ifup           ifup-eth   ifup-isdn  ifup-post   ifup-sit     init.ipv6-global  network-functions-ipv6
ifdown      ifdown-ippp  ifdown-post  ifdown-sit     ifup-aliases   ifup-ippp  ifup-plip  ifup-ppp    ifup-tunnel  net.hotplug
[root@localhost network-scripts]# vi ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:09:8B:B0
TYPE=Ethernet
UUID=3192d1fe-8ff3-411f-81b7-8270e86f5959
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp
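`ONBOOT=yes` is the setting that brings the interface up at boot. A quick sanity check can be sketched as follows; it works on a throwaway copy of the file (the path `/tmp/ifcfg-eth0.sample` is purely illustrative, not from this manual):

```shell
# Check ONBOOT on a throwaway copy of the config (illustrative path)
cfg=/tmp/ifcfg-eth0.sample
printf 'DEVICE=eth0\nONBOOT=yes\nBOOTPROTO=dhcp\n' > "$cfg"
grep -q '^ONBOOT=yes' "$cfg" && echo "network starts at boot"
```

On the real machine the same `grep` against /etc/sysconfig/network-scripts/ifcfg-eth0 confirms the edit took.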
1.2.2 Disable the firewall
Stop the firewall:
[root@master01 /]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [  OK  ]
iptables: Flushing firewall rules: [  OK  ]
iptables: Unloading modules: [  OK  ]
Disable it at boot:
[root@master01 /]# chkconfig iptables off
1.2.3 Disable SELinux
[root@master01 /]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
#SELINUXTYPE=targeted
This change takes effect after a reboot.
1.2.4 Edit the hosts file
[root@master01 etc]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.239.100 master01
192.168.239.101 slave01
192.168.239.102 slave02
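The three cluster entries can also be generated in one pass. The sketch below writes them to a scratch file (`/tmp/hosts.demo` is an illustrative path, not the real /etc/hosts), using the IPs and hostnames from this manual:

```shell
# Write the cluster's host entries to a scratch file (illustrative path)
hosts=/tmp/hosts.demo
: > "$hosts"
for entry in "192.168.239.100 master01" "192.168.239.101 slave01" "192.168.239.102 slave02"; do
  echo "$entry" >> "$hosts"
done
cat "$hosts"
```

The same loop appended to the real /etc/hosts on every node keeps the three files identical, which the later SSH and Hadoop steps rely on.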
1.3 Install the JDK
1.3.1 Installation
cd /home/nunchakus
[root@localhost nunchakus]# ls
jdk-7u51-linux-x64.rpm
[root@localhost nunchakus]# rpm -ivh jdk-7u51-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
        jfxrt.jar...
[root@localhost nunchakus]# ls
jdk-7u51-linux-x64.rpm
[root@localhost nunchakus]# cd /usr
[root@localhost usr]# ls
bin  etc  games  include  java  lib  lib64  libexec  local  sbin  share  src  tmp
[root@localhost usr]# cd java
[root@localhost java]# ls
default  jdk1.7.0_51  latest
1.3.2 Set environment variables
[root@localhost ~]# vi /etc/profile
Append at the end:
export JAVA_HOME=/usr/java/jdk1.7.0_51
export JAVA_BIN=/usr/java/jdk1.7.0_51/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
To make the changes to /etc/profile take effect immediately, run:
# . /etc/profile
Note: there is a space between the . and /etc/profile.
Reboot and test:
[root@localhost ~]# java -version
java version "1.7.0_45"
OpenJDK Runtime Environment (rhel-2.4.3.3.el6-x86_64 u45-b15)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
Note that this still reports the system OpenJDK: because $JAVA_HOME/bin was appended to the PATH, /usr/bin/java is found first. Prepend $JAVA_HOME/bin instead, or remove the OpenJDK packages, if the Oracle JDK must be the default.
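The profile lines above build PATH and CLASSPATH by plain string expansion; nothing else happens. The sketch below reproduces the resulting CLASSPATH using the same paths as this manual:

```shell
# Reproduce the CLASSPATH the profile lines build (paths from section 1.3)
export JAVA_HOME=/usr/java/jdk1.7.0_51
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
echo "$CLASSPATH"
# prints: .:/usr/java/jdk1.7.0_51/lib/dt.jar:/usr/java/jdk1.7.0_51/lib/tools.jar
```

The leading `.` keeps the current directory on the classpath, which the JDK 7 tools expect for quick local compiles.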
1.4 Add a user
[root@master01 /]# useradd hadoop
[root@master01 /]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it does not contain enough DIFFERENT characters
BAD PASSWORD: is a palindrome
Retype new password:
passwd: all authentication tokens updated successfully.
1.5 Clone the virtual machine
Clone slave01 and slave02 from master01.
1.6 Change the hostnames of the cloned machines
On the first clone:
[root@master01 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slave01
NTPSERVERARGS=iburst
On the second clone:
[root@master01 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slave02
NTPSERVERARGS=iburst
(The prompts still show master01 because the clones keep its hostname until their next reboot.)
1.7 Set up passwordless SSH login
Set up passwordless SSH login among the three machines master01, slave01, and slave02.
1.7.1 SSH configuration
Log in as the hadoop user and run the following on every node to generate a key pair:
[hadoop@master01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
cf:e8:82:9c:6e:22:6c:ae:6c:df:39:3f:19:9e:2f:1f hadoop@master01
The key's randomart image is:
+--[ RSA 2048]----+
|  (randomart not |
|   reproduced)   |
+-----------------+
[hadoop@master01 ~]$ cd .ssh
[hadoop@master01 .ssh]$ cp id_rsa.pub authorized_keys
1.7.2 Distribute the SSH public keys
Copy each node's authorized_keys file to every other node:
[hadoop@master01 .ssh]$ scp authorized_keys hadoop@slave01:/home/hadoop/.ssh/id_rsa.pub.master01
hadoop@slave01's password:
authorized_keys
[hadoop@master01 .ssh]$ scp authorized_keys hadoop@slave02:/home/hadoop/.ssh/id_rsa.pub.master01
The authenticity of host 'slave02 (192.168.239.102)' can't be established.
RSA key fingerprint is 4b:66:f7:1d:d2:cb:8d:c7:f4:fe:e3:cb:f4:7e:67:c3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave02,192.168.239.102' (RSA) to the list of known hosts.
hadoop@slave02's password:
authorized_keys
Append every other node's public key to authorized_keys (do this on every node):
[hadoop@master01 .ssh]$ cat id_rsa.pub.slave01 >> authorized_keys
[hadoop@master01 .ssh]$ cat id_rsa.pub.slave02 >> authorized_keys
Fix the permissions (again on every node):
[root@master01 home]# chmod -R 700 hadoop
[root@master01 .ssh]# chmod 644 authorized_keys    # only the hadoop user itself may modify this file
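sshd is strict about these modes: it refuses key authentication if .ssh is accessible to others or authorized_keys is writable by anyone but the owner. The required modes can be sketched self-contained against a scratch directory (`/tmp/ssh-perm-demo` is illustrative, not a real home):

```shell
# Demonstrate the modes sshd expects, in a scratch directory (illustrative path)
demo=/tmp/ssh-perm-demo
mkdir -p "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"
chmod 700 "$demo/.ssh"                  # directory: owner only
chmod 644 "$demo/.ssh/authorized_keys"  # file: owner-writable, world-readable
stat -c '%a %n' "$demo/.ssh" "$demo/.ssh/authorized_keys"
```

If passwordless login silently falls back to a password prompt, wrong modes here are the first thing to check.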
Test:
[hadoop@master01 .ssh]$ ssh slave01
Last login: Tue Mar 18 16:28:41 2014 from 192.168.239.1
1.8 Download and unpack Hadoop
[hadoop@master01 ~]$ tar xvf hadoop-1.1.2.tar.gz
1.9 Edit the configuration files (version 1.1.2)
The configuration files live in:
[hadoop@master01 conf]$ pwd
/home/hadoop/hadoop-1.1.2/conf
1.9.1 Edit core-site.xml
[hadoop@master01 conf]$ vi core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master01:9000</value>
  </property>
</configuration>
Note: hadoop.tmp.dir defaults to /tmp, and the filesystem backing /tmp is often one Hadoop does not handle well. If HDFS still fails to start after repeated namenode format/restart cycles, try pointing hadoop.tmp.dir at a different directory.
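fs.default.name is a URI that clients split into scheme, host, and port; DataNodes and clients reach the NameNode at exactly that host:port. A small illustration with the value used above:

```shell
# Split the NameNode URI from core-site.xml into host and port
uri=hdfs://master01:9000
hostport=${uri#hdfs://}
echo "host=${hostport%%:*} port=${hostport##*:}"
# prints: host=master01 port=9000
```

The host must resolve identically on every node, which is why section 1.2.4 put master01 into /etc/hosts everywhere.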
1.9.2 Edit mapred-site.xml
[hadoop@master01 conf]$ vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master01:9001</value>
  </property>
</configuration>
1.9.3 Edit hdfs-site.xml
[hadoop@master01 conf]$ vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
</configuration>
1.9.4 Edit masters
[hadoop@master01 conf]$ vi masters
master01
1.9.5 Edit slaves
[hadoop@master01 conf]$ vi slaves
slave01
slave02
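The masters and slaves files are plain newline-separated host lists: masters names the SecondaryNameNode host, slaves the DataNode/TaskTracker hosts. They could equally be generated, sketched here against a scratch conf directory (`/tmp/conf-demo` is illustrative):

```shell
# Generate the topology files Hadoop 1.x reads (illustrative scratch path)
conf=/tmp/conf-demo
mkdir -p "$conf"
echo master01 > "$conf/masters"
printf 'slave01\nslave02\n' > "$conf/slaves"
wc -l < "$conf/slaves"
# prints: 2
```

start-all.sh later iterates over exactly these lists when it ssh-es into each node, which is why the passwordless SSH of section 1.7 must already work.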
1.9.6 Edit hadoop-env.sh
[hadoop@master01 conf]$ vi hadoop-env.sh
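The manual does not show what to change here; in Hadoop 1.x the one setting normally required in hadoop-env.sh is JAVA_HOME, matching the JDK installed in section 1.3. Sketched against a scratch copy of the file (the real one is under the conf directory shown above):

```shell
# Append JAVA_HOME to a scratch copy of hadoop-env.sh (illustrative path)
env_sh=/tmp/hadoop-env.sh.demo
: > "$env_sh"
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_51' >> "$env_sh"
grep JAVA_HOME "$env_sh"   # shows the export line just appended
```

Without this line the daemons fail at startup with "JAVA_HOME is not set", even if JAVA_HOME is exported in the login shell, because start-all.sh runs the daemons over non-interactive SSH.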
1.10 Copy Hadoop to the other nodes
[hadoop@master01 ~]$ scp -r hadoop-1.1.2 slave01:/home/hadoop
hadoop-examples-1.1.2.jar    100%  139KB 139.1KB/s   00:00
NOTICE.txt                   100%  101    0.1KB/s    00:00
DataNode.launch              100%  2920   2.9KB/s    00:00
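The same copy has to reach every slave. A loop over the node list can be sketched as a dry run, with echo printing each command instead of executing it (note the -r flag: hadoop-1.1.2 is a directory):

```shell
# Dry run: print the scp command for each slave instead of executing it
for node in slave01 slave02; do
  echo scp -r hadoop-1.1.2 "$node:/home/hadoop"
done
# prints:
# scp -r hadoop-1.1.2 slave01:/home/hadoop
# scp -r hadoop-1.1.2 slave02:/home/hadoop
```

Dropping the echo runs the real copies; with the passwordless SSH from section 1.7 in place, no password prompts interrupt the loop.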
1.11 Format the NameNode
[hadoop@master01 bin]$ ./hadoop namenode -format
14/03/18 17:32:30 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master01/192.168.239.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
14/03/18 17:32:30 INFO util.GSet: VM type = 64-bit
14/03/18 17:32:30 INFO util.GSet: 2% max memory = 19.33375 MB
14/03/18 17:32:30 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/03/18 17:32:30 INFO util.GSet: recommended=2097152, actual=2097152
14/03/18 17:32:30 INFO namenode.FSNamesystem: fsOwner=hadoop
14/03/18 17:32:31 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/18 17:32:31 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/18 17:32:31 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/18 17:32:31 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/18 17:32:31 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/03/18 17:32:31 INFO common.Storage: Image file of size 112 saved in 0 seconds.
14/03/18 17:32:31 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
14/03/18 17:32:31 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
14/03/18 17:32:32 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
14/03/18 17:32:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master01/192.168.239.100
************************************************************/
1.12 Start the daemons
[hadoop@master01 bin]$ ./start-all.sh
starting namenode, logging to /home/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-hadoop-namenode-master01.out
slave02: starting datanode, logging to /home/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-hadoop-datanode-slave02.out
slave01: starting datanode, logging to /home/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-hadoop-datanode-slave01.out
master01: starting secondarynamenode, logging to /home/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-hadoop-secondarynamenode-master01.out
starting jobtracker, logging to /home/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-hadoop-jobtracke