https://www.cnblogs.com/snguo/p/16717947.html
| Server | Running roles |
| --- | --- |
| master | namenode (HDFS primary), datanode (HDFS worker), resourcemanager (YARN primary), nodemanager (YARN worker) |
| node1 | secondarynamenode (HDFS auxiliary), datanode (HDFS worker), nodemanager (YARN worker) |
| node2 | datanode (HDFS worker), nodemanager (YARN worker) |
```bash
# Set the hostname on master; on the other nodes use node1 / node2 accordingly
hostnamectl set-hostname master
```

Map every hostname to its IP in /etc/hosts on all three machines.
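For example (the IP addresses here are placeholders; substitute your own):

```bash
# /etc/hosts on every machine (example IPs, adjust to your network)
192.168.1.10  master
192.168.1.11  node1
192.168.1.12  node2
```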
Permanently disable SELinux and the firewall on all 3 machines:

```bash
# Temporarily disable SELinux
setenforce 0
# Permanently set the SELinux state to disabled
## sed "s/old/new/" filename
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Stop firewalld now and disable it at boot
systemctl disable --now firewalld
```
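To double-check the result (assuming a systemd-based CentOS 7 host):

```bash
# Should print Permissive now, and Disabled after the next reboot
getenforce
# Should report inactive (dead) and disabled
systemctl status firewalld
```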
Configure passwordless SSH from each machine to itself; master also needs passwordless SSH to every worker so the start scripts can reach them.
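A minimal sketch of the usual key-based setup, run on master (assumes the hostnames above are resolvable):

```bash
# Generate a key pair (accept the defaults)
ssh-keygen -t rsa
# Copy the public key to every node, including the local machine
ssh-copy-id master
ssh-copy-id node1
ssh-copy-id node2
```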
Install Oracle JDK 1.8 (uninstall OpenJDK first). Reference:
https://blog.csdn.net/omaidb/article/details/128634443
After installing the JDK on master, sync the JDK program and the modified /etc/profile.d/jdk.sh to the other machines in the cluster:

```bash
# Sync the JDK to the other machines
scp -r /usr/java/ node1:/usr/
## Or use the xsync script to sync to all cluster machines
xsync /usr/java/
# Sync the modified /etc/profile.d/jdk.sh to the other machines
scp /etc/profile.d/jdk.sh node1:/etc/profile.d
## Or use the xsync script
xsync /etc/profile.d/jdk.sh
```
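xsync is not a stock command; it is a common rsync wrapper. A minimal sketch, assuming rsync is installed everywhere and the worker hostnames are node1 and node2:

```bash
#!/bin/bash
# xsync: push the given paths to every other cluster machine via rsync
[ $# -lt 1 ] && { echo "Usage: xsync <path>..."; exit 1; }
for host in node1 node2; do
  echo "==== syncing to $host ===="
  for file in "$@"; do
    if [ -e "$file" ]; then
      dir=$(cd -P "$(dirname "$file")" && pwd)   # resolve the absolute directory
      name=$(basename "$file")
      ssh "$host" "mkdir -p $dir"                # make sure the target dir exists
      rsync -av "$dir/$name" "$host:$dir"        # incremental copy
    else
      echo "$file does not exist"
    fi
  done
done
```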
```bash
# Create a unified working directory on every cluster machine
# mkdir -p /data/hadoop/{data,tmp,namenode,src}
mkdir -p /export/{server,data,src}
## Software install path
mkdir -p /export/server/
## Data storage path
mkdir -p /export/data/
## Package storage path
mkdir -p /export/src/
```
```bash
# Upload the installation package to /export/src/
scp hadoop-3.1.4-bin-snappy-Centos7.tar.gz node1:/export/src/
# Extract the hadoop package into /export/server/
## -C extracts to the given path
tar xvf hadoop-3.1.4-bin-snappy-Centos7.tar.gz -C /export/server/
```
The configuration files live in the /export/server/hadoop-3.1.4/etc/hadoop directory.

```bash
# cd into the hadoop config directory
cd /export/server/hadoop-3.1.4/etc/hadoop
```
Default configuration files end in -default; user-defined override files end in -site.
```bash
# Back up hadoop-env.sh
cp /export/server/hadoop-3.1.4/etc/hadoop/hadoop-env.sh{,.bak}
# Edit hadoop-env.sh
vim /export/server/hadoop-3.1.4/etc/hadoop/hadoop-env.sh
```

Configuration:

```bash
# Point JAVA_HOME at the JDK install directory
export JAVA_HOME=<JDK install directory>
# If SSH listens on a non-default port, specify it here
# export HADOOP_SSH_OPTS="-p 1234"
# Users allowed to run each role's shell commands
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```
https://blog.csdn.net/wjt199866/article/details/106473174
```bash
# Back up core-site.xml
cp /export/server/hadoop-3.1.4/etc/hadoop/core-site.xml{,.bak}
# Edit core-site.xml
vim /export/server/hadoop-3.1.4/etc/hadoop/core-site.xml
```

Configuration:

```xml
<configuration>
  <!-- Default filesystem: HDFS on the namenode host -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <!-- Base directory for Hadoop's local data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/data/hadoop-3.1.4</value>
  </property>
  <!-- User shown as owner in the HDFS web UI -->
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
  <!-- Secondary namenode checkpoint interval, in seconds -->
  <property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
  </property>
</configuration>
```
```bash
# Back up hdfs-site.xml
cp /export/server/hadoop-3.1.4/etc/hadoop/hdfs-site.xml{,.bak}
# Edit hdfs-site.xml
vim /export/server/hadoop-3.1.4/etc/hadoop/hdfs-site.xml
```

Configuration (per the role table above, the secondarynamenode runs on node1):

```xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node1:9868</value>
  </property>
</configuration>
```
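Once the environment variables set up later in this guide are in place, the values Hadoop actually picked up can be verified with hdfs getconf:

```bash
# Print the effective config values (run after HADOOP_HOME is on PATH)
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.namenode.secondary.http-address
```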
```bash
# Back up mapred-site.xml
cp /export/server/hadoop-3.1.4/etc/hadoop/mapred-site.xml{,.bak}
# Edit mapred-site.xml
vim /export/server/hadoop-3.1.4/etc/hadoop/mapred-site.xml
```

Configuration (the jobhistory addresses must point at the host that runs the history server; in this cluster that is master, not the hadoop102 host carried over from another tutorial):

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <!-- JobHistory server RPC and web addresses -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
```
```bash
# Back up yarn-site.xml
cp /export/server/hadoop-3.1.4/etc/hadoop/yarn-site.xml{,.bak}
# Edit yarn-site.xml
vim /export/server/hadoop-3.1.4/etc/hadoop/yarn-site.xml
```

Configuration (yarn.log.server.url likewise points at the history server on master):

```xml
<configuration>
  <!-- Host running the resourcemanager -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <!-- Shuffle service required by MapReduce -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Container memory limits, in MB -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <!-- Allowed virtual-to-physical memory ratio -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
  <!-- Aggregate container logs -->
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master:19888/jobhistory/logs</value>
  </property>
  <!-- Keep aggregated logs for 7 days -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
```
The workers file lists the hostnames or IPs of the machines that run the worker roles.

```bash
# Back up workers
cp /export/server/hadoop-3.1.4/etc/hadoop/workers{,.bak}
# Edit workers
vim /export/server/hadoop-3.1.4/etc/hadoop/workers
```

Configuration: add the cluster machines' hostnames (or IPs) to the file, one per line. Per the role table, the datanode/nodemanager hosts here are:

```
master
node1
node2
```
https://support.huaweicloud.com/usermanual-mrs/admin_guide_000277.html
Use scp to sync the /export/server/ directory on master to the other machines in the cluster:

```bash
# Log in to master
ssh master
# Copy the installation to node1
scp -r /export/server/hadoop-3.1.4 node1:/export/server/
## And to node2
scp -r /export/server/hadoop-3.1.4 node2:/export/server/
```
https://blog.csdn.net/omaidb/article/details/121746997
```bash
# Create the hadoop environment file
vim /etc/profile.d/hadoop.sh
```

```bash
# Point HADOOP_HOME at the install directory
export HADOOP_HOME=/export/server/hadoop-3.1.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

```bash
# Load the new HADOOP variables
source /etc/profile
# Check that the hadoop command is found
which hadoop
# Show hadoop version information
hadoop version
```
Sync /etc/profile.d/hadoop.sh to the other machines:

```bash
# Sync the modified hadoop.sh to the other machines
scp /etc/profile.d/hadoop.sh node1:/etc/profile.d
scp /etc/profile.d/hadoop.sh node2:/etc/profile.d
```
```bash
# HDFS must be formatted before it is started for the first time
hdfs namenode -format
```

A successful format prints a "successfully formatted" message in the output. Format only once: re-running it wipes the namenode metadata and strands the existing datanodes.

```bash
# Inspect the namenode storage path (under hadoop.tmp.dir)
ll /export/data/hadoop-3.1.4/dfs/name/
ll /export/data/hadoop-3.1.4/dfs/name/current
```
```bash
# Start all hadoop services
start-all.sh
```
```bash
# Start or stop HDFS as a whole
## Start HDFS
sbin/start-dfs.sh
## Stop HDFS
sbin/stop-dfs.sh
# Start a single HDFS role
hdfs --daemon start namenode|datanode|secondarynamenode
# Stop a single HDFS role
hdfs --daemon stop namenode|datanode|secondarynamenode
```
```bash
# Start or stop YARN as a whole
## Start YARN
sbin/start-yarn.sh
## Stop YARN
sbin/stop-yarn.sh
# Start the ResourceManager or NodeManager role
yarn --daemon start resourcemanager|nodemanager
# Stop the ResourceManager or NodeManager role
yarn --daemon stop resourcemanager|nodemanager
```
```bash
# Start the history server (HistoryServer role)
mapred --daemon start historyserver
# Stop the history server (HistoryServer role)
mapred --daemon stop historyserver
```
Starting the **datanode** automatically creates the **logs** directory.

```bash
# List the running java processes with jps
jps
```

Based on the role table, master should show NameNode, DataNode, ResourceManager and NodeManager; node1 should show SecondaryNameNode, DataNode and NodeManager; node2 should show DataNode and NodeManager.
`start-all.sh` starts all roles with one command; `stop-all.sh` stops them all.

```bash
# Log directory
/export/server/hadoop-3.1.4/logs
```
HDFS web UI: http://namenode_host:9870

YARN web UI: http://resourcemanager_host:8088
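As a quick smoke test once everything is up, the datanodes can be checked and the bundled example job run (a sketch; the jar path assumes the default 3.1.4 layout):

```bash
# List the datanodes currently reporting to the namenode
hdfs dfsadmin -report
# Run the bundled pi example on YARN (2 maps, 4 samples each)
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar pi 2 4
```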