Prerequisite: install the JDK

Configure the hostname

vi /etc/hosts            # add a line mapping this machine's IP to my.hadoop.cn
hostname my.hadoop.cn    # takes effect for the current session; use hostnamectl set-hostname for a permanent change
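A quick way to confirm both changes took effect (my.hadoop.cn is the name configured above; the fallback message fires if the /etc/hosts entry is still missing):

```shell
# Print the active hostname, then look for the mapping in /etc/hosts
hostname
grep my.hadoop.cn /etc/hosts || echo "my.hadoop.cn not yet in /etc/hosts"
```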

Create the hadoop user

useradd hadoop
passwd hadoop
chmod u+w /etc/sudoers
vim /etc/sudoers
# below the line "root ALL=(ALL) ALL", add: hadoop ALL=(ALL) ALL
chmod u-w /etc/sudoers
# (alternatively, edit with visudo, which validates the syntax for you)
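A minimal check that the account and its sudoers entry exist (reading /etc/sudoers requires root, hence the fallback messages):

```shell
# Confirm the user exists and the sudoers line is present
id hadoop || echo "user hadoop was not created"
grep '^hadoop' /etc/sudoers 2>/dev/null || echo "no hadoop entry visible (need root to read /etc/sudoers)"
```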

Configure passwordless SSH

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa         # generate a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize the key for login to this host
chmod 0600 ~/.ssh/authorized_keys                # sshd ignores the file unless permissions are strict
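With the key in place, a login to the local host should succeed without a password prompt. This sketch assumes sshd is running; BatchMode makes the check fail fast instead of prompting:

```shell
# Non-interactive test: prints "ssh ok" on success, or a hint if the
# SSH daemon is not reachable
ssh -o BatchMode=yes -o ConnectTimeout=3 -o StrictHostKeyChecking=no \
    localhost 'echo ssh ok' 2>/dev/null || echo "passwordless SSH not working; is sshd running?"
```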

Install Hadoop

mkdir /soft
tar -zxvf hadoop-3.3.5.tar.gz -C /soft/
sudo chown -R hadoop:hadoop /soft/hadoop-3.3.5

Configure JAVA_HOME

Go to /soft/hadoop-3.3.5 and edit etc/hadoop/hadoop-env.sh to set JAVA_HOME:

export JAVA_HOME=/usr/java/jdk1.8.0_231-amd64   # adjust to your own JDK installation path

Configure HADOOP_HOME (for example in ~/.bashrc or /etc/profile)

export HADOOP_HOME=/soft/hadoop-3.3.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
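The exports can be double-checked with a self-contained snippet. Note that $PATH must be preserved on the right-hand side; writing PATH=$HADOOP_HOME/bin alone would clobber the existing search path:

```shell
# Re-state the exports and confirm both bin and sbin ended up on PATH
export HADOOP_HOME=/soft/hadoop-3.3.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
echo "$PATH" | tr ':' '\n' | grep "^/soft/hadoop-3.3.5"
```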

Edit etc/hadoop/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://my.hadoop.cn:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/soft/hadoop-3.3.5/tmp</value>
  </property>
</configuration>

Edit etc/hadoop/hdfs-site.xml and add the following inside the <configuration> element:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

Edit etc/hadoop/mapred-site.xml and add the following inside the <configuration> element:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

Edit etc/hadoop/yarn-site.xml and add the following inside the <configuration> element:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

Format HDFS (run this once, as the hadoop user; reformatting wipes existing HDFS metadata)

hdfs namenode -format

Start the daemons

$HADOOP_HOME/sbin/start-all.sh
# Hadoop 3 marks start-all.sh as deprecated; running start-dfs.sh
# followed by start-yarn.sh does the same thing
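Once start-up finishes, jps (shipped with the JDK) should list all five daemons on a single-node setup:

```shell
# Expect NameNode, DataNode, SecondaryNameNode, ResourceManager and
# NodeManager in the output; the fallback fires if jps is not on PATH
jps || echo "jps not found; check that the JDK bin directory is on PATH"
```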

Relax HDFS permissions (convenient for a test setup; far too permissive for production)

hdfs dfs -chmod 777 /
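A short smoke test to finish (assumes the daemons are running and HDFS was formatted; the directory name /tmp/smoke is just an example):

```shell
# Create a directory and list the root to confirm HDFS is writable
if command -v hdfs >/dev/null; then
  hdfs dfs -mkdir -p /tmp/smoke
  hdfs dfs -ls /
else
  echo "hdfs not on PATH; source your profile first"
fi
```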