Check the version of the OpenJDK that ships with the system:
java -version
List the installed packages whose names contain "java", then remove the ones similar to the four below (there may not be exactly four):
rpm -qa | grep java
java-1.7.0-openjdk-1.7.0.111-2.6.7.8.el7.x86_64
java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.102-4.b14.el7.x86_64
java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.8.el7.x86_64
The noarch packages do not need to be removed, for example:
python-javapackages-3.4.1-11.el7.noarch
tzdata-java-2016g-2.el7.noarch
javapackages-tools-3.4.1-11.el7.noarch
rpm -e --nodeps java-1.7.0-openjdk-1.7.0.111-2.6.7.8.el7.x86_64
rpm -e --nodeps java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64
rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.102-4.b14.el7.x86_64
rpm -e --nodeps java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.8.el7.x86_64
After completing the steps above, run java -version again to confirm the packages were removed successfully.
Download the Oracle JDK (jdk-8u191-linux-x64.tar.gz) from:
http://www.oracle.com/technetwork/java/javase/downloads/index.html
Extract it to /usr/java:
mkdir -p /usr/java
tar -zxvf jdk-8u191-linux-x64.tar.gz -C /usr/java
Configure the environment variables:
vi /etc/profile
Append the following at the end of the file:
#java environment
export JAVA_HOME=/usr/java/jdk1.8.0_191
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin
source /etc/profile
java -version
If it reports version 1.8.0_191, the JDK is configured correctly.
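If an older OpenJDK still seems to take precedence, a couple of standard shell checks (nothing specific to this setup) show which JAVA_HOME and java binary are actually in effect:
echo $JAVA_HOME
which java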
Download the installation package:
http://apache.claz.org/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz
Extract it to /usr/local/:
tar -zxvf hadoop-3.1.1.tar.gz -C /usr/local/
Rename the directory:
cd /usr/local
mkdir hadoop
mv hadoop-3.1.1/* hadoop/
Configure the environment variables:
vi /etc/profile.d/hadoop.sh

# set the hadoop home
export HADOOP_HOME="/usr/local/hadoop"
export HADOOP_MAPRED_HOME="/usr/local/hadoop"
export HADOOP_PID_DIR="${HADOOP_HOME}/pids"
export YARN_PID_DIR=${HADOOP_PID_DIR}
# set hadoop log dir
export HADOOP_LOG_DIR="/data/bigdata/log/hadoop-hdfs"
export YARN_LOG_DIR="/data/bigdata/log/hadoop-yarn"
export HADOOP_MAPRED_LOG_DIR="/data/bigdata/log/hadoop-mapred"
if [[ -n $HADOOP_HOME ]]; then
  export PATH=$HADOOP_HOME/bin:$PATH
  export PATH=$HADOOP_HOME/sbin:$PATH
fi
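One way to pre-create the log and pid directories referenced in the script (the /data/bigdata/log root is also created for hduser a few steps later):
mkdir -p /data/bigdata/log/hadoop-hdfs /data/bigdata/log/hadoop-yarn /data/bigdata/log/hadoop-mapred
mkdir -p /usr/local/hadoop/pids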
source /etc/profile
hadoop version
(The subcommand is version, without a leading dash.) If it reports Hadoop 3.1.1, the installation is on the PATH.
sudo groupadd hadoop
sudo useradd -G hadoop hduser
mkdir -p /data/bigdata/log
chown -R hduser /data/bigdata/log
passwd hduser
Enter the password (123456 in this example) twice when prompted.
Edit $HADOOP_HOME/etc/hadoop/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>hadoop</value>
  </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
  </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>master:9001</value>
  </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
  </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and set:

export JAVA_HOME=/usr/java/jdk1.8.0_191
export HADOOP_HOME=/usr/local/hadoop
Format the NameNode (run from $HADOOP_HOME/bin):
./hadoop namenode -format
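On Hadoop 3.x this entry point still works but is reported as deprecated; the equivalent current form, assuming the same $HADOOP_HOME, is:
$HADOOP_HOME/bin/hdfs namenode -format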
$ vi sbin/start-dfs.sh
$ vi sbin/stop-dfs.sh
Add the following to both files:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
$ vi sbin/start-yarn.sh
$ vi sbin/stop-yarn.sh
Add the following to both files:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
YARN_PROXYSERVER_USER=root
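An alternative that avoids patching the sbin scripts is to export the same user variables once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh, which the Hadoop 3 launcher scripts also read (a sketch using the variable names above):
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export YARN_PROXYSERVER_USER=root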
vi /etc/hosts
192.168.1.30 master
192.168.1.31 node1
192.168.1.32 node2
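For start-all.sh on the master to also bring up the DataNode and NodeManager processes on the other machines, the worker hostnames are normally listed in $HADOOP_HOME/etc/hadoop/workers as well, one per line (this assumes node1 and node2 are the worker nodes):
node1
node2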
1. Generate the public/private key pair
On the master node, run:
ssh-keygen -t rsa
Then press Enter at each prompt.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
2. Copy the public key to the other nodes (copying from a node back to the master works the same way)
scp ~/.ssh/authorized_keys root@node1:~/.ssh/
scp ~/.ssh/authorized_keys root@node2:~/.ssh/
scp ~/.ssh/authorized_keys root@master:~/.ssh/
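Where ssh-copy-id is available, it does the same job and also sets the permissions on the remote side:
ssh-copy-id root@node1
ssh-copy-id root@node2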
Start the cluster from $HADOOP_HOME/sbin:
./start-all.sh
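Once the scripts return, jps (shipped with the JDK) shows which daemons actually came up; with this layout the master should list NameNode, SecondaryNameNode and ResourceManager, while node1/node2 should list DataNode and NodeManager:
jps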
# Add the required dependencies to the classpath
export CLASSPATH="$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.1.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.1.jar:$HADOOP_HOME/share/hadoop/common/hadoop-common-3.1.1.jar:~/MapReduceTutorial/SalesCountry/*:$HADOOP_HOME/lib/*"

# Compile
javac -d . SalesMapper.java SalesCountryReducer.java SalesCountryDriver.java

# Set the program entry point (add this line to Manifest.txt)
vi Manifest.txt
Main-Class: SalesCountry.SalesCountryDriver

# Package
$JAVA_HOME/bin/jar cfm ProductSalePerCountry.jar Manifest.txt SalesCountry/*.class

# Copy the input data to HDFS
$HADOOP_HOME/bin/hdfs dfs -copyFromLocal Sales2014.csv /

# Run the job
$HADOOP_HOME/bin/hadoop jar ProductSalePerCountry.jar /Sales2014.csv /mapreduce_output_sales
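After the job finishes, the result can be listed and read directly from HDFS (the output files follow the usual part-* naming):
$HADOOP_HOME/bin/hdfs dfs -ls /mapreduce_output_sales
$HADOOP_HOME/bin/hdfs dfs -cat /mapreduce_output_sales/part-*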
YARN ResourceManager UI: http://master:8088
NameNode UI: http://master:50070 (the address set by dfs.namenode.http-address above; the Hadoop 3 default would otherwise be port 9870)
If port 50070 cannot be reached, the NameNode has probably failed to start and HDFS needs to be reformatted:
$HADOOP_HOME/bin/hadoop namenode -format
Then restart the cluster and it should work.
If the DataNode will not start, the ClusterIDs must be kept consistent.
Copy the clusterID from the VERSION file under name/current into the VERSION file under data/current. The DataNode start command is:
$HADOOP_HOME/bin/hdfs --daemon start datanode
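A minimal sketch of that fix, assuming the storage directories sit in the default location under hadoop.tmp.dir (/tmp/hadoop-root when the daemons run as root; adjust the paths if dfs.namenode.name.dir / dfs.datanode.data.dir point elsewhere):
# read the clusterID recorded by the NameNode
grep clusterID /tmp/hadoop-root/dfs/name/current/VERSION
# paste that value over the clusterID line in the DataNode's VERSION file
vi /tmp/hadoop-root/dfs/data/current/VERSION
# restart the DataNode
$HADOOP_HOME/bin/hdfs --daemon start datanode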
Reposted from: http://xnbgi.baihongyu.com/