# Deploying a Distributed Hadoop Cluster with Docker (Hands-On Guide)

**Who this is for:** big data beginners, ops engineers, and anyone who wants to spin up a local Hadoop cluster quickly.
**Environment:** CentOS 7 + Docker + Hadoop 2.7.2
**Architecture:** 1 master + 2 slaves, fully distributed

## I. Background

Deploying Hadoop the traditional way means preparing several virtual machines, manually configuring networking, hostnames, and passwordless SSH, and installing the JDK and Hadoop over and over, with nothing reusable afterwards. Docker changes that:

- ✅ Isolated environments
- ✅ Multiple nodes simulated on a single machine
- ✅ Fast to tear down and rebuild
- ✅ Reusable images
- ✅ Ideal for learning and experimentation

In real production settings, for example data platforms built on Apache Hadoop, containerization is likewise often combined with orchestration technology.

## II. Deployment

### 1) Build the base image

Check the kernel version:

```
[root@hadoop108 ~]# uname -r
3.10.0-862.el7.x86_64
```

Install, start, and enable Docker:

```
[root@hadoop108 ~]# yum install -y docker
[root@hadoop108 ~]# systemctl start docker
[root@hadoop108 ~]# systemctl enable docker
[root@hadoop108 ~]# systemctl status docker
```

Configure a registry mirror:

```
[root@hadoop108 ~]# vim /etc/docker/daemon.json
```

```json
{
  "registry-mirrors": ["https://3iy7bctt.mirror.aliyuncs.com"]
}
```

```
[root@hadoop108 ~]# systemctl daemon-reload
[root@hadoop108 ~]# systemctl restart docker
[root@hadoop108 ~]# docker info
```

Search for and pull the CentOS image:

```
[root@hadoop108 ~]# docker search centos
[root@hadoop108 ~]# docker pull centos:7
[root@hadoop108 ~]# docker images
```

Run a CentOS container in privileged mode (required for systemd services to start):

```
[root@hadoop108 ~]# docker run --privileged=true --name centos7 -h hadoop -itd centos:7 /usr/sbin/init
[root@hadoop108 ~]# docker ps
[root@hadoop108 ~]# docker exec -it centos7 /bin/bash
```

Inside the container, install the necessary tools:

```
[root@hadoop ~]# yum install -y vim net-tools openssh-server openssh-clients rsync
```

Configure and start the SSH service:

```
[root@hadoop ~]# vim /etc/ssh/sshd_config
Port 22
PermitRootLogin yes
[root@hadoop ~]# systemctl start sshd.service
[root@hadoop ~]# systemctl enable sshd.service
[root@hadoop ~]# systemctl status sshd.service
```

Create the software directories:

```
[root@hadoop ~]# mkdir -p /opt/module /opt/software
```

Exit the container and commit it as a new image:

```
[root@hadoop ~]# exit
[root@hadoop108 ~]# docker commit <container ID> centos:hadoop
[root@hadoop108 ~]# docker images
```

### 2) Set up Hadoop

Start the master and slave containers from the new image:

```
[root@hadoop108 ~]# docker run --privileged=true --name master -h master -p 50070:50070 -itd centos:hadoop /usr/sbin/init
[root@hadoop108 ~]# docker run --privileged=true --name slave01 -h slave01 -p 8088:8088 -itd centos:hadoop /usr/sbin/init
[root@hadoop108 ~]# docker run --privileged=true --name slave02 -h slave02 -itd centos:hadoop /usr/sbin/init
```

Edit the hosts file (run in every container):

```
[root@master ~]# vim /etc/hosts
172.17.0.3 master
172.17.0.4 slave01
172.17.0.5 slave02
```

Set the root password on each node:

```
[root@master ~]# passwd root
[root@slave01 ~]# passwd root
[root@slave02 ~]# passwd root
```

Configure passwordless SSH:

```
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id master
[root@master ~]# ssh-copy-id slave01
[root@master ~]# ssh-copy-id slave02
```

Copy the Hadoop and JDK tarballs from the host into the master container:

```
[root@hadoop108 ~]# docker cp jdk-8u144-linux-x64.tar.gz master:/opt/software
[root@hadoop108 ~]# docker cp hadoop-2.7.2.tar.gz master:/opt/software
```

Install the JDK:

```
[root@master ~]# tar -xzvf /opt/software/jdk-8u144-linux-x64.tar.gz -C /opt/module/
[root@master ~]# vim /etc/profile
# JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_144
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@master ~]# source /etc/profile
[root@master ~]# java -version
```

Install Hadoop:

```
[root@master ~]# tar -xzvf /opt/software/hadoop-2.7.2.tar.gz -C /opt/module/
[root@master ~]# vim /etc/profile
# HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@master ~]# source /etc/profile
[root@master ~]# hadoop version
```

Edit the Hadoop configuration files:

```
[root@master ~]# cd /opt/module/hadoop-2.7.2/etc/hadoop
```

`core-site.xml`:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-2.7.2/data/tmp</value>
  </property>
</configuration>
```

`hadoop-env.sh`:

```
export JAVA_HOME=/opt/module/jdk1.8.0_144
```

`hdfs-site.xml`:

```xml
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>slave02:50090</value>
</property>
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
```

`slaves` (all three nodes run a DataNode, including the master):

```
master
slave01
slave02
```

`yarn-env.sh`:

```
export JAVA_HOME=/opt/module/jdk1.8.0_144
```

`yarn-site.xml`:

```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>slave01</value>
</property>
```

`mapred-site.xml`:

```
[root@master hadoop]# mv mapred-site.xml.template mapred-site.xml
```

```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```

`mapred-env.sh`:

```
export JAVA_HOME=/opt/module/jdk1.8.0_144
```

Distribute the JDK and Hadoop to the slave nodes:

```
[root@master ~]# scp -r /opt/module/jdk1.8.0_144/ root@slave01:/opt/module/
[root@master ~]# scp -r /opt/module/jdk1.8.0_144/ root@slave02:/opt/module/
[root@master ~]# scp -r /opt/module/hadoop-2.7.2/ root@slave01:/opt/module/
[root@master ~]# scp -r /opt/module/hadoop-2.7.2/ root@slave02:/opt/module/
```

Set the environment variables on the slave nodes (run on both slave01 and slave02):

```
[root@slave01 ~]# vim /etc/profile
# JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_144
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
# HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@slave01 ~]# source /etc/profile
```

### 3) Start Hadoop

Format HDFS and start it on the master node:

```
[root@master ~]# hdfs namenode -format
[root@master ~]# start-dfs.sh
```

Start YARN on slave01 (the node where the ResourceManager runs):

```
[root@slave01 ~]# start-yarn.sh
```

Open the web UIs in a browser:

- HDFS: `http://<host IP>:50070`
- YARN: `http://<host IP>:8088`

### 4) Save the images

Stop the Hadoop cluster:

```
[root@slave01 ~]# stop-yarn.sh
[root@master ~]# stop-dfs.sh
```

Commit the containers as reusable images:

```
[root@hadoop108 ~]# docker commit master centos:master
[root@hadoop108 ~]# docker commit slave01 centos:slave01
[root@hadoop108 ~]# docker commit slave02 centos:slave02
```

## Summary

This article walked through the complete process: building a Hadoop base image, constructing a three-node cluster, configuring passwordless SSH, configuring HDFS and YARN, starting the cluster and verifying the web UIs, and saving the result as reusable images. The core idea: use Docker to simulate a distributed environment, and use containers to replicate a real big data architecture. For anyone studying big data or working on a graduation project, this approach dramatically lowers the cost of experimentation.
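As an appendix, the three `docker run` commands for the cluster containers can be wrapped in a single launcher script. This is only a sketch: the `run` wrapper and the `DRY_RUN` toggle are illustrative additions, not part of the walkthrough above; the image tag, container names, and port mappings match the ones used there.

```shell
#!/bin/sh
# Sketch: launch the three cluster containers from one script.
# DRY_RUN=1 (the default here, an illustrative addition) only prints the
# commands instead of executing them, so the script can be reviewed safely.
DRY_RUN=${DRY_RUN:-1}
IMAGE=centos:hadoop

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "docker run $*"
  else
    docker run "$@"
  fi
}

# Same flags as in the walkthrough: privileged mode so systemd (and thus
# sshd) can run, port mappings for the NameNode (50070) and
# ResourceManager (8088) web UIs.
run --privileged=true --name master  -h master  -p 50070:50070 -itd "$IMAGE" /usr/sbin/init
run --privileged=true --name slave01 -h slave01 -p 8088:8088   -itd "$IMAGE" /usr/sbin/init
run --privileged=true --name slave02 -h slave02                -itd "$IMAGE" /usr/sbin/init
```

Running it with `DRY_RUN=0` would actually create the containers; leaving the default prints the commands for inspection first.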
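The `/etc/hosts` entries and the Hadoop `slaves` file above must agree on the same set of hostnames. A small sketch that derives both from a single node list keeps them from drifting apart; the `NODES` variable and the two helper functions are hypothetical names introduced here for illustration, and the IPs are the default Docker bridge addresses from the walkthrough (verify yours with `docker inspect` before writing them anywhere).

```shell
#!/bin/sh
# Sketch: derive the /etc/hosts fragment and the Hadoop slaves file from one
# node list, so the two can never disagree.
NODES="172.17.0.3 master
172.17.0.4 slave01
172.17.0.5 slave02"

# /etc/hosts fragment: "IP hostname" lines, exactly as added in each container.
hosts_fragment() { printf '%s\n' "$NODES"; }

# slaves file: hostnames only (the walkthrough lists all three nodes, so the
# master also runs a DataNode).
slaves_file() { printf '%s\n' "$NODES" | awk '{print $2}'; }

hosts_fragment
echo "---"
slaves_file
```

The output of `hosts_fragment` is appended to `/etc/hosts` in each container, and the output of `slaves_file` replaces `etc/hadoop/slaves` on the master before distribution.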