How HDFS and YARN Are Started in Hadoop 2.x
1. The three startup methods
Method 1: start the daemons one by one (the way it is done in real production environments); a typical sequence is sketched after this list.
- hadoop-daemon.sh start|stop namenode|datanode|journalnode
- yarn-daemon.sh start|stop resourcemanager|nodemanager
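A minimal sketch of method 1 on a single node that hosts both HDFS and YARN daemons; the exact set of daemons and the order in which you start them depend on your cluster layout:
- sbin/hadoop-daemon.sh start namenode
- sbin/hadoop-daemon.sh start datanode
- sbin/yarn-daemon.sh start resourcemanager
- sbin/yarn-daemon.sh start nodemanager
On a multi-node cluster the same commands are run on every node, each node starting only the daemons assigned to it.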
Method 2: start HDFS and YARN separately
- start-dfs.sh
- start-yarn.sh
Method 3: start everything at once
- start-all.sh
2. Reading the scripts
The start-dfs.sh script:
(1) Runs bin/hdfs getconf -namenodes to find out which nodes the NameNodes run on.
(2) Logs in to the remote hosts over ssh and launches the hadoop-daemons.sh script.
(3) hadoop-daemons.sh calls the slaves.sh script.
(4) slaves.sh calls hadoop-daemon.sh on each node, which then starts the daemons one by one.
A simplified sketch of this chain is shown below.
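The following is only an illustrative sketch of what steps (1)-(4) boil down to; the real scripts also handle the configuration directory, the SecondaryNameNode, JournalNodes and error cases. The installation path /opt/hadoop-2.2.0 and the etc/hadoop/slaves location are assumptions.

    # Sketch only: roughly what start-dfs.sh / hadoop-daemons.sh / slaves.sh do together
    HADOOP_HOME=/opt/hadoop-2.2.0          # assumed install path
    # (1) ask HDFS which hosts run a NameNode
    NAMENODES=$("$HADOOP_HOME"/bin/hdfs getconf -namenodes)
    # (2)-(4) ssh to each NameNode host and start the daemon with hadoop-daemon.sh
    for host in $NAMENODES; do
        ssh "$host" "cd $HADOOP_HOME && sbin/hadoop-daemon.sh start namenode"
    done
    # DataNodes are handled the same way; the host list comes from etc/hadoop/slaves
    for host in $(cat "$HADOOP_HOME"/etc/hadoop/slaves); do
        ssh "$host" "cd $HADOOP_HOME && sbin/hadoop-daemon.sh start datanode"
    done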
The start-all.sh script:
Note: start-all.sh simply calls the sbin/start-dfs.sh and sbin/start-yarn.sh scripts.
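In Hadoop 2.x start-all.sh is in fact marked as deprecated and does little more than delegate; the sketch below captures its essence rather than the verbatim script:

    # Sketch of what sbin/start-all.sh amounts to in Hadoop 2.x
    echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
    sbin/start-dfs.sh
    sbin/start-yarn.sh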
3. How the three startup methods relate
start-all.sh actually calls start-dfs.sh and start-yarn.sh.
start-dfs.sh calls hadoop-daemon.sh (through hadoop-daemons.sh and slaves.sh).
start-yarn.sh calls yarn-daemon.sh (through yarn-daemons.sh and slaves.sh).
The call chain can be pictured as follows:

    start-all.sh
    ├── start-dfs.sh  -> hadoop-daemons.sh -> slaves.sh -> hadoop-daemon.sh
    └── start-yarn.sh -> yarn-daemons.sh   -> slaves.sh -> yarn-daemon.sh
4. Why ssh has to be set up
When start-dfs.sh runs, it ends up calling the slaves.sh script, which logs in to the other nodes over ssh without a password and starts the daemons there.
For those remote daemons to be started automatically, passwordless (public-key) ssh login must be configured first; a minimal setup is sketched below.
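A minimal sketch of setting up public-key login from the node that runs the start scripts to every other node; the hadoop user and the hostnames hadoop-yarn, node1 and node2 are placeholders for your own cluster:
- ssh-keygen -t rsa                # accept the defaults at every prompt
- ssh-copy-id hadoop@hadoop-yarn   # the node must also be able to ssh to itself
- ssh-copy-id hadoop@node1         # repeat for every slave node
- ssh-copy-id hadoop@node2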
5. Starting the cluster with the second method
Public-key ssh login has already been configured above, so the cluster can now be started with the second method.
Step1: first stop all the daemons (if they are already running); an alternative using the aggregate stop scripts is shown after the list.
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/yarn-daemon.sh stop nodemanager
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/yarn-daemon.sh stop resourcemanager
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/hadoop-daemon.sh stop datanode
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/hadoop-daemon.sh stop secondarynamenode
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/hadoop-daemon.sh stop namenode
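Hadoop 2.x also ships matching aggregate scripts in sbin/, so the shutdown above can usually be done in two commands instead of five (same prompt assumed):
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/stop-yarn.sh
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/stop-dfs.sh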
Step2: start all the daemons
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/start-dfs.sh
- [hadoop@hadoop-yarn hadoop-2.2.0]$sbin/start-yarn.sh
Step3: check the web management interfaces
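Once the daemons are up, jps lists their Java process names, and the web UIs can be reached on the Hadoop 2.x default ports (assuming they have not been changed in the configuration):
- [hadoop@hadoop-yarn hadoop-2.2.0]$jps   # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
- HDFS NameNode UI:        http://hadoop-yarn:50070
- YARN ResourceManager UI: http://hadoop-yarn:8088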