Hadoop 3.0.3 Single-Node Installation on Mac

# Preface

This week I was assigned to set up a Hadoop + Hive query platform. I had watched Lin Ziyu's (Xiamen University) video course on this a while back, and I finally get to put it to use, which is pretty exciting. In the morning I got it installed on a server, but since I had no permission to open ports there, I installed it again on my own laptop. Everything else just follows the official documentation.


# Steps

1. Hadoop 3 requires Java 8 or newer; configure JAVA_HOME:
   export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home"
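
   If you are unsure of the exact JDK path, macOS ships a helper that prints it (1.8 here matches the JDK used above):
   /usr/libexec/java_home -v 1.8
   java -version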
   
2. Enable SSH on the Mac:
   System Preferences → Sharing → Remote Login (check it)
   Then generate a key and authorize it for passwordless login:
   ssh-keygen -t rsa -P ""
   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
   
   ssh localhost
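
   ssh localhost should now log in without a password. If it still prompts, the official single-node guide also tightens the key file permissions:
   chmod 0600 ~/.ssh/authorized_keys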
   
3. Set the Hadoop environment variables:
    export HADOOP_HOME="/Users/hubs/Hadoop/hadoop"
    export HADOOP_CONF_DIR="/Users/hubs/Hadoop/hadoop/etc/hadoop"   
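
    These exports only last for the current shell. Assuming bash is the login shell (the macOS default at the time), persist them in ~/.bash_profile and add the Hadoop binaries to PATH:
    echo 'export HADOOP_HOME="/Users/hubs/Hadoop/hadoop"' >> ~/.bash_profile
    echo 'export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"' >> ~/.bash_profile
    echo 'export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"' >> ~/.bash_profile
    source ~/.bash_profile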
    
4. vim $HADOOP_CONF_DIR/core-site.xml
   With no port in the URI, the NameNode listens on HDFS's default RPC port 8020:
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost/</value>
        </property>
    </configuration>
    
5. vim $HADOOP_CONF_DIR/hdfs-site.xml
   A single-node cluster has only one DataNode, so the replication factor must be 1:
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>
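
    To confirm Hadoop actually picks these files up, hdfs getconf queries the loaded configuration:
    $HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS
    $HADOOP_HOME/bin/hdfs getconf -confKey dfs.replication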

6. vim $HADOOP_CONF_DIR/mapred-site.xml
   Without mapreduce.application.classpath, MR jobs on Hadoop 3 typically fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster":
    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.application.classpath</name>
            <value>$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*</value>
        </property>
    </configuration>
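
    The wildcards in that value must resolve to real jars, and the $HADOOP_HOME reference only expands on the NodeManager side because it is whitelisted in step 7. A quick local check that the jars exist:
    ls $HADOOP_HOME/share/hadoop/mapreduce/*.jar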
    
7. vim $HADOOP_CONF_DIR/yarn-site.xml
   The env-whitelist lets container environments inherit HADOOP_HOME, which the classpath in step 6 relies on:
    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.env-whitelist</name>
            <value>HADOOP_HOME,JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
        </property>
    </configuration>
    
8. Format the NameNode (only needed once):
   cd $HADOOP_HOME && bin/hdfs namenode -format
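
   Since core-site.xml above leaves hadoop.tmp.dir at its default, the NameNode metadata lands under /tmp/hadoop-<user>, which /tmp cleanup can wipe on reboot. Assuming that default, a successful format can be verified with:
   ls /tmp/hadoop-$(whoami)/dfs/name/current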
  
9. Start HDFS and YARN:
   $HADOOP_HOME/sbin/start-dfs.sh
   $HADOOP_HOME/sbin/start-yarn.sh
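
   Both daemons expose web UIs; in Hadoop 3.x the NameNode UI moved from port 50070 to 9870, while the ResourceManager stays on 8088:
   open http://localhost:9870    # HDFS NameNode UI
   open http://localhost:8088    # YARN ResourceManager UI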
   
10. Check the running daemons with jps; NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager should all be up.
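
    Expected output looks roughly like this (PIDs are illustrative and will differ); if a daemon is missing, check the logs under $HADOOP_HOME/logs:
    52721 NameNode
    52803 DataNode
    52912 SecondaryNameNode
    53041 ResourceManager
    53135 NodeManager
    53290 Jps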

11. Run the bundled grep example. The bare input/output paths are relative to the HDFS home directory /user/hubs created here:
    $HADOOP_HOME/bin/hdfs dfs -mkdir /user
    $HADOOP_HOME/bin/hdfs dfs -mkdir /user/hubs
    $HADOOP_HOME/bin/hdfs dfs -mkdir input
    $HADOOP_HOME/bin/hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml input
    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
    $HADOOP_HOME/bin/hdfs dfs -cat output/*
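
    The official guide pulls the result to the local filesystem instead of catting it from HDFS; also note the job refuses to run while output already exists, so remove it between runs:
    $HADOOP_HOME/bin/hdfs dfs -get output output
    cat output/*
    $HADOOP_HOME/bin/hdfs dfs -rm -r output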

# References

https://medium.com/@jeremytarling/apache-spark-and-hadoop-on-a-macbook-air-running-osx-sierra-66bfbdb0b6f7

http://hadoop.apache.org/docs/r3.0.3/hadoop-project-dist/hadoop-common/SingleCluster.html