SapHanaTutorial.Com
Install Your Own Hadoop on Windows and Run MapReduce Programs
Course Overview
1. Introduction
2. Installation of Hadoop 1.0
3. MapReduce Programs in Hadoop 1.0
4. Installation of Hadoop 2.0
5. MapReduce Programs in Hadoop 2.0
6. What is Next?
4.5. Configure Hadoop

    1. Extract hadoop-2.2.0.tar.gz to a newly created folder "C:\Hadoop". You will see bin, sbin, and other folders inside it.

      [Screenshot: Hadoop Installation on Windows]

    2. Add an environment variable HADOOP_HOME, and edit the Path variable to append the bin directory of HADOOP_HOME (C:\Hadoop\bin).

      [Screenshots: Hadoop Installation on Windows]
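A quick way to sanity-check this step is to verify that HADOOP_HOME is set and that its bin directory appears on the PATH. A minimal Python sketch (the helper name is ours; on Windows the bin directory would be C:\Hadoop\bin):

```python
import os

def hadoop_env_ok(env):
    """Return True if HADOOP_HOME is set and its bin directory is on PATH."""
    home = env.get("HADOOP_HOME")
    if not home:
        return False
    bin_dir = os.path.join(home, "bin")
    return bin_dir in env.get("PATH", "").split(os.pathsep)

# Check the real environment:
# print(hadoop_env_ok(os.environ))
```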

    3. Open C:\Hadoop\etc\hadoop\core-site.xml and add the code below:
      <configuration>
        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://localhost:9000</value>
        </property>
      </configuration>


      fs.defaultFS:
      The name of the default file system, given as a URI. The URI's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class, and the URI's authority determines the host, port, etc. for the filesystem.
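To see how the scheme and authority are pulled out of that URI, you can parse the value above with Python's standard urllib.parse (used here purely for illustration; Hadoop does its own parsing internally):

```python
from urllib.parse import urlparse

# The value configured for fs.defaultFS above
uri = urlparse("hdfs://localhost:9000")

# scheme selects the FileSystem implementation (fs.hdfs.impl);
# the authority supplies the NameNode host and port
print(uri.scheme, uri.hostname, uri.port)  # → hdfs localhost 9000
```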
    4. Open C:\Hadoop\etc\hadoop\hdfs-site.xml and add the code below:
      <configuration>
        <property>
          <name>dfs.replication</name>
          <value>1</value>
        </property>
        <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:/hadoop/data/dfs/namenode</value>
        </property>
        <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:/hadoop/data/dfs/datanode</value>
        </property>
      </configuration>

        • dfs.replication:
          Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.
        • dfs.namenode.name.dir:
          Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.
        • dfs.datanode.data.dir:
          Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
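The "comma-delimited list" behavior described above is easy to demonstrate. A small Python sketch (the helper name and the second, backup directory are hypothetical, added only to show splitting):

```python
def parse_storage_dirs(value):
    """Split a comma-delimited dfs.*.dir value into individual directories."""
    return [d.strip() for d in value.split(",") if d.strip()]

# With one entry, data lives in a single directory; with several,
# the NameNode replicates its name table into each of them.
dirs = parse_storage_dirs("file:/hadoop/data/dfs/namenode, file:/backup/dfs/namenode")
print(dirs)
```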
    5. Open C:\Hadoop\etc\hadoop\yarn-site.xml and add the code below:
      <configuration>
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
        </property>
        <property>
          <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
          <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
          <name>yarn.application.classpath</name>
          <value>
            %HADOOP_HOME%\etc\hadoop,
            %HADOOP_HOME%\share\hadoop\common\*,
            %HADOOP_HOME%\share\hadoop\common\lib\*,
            %HADOOP_HOME%\share\hadoop\mapreduce\*,
            %HADOOP_HOME%\share\hadoop\mapreduce\lib\*,
            %HADOOP_HOME%\share\hadoop\hdfs\*,
            %HADOOP_HOME%\share\hadoop\hdfs\lib\*,
            %HADOOP_HOME%\share\hadoop\yarn\*,
            %HADOOP_HOME%\share\hadoop\yarn\lib\*
          </value>
        </property>
      </configuration>

        • yarn.nodemanager.aux-services:
          The auxiliary service name. Here it is set to mapreduce_shuffle, which enables the shuffle service that MapReduce jobs require.
        • yarn.nodemanager.aux-services.mapreduce.shuffle.class:
          The auxiliary service class to use. Default value is org.apache.hadoop.mapred.ShuffleHandler.
        • yarn.application.classpath:
          CLASSPATH for YARN applications. A comma-separated list of CLASSPATH entries.
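Each entry in that classpath contains a %HADOOP_HOME% reference that Windows expands from the environment variable set in step 2. A Python sketch of that expansion (the helper name is ours; YARN performs this substitution itself at runtime):

```python
import re

def expand_classpath(value, env):
    """Expand %VAR% references and split a comma-separated classpath value."""
    def expand(entry):
        return re.sub(r"%([^%]+)%",
                      lambda m: env.get(m.group(1), m.group(0)),
                      entry)
    return [expand(e.strip()) for e in value.split(",") if e.strip()]

# Two entries from the yarn.application.classpath value above
entries = expand_classpath(
    r"%HADOOP_HOME%\etc\hadoop, %HADOOP_HOME%\share\hadoop\common\*",
    {"HADOOP_HOME": r"C:\Hadoop"},
)
print(entries)
```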
    6. Open C:\Hadoop\etc\hadoop\mapred-site.xml and add the code below:
      <configuration>
        <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
        </property>
      </configuration>



      mapreduce.framework.name:
      The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.
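All four files above follow the same <configuration>/<property> layout, so a property can be read back the same way from any of them. A minimal sketch using Python's standard xml.etree (the helper name is ours, for illustration only), shown against the mapred-site.xml content from step 6:

```python
import xml.etree.ElementTree as ET

# The mapred-site.xml content added in step 6
MAPRED_SITE = """<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>"""

def get_property(xml_text, prop_name):
    """Return the <value> of the named <property> in a Hadoop *-site.xml."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == prop_name:
            return prop.findtext("value")
    return None

framework = get_property(MAPRED_SITE, "mapreduce.framework.name")
assert framework in ("local", "classic", "yarn")
print(framework)  # → yarn
```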

Congratulations! You have successfully installed Hadoop 2.0.
Now let's start the NameNode, DataNode, and YARN, and write MapReduce examples in the next chapter.



 © 2017 : saphanatutorial.com, All rights reserved.  Privacy Policy