Hadoop FAQ

5. How can I help to make Hadoop better?
If you have trouble figuring out how to use Hadoop, then, once you've figured something out (perhaps with the help of the mailing lists), pass that knowledge on to others by adding something to this wiki. If you find something that you wish were done better, and know how to fix it, read HowToContribute and contribute a patch.

6. HDFS. If I add new data-nodes to the cluster, will HDFS move the blocks to the newly added nodes in order to balance disk space utilization between the nodes?
No, HDFS will not move blocks to new nodes automatically. However, newly created files will likely have their blocks placed on the new nodes.
There are several ways to rebalance the cluster manually.
1. Select a subset of files that take up a good percentage of your disk space; copy them to new locations in HDFS; remove the old copies of the files; rename the new copies to their original names.
2. A simpler way, with no interruption of service, is to turn up the replication of the files, wait for the transfers to stabilize, and then turn the replication back down.
3. Yet another way to rebalance blocks is to turn off the data-node that is full, wait until its blocks are replicated, and then bring it back again. The over-replicated blocks will be randomly removed from different nodes, so you really get them rebalanced, not just removed from the current node.
4. Finally, you can use the bin/start-balancer.sh command to run a balancing process to move blocks around the cluster automatically. See:
- HDFS User Guide: Rebalancer;
- HDFS Tutorial: Rebalancing;
- HDFS Commands Guide: balancer.

7. HDFS. What is the purpose of the secondary name-node?
The term secondary name-node is somewhat misleading. It is not a name-node in the sense that data-nodes cannot connect to
the secondary name-node, and it can in no event replace the primary name-node if the primary fails. The only purpose of the secondary name-node is to perform periodic checkpoints. The secondary name-node periodically downloads the current name-node image and edits log files, joins them into a new image, and uploads the new image back to the (primary and only) name-node. See the User Guide.
So if the name-node fails and you can restart it on the same physical node, then there is no need to shut down the data-nodes; just the name-node needs to be restarted. If you cannot use the old node anymore, you will need to copy the latest image somewhere else. The latest image can be found either on the node that used to be the primary before the failure, if it is available, or on the secondary name-node. The latter will be the latest checkpoint, without the subsequent edits logs; that is, the most recent name space modifications may be missing there. You will also need to restart the whole cluster in this case.

8. MR. What is the Distributed Cache used for?
The distributed cache is used to distribute large read-only files that are needed by map/reduce jobs to the cluster. The framework will copy the necessary files from a URL (either hdfs: or http:) onto the slave node before any tasks for the job are executed on that node. The files are only copied once per job, and so should not be modified by the application.

9. MR. Can I create/write-to HDFS files directly from my map/reduce tasks?
Yes. (Clearly, you want this, since you need
to create/write-to files other than the output file written out by OutputCollector.)
Caveats:
$mapred.output.dir is the eventual output directory for the job (JobConf.setOutputPath / JobConf.getOutputPath). $taskid is the actual id of the individual task-attempt (e.g. task_200709221812_0001_m_000000_0); a TIP is a bunch of $taskids (e.g. task_200709221812_0001_m_000000).
With speculative execution on, one could face issues with two instances of the same TIP (running simultaneously) trying to open and write to the same file (path) on HDFS. Hence the application writer will have to pick unique names (e.g. using the complete task id, i.e. task_200709221812_0001_m_000000_0) per task-attempt, not just per TIP. (Clearly, this needs to be done even if the user doesn't create/write to files directly via reduce tasks.)
To get around this, the framework helps the application writer by maintaining a special $mapred.output.dir/_$taskid sub-directory on HDFS for each task-attempt, where the output of the reduce task-attempt goes. On successful completion of the task-attempt, the files in $mapred.output.dir/_$taskid (of the successful task id only) are moved to $mapred.output.dir. Of course, the framework discards the sub-directories of unsuccessful task-attempts. This is completely transparent to the application.
The application writer can take advantage of this by creating any side files required in $mapred.output.dir during execution of the reduce task, and the framework will move them out similarly; thus you don't have
to pick unique paths per task-attempt.
Fine print: the value of $mapred.output.dir during execution of a particular task-attempt is actually $mapred.output.dir/_$taskid, not the value set by JobConf.setOutputPath. So just create any HDFS files you want in $mapred.output.dir from your reduce task to take advantage of this feature.
The entire discussion holds true for the maps of jobs with reducer=NONE (i.e. 0 reduces), since the output of the map, in that case, goes directly to HDFS.

10. MR. How do I get each of my maps to work on one complete input file and not allow the framework to split up my files?
Essentially, a job's input is represented by the InputFormat (interface) / FileInputFormat (base class). For this purpose one would need a non-splittable FileInputFormat, i.e. an input format which essentially tells the map/reduce framework that its input cannot be split up and processed in pieces. To do this you need your particular input format to return false from the isSplitable call. E.g. org.apache.hadoop.mapred.SortValidator.RecordStatsChecker.NonSplitableSequenceFileInputFormat in src/test/org/apache/hadoop/mapred/SortValidator.java.
In addition to implementing the InputFormat interface and having isSplitable(...) return false,
it is also necessary to implement the RecordReader interface to return the whole content of the input file (the default is LineRecordReader, which splits the file into separate lines).
The other, quick-fix option is to set mapred.min.split.size to a large enough value.

11. Why do I see broken images in the jobdetails.jsp page?
In hadoop-0.15, map/reduce task completion graphics were added. The graphs are produced as SVG (Scalable Vector Graphics) images, which are basically XML files, embedded in HTML content. The graphics were tested successfully in Firefox 2 on Ubuntu and Mac OS. For other browsers, however, one should install an additional browser plugin, such as Adobe's SVG Viewer, to see the SVG images.

12. HDFS. Does the name-node stay in safe mode till all under-replicated files are fully replicated?
No. During safe mode, replication of blocks is prohibited. The name-node waits until all or a majority of the data-nodes report their blocks. Depending on how the safe mode parameters are configured, the name-node will stay in safe mode until a specific percentage of the blocks of the system is minimally replicated (dfs.replication.min). If the safe mode threshold dfs.safemode.threshold.pct is set to 1, then all
blocks of all files should be minimally replicated.
Minimal replication does not mean full replication. Some replicas may be missing, and in order to replicate them the name-node needs to leave safe mode. Learn more about safe mode in the Hadoop documentation.

13. MR. I see a maximum of 2 maps/reduces spawned concurrently on each TaskTracker; how do I increase that?
Use the configuration knobs mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum to control the number of maps/reduces spawned simultaneously on a TaskTracker. By default it is set to 2, hence one sees a maximum of 2 maps and 2 reduces at
a given instance on a TaskTracker. You can set those on a per-TaskTracker basis to accurately reflect your hardware (i.e. set them to higher values on a beefier TaskTracker, etc.).

14. MR. Submitting map/reduce jobs as a different user doesn't work.
The problem is that you haven't configured your map/reduce system directory to a fixed value. The default works for single-node systems, but not for real clusters. I like to use:

<property>
  <name>mapred.system.dir</name>
  <value>/hadoop/mapred/system</value>
  <description>The shared directory where MapReduce stores control files.</description>
</property>

Note that this directory is in your default file system, must be accessible from both the client and server machines, and is typically in HDFS.

15. HDFS. How do I set up a Hadoop node to use multiple volumes?
Data-nodes can store blocks in multiple directories, typically allocated on different local disk drives. In order to set up multiple directories one needs to specify a comma-separated list of pathnames as the value of the configuration parameter dfs.data.dir. Data-nodes will attempt to place equal amounts of data in each of the directories.
The name-node also supports multiple directories, which in this case store the name space image and the edits log. The directories are specified via the dfs.name.dir configuration parameter. The name-node directories are used for name space data replication, so that the image and the log can be restored from the remaining volumes if one of them fails.

16. HDFS. What happens if one Hadoop client renames a file or a directory containing this file while another client is still writing into it?
Starting with release hadoop-0.15, a file will appear in the name space as soon as it is created. If a writer is writing to a file and another client renames either the file itself or any of its path components, then the original writer will get an IOException either when it finishes writing to the current block or when it closes the file.
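The naming caveat in question 9 (side-file names must be unique per task-attempt, not just per TIP) can be illustrated with a minimal sketch in plain Java. This is a stand-alone illustration with no Hadoop dependencies; the id format follows the examples in the text, while the helper names and the "-" separator are hypothetical:

```java
// Sketch of the per-task-attempt naming scheme from question 9.
// An attempt id such as task_200709221812_0001_m_000000_0 is the
// TIP id (task_200709221812_0001_m_000000) plus an attempt counter.
public class SideFileNames {
    // Derive the TIP id by stripping the trailing attempt counter.
    static String tipOf(String attemptId) {
        return attemptId.substring(0, attemptId.lastIndexOf('_'));
    }

    // Build a side-file name that is unique per attempt, not per TIP,
    // so two speculative attempts never collide on the same path.
    static String sideFile(String outputDir, String attemptId, String name) {
        return outputDir + "/" + name + "-" + attemptId;
    }

    public static void main(String[] args) {
        String attempt = "task_200709221812_0001_m_000000_0";
        System.out.println(tipOf(attempt));
        System.out.println(sideFile("/user/out", attempt, "side"));
    }
}
```

With speculative execution, both task_..._000000_0 and task_..._000000_1 share the TIP id task_..._000000, which is why keying file names on the full attempt id avoids the collision described above.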
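The safe-mode exit condition described in question 12 can be modeled as a short calculation. The sketch below is a deliberately simplified assumption about the name-node's actual logic; only the parameter names dfs.safemode.threshold.pct and dfs.replication.min come from the text:

```java
// Toy model of the safe-mode threshold check from question 12:
// the name-node may leave safe mode once the fraction of blocks
// that are minimally replicated (i.e. have at least
// dfs.replication.min replicas) reaches dfs.safemode.threshold.pct.
public class SafeModeCheck {
    static boolean canLeaveSafeMode(long minimallyReplicatedBlocks,
                                    long totalBlocks,
                                    double thresholdPct) {
        if (totalBlocks == 0) {
            return true; // nothing to wait for in an empty name space
        }
        double fraction = (double) minimallyReplicatedBlocks / totalBlocks;
        return fraction >= thresholdPct;
    }

    public static void main(String[] args) {
        // Threshold 1.0: every block must be minimally replicated.
        System.out.println(canLeaveSafeMode(999, 1000, 1.0));  // false
        System.out.println(canLeaveSafeMode(1000, 1000, 1.0)); // true
        // A lower threshold lets the name-node exit earlier.
        System.out.println(canLeaveSafeMode(950, 1000, 0.95)); // true
    }
}
```

Note that "minimally replicated" here matches the text's distinction: a block counts as soon as it has dfs.replication.min replicas, even if it has not yet reached its full replication factor.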