Hadoop FAQ

5. How can I help to make Hadoop better?

If you have trouble figuring out how to use Hadoop, then, once you've figured something out (perhaps with the help of the mailing lists), pass that knowledge on to others by adding something to this wiki. If you find something that you wish were done better, and know how to fix it, read HowToContribute and contribute a patch.

6. HDFS: If I add new data-nodes to the cluster, will HDFS move the blocks to the newly added nodes in order to balance disk space utilization between the nodes?

No, HDFS will not move blocks to new nodes automatically. However, newly created files will likely have their blocks placed on the new nodes. There are several ways to rebalance the cluster manually:

1. Select a subset of files that take up a good percentage of your disk space; copy them to new locations in HDFS; remove the old copies of the files; rename the new copies to their original names.
2. A simpler way, with no interruption of service, is to turn up the replication of the files, wait for the transfers to stabilize, and then turn the replication back down.
3. Yet another way to rebalance blocks is to turn off the data-node that is full, wait until its blocks are replicated, and then bring it back again. The over-replicated blocks will be removed randomly from different nodes, so you really get them rebalanced, not just removed from the current node.
4. Finally, you can use the bin/start-balancer.sh command to run a balancing process that moves blocks around the cluster automatically. See:
   - HDFS User Guide: Rebalancer;
   - HDFS Tutorial: Rebalancing;
   - HDFS Commands Guide: balancer.

7. HDFS: What is the purpose of the secondary name-node?

The term "secondary name-node" is somewhat misleading. It is not a name-node in the sense that data-nodes cannot connect to the secondary name-node, and in no event can it replace the primary name-node in case of its failure. The only purpose of the secondary name-node is to perform periodic checkpoints. It periodically downloads the current name-node image and edits log files, joins them into a new image, and uploads the new image back to the (primary and only) name-node. See the User Guide.

So if the name-node fails and you can restart it on the same physical node, there is no need to shut down the data-nodes; only the name-node needs to be restarted. If you cannot use the old node anymore, you will need to copy the latest image somewhere else. The latest image can be found either on the node that used to be the primary before the failure, if it is still available, or on the secondary name-node. The latter will be the latest checkpoint without the subsequent edits logs, that is, the most recent name space modifications may be missing there. You will also need to restart the whole cluster in this case.

8. MR: What is the Distributed Cache used for?

The distributed cache is used to distribute large read-only files that are needed by map/reduce jobs to the cluster. The framework will copy the necessary files from a URL (either hdfs: or http:) onto the slave node before any tasks for the job are executed on that node. The files are copied only once per job and so should not be modified by the application.
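For illustration, here is a minimal sketch of using the classic org.apache.hadoop.filecache.DistributedCache API with the old JobConf-based job setup; the HDFS path /lookup/terms.dat and the class name are hypothetical:

    import java.io.IOException;
    import java.net.URI;
    import java.net.URISyntaxException;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheExample {

      // Job setup: register a read-only HDFS file so the framework copies it
      // to each slave node before any task of the job runs there.
      public static void registerCacheFile(JobConf conf) throws URISyntaxException {
        DistributedCache.addCacheFile(new URI("/lookup/terms.dat"), conf);
      }

      // Inside a task (e.g. in Mapper.configure): locate the local copies.
      // Treat them as read-only, since they are copied only once per job.
      public static Path[] localCopies(JobConf conf) throws IOException {
        return DistributedCache.getLocalCacheFiles(conf);
      }
    }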

9. MR: Can I create/write-to HDFS files directly from my map/reduce tasks?

Yes. (Clearly, you want this since you need to create/write-to files other than the output file written out by OutputCollector.)

Caveats: $mapred.output.dir is the eventual output directory for the job (JobConf.setOutputPath / JobConf.getOutputPath). $taskid is the actual id of the individual task-attempt (e.g. task_200709221812_0001_m_000000_0); a TIP is a bunch of $taskids (e.g. task_200709221812_0001_m_000000).

With speculative execution on, one could face issues with two instances of the same TIP (running simultaneously) trying to open or write to the same file (path) on HDFS. Hence the application-writer would have to pick unique names (e.g. using the complete taskid, i.e. task_200709221812_0001_m_000000_0) per task-attempt, not just per TIP. (Clearly, this would need to be done even if the user doesn't create/write-to files directly via reduce tasks.)

To get around this, the framework helps the application-writer by maintaining a special $mapred.output.dir/_$taskid sub-directory on HDFS for each task-attempt, where the output of the task-attempt goes. On successful completion of the task-attempt, the files in $mapred.output.dir/_$taskid (of the successful taskid only) are moved to $mapred.output.dir. Of course, the framework discards the sub-directory of unsuccessful task-attempts. This is completely transparent to the application.

The application-writer can take advantage of this by creating any side-files required in $mapred.output.dir during execution of the task, and the framework will move them out similarly; thus you don't have to pick unique paths per task-attempt.

Fine print: the value of $mapred.output.dir during execution of a particular task-attempt is actually $mapred.output.dir/_$taskid, not the value set by JobConf.setOutputPath. So just create any HDFS files you want in $mapred.output.dir from your reduce task to take advantage of this feature (see the sketch below).

The entire discussion holds true for maps of jobs with reducer=NONE (i.e. 0 reduces), since the output of the map, in that case, goes directly to HDFS.
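As an illustration, here is a minimal reducer sketch against the old org.apache.hadoop.mapred API that writes a side-file into $mapred.output.dir and relies on the per-attempt sub-directory described above; the class name and the file name side-file.txt are hypothetical:

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class SideFileReducer extends MapReduceBase
        implements Reducer<Text, Text, Text, Text> {

      private JobConf conf;
      private FSDataOutputStream side;   // lazily created side-file

      public void configure(JobConf conf) {
        this.conf = conf;
      }

      public void reduce(Text key, Iterator<Text> values,
                         OutputCollector<Text, Text> output, Reporter reporter)
          throws IOException {
        if (side == null) {
          // Inside a task-attempt, mapred.output.dir already points at the
          // per-attempt sub-directory ($mapred.output.dir/_$taskid), so a fixed
          // file name is safe even with speculative execution.
          Path outDir = new Path(conf.get("mapred.output.dir"));
          FileSystem fs = outDir.getFileSystem(conf);
          side = fs.create(new Path(outDir, "side-file.txt"));
        }
        side.writeBytes("saw key " + key + "\n");      // side output
        while (values.hasNext()) {
          output.collect(key, values.next());          // normal output
        }
      }

      public void close() throws IOException {
        if (side != null) {
          side.close();
        }
      }
    }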

10. MR: How do I get each of my maps to work on one complete input file and not allow the framework to split up my files?

Essentially, a job's input is represented by the InputFormat (interface) / FileInputFormat (base class). For this purpose one would need a non-splittable FileInputFormat, i.e. an input format which essentially tells the map-reduce framework that its input cannot be split up and processed in pieces. To do this you need your particular input format to return false from the isSplitable call. For an example, see org.apache.hadoop.mapred.SortValidator.RecordStatsChecker.NonSplitableSequenceFileInputFormat in src/test/org/apache/hadoop/mapred/SortValidator.java.

In addition to implementing the InputFormat interface and having isSplitable(...) return false, it is also necessary to implement the RecordReader interface so that it returns the whole content of the input file. (The default is LineRecordReader, which splits the file into separate lines.) A sketch is given below.

The other, quick-fix option is to set mapred.min.split.size to a large enough value.
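For illustration, a minimal non-splittable input format against the old org.apache.hadoop.mapred API; the class name is hypothetical, and a complete solution would also pair it with a RecordReader that returns the whole file as a single record:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.TextInputFormat;

    // Splitting is disabled, so each map task receives one whole input file
    // (still delivered line by line, since LineRecordReader is inherited).
    public class NonSplittableTextInputFormat extends TextInputFormat {
      @Override
      protected boolean isSplitable(FileSystem fs, Path file) {
        return false;
      }
    }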

11. Why do I see broken images on the jobdetails.jsp page?

In hadoop-0.15, Map/Reduce task completion graphics were added. The graphs are produced as SVG (Scalable Vector Graphics) images, which are basically XML files embedded in the HTML content. The graphics have been tested successfully in Firefox 2 on Ubuntu and Mac OS. For other browsers, however, one should install an additional browser plugin, such as Adobe's SVG Viewer, to see the SVG images.

12. HDFS: Does the name-node stay in safe mode till all under-replicated files are fully replicated?

No. During safe mode, replication of blocks is prohibited. The name-node waits until all or a majority of the data-nodes report their blocks. Depending on how the safe mode parameters are configured, the name-node will stay in safe mode until a specific percentage of the blocks in the system are minimally replicated (dfs.replication.min). If the safe mode threshold dfs.safemode.threshold.pct is set to 1, then all blocks of all files should be minimally replicated. Minimal replication does not mean full replication: some replicas may be missing, and in order to replicate them the name-node needs to leave safe mode. Learn more about safe mode here.
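For illustration, a configuration snippet in the same property style as the example further below, with the threshold mentioned above set to 1; the value is only an example, not a recommendation:

    <property>
      <name>dfs.safemode.threshold.pct</name>
      <value>1</value>
    </property>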

13. MR: I see a maximum of 2 maps/reduces spawned concurrently on each TaskTracker; how do I increase that?

Use the configuration knobs mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum to control the number of maps/reduces spawned simultaneously on a TaskTracker. By default both are set to 2, hence one sees a maximum of 2 maps and 2 reduces at a given instant on a TaskTracker. You can set these on a per-TaskTracker basis to accurately reflect your hardware (i.e. set them to higher values on a beefier TaskTracker, etc.); an example is shown below.
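For illustration, a configuration snippet for a TaskTracker node; the value 4 is hypothetical and should be chosen to match the node's CPUs and memory:

    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>4</value>
    </property>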

14. MR: Submitting map/reduce jobs as a different user doesn't work.

The problem is that you haven't configured your map/reduce system directory to a fixed value. The default works for single-node systems, but not for real clusters. I like to use:

    <property>
      <name>mapred.system.dir</name>
      <value>/hadoop/mapred/system</value>
      <description>The shared directory where MapReduce stores control files.</description>
    </property>

Note that this directory is in your default file system and must be accessible from both the client and server machines; it is typically in HDFS.

15. HDFS: How do I set up a Hadoop node to use multiple volumes?

Data-nodes can store blocks in multiple directories, typically allocated on different local disk drives. To set up multiple directories one needs to specify a comma-separated list of pathnames as the value of the configuration parameter dfs.data.dir. Data-nodes will attempt to place an equal amount of data in each of the directories.

The name-node also supports multiple directories, which in its case store the name space image and the edits log. The directories are specified via the dfs.name.dir configuration parameter. The name-node directories are used for name space data replication, so that the image and the log can be restored from the remaining volumes if one of them fails. An example configuration is given below.
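For illustration, a configuration snippet spreading data-node blocks and name-node metadata over multiple local disks; the mount points are hypothetical:

    <property>
      <name>dfs.data.dir</name>
      <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
    </property>
    <property>
      <name>dfs.name.dir</name>
      <value>/disk1/hdfs/name,/disk2/hdfs/name</value>
    </property>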

16. HDFS: What happens if one Hadoop client renames a file or a directory containing this file while another client is still writing into it?

Starting with release hadoop-0.15, a file will appear in the name space as soon as it is created. If a writer is writing to a file and another client renames either the file itself or any of its path components, then the original writer will get an IOException either when it finishes writing to the current block or when it closes the file.
