AllExam Dumps



READ Free Dumps For Cloudera CCD-410





Question ID 12493

Your cluster's HDFS block size is 64 MB. You have a directory containing 100 plain text files,
each of which is 100 MB in size. The InputFormat for your job is TextInputFormat.
How many mappers will run?

Option A

64

Option B

 100

Option C

200

Option D

640

Correct Answer C
Explanation: Each file is split in two because the block size (64 MB) is smaller than the file size (100 MB), so 200 mappers will run.

Note: If you are not compressing the files, Hadoop will process a large file (say 10 GB) with a number of mappers related to the block size. With a 64 MB block size, roughly 160 mappers process that 10 GB file (160 * 64 MB ≈ 10 GB). Depending on how CPU-intensive your mapper logic is, this may be an acceptable block size, but if your mappers finish in well under a minute you might want to increase the work done by each mapper (by raising the block size to 128, 256, or 512 MB - the right size depends on how you intend to process the data).

Reference: http://stackoverflow.com/questions/11014493/hadoop-mapreduce-appropriate-input-files-size (first answer, second paragraph)
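As a quick sanity check, here is a minimal sketch of the split arithmetic, assuming the default behaviour of TextInputFormat (split size equals the HDFS block size) and using the figures from the question; the variable names are illustrative only:

```python
import math

# Figures from the question (illustrative constants, not Hadoop API calls).
num_files = 100       # 100 plain text files
file_size_mb = 100    # each file is 100 MB
block_size_mb = 64    # HDFS block size, which is also the default split size

# A splittable file yields ceil(file_size / split_size) input splits,
# and each input split is handled by one mapper.
splits_per_file = math.ceil(file_size_mb / block_size_mb)  # ceil(100 / 64) = 2
total_mappers = num_files * splits_per_file                # 100 * 2 = 200

print(splits_per_file, total_mappers)  # -> 2 200
```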


Question ID 12494

You want to understand more about how users browse your public website, such as which
pages they visit prior to placing an order. You have a farm of 200 web servers hosting your
website. How will you gather this data for your analysis?

Option A

Ingest the server web logs into HDFS using Flume.

Option B

Write a MapReduce job, with the web servers for mappers, and the Hadoop cluster nodes for reducers.

Option C

Import all users’ clicks from your OLTP databases into Hadoop, using Sqoop.

Option D

Channel these clickstreams into Hadoop using Hadoop Streaming.

Option E

Sample the weblogs from the web servers, copying them into Hadoop using curl.

Correct Answer A
Explanation: Apache Flume is purpose-built for collecting, aggregating, and moving large volumes of log data from many sources, such as a farm of 200 web servers, into HDFS, which makes it the natural choice for ingesting the web server logs for this analysis.
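As an illustration only, a Flume agent along the following lines could watch a local directory of rolled web log files on each web server and deliver them to HDFS. The agent, source, channel, and sink names and all paths below are placeholders, not part of the question:

```
# Hypothetical Flume agent "a1" with one source, one channel, and one sink.
a1.sources  = r1
a1.channels = c1
a1.sinks    = k1

# Spooling-directory source: picks up completed web log files dropped into a local directory.
a1.sources.r1.type     = spooldir
a1.sources.r1.spoolDir = /var/log/httpd/spool
a1.sources.r1.channels = c1

# Memory channel buffers events between the source and the sink.
a1.channels.c1.type     = memory
a1.channels.c1.capacity = 10000

# HDFS sink writes the log events into a directory in HDFS for later analysis.
a1.sinks.k1.type          = hdfs
a1.sinks.k1.hdfs.path     = hdfs://namenode:8020/data/weblogs
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.channel       = c1
```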
