AllExam Dumps



READ Free Dumps For Cloudera- CCD-410





Question ID 12515

For each intermediate key, each reducer task can emit:

Option A

As many final key-value pairs as desired. There are no restrictions on the types of those key-value pairs (i.e., they can be heterogeneous).

Option B

As many final key-value pairs as desired, but they must have the same type as the intermediate key-value pairs.

Option C

As many final key-value pairs as desired, as long as all the keys have the same type and all the values have the same type.

Option D

One final key-value pair per value associated with the key; no restrictions on the type.

Option E

One final key-value pair per key; no restrictions on the type.

Correct Answer C
Explanation Reference: Hadoop Map-Reduce Tutorial; Yahoo! Hadoop Tutorial, Module 4: MapReduce
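To see why option C is the right constraint, consider a toy stand-in for the reduce phase (plain Python, not the real Hadoop API; the function name and data here are illustrative assumptions): a single reduce call may emit any number of final key-value pairs, provided all output keys share one type and all output values share another.

```python
# Toy simulation of a Hadoop reduce call (not the actual Hadoop API).
# One intermediate key may produce several output pairs, but the output
# keys are all one type (str) and the output values all another (int),
# which is exactly the restriction described in option C.

def reduce_word_stats(key, values):
    """For one intermediate key, emit multiple (str, int) output pairs."""
    yield (key, sum(values))               # total of the values
    yield (key + "_count", len(values))    # how many values arrived

# One intermediate key with three associated values:
outputs = list(reduce_word_stats("hadoop", [1, 1, 1]))
# outputs == [("hadoop", 3), ("hadoop_count", 3)]
```

Note that options D and E would forbid the second `yield`: the number of output pairs is not tied to the number of keys or values.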


Question ID 12516

Assuming default settings, which best describes the order of data provided to a reducer's
reduce method?

Option A

The keys given to a reducer aren't in a predictable order, but the values associated with those keys always are.

Option B

Both the keys and values passed to a reducer always appear in sorted order.

Option C

Neither keys nor values are in any predictable order.

Option D

The keys given to a reducer are in sorted order, but the values associated with each key are in no predictable order.

Correct Answer D
Explanation: The Reducer has three primary phases:

1. Shuffle: the Reducer copies the sorted output from each Mapper across the network using HTTP.

2. Sort: the framework merge-sorts the Reducer inputs by key (since different Mappers may have emitted the same key). The shuffle and sort phases occur simultaneously; outputs are merged while they are being fetched. Secondary sort: to achieve a secondary sort on the values returned by the value iterator, the application should extend the key with the secondary key and define a grouping comparator. The keys are sorted using the entire key but grouped using the grouping comparator, which decides which keys and values are sent in the same call to reduce.

3. Reduce: in this phase the reduce(Object, Iterable, Context) method is called for each <key, (collection of values)> pair in the sorted inputs. The output of the reduce task is typically written to a RecordWriter via TaskInputOutputContext.write(Object, Object). The output of the Reducer is not re-sorted.

Reference: org.apache.hadoop.mapreduce, Class Reducer
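The shuffle-and-sort behavior behind answer D can be sketched in plain Python (a simulation of the default Hadoop semantics, not the Hadoop API itself; the function name and sample pairs are invented for illustration): the framework sorts by key only, so keys reach the reducer in order while the values under each key keep whatever order they arrived in.

```python
from collections import defaultdict

def shuffle_and_sort(mapper_outputs):
    """Group intermediate (key, value) pairs by key and sort by key only.

    Mirrors the default Hadoop behavior: keys are presented to the
    reducer in sorted order, but the values within each key have no
    guaranteed order (here, simply arrival order).
    """
    groups = defaultdict(list)
    for key, value in mapper_outputs:
        groups[key].append(value)  # values kept in arrival order, unsorted
    return [(key, groups[key]) for key in sorted(groups)]

# Intermediate output as it might arrive, interleaved, from two mappers:
pairs = [("b", 2), ("a", 9), ("b", 1), ("a", 3)]
result = shuffle_and_sort(pairs)
# Keys are sorted ("a" before "b"); the values [9, 3] and [2, 1] are not.
```

Sorting the values as well would require the secondary-sort technique described above: fold the value (or part of it) into the key and supply a grouping comparator.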