AllExam Dumps



Free Dumps For Cloudera CCD-410





Question ID 12531

Workflows expressed in Oozie can contain:

Option A

Sequences of MapReduce and Pig jobs. These sequences can be combined with other actions including forks, decision points, and path joins.

Option B

Sequences of MapReduce jobs only; no Pig or Hive tasks or jobs. These MapReduce sequences can be combined with forks and path joins.

Option C

Sequences of MapReduce and Pig jobs. These are limited to linear sequences of actions with exception handlers but no forks.

Option D

Iterative repetition of MapReduce jobs until a desired answer or state is reached.

Correct Answer A
Explanation: An Oozie workflow is a collection of actions (i.e. Hadoop Map/Reduce jobs, Pig jobs) arranged in a control-dependency DAG (Directed Acyclic Graph) that specifies the order in which the actions execute. This graph is written in hPDL (an XML Process Definition Language). hPDL is a fairly compact language, using a limited set of flow-control and action nodes. Control nodes define the flow of execution and include the beginning and end of a workflow (start, end and fail nodes) as well as mechanisms to control the workflow execution path (decision, fork and join nodes). Note: Oozie is a Java web application that runs in a Java servlet container (Tomcat) and uses a database to store workflow definitions and currently running workflow instances, including instance states and variables. Reference: Introduction to Oozie
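To make the control nodes concrete, below is a minimal, illustrative hPDL workflow definition that forks into a MapReduce action and a Pig action and joins the two paths before ending. The mapper/reducer class names, the Pig script name, and the ${jobTracker}/${nameNode} parameters are hypothetical placeholders, not anything implied by the question; this is a sketch of the structure, not a production workflow.

<workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.1">
    <start to="parallel-work"/>

    <!-- Fork: run the MapReduce and Pig actions concurrently. -->
    <fork name="parallel-work">
        <path start="mr-step"/>
        <path start="pig-step"/>
    </fork>

    <action name="mr-step">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.mapper.class</name>
                    <value>org.myorg.ExampleMapper</value>
                </property>
                <property>
                    <name>mapred.reducer.class</name>
                    <value>org.myorg.ExampleReducer</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="joining"/>
        <error to="fail"/>
    </action>

    <action name="pig-step">
        <pig>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>example.pig</script>
        </pig>
        <ok to="joining"/>
        <error to="fail"/>
    </action>

    <!-- Join: wait for both forked paths before continuing. -->
    <join name="joining" to="end"/>

    <kill name="fail">
        <message>Workflow failed</message>
    </kill>
    <end name="end"/>
</workflow-app>

Each action declares where control goes on success (ok) and on failure (error), which is how the DAG of dependencies described above is expressed.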


Question ID 12532

You want to count the number of occurrences of each unique word in the supplied input
data. You've decided to implement this by having your mapper tokenize each word and
emit a literal value 1, and then have your reducer increment a counter for each literal 1 it
receives. After successfully implementing this, it occurs to you that you could optimize this by
specifying a combiner. Will you be able to reuse your existing Reducer as your combiner in
this case, and why or why not?

Option A

Yes, because the sum operation is both associative and commutative and the input and output types to the reduce method match.

Option B

No, because the sum operation in the reducer is incompatible with the operation of a Combiner.

Option C

No, because the Reducer and Combiner are separate interfaces.

Option D

No, because the Combiner is incompatible with a mapper which doesn't use the same data type for both the key and value.

Option E

Yes, because Java is a polymorphic object-oriented language and thus reducer code can be reused as a combiner.

Correct Answer A
Explanation: Combiners are used to increase the efficiency of a MapReduce program. They aggregate intermediate map output locally, on each individual mapper's output, which can greatly reduce the amount of data that needs to be transferred across the network to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. Note that execution of the combiner is not guaranteed: Hadoop may or may not run it, and if required it may run it more than once, so your MapReduce jobs should never depend on the combiner's execution. Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, What are combiners? When should I use a combiner in my MapReduce Job?
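As a concrete sketch of answer A, here is the classic Hadoop word count written against the org.apache.hadoop.mapreduce API, with the same reducer class registered as the combiner via setCombinerClass. The class names (WordCount, TokenizerMapper, IntSumReducer) and the input/output paths are illustrative, not from the question.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: tokenize each line and emit (word, 1).
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reducer: sum the counts for each word. Because summing is both
    // commutative and associative, and the input key/value types match
    // the output key/value types, this same class can serve as the combiner.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // reducer reused as combiner
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Because the combiner may run zero, one, or several times on partial map output, the job's correctness here rests only on the sum operation, never on how often the combiner fires.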
