Cloudera Developer Training for Apache Hadoop will be held from Thursday, 28th April 2016 to Sunday, 1st May 2016 at Xebia IT Architects India Private Limited.
Cloudera’s 4-day training programme gives Hadoop developers the expertise to harness the full power of the open-source technology and bring their organizations' data to life.
Programme and Course Overview
Learn how to harness the power of Apache Hadoop by building robust data processing applications that unlock the full potential of your (big) data. Xebia (based in Hilversum, in the Amsterdam area) is an official training partner of Cloudera, the leader in Apache Hadoop-based software and services. Xebia University delivers a developer-focused, Cloudera Certified training course that closely analyses Hadoop’s structure and provides hands-on exercises that teach you how to import data from existing sources; process data with a variety of techniques, such as Java MapReduce programs and Hadoop Streaming jobs; and work with Apache Hive and Pig.
Certification
Upon completion of the course, attendees receive a Cloudera Certified Developer for Apache Hadoop (CCDH) practice test. Certification is a great differentiator; it helps establish you as a leader in the field, providing employers and customers with tangible evidence of your skills and expertise.
Target Group & Prerequisites:
This course is appropriate for developers who will be writing, maintaining and/or optimizing Hadoop jobs. Participants should have programming experience; knowledge of Java is highly recommended. Understanding of common computer science concepts is a plus. Prior knowledge of Hadoop is not required.
Course Content:
Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the Hadoop ecosystem, learning topics such as:
- The internals of MapReduce and HDFS and how to write MapReduce code (a minimal word-count sketch follows this list)
- Best practices for Hadoop development, debugging, and implementation of workflows and common algorithms
- How to leverage Hive, Pig, Sqoop, Flume, Oozie, and other Hadoop ecosystem projects
- Creating custom components such as WritableComparables and InputFormats to manage complex data types
- Writing and executing joins to link data sets in MapReduce
- Advanced Hadoop API topics required for real-world data analysis
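To give a flavour of the first topic above, here is a minimal sketch of the classic word-count job written against the Hadoop 2.x Java API, the kind of program the course teaches you to build. This is essentially the standard Hadoop example rather than Cloudera's course material; the class and job names are illustrative only.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    // Reusing the reducer as a combiner cuts the data shuffled between
    // map and reduce tasks -- one of the optimization topics covered.
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```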
Key Promises of This Training
By the end of the course, participants will understand:
- The core technologies of Hadoop.
- How to implement data input and output in MapReduce applications.
- How HDFS and MapReduce work.
- Algorithms for common MapReduce tasks.
- How to develop MapReduce applications.
- How to join data sets in MapReduce.
- How to unit test MapReduce applications (a test sketch follows this list).
- How Hadoop integrates into the data centre.
- How to use MapReduce combiners, partitioners and the distributed cache.
- How Hive, Impala and Pig can be used for rapid application development.
- Best practices for developing and debugging MapReduce applications.
- How to create large workflows using Oozie.
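As an illustration of the unit-testing promise above, the sketch below exercises the word-count mapper and reducer from the earlier example using Apache MRUnit, a test harness commonly used for MapReduce code at the time. MRUnit is our assumption here; the course outline does not name a specific test framework.

```java
import java.util.Arrays;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Before;
import org.junit.Test;

public class WordCountTest {

  private MapDriver<Object, Text, Text, IntWritable> mapDriver;
  private ReduceDriver<Text, IntWritable, Text, IntWritable> reduceDriver;

  @Before
  public void setUp() {
    mapDriver = MapDriver.newMapDriver(new WordCount.TokenizerMapper());
    reduceDriver = ReduceDriver.newReduceDriver(new WordCount.IntSumReducer());
  }

  // The mapper should emit (token, 1) once per token, in input order.
  @Test
  public void mapperEmitsOnePerToken() throws Exception {
    mapDriver.withInput(new LongWritable(0), new Text("big data big"))
        .withOutput(new Text("big"), new IntWritable(1))
        .withOutput(new Text("data"), new IntWritable(1))
        .withOutput(new Text("big"), new IntWritable(1))
        .runTest();
  }

  // The reducer should sum all counts for a key.
  @Test
  public void reducerSumsCounts() throws Exception {
    reduceDriver.withInput(new Text("big"),
            Arrays.asList(new IntWritable(1), new IntWritable(1)))
        .withOutput(new Text("big"), new IntWritable(2))
        .runTest();
  }
}
```

Because the MRUnit drivers run the mapper and reducer in-process, tests like these need no running cluster and execute in milliseconds as part of a normal build.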
Please note that you need to bring your own laptop for this training. The laptop should meet the following requirements:
- At least 4GB RAM;
- 15GB of free hard disk space;
- VMware Player 5.x or above (Windows) / VMware Fusion 4.x or above (Mac);
- Your laptop must support a 64-bit VMware guest image. If the machine is running a 64-bit version of Windows, or Mac OS X on a Core 2 Duo processor or later, no further test is required. Otherwise, VMware provides a compatibility-checking tool, which can be downloaded from the VMware website;
- Your laptop must have VT-x virtualization support enabled in the BIOS;
- If running Windows XP: 7-Zip or WinZip is needed (due to a bug in Windows XP's built-in Zip utility).
Cost:
Training: INR 74,400 + Service Tax @ 14.5% = INR 85,188/- per participant