Developer Training for Spark & Hadoop

Learn how to import data into your Apache Hadoop cluster and process it with Spark, Hive, Flume, Sqoop, Impala, and other Hadoop ecosystem tools. This four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, the course is the best preparation for the real-world challenges faced by Hadoop developers. Participants learn to identify which tool is the right one for a given task and gain hands-on experience developing with those tools.

Course Contents

• Introduction to Hadoop and the Hadoop Ecosystem
• Hadoop Architecture and HDFS
• Importing Relational Data with Apache Sqoop
• Introduction to Impala and Hive
• Modeling and Managing Data with Impala and Hive
• Data Formats
• Data File Partitioning
• Capturing Data with Apache Flume
• Spark Basics
• Working with RDDs in Spark
• Writing and Deploying Spark Applications
• Parallel Processing in Spark
• Spark RDD Persistence
• Common Patterns in Spark Data Processing
• DataFrames and Spark SQL
• Conclusion
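
To give a flavor of the hands-on work in these modules, the short sketch below strings a few of the topics together in PySpark: RDD basics (a word count), then DataFrames and Spark SQL over structured data such as a Sqoop import might produce. It is an illustration only, not course material; the HDFS paths, column names, and table name are hypothetical, and it assumes a cluster running Spark 2.x with Python available.

    # Minimal PySpark sketch; paths, columns, and table name are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("CourseSketch").getOrCreate()
    sc = spark.sparkContext

    # Working with RDDs: a classic word count over a text file in HDFS.
    counts = (sc.textFile("hdfs:///user/training/shakespeare.txt")
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))
    print(counts.take(5))

    # DataFrames and Spark SQL: query structured data, e.g. imported with Sqoop.
    customers = spark.read.parquet("hdfs:///user/training/customers")
    customers.createOrReplaceTempView("customers")
    spark.sql("SELECT state, COUNT(*) AS n FROM customers "
              "GROUP BY state ORDER BY n DESC").show(10)

    spark.stop()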

You will receive the original Cloudera course documentation in English as an e-book (PDF).

Request your tailor-made course.

Target Group

This course is designed for developers and engineers who have programming experience.

Knowledge Prerequisites

Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful; prior knowledge of Hadoop is not required.