Big data in practice using Hadoop

Nowadays everybody seems to be working with "big data", most often in the context of analytics and "Data Science". Do you also want to store and then interrogate data from various sources (click streams, social media, relational data, sensor data, IoT, ...), and are you running into the shortcomings of traditional data tools? Then you may need a distributed data store like HDFS and a MapReduce infrastructure like Hadoop's.

This course builds on the concepts set forth in the course Big data architecture and infrastructure. You will get hands-on practice on Linux with Apache Hadoop: HDFS, YARN, Pig, and Hive. You learn how to implement robust data processing with an SQL-style interface which generates MapReduce jobs. You also learn to work with the graphical tools which allow easy follow-up of the jobs and workflows on the distributed Hadoop cluster.

After successful completion of the course, you will have sufficient basic expertise to set up a Hadoop cluster, to import data into HDFS, and to interrogate it cleverly using MapReduce.

If you want to use Hadoop with Spark, please refer to the course Big data in practice using Spark.

Schedule

No public sessions are currently scheduled. We will be pleased to set up an on-site course or to schedule an extra public session (given a sufficient number of candidates). Interested? Please let us know.

Intended for

Anyone who wants to start practising "big data": developers, data architects, and everyone else who needs to work with big data technology.

Background

Familiarity with the concepts of data stores and more specifically of "big data" is necessary; see our course Big data architecture and infrastructure. Additionally, minimal knowledge of SQL, UNIX, and Java is useful. Experience with a programming language (e.g. Java, PHP, Python, Perl, C++, or C#) is a must.

Main topics

  • Motivation for Hadoop & base concepts
  • The Apache Hadoop project and the components of Hadoop
  • HDFS: the Hadoop Distributed File System
  • MapReduce: what and how
  • The workings of a Hadoop cluster
  • Writing a MapReduce program
  • Implementing MapReduce drivers, mappers, and reducers in Java (see the word-count sketch after this list)
  • Writing mappers and reducers using another programming or scripting language (e.g. Perl)
  • Unit testing
  • Writing partitioners to optimize load balancing
  • Debugging a MapReduce program
  • Data Input / Output
  • Reading and writing sequential data from a MapReduce program
  • The use of binary data
  • Data compression
  • Some frequently used MapReduce components
  • Sorting, searching, and indexing of data
  • Word counts and counting pairs of words
  • Working with Hive and Pig
  • Pig as a high-level scripting interface, which generates a sequence of MapReduce jobs for us
  • Hive as a high-level SQL-style interface, which generates a sequence of MapReduce jobs
  • The Parquet file format: structure and typical use; advantages of data compression; interoperability
  • Short introduction to HBase and Cassandra as alternative data stores
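
To give a flavour of the exercises, here is a minimal word-count sketch against the standard Hadoop MapReduce Java API (org.apache.hadoop.mapreduce). The class names and the command-line input/output paths are illustrative assumptions, not the actual course material:

  // Illustrative word-count job: the mapper emits (word, 1), the reducer sums the counts.
  import java.io.IOException;
  import java.util.StringTokenizer;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class WordCount {

    // Mapper: for each line of text, emit (word, 1) per token.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();

      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
          word.set(tokens.nextToken());
          context.write(word, ONE);
        }
      }
    }

    // Reducer (also usable as combiner): sum all counts received for a word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      private final IntWritable total = new IntWritable();

      @Override
      protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) sum += count.get();
        total.set(sum);
        context.write(word, total);
      }
    }

    // Driver: configure the job and submit it to the cluster (YARN).
    public static void main(String[] args) throws Exception {
      Job job = Job.getInstance(new Configuration(), "word count");
      job.setJarByClass(WordCount.class);
      job.setMapperClass(TokenizerMapper.class);
      job.setCombinerClass(SumReducer.class);
      job.setReducerClass(SumReducer.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(IntWritable.class);
      FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
      FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory (must not exist yet)
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }

Such a job is typically packaged as a jar and submitted with hadoop jar wordcount.jar WordCount <input-dir> <output-dir>, where both directories live on HDFS.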

Training method

Classroom instruction with worked examples, supported by extensive hands-on exercises.

Duration

2 days.

Course leader

Peter Vanroose.
