Big data in practice using Hadoop

Nowadays everybody seems to be working with "big data", most often in the context of analytics and "Data Science". Do you also want to store and then query your various data sources (click streams, social media, relational data, sensor data, IoT, ...) and are you running into the shortcomings of traditional data tools? Then you may need a distributed data store like HDFS and a MapReduce infrastructure like Hadoop.

This course builds on the concepts set out in the Big data architecture and infrastructure course. You will get hands-on practice on Linux with Apache Hadoop: HDFS, YARN, Pig, and Hive. You learn how to implement robust data processing through an SQL-style interface which generates MapReduce jobs. You also learn to work with the graphical tools that allow for easy follow-up of the jobs and workflows on the distributed Hadoop cluster.

After successful completion of the course, you will have sufficient basic expertise to set up a Hadoop cluster, to import data into HDFS, and to query it cleverly using MapReduce.

If you want to use Hadoop with Spark, see the course Big data in practice using Spark.

Schedule

date     dur.  lang.  location  price
19 Mar   2            Leuven    1050 EUR (excl. VAT)
02 Jul   2            Woerden   1050 EUR (exempt from VAT)
SESSION INFO AND ENROLMENT

Intended for

Whoever wants to start practising "big data": developers, data architects, and anyone who needs to work with big data technology.

Background

Familiarity with the concepts of data stores, and more specifically of "big data", is necessary; see our course Big data architecture and infrastructure. Additionally, minimal knowledge of SQL, UNIX, and Java is useful. Experience with a programming language (e.g. Java, PHP, Python, Perl, C++ or C#) is a must.

Main topics

  • Motivation for Hadoop & base concepts
  • The Apache Hadoop project and the components of Hadoop
  • HDFS: the Hadoop Distributed File System
  • MapReduce: what and how
  • The workings of a Hadoop cluster
  • Writing a MapReduce program
  • Implementing MapReduce drivers, mappers, and reducers in Java
  • Writing mappers and reducers in another programming or scripting language (e.g. Perl)
  • Unit testing
  • Writing partitioners to optimize load balancing
  • Debugging a MapReduce program
  • Data Input / Output
  • Reading and writing sequential data from a MapReduce program
  • The use of binary data
  • Data compression
  • Some frequently used MapReduce components
  • Sorting, searching, and indexing of data
  • Word counts and counting pairs of words
  • Working with Hive and Pig
  • Pig as a high-level scripting interface which generates a sequence of MapReduce jobs
  • Hive as a high-level SQL-style interface, which generates a sequence of MapReduce jobs
  • The Parquet file format: structure and typical use; advantages of data compression; interoperability
  • Short introduction to HBase and Cassandra as alternative data stores
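The word-count exercise listed among the topics above is the classic first MapReduce program. As a taste of the Hadoop Streaming style also covered in the course (where mappers and reducers read and write plain text lines), here is a minimal sketch in Python rather than the Perl named above; it is an illustration, not course material:

```python
# Hadoop Streaming-style word count (illustrative sketch).
# The mapper emits one "word<TAB>1" line per word; the framework's
# shuffle/sort phase groups the lines by key; the reducer then sums
# the counts of consecutive identical keys.

def mapper(lines):
    """Emit a 'word\t1' pair for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(pairs):
    """Sum counts per word; input pairs must already be sorted by key."""
    current, total = None, 0
    for pair in pairs:
        word, count = pair.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

# Local simulation of map -> shuffle/sort -> reduce on sample input:
sample = ["the cat sat on the mat", "the cat"]
for line in reducer(sorted(mapper(sample))):
    print(line)
# prints: cat 2, mat 1, on 1, sat 1, the 3 (tab-separated, one per line)
```

On a real cluster the same logic would run as two stand-alone scripts, with HDFS input piped through the mapper, sorted by the framework, and piped through the reducer.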

Training method

Classroom instruction with practical examples, supported by extensive hands-on exercises.

Duration

2 days.

Course leader

Peter Vanroose.

Reviews

One day longer?

Quite a lot of information for the available time.

Interesting introduction. For me sometimes too much theory.

Good overview of big data architecture and of how the products and tools fit together.

Quite OK; I think the general explanation could go much faster. Sometimes a lot of focus on details that seem almost irrelevant to me. That may just be me, though.

Most of the important points are covered in the course.

Very good introduction.

A good start for getting going with big data.


Happy with the training, even if I would spend less time on HDFS and MapReduce and more time on other components (Pig, Hive, ...).

Also interesting

Enrollees for this training also took the following courses:

