Job Meter = High

Big Data Hadoop and Spark

Hours
Online Self-paced Training
USD 17 (USD 140)
Save 88% Offer ends on 30-Jun-2024
Big Data Hadoop and Spark course and certification
133 Learners

About this Course

Hadoop is an open-source framework, maintained by the Apache Software Foundation, for storing and processing data. Hadoop runs applications using the MapReduce programming model and is used to build big data applications that perform large-scale statistical analysis while improving processing speed. Hadoop has two major layers: MapReduce and the Hadoop Distributed File System (HDFS). It allows users to quickly write and test distributed systems.

Spark is an open-source distributed processing system that helps organisations manage big data workloads and perform fast computation. The fundamental data structure of Spark is the RDD, a logical collection of data partitioned across machines. Spark provides in-memory cluster computing and uses Hadoop in two ways, for storage and for processing, which ultimately speeds up processing. Apache Spark is fast, supports multiple languages, and handles advanced analytics such as SQL queries, machine learning, and graph algorithms.

In this Big Data Hadoop and Spark course by Uplatz, you will understand how Hadoop helps organisations automatically distribute data and work across machines and detect failures on its own at the application layer. You will learn how to install and set up Hadoop, and you will be introduced to MapReduce, Hive, and Pig. You will then move on to the Spark framework and RDDs, "a logical collection of data partitioned across machines".

-------------------------------------------------------------------------------------------------------

Big Data Hadoop and Spark

Course Details & Curriculum

Hadoop Installation and Setup

The architecture of Hadoop cluster, what is High Availability and Federation, how to setup a production cluster, various shell commands in Hadoop, understanding configuration files in Hadoop, installing single node cluster with Cloudera Manager and understanding Spark, Scala, Sqoop, Pig and Flume

Introduction to Big Data Hadoop and Understanding HDFS and MapReduce

Introducing Big Data and Hadoop, what is Big Data and where does Hadoop fit in, two important Hadoop ecosystem components, namely, MapReduce and HDFS, in-depth Hadoop Distributed File System – Replications, Block Size, Secondary Name node, High Availability and in-depth YARN – resource manager and node manager

Hands-on Exercise: HDFS working mechanism, data replication process, how to determine the size of the block, understanding a data node and name node

Deep Dive in MapReduce

Learning the working mechanism of MapReduce, understanding the mapping and reducing stages in MR, various terminologies in MR like Input Format, Output Format, Partitioners, Combiners, Shuffle and Sort

Hands-on Exercise: How to write a Word Count program in MapReduce, how to write a Custom Partitioner, what is a MapReduce Combiner, how to run a job in a local job runner, deploying unit test, what is a map side join and reduce side join, what is a tool runner, how to use counters, dataset joining with map side and reduce side joins

Introduction to Hive

Introducing Hadoop Hive, detailed architecture of Hive, comparing Hive with Pig and RDBMS, working with Hive Query Language, creation of database, table, Group by and other clauses, various types of Hive tables, HCatalog, storing the Hive Results, Hive partitioning and Buckets

Hands-on Exercise: Database creation in Hive, dropping a database, Hive table creation, how to change the database, data loading, dropping and altering tables, pulling data by writing Hive queries with filter conditions, table partitioning in Hive and what is a Group By clause

Advanced Hive and Impala

Indexing in Hive, the Map Side Join in Hive, working with complex data types, the Hive User-defined Functions, Introduction to Impala, comparing Hive with Impala, the detailed architecture of Impala

Hands-on Exercise: How to work with Hive queries, the process of joining table and writing indexes, external table and sequence table deployment and data storage in a different table

Introduction to Pig

Apache Pig introduction, its various features, various data types and schema in Pig, the available functions in Pig, Pig Bags, Tuples and Fields

Hands-on Exercise: Working with Pig in MapReduce and local mode, loading of data, limiting data to 4 rows, storing the data into files and working with Group By, Filter By, Distinct, Cross, Split in Pig

Flume, Sqoop and HBase

Apache Sqoop introduction, overview, importing and exporting data, performance improvement with Sqoop, Sqoop limitations, introduction to Flume and understanding the architecture of Flume and what is HBase and the CAP theorem

Hands-on Exercise: Working with Flume to generate a sequence number and consume it, using the Flume Agent to consume Twitter data, using AVRO to create a Hive table, AVRO with Pig, creating a table in HBase and deploying Disable, Scan and Enable Table

Writing Spark Applications Using Scala

Using Scala for writing Apache Spark applications, detailed study of Scala, the need for Scala, the concept of object oriented programming, executing the Scala code, various classes in Scala like Getters, Setters, Constructors, Abstract, Extending Objects, Overriding Methods, the Java and Scala interoperability, the concept of functional programming and anonymous functions, Bobsrockets package and comparing the mutable and immutable collections, Scala REPL, Lazy Values, Control Structures in Scala, Directed Acyclic Graph (DAG), first Spark application using SBT/Eclipse, Spark Web UI, Spark in Hadoop ecosystem.

Hands-on Exercise: Writing Spark application using Scala, understanding the robustness of Scala for Spark real-time analytics operation
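To make this concrete, below is a minimal sketch of a standalone Spark application written in Scala, of the kind built with SBT in this module. The object name, input data and the use of a local master are illustrative assumptions, not part of the course material.

import org.apache.spark.sql.SparkSession

object FirstSparkApp {                                       // hypothetical application name
  def main(args: Array[String]): Unit = {
    // Create the entry point; master("local[*]") is only for local testing
    val spark = SparkSession.builder()
      .appName("FirstSparkApp")
      .master("local[*]")
      .getOrCreate()

    val numbers = spark.sparkContext.parallelize(1 to 100)   // a small in-memory RDD
    println(s"Sum of 1..100 = ${numbers.sum()}")             // the action triggers a job, visible in the Spark Web UI

    spark.stop()
  }
}

Packaging this object with SBT and submitting it with spark-submit is the usual workflow; running it locally first makes it easy to inspect the job in the Spark Web UI.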

Spark framework

Detailed Apache Spark, its various features, comparing with Hadoop, various Spark components, combining HDFS with Spark, Scalding, introduction to Scala and importance of Scala and RDD
Hands-on Exercise: The Resilient Distributed Dataset in Spark and how it helps to speed up Big Data processing

RDD in Spark

Understanding the Spark RDD operations, comparison of Spark with MapReduce, what is a Spark transformation, loading data in Spark, types of RDD operations viz. transformation and action and what is a Key/Value pair
Hands-on Exercise: How to deploy RDD with HDFS, using the in-memory dataset, using file for RDD, how to define the base RDD from external file, deploying RDD via transformation, using the Map and Reduce functions and working on word count and count log severity
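As a reference for this exercise, here is a minimal sketch of word count and log-severity counting with RDD transformations and actions. The HDFS paths and the assumption that log lines begin with a severity level are illustrative.

val lines = sc.textFile("hdfs:///data/sample.txt")          // base RDD defined from an external file
val wordCounts = lines
  .flatMap(_.split("\\s+"))                                 // transformation: split each line into words
  .map(word => (word, 1))                                   // transformation: build key/value pairs
  .reduceByKey(_ + _)                                       // transformation: aggregate counts per word
wordCounts.take(10).foreach(println)                        // action: bring a sample of results to the driver

// Counting log lines by severity, assuming each line begins with a level such as ERROR, WARN or INFO
val logs = sc.textFile("hdfs:///data/app.log")
val bySeverity = logs.map(line => (line.split(" ")(0), 1)).reduceByKey(_ + _)
bySeverity.collect().foreach(println)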

Data Frames and Spark SQL

The detailed Spark SQL, the significance of SQL in Spark for working with structured data processing, Spark SQL JSON support, working with XML data and parquet files, creating Hive Context, writing Data Frame to Hive, how to read a JDBC file, significance of a Spark Data Frame, how to create a Data Frame, what is schema manual inferring, how to work with CSV files, JDBC table reading, data conversion from Data Frame to JDBC, Spark SQL user-defined functions, shared variable and accumulators, how to query and transform data in Data Frames, how Data Frame provides the benefits of both Spark RDD and Spark SQL and deploying Hive on Spark as the execution engine

Hands-on Exercise: Data querying and transformation using Data Frames and finding out the benefits of Data Frames over Spark SQL and Spark RDD
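A minimal sketch of the kind of Data Frame and Spark SQL work covered here is shown below. The file paths, column names and the availability of a SparkSession named spark are assumptions for illustration.

import spark.implicits._

// Read structured data into a Data Frame (path and schema are illustrative)
val people = spark.read.option("header", "true").option("inferSchema", "true").csv("hdfs:///data/people.csv")

people.filter($"age" > 30).groupBy("city").count().show()   // Data Frame transformations followed by an action

people.createOrReplaceTempView("people")                    // expose the Data Frame to SQL
spark.sql("SELECT city, COUNT(*) AS n FROM people GROUP BY city").show()

people.write.mode("overwrite").parquet("hdfs:///out/people_parquet")   // write out in a columnar format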

Machine Learning Using Spark (MLlib)

Introduction to Spark MLlib, understanding various algorithms, what is Spark iterative algorithm, Spark graph processing analysis, introducing Machine Learning, K-Means clustering, Spark variables like shared and broadcast variables and what are accumulators, various ML algorithms supported by MLlib, Linear Regression, Logistic Regression, Decision Tree, Random Forest, K-means clustering techniques, building a Recommendation Engine
Hands-on Exercise:  Building a Recommendation Engine
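As a rough sketch of the Recommendation Engine exercise, the MLlib ALS (alternating least squares) estimator can be trained on a ratings table. The file path and column names below are assumptions for illustration.

import org.apache.spark.ml.recommendation.ALS

// Ratings data with userId, movieId and rating columns (path and schema are assumed)
val ratings = spark.read.option("header", "true").option("inferSchema", "true").csv("hdfs:///data/ratings.csv")

val als = new ALS()
  .setUserCol("userId")
  .setItemCol("movieId")
  .setRatingCol("rating")
  .setRank(10)
  .setMaxIter(10)
  .setRegParam(0.1)

val model = als.fit(ratings)                        // train the collaborative-filtering model
val recommendations = model.recommendForAllUsers(5) // top-5 item recommendations per user
recommendations.show(truncate = false)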

Integrating Apache Flume and Apache Kafka

Why Kafka, what is Kafka, Kafka architecture, Kafka workflow, configuring Kafka cluster, basic operations, Kafka monitoring tools, integrating Apache Flume and Apache Kafka
Hands-on Exercise:  Configuring Single Node Single Broker Cluster, Configuring Single Node Multi Broker Cluster, Producing and consuming messages, Integrating Apache Flume and Apache Kafka.

Spark Streaming

Introduction to Spark Streaming, features of Spark Streaming, the architecture and workflow of Spark Streaming, working with the Spark Streaming program, processing data using Spark Streaming, requesting count and DStream, multi-batch and sliding window operations, working with advanced data sources, initializing StreamingContext, Discretized Streams (DStreams), Input DStreams and Receivers, transformations on DStreams, output operations on DStreams, windowed operators and why they are useful, important windowed operators, and stateful operators.
Hands-on Exercise: Twitter Sentiment Analysis, streaming using netcat server, Kafka-Spark Streaming and Spark-Flume Streaming
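The netcat exercise can be sketched roughly as follows using the DStream API; the host, port and batch interval are assumptions (start a local server first with nc -lk 9999).

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("NetcatWordCount").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(5))     // 5-second micro-batches

val lines = ssc.socketTextStream("localhost", 9999)  // input DStream backed by a socket receiver
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()                                       // output operation on the DStream

ssc.start()
ssc.awaitTermination()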

Hadoop Administration – Multi-node Cluster Setup Using Amazon EC2

Create a 4-node Hadoop cluster setup, running the MapReduce Jobs on the Hadoop cluster, successfully running the MapReduce code and working with the Cloudera Manager setup
Hands-on Exercise: The method to build a multi-node Hadoop cluster using an Amazon EC2 instance and working with the Cloudera Manager

Hadoop Administration – Cluster Configuration

The overview of Hadoop configuration, the importance of Hadoop configuration file, the various parameters and values of configuration, the HDFS parameters and MapReduce parameters, setting up the Hadoop environment, the Include and Exclude configuration files, the administration and maintenance of name node, data node directory structures and files, what is a File system image and understanding Edit log.
Hands-on Exercise: The process of performance tuning in MapReduce

Hadoop Administration – Maintenance, Monitoring and Troubleshooting

Introduction to the checkpoint procedure, name node failure and how to ensure the recovery procedure, Safe Mode, Metadata and Data backup, various potential problems and solutions, what to look for and how to add and remove nodes

Hands-on Exercise: How to go about ensuring the MapReduce File System Recovery for different scenarios, JMX monitoring of the Hadoop cluster, how to use the logs and stack traces for monitoring and troubleshooting, using the Job Scheduler for scheduling jobs in the same cluster, getting the MapReduce job submission flow, FIFO schedule and getting to know the Fair Scheduler and its configuration

ETL Connectivity with Hadoop Ecosystem (Self-Paced)

How ETL tools work in Big Data industry, introduction to ETL and data warehousing, working with prominent use cases of Big Data in ETL industry and end-to-end ETL PoC showing Big Data integration with ETL tool
Hands-on Exercise: Connecting to HDFS from ETL tool and moving data from Local system to HDFS, moving data from DBMS to HDFS, working with Hive with ETL Tool and creating MapReduce job in ETL tool

Project Solution Discussion and Cloudera Certification Tips and Tricks

Working towards the solution of the Hadoop project solution, its problem statements and the possible solution outcomes, preparing for the Cloudera certifications, points to focus for scoring the highest marks and tips for cracking Hadoop interview questions

Hands-on Exercise: The project of a real-world high value Big Data Hadoop application and getting the right solution based on the criteria set by the Uplatz team

Hadoop Application Testing

Why testing is important, Unit testing, Integration testing, Performance testing, Diagnostics, Nightly QA test, Benchmark and end-to-end tests, Functional testing, Release certification testing, Security testing, Scalability testing, Commissioning and Decommissioning of data nodes testing, Reliability testing and Release testing

Roles and Responsibilities of Hadoop Testing Professional

Understanding the Requirement, preparation of the Testing Estimation, Test Cases, Test Data, Test Bed Creation, Test Execution, Defect Reporting, Defect Retest, Daily Status report delivery, Test completion, ETL testing at every stage (HDFS, Hive and HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes but is not limited to data verification, Reconciliation, User Authorization and Authentication testing (Groups, Users, Privileges, etc.), reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, and validating new features and issues in Core Hadoop

Framework Called MRUnit for Testing of MapReduce Programs

Reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, and using the MRUnit framework for testing MapReduce programs

Unit Testing

Automation testing using Oozie and data validation using the Query Surge tool

Test Execution

Test plan for HDFS upgrade, test automation and result

Test Plan Strategy and Writing Test Cases for Testing Hadoop Application

How to test, install and configure


-------------------------------------------------------------------------------------------------------

Job Prospects

---------------------------------------------------------------------------------------------

Big Data Hadoop and Spark Interview Questions

---------------------------------------------------------------------------------------------

1. Compare MapReduce with Spark.

Criteria | MapReduce | Spark
Processing speed | Good | Excellent (up to 100 times faster)
Data caching | Hard disk | In-memory
Performing iterative jobs | Average | Excellent
Dependency on Hadoop | Yes | No
Machine Learning applications | Average | Excellent

2. What is Apache Spark?

Spark is a fast, easy-to-use, and flexible data processing framework. It has an advanced execution engine supporting a cyclic data flow and in-memory computing. Apache Spark can run standalone, on Hadoop, or in the cloud and is capable of accessing diverse data sources including HDFS, HBase, and Cassandra, among others.

 

3. Explain the key features of Spark.

• Apache Spark allows integrating with Hadoop.

• It has an interactive language shell, Scala (the language in which Spark is written).

• Spark consists of RDDs (Resilient Distributed Datasets), which can be cached across the computing nodes in a cluster.

• Apache Spark supports multiple analytic tools that are used for interactive query analysis, real-time analysis, and graph processing.

 

4. Define RDD.

RDD is the acronym for Resilient Distributed Datasets, a fault-tolerant collection of operational elements that run in parallel. The partitioned data in an RDD is immutable and distributed. There are primarily two types of RDDs:

• Parallelized collections: created from an existing collection in the driver program so that its elements can be operated on in parallel

• Hadoop datasets: created from files in HDFS or any other storage system, applying a function to each file record

 

5. What does a Spark Engine do?

A Spark engine is responsible for scheduling, distributing, and monitoring data applications across the cluster.

 

6. Define Partitions.

As the name suggests, a partition is a smaller and logical division of data similar to a ‘split’ in MapReduce. Partitioning is the process of deriving logical units of data to speed up data processing. Everything in Spark is a partitioned RDD.
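For instance, a rough sketch of inspecting and changing partitioning (the path and partition counts are illustrative):

val rdd = sc.textFile("hdfs:///data/sample.txt", minPartitions = 8)   // suggest a minimum number of input partitions
println(rdd.getNumPartitions)                                         // how many partitions Spark actually created
val wider = rdd.repartition(16)                                       // full shuffle into more partitions
val narrower = wider.coalesce(4)                                      // reduce partitions, avoiding a full shuffle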

 

7. What operations does an RDD support?

• Transformations

• Actions

 

8. What do you understand by Transformations in Spark?

Transformations are functions applied to RDDs that result in another RDD. A transformation does not execute until an action occurs. Functions such as map() and filter() are examples of transformations: map() applies the supplied function to every element of the RDD to produce a new RDD, while filter() creates a new RDD by selecting the elements of the current RDD that pass the function argument.
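A short illustrative sketch (the input path is an assumption):

val lines = sc.textFile("hdfs:///data/sample.txt")   // base RDD
val words = lines.flatMap(_.split(" "))              // transformation: a new RDD derived from the old one
val longWords = words.filter(_.length > 5)           // transformation: still lazy, nothing has executed yet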

 

9. Define Actions in Spark.

In Spark, an action brings data from an RDD back to the local machine. Actions are RDD operations that return non-RDD values. The reduce() function is an action that repeatedly combines elements until only one value is left, while the take() action brings a specified number of elements from the RDD to the local node.
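For example, a minimal sketch of common actions:

val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))
val total = numbers.reduce(_ + _)    // action: combines elements until a single value is left
val firstThree = numbers.take(3)     // action: brings the first three elements to the driver
val everything = numbers.collect()   // action: brings the whole RDD to the driver (use with care on large data)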

 

10. Define the functions of Spark Core.

Serving as the base engine, Spark Core performs various important functions like memory management, monitoring jobs, providing fault-tolerance, job scheduling, and interaction with storage systems.

 

11. What is RDD Lineage?

Spark does not support data replication in memory; thus, if any data is lost, it is rebuilt using RDD lineage. RDD lineage is the process that reconstructs lost data partitions. The best thing about this is that an RDD always remembers how it was built from other datasets.

 

12. What is Spark Driver?

The Spark driver is the program that runs on the master node and declares transformations and actions on data RDDs. In simple terms, the driver in Spark creates the SparkContext, which connects to a given Spark Master, and delivers the RDD graphs to the Master, where the standalone Cluster Manager runs.

 

13. What is Hive on Spark?

Hive contains significant support for Apache Spark, wherein the Hive execution engine can be configured to Spark:

hive> set spark.home=/location/to/sparkHome;

hive> set hive.execution.engine=spark;

Hive supports Spark on YARN mode by default.

 

14. Name the commonly used Spark Ecosystems.

• Spark SQL (Shark) for developers

• Spark Streaming for processing live data streams

• GraphX for generating and computing graphs

• MLlib (Machine Learning Algorithms)

• SparkR to promote R programming in the Spark engine

 

15. Define Spark Streaming.

Spark Streaming is an extension of the Spark API that allows stream processing of live data streams. Data from sources such as Kafka, Flume, and Kinesis is processed and then pushed to file systems, live dashboards, and databases. It is similar to batch processing in that the input data is divided into micro-batches, much like batches in batch processing.

 

16. What is GraphX?

Spark uses GraphX for graph processing to build and transform interactive graphs. The GraphX component enables programmers to reason about structured data at scale.

 

17. What does MLlib do?

MLlib is a scalable Machine Learning library provided by Spark. It aims at making Machine Learning easy and scalable with common learning algorithms and use cases such as clustering, regression, collaborative filtering, and dimensionality reduction.

 

18. What is Spark SQL?

Spark SQL, originally known as Shark, is a module introduced in Spark to perform structured data processing. Through this module, Spark executes relational SQL queries on data. The core of this component supports a different kind of RDD called SchemaRDD, composed of row objects and schema objects that define the data type of each column in a row. It is similar to a table in a relational database.

 

19. What is a Parquet file?

Parquet is a columnar storage format supported by many other data processing systems. Spark SQL performs both read and write operations with Parquet files and considers it to be one of the best formats for Big Data analytics so far.
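A brief sketch of writing and reading Parquet, assuming a SparkSession named spark (the output path and data are illustrative):

import spark.implicits._

val sales = Seq(("north", 100), ("south", 250)).toDF("region", "amount")   // small illustrative Data Frame
sales.write.mode("overwrite").parquet("/tmp/sales_parquet")                 // columnar write
val reloaded = spark.read.parquet("/tmp/sales_parquet")                     // schema is preserved in the file
reloaded.printSchema()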

 

20. What file systems does Apache Spark support?

• Hadoop Distributed File System (HDFS)

• Local file system

• Amazon S3

 

21. What is YARN?

YARN (Yet Another Resource Negotiator) is Hadoop's central resource-management platform, and Spark can run on it to deliver scalable operations across the cluster. Running Spark on YARN requires a binary distribution of Spark that is built with YARN support.

 

22. List the functions of Spark SQL.

Spark SQL is capable of:

• Loading data from a variety of structured sources

• Querying data using SQL statements, both inside a Spark program and from external tools that connect to Spark SQL through standard database connectors (JDBC/ODBC), e.g., using Business Intelligence tools like Tableau

• Providing rich integration between SQL and the regular Python/Java/Scala code, including the ability to join RDDs and SQL tables, expose custom functions in SQL, and more.

 

23. What are the benefits of Spark over MapReduce?

• Due to the availability of in-memory processing, Spark performs data processing 10–100x faster than Hadoop MapReduce. MapReduce, on the other hand, makes use of persistent storage for its data processing tasks.

• Unlike Hadoop, Spark provides built-in libraries to perform multiple tasks using batch processing, streaming, Machine Learning, and interactive SQL queries, whereas Hadoop only supports batch processing.

• Hadoop is highly disk-dependent, whereas Spark promotes caching and in-memory data storage.

• Spark is capable of performing computations multiple times on the same dataset, which is called iterative computation, whereas Hadoop has no built-in support for iterative computing.

 

24. Is there any benefit of learning MapReduce?

Yes, MapReduce is a paradigm used by many Big Data tools, including Apache Spark. It becomes extremely relevant to use MapReduce when data grows bigger and bigger. Most tools like Pig and Hive convert their queries into MapReduce phases to optimize them better.

 

25. What is Spark Executor?

When SparkContext connects to Cluster Manager, it acquires an executor on the nodes in the cluster. Executors are Spark processes that run computations and store data on worker nodes. The final tasks by SparkContext are transferred to executors for their execution.

 

26. Name the types of Cluster Managers in Spark.

The Spark framework supports three major types of Cluster Managers:

• Standalone: A basic Cluster Manager to set up a cluster

• Apache Mesos: A generalized/commonly-used Cluster Manager, running Hadoop MapReduce and other applications

• YARN: A Cluster Manager responsible for resource management in Hadoop.

 

27. What do you understand by a Worker node?

A worker node refers to any node that can run the application code in a cluster.

 

28. What is PageRank?

A unique feature and algorithm in GraphX, PageRank is the measure of each vertex in a graph. For instance, an edge from u to v represents an endorsement of v‘s importance w.r.t. u. In simple terms, if a user at Instagram is followed massively, he/she will be ranked high on that platform.
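A rough GraphX sketch of running PageRank over an edge-list file of "followerId followeeId" pairs (the path is an assumption):

import org.apache.spark.graphx.GraphLoader

val graph = GraphLoader.edgeListFile(sc, "hdfs:///data/followers.txt")   // build the graph from the edge list
val ranks = graph.pageRank(0.0001).vertices                              // iterate until the given tolerance
ranks.sortBy(_._2, ascending = false).take(5).foreach(println)           // highest-ranked vertices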

 

29. Do you need to install Spark on all the nodes of the YARN cluster while running Spark on YARN?

No, because Spark runs on top of YARN.

 

30. Illustrate some demerits of using Spark.

Since Spark utilizes more memory than Hadoop MapReduce, certain problems can arise. Developers need to be careful while running memory-intensive applications on Spark. To resolve the issue, they can distribute the workload over multiple clusters instead of running everything on a single node.

 

31. How to create an RDD?

Spark provides two methods to create an RDD:

• By parallelizing a collection in the driver program, using SparkContext's parallelize() method, for example:

val IntellipaatData = Array(2, 4, 6, 8, 10)
val distIntellipaatData = sc.parallelize(IntellipaatData)

• By loading an external dataset from external storage such as HDFS or a shared file system (see the sketch below).
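A one-line sketch of the second method (the HDFS path is illustrative):

val logData = sc.textFile("hdfs:///data/input/logs")   // RDD backed by files in external storage
println(logData.count())                               // action that materialises the RDD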

 

32. What are Spark DataFrames?

When a dataset is organized into SQL-like columns, it is known as a DataFrame. This is, in concept, equivalent to a data table in a relational database or a literal ‘DataFrame’ in R or Python. The only difference is the fact that Spark DataFrames are optimized for Big Data.

 

33. What are Spark Datasets?

Datasets are data structures in Spark (added since Spark 1.6) that provide the JVM object benefits of RDDs (the ability to manipulate data with lambda functions), alongside a Spark SQL-optimized execution engine.
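A small sketch of a typed Dataset, assuming a SparkSession named spark (the case class and values are illustrative):

import spark.implicits._

case class Person(name: String, age: Int)
val people = Seq(Person("Asha", 29), Person("Ravi", 34)).toDS()   // strongly typed Dataset[Person]
val adults = people.filter(p => p.age >= 18)                      // lambda over JVM objects, run on the Spark SQL engine
adults.show()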

 

34. Which languages can Spark be integrated with?

Spark can be integrated with the following languages:

• Python, using the Spark Python API

• R, using the R on Spark API

• Java, using the Spark Java API

• Scala, using the Spark Scala API

 

35. What do you mean by in-memory processing?

In-memory processing refers to the instant access of data from physical memory whenever the operation is called for. This methodology significantly reduces the delay caused by the transfer of data. Spark uses this method to access large chunks of data for querying or processing.

 

36. What is lazy evaluation?

 

Spark implements a functionality wherein, if you create an RDD out of an existing RDD or a data source, the materialization of the RDD does not occur until the RDD needs to be acted upon. This avoids unnecessary memory and CPU usage that would otherwise result from computing results that are never used, which is especially important in Big Data analytics.
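A short sketch of lazy evaluation in practice (the path is an assumption):

val lines = sc.textFile("hdfs:///data/huge.log")       // nothing is read yet
val errors = lines.filter(_.contains("ERROR"))         // still nothing runs; only the lineage is recorded
println(errors.count())                                // the action triggers the actual read and filter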

 

---------------------------------------------------------------------------------------------


