Big Data Analytics with Spark: A Practitioner's Guide to Using Spark for Large Scale Data Analysis

Language: English

Pages: 277

ISBN: 1484209656

Format: PDF / Kindle (mobi) / ePub

Big Data Analytics with Spark is a step-by-step guide for learning Spark, an open-source, fast, general-purpose cluster computing framework for large-scale data analysis. You will learn how to use Spark for different types of big data analytics projects, including batch, interactive, graph, and stream data analysis, as well as machine learning. In addition, this book will help you become a much sought-after Spark expert.

Spark is one of the hottest big data technologies. The amount of data generated today by devices, applications, and users is exploding, so there is a critical need for tools that can analyze large-scale data and unlock value from it. Spark is a powerful technology that meets that need. You can, for example, use Spark to perform low-latency computations through its efficient caching and support for iterative algorithms; leverage its shell for easy, interactive data analysis; or employ its fast batch processing and low-latency features to process real-time data streams. As a result, adoption of Spark is growing rapidly, and it is replacing Hadoop MapReduce as the technology of choice for big data analytics.

This book provides an introduction to Spark and related big-data technologies. It covers Spark core and its add-on libraries, including Spark SQL, Spark Streaming, GraphX, and MLlib. Big Data Analytics with Spark is therefore written for busy professionals who prefer learning a new technology from a consolidated source instead of spending countless hours on the Internet trying to pick bits and pieces from different sources.

The book also provides a chapter on Scala, the hottest functional programming language and the language in which Spark itself is written. You'll learn the basics of functional programming in Scala, so that you can write Spark applications in it.

What's more, Big Data Analytics with Spark provides an introduction to other big data technologies that are commonly used along with Spark, like Hive, Avro, Kafka and so on. So the book is self-sufficient; all the technologies that you need to know to use Spark are covered. The only thing that you are expected to know is programming in any language.

There is a critical shortage of people with big data expertise, so companies are willing to pay top dollar for people with skills in areas like Spark and Scala. So reading this book and absorbing its principles will provide a boost―possibly a big boost―to your career.

One of the columns useful for troubleshooting is the Storage Memory column. It shows the amount of memory reserved and the amount of memory used for caching data. If the memory available is less than the data that you are trying to cache, you will run into performance issues. Similarly, the Shuffle Read and Shuffle Write columns are useful for troubleshooting performance issues. Shuffle reads and writes are expensive operations. If the values are too high, you should refactor the application code or tune Spark to reduce the amount of data shuffled.
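As a rough illustration of how these columns relate to application code, here is a minimal sketch for the Spark shell (where sc is predefined); the input file events.txt and the variable names are made up for illustration. Caching the RDD is what shows up under Storage Memory, and using reduceByKey keeps Shuffle Read and Shuffle Write smaller than a groupByKey-based equivalent, because values are combined on each partition before the shuffle.

import org.apache.spark.storage.StorageLevel

// Hypothetical pair RDD of (key, 1) built from a made-up input file.
val events = sc.textFile("events.txt").map(line => (line.split(",")(0), 1))

// Cache the RDD so repeated actions reuse it; its size appears under
// the Storage Memory column in the web UI.
events.persist(StorageLevel.MEMORY_ONLY)

// reduceByKey combines values per partition before shuffling, which keeps
// the Shuffle Read and Shuffle Write values low.
val counts = events.reduceByKey(_ + _)
counts.count()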

The physical planning phase generates an executable plan using a cost model. It takes as input the optimized logical plan generated by the logical optimization phase. Using rules, it then generates one or more physical plans that can be executed by the Spark execution engine. Next, it computes their costs and selects an optimal plan for execution. In addition, it performs rule-based physical optimizations, such as pipelining projections or filters into one Spark operation. It also pushes operations from the logical plan into data sources that support predicate or projection pushdown.
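The plans produced by these phases can be inspected from application code. The sketch below assumes the Spark 1.x shell, where sqlContext is predefined; the Sale case class and its data are made up for illustration. Calling explain(true) prints the parsed, analyzed, and optimized logical plans along with the physical plan that Catalyst selected.

// Hypothetical schema and data, defined in the Spark shell.
case class Sale(product: String, amount: Double)
val salesDF = sqlContext.createDataFrame(Seq(Sale("a", 10.0), Sale("b", 20.0)))

// Prints the logical plans and the chosen physical plan for this query.
salesDF.filter(salesDF("amount") > 15.0).explain(true)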

DataFrame is inspired by data frames in R and Python. Conceptually, it is similar to a table in a relational database. DataFrame is a class defined in the Spark SQL library. It provides various methods for processing and analyzing structured data. For example, it provides methods for selecting columns, filtering rows, aggregating columns, joining tables, sampling data, and other common data processing tasks. Unlike RDD, DataFrame is schema aware. An RDD is a partitioned collection of opaque elements, whereas a DataFrame is a partitioned collection of rows with a known schema for its columns.
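To make the contrast concrete, here is a hypothetical example for the Spark 1.x shell (where sc and sqlContext are predefined) that builds both an RDD and a DataFrame from the same case class instances; only the DataFrame can report its schema and be queried by column name.

case class Person(name: String, age: Int)
val people = Seq(Person("Alice", 30), Person("Bob", 25))

// To Spark, each element of this RDD is an opaque Person object.
val peopleRDD = sc.parallelize(people)

// The DataFrame carries column names and types that Spark SQL can use.
val peopleDF = sqlContext.createDataFrame(people)
peopleDF.printSchema()
peopleDF.filter(peopleDF("age") > 26).select("name").show()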

The DataFrame API provides an alternative to SQL statements for processing a dataset. It consists of five types of operations, which are discussed in this section. Since you will be using DataFrames in the examples, let's create a few. I will use the Spark shell so that you can follow along. As a first step, launch the Spark shell from a terminal:

$ cd SPARK_HOME
$ ./bin/spark-shell --master local[*]

Once inside the Spark shell, create a few DataFrames from a case class, as sketched below.
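The following is a minimal sketch of that step; the Customer case class, its fields, and the sample rows are made up for illustration and may differ from the book's own example.

// Hypothetical case class describing the rows of the DataFrame.
case class Customer(cId: Long, name: String, age: Int, state: String)

val customers = Seq(
  Customer(1, "James", 21, "CA"),
  Customer(2, "Liz", 25, "NY"),
  Customer(3, "John", 31, "CA")
)

// In the Spark 1.x shell, sqlContext and its implicits are available, so a
// local collection of case class instances can be converted with toDF.
import sqlContext.implicits._
val customerDF = customers.toDF()
customerDF.show()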

Regression model classes in MLlib include LassoModel, IsotonicRegressionModel, DecisionTreeModel, GradientBoostedTreesModel, and RandomForestModel. Instances of these classes are returned by the train or trainRegressor methods of the objects mentioned in the previous section. The commonly used methods in these classes are briefly described next. predict: The predict method of a regression model returns a numerical label for a given set of features. It takes a Vector as an argument and returns a value of type Double. A variant of the predict method takes an RDD of Vectors and returns an RDD of predicted labels of type Double.
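As a brief sketch of how these pieces fit together, the example below trains a LassoModel in the Spark shell (where sc is predefined) and then calls predict; the training data and the iteration count are illustrative only, not values from the book.

import org.apache.spark.mllib.regression.{LabeledPoint, LassoWithSGD}
import org.apache.spark.mllib.linalg.Vectors

// Tiny, made-up training set: each row is a label and a two-element feature vector.
val trainingData = sc.parallelize(Seq(
  LabeledPoint(1.0, Vectors.dense(0.0, 1.1)),
  LabeledPoint(2.0, Vectors.dense(1.0, 2.1)),
  LabeledPoint(3.0, Vectors.dense(2.0, 3.1))
))

// train returns a LassoModel, one of the model classes listed above.
val model = LassoWithSGD.train(trainingData, 100)

// predict takes a Vector and returns the predicted label as a Double.
val label: Double = model.predict(Vectors.dense(1.5, 2.5))

// The RDD variant predicts labels for many feature vectors at once.
val labels = model.predict(trainingData.map(_.features))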
