Mastering Apache Spark

Language: English

Pages: 318

ISBN: 1783987146

Format: PDF / Kindle (mobi) / ePub


Gain expertise in processing and storing data by using advanced techniques with Apache Spark

About This Book

  • Explore the integration of Apache Spark with third-party applications such as H2O, Databricks, and Titan
  • Evaluate how Cassandra and HBase can be used for storage
  • An advanced guide combining instructions and practical examples to extend the most up-to-date Spark functionality

Who This Book Is For

If you are a developer with some experience of Spark and want to strengthen your knowledge of how to work with it in practice, then this book is ideal for you. Basic knowledge of Linux, Hadoop, and Spark is assumed, and a reasonable knowledge of Scala is expected.

What You Will Learn

  • Extend the tools available for processing and storage
  • Examine clustering and classification using MLlib
  • Discover Spark stream processing via Flume and HDFS
  • Create a schema in Spark SQL, and learn how a Spark schema can be populated with data
  • Study Spark-based graph processing using Spark GraphX
  • Combine Spark with H2O and deep learning, and learn why it is useful
  • Evaluate how graph storage works with Apache Spark, Titan, HBase, and Cassandra
  • Use Apache Spark in the cloud with Databricks and AWS

In Detail

Apache Spark is an in-memory, cluster-based parallel processing system that provides a wide range of functionality, such as graph processing, machine learning, stream processing, and SQL. It offers high processing speeds, is easy to use, and provides a rich set of data transformations.

This book aims to take your limited knowledge of Spark to the next level by teaching you how to expand its functionality. It begins with an overview of the Spark ecosystem. You will learn how to use MLlib to create a fully working neural net for handwriting recognition. You will then discover how stream processing can be tuned for optimal performance and to ensure parallel processing. The book goes on to show how to incorporate H2O for machine learning, Titan for graph-based storage, and Databricks for cloud-based Spark. Intermediate Scala-based code examples are provided for Apache Spark module processing in a CentOS Linux and Databricks cloud environment.

Style and approach

This book is an extensive guide to Apache Spark modules and tools, showing through worked examples how Spark's functionality can be extended for real-time processing and storage.

activation curve as shown in the previous graph. The bigger the value, the more similar a function becomes to an on/off step. The value of A sets a minimum for the returned activation. In the previous graph it is zero. So, this provides a mechanism for simulating a neuron, creating weighting matrices as the neuron connections, and managing the neuron activation. But how are the networks organized? The next diagram shows a suggested neuron architecture—the neural network has an input layer of
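
The excerpt above describes a tunable, sigmoid-style activation. As a rough sketch only (the formula and names below are assumptions for illustration, not the book's own listing), a function of the form a + (1 - a) / (1 + e^(-b*x)) shows the behaviour described: a larger b pushes the curve towards an on/off step, while a sets the minimum returned activation.

    // Illustrative sketch of a parameterised sigmoid-style activation.
    // Assumption: f(x) = a + (1 - a) / (1 + exp(-b * x)), so a larger b
    // approaches an on/off step and a is the minimum returned value.
    object ActivationSketch {

      def activate( x: Double, a: Double = 0.0, b: Double = 1.0 ): Double =
        a + ( 1.0 - a ) / ( 1.0 + math.exp( -b * x ) )

      def main( args: Array[String] ): Unit = {
        // Compare a shallow and a steep curve at a few sample inputs
        for ( b <- Seq( 0.5, 5.0 ); x <- Seq( -2.0, 0.0, 2.0 ) )
          println( f"b = $b%.1f x = $x%+.1f activation = ${activate( x, 0.0, b )}%.4f" )
      }

    } // end ActivationSketch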

sent to port 10777 on a host called hc2r1m1 using the Linux netcat (nc) command. This will act as a source (source1) for the Flume agent (agent1), which will have an in-memory channel called channel1. The sink used by agent1 will be Apache Avro based, again on a host called hc2r1m1, but this time, the port number will be 11777. The Apache Spark Flume application stream4 (which I will describe shortly) will listen for Flume stream data on this port. I start the streaming process by executing the
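
As a hedged sketch of the receiving side described above (this is not the book's stream4 listing; the application name and batch interval are assumptions), a Spark Streaming application can bind an Avro-based Flume receiver on hc2r1m1 port 11777 with the spark-streaming-flume module, provided that artifact is on the classpath:

    // Hypothetical sketch: listen for Avro Flume events on hc2r1m1:11777
    // and report how many events arrive in each 10-second batch.
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{ Seconds, StreamingContext }
    import org.apache.spark.streaming.flume.FlumeUtils

    object FlumeSketch {

      def main( args: Array[String] ): Unit = {
        val conf = new SparkConf().setAppName( "Flume Sketch" )
        val ssc  = new StreamingContext( conf, Seconds( 10 ) )

        // The host and port must match the Flume agent's Avro sink
        val flumeStream = FlumeUtils.createStream( ssc, "hc2r1m1", 11777 )

        flumeStream.count().map( cnt => "Flume events received : " + cnt ).print()

        ssc.start()
        ssc.awaitTermination()
      }

    } // end FlumeSketch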

    val brokers = args(0).trim
    val groupid = args(1).trim
    val topics  = args(2).trim

    println("brokers : " + brokers)
    println("groupid : " + groupid)
    println("topics : " + topics)

The Spark context is defined in terms of an application name. Again, the Spark URL has been left as the default. The streaming context has been created using the Spark context. I have left the stream batch interval at 10 seconds, which is the same as in the last example. However, you can set it using a parameter of your choice.
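
For context, the sketch below shows one way (the Spark 1.x spark-streaming-kafka direct API for Kafka 0.8) to wire such arguments into a Kafka stream; it is illustrative only and not necessarily the exact approach taken in the book's example:

    // Hedged sketch: create a direct Kafka stream from the parsed
    // broker list, group id, and comma-separated topic list.
    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{ Seconds, StreamingContext }
    import org.apache.spark.streaming.kafka.KafkaUtils

    object KafkaSketch {

      def main( args: Array[String] ): Unit = {
        val brokers = args(0).trim
        val groupid = args(1).trim
        val topics  = args(2).trim

        val conf = new SparkConf().setAppName( "Kafka Sketch" )
        val ssc  = new StreamingContext( conf, Seconds( 10 ) )

        val kafkaParams = Map( "metadata.broker.list" -> brokers,
                               "group.id"             -> groupid )
        val topicSet    = topics.split(",").toSet

        val stream = KafkaUtils.createDirectStream[ String, String,
          StringDecoder, StringDecoder ]( ssc, kafkaParams, topicSet )

        // Print a few message values from each batch
        stream.map( _._2 ).print()

        ssc.start()
        ssc.awaitTermination()
      }

    } // end KafkaSketch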

until terminated with awaitTermination:

    ssc.start()
    ssc.awaitTermination()

  } // end main
} // end stream6

With all of the scripts explained and the Kafka CDH brokers running, it is time to examine the Kafka configuration, which, if you remember, is maintained by Apache ZooKeeper (all of the code samples described so far will be released with the book). I will use the zookeeper-client tool and connect to the ZooKeeper server on the host called hc2r1m2 on port 2181. As you can

Cassandra, and obtain the contents of the edgestore table as an RDD. The size of this RDD is then printed, and the script exits. We won't look at this data at this time, because all that was needed was to prove that a connection to Cassandra could be made:

    val keySpace  = "titan"
    val tableName = "edgestore"

    val cassRDD = sparkCxt.cassandraTable( keySpace, tableName )

    println( "Cassandra Table Rows : " + cassRDD.count )
    println( " >>>>> Script Finished <<<<< " )

  } // end main
} // end spark3_cass
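
For completeness, here is a hedged, self-contained sketch of how such a connection can be set up with the DataStax spark-cassandra-connector; the connection host and application name below are illustrative assumptions rather than values taken from the book:

    // Hypothetical sketch: connect to Cassandra and count the rows of
    // the titan.edgestore table via the spark-cassandra-connector.
    import com.datastax.spark.connector._
    import org.apache.spark.{ SparkConf, SparkContext }

    object CassandraSketch {

      def main( args: Array[String] ): Unit = {
        // The connection host here is an assumption for illustration only
        val conf = new SparkConf()
          .setAppName( "Cassandra Sketch" )
          .set( "spark.cassandra.connection.host", "hc2r1m2" )

        val sparkCxt = new SparkContext( conf )

        val keySpace  = "titan"
        val tableName = "edgestore"

        val cassRDD = sparkCxt.cassandraTable( keySpace, tableName )
        println( "Cassandra Table Rows : " + cassRDD.count )

        sparkCxt.stop()
      }

    } // end CassandraSketch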
