Spark is an in-memory, open source cluster computing system that allows for fast iterative and interactive analytics. Spark utilizes Scala, a type-safe object-oriented language with functional properties that is fully interoperable with Java. For more information about Spark, please refer to http://spark-project.org. To test out Spark, you can install the standalone version on Mac OS X.
The first thing you will need to do is install Scala 2.9.2, as Spark 0.6.1 depends on it. As of this posting, the current version of Scala is 2.10, but there are some issues with Spark 0.6.1 and Scala 2.10 as noted in this thread.
1) A handy way to install Scala is to use Homebrew; please reference Installing Hadoop on OSX Lion (10.7) for more information on how to use Homebrew as well as how to install Hadoop on Mac OS X. It may be handy to install Hadoop so that you can use Spark against HDFS as well.
2) The current Homebrew scala formula installs Scala 2.10, but you will need Scala 2.9.2. A quick way to do this is to modify the scala.rb formula (/usr/local/Library/Formula/scala.rb) so that it installs Scala 2.9.2; see the sketch below.
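A rough sketch of that workflow (the exact contents of the formula vary with your Homebrew version, so treat the details as assumptions): open the formula and point its url, version, and checksum at the Scala 2.9.2 distribution.

    # Open the scala formula in your default editor (equivalent to editing the scala.rb path above)
    brew edit scala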
3) Install Scala via Homebrew by typing the following command in a bash terminal:
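Assuming the formula has been modified as in step 2, the install is the usual Homebrew command:

    brew install scala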
Upon running this command, scala will be located in /usr/local/Cellar/scala
Ensure you have Git for Mac installed (even if you have GitHub for Mac installed, you still need Git so you can run it from the command line).
In my case, I have configured my .profile with the following:
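The exact contents depend on your setup, but a sketch of the relevant .profile entries (the Scala path below assumes the Homebrew Cellar location from the install above) looks something like this:

    # Point SCALA_HOME at the Homebrew-installed Scala and put its binaries on the PATH
    export SCALA_HOME=/usr/local/Cellar/scala/2.9.2
    export PATH=$PATH:$SCALA_HOME/bin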
1) Obtain the pre-built Spark 0.6.1 package at http://spark-project.org/downloads/. The direct link for the prebuilt package is:
2) Unpack the tgz file and place it into the folder where you will install Spark. For example, I placed mine in the Homebrew Cellar location, i.e.
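For instance, the unpack step might look like the following (the tgz filename here is a placeholder; use whatever the direct link above actually downloads):

    # Extract the prebuilt package into the Homebrew Cellar
    tar -xzf spark-0.6.1-prebuilt.tgz -C /usr/local/Cellar/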
Follow the instructions as per the README.md in /usr/local/Cellar/spark-0.6.1
1) Run the Simple Build Tool (SBT) package step from /usr/local/Cellar/spark-0.6.1:
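If memory serves, the Spark 0.6.1 README drives the build through the bundled sbt launcher, along these lines:

    cd /usr/local/Cellar/spark-0.6.1
    # Compile and package Spark using the sbt launcher included in the distribution
    sbt/sbt package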
2) Modify the conf/spark-env.sh
Ensure that the SCALA_HOME variable has been set.
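Spark ships a conf/spark-env.sh.template that can be copied and edited; a minimal sketch, assuming the Homebrew Scala location used earlier:

    cd /usr/local/Cellar/spark-0.6.1
    # Create spark-env.sh from the template if it does not exist yet
    cp conf/spark-env.sh.template conf/spark-env.sh
    # Then add (or uncomment) the SCALA_HOME export inside conf/spark-env.sh:
    # export SCALA_HOME=/usr/local/Cellar/scala/2.9.2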
From here, you can now run the Spark examples. Just in case, source conf/spark-env.sh first to set the Scala environment variables.
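For example, the SparkPi program that ships with Spark can be launched through the run script (a sketch; local[2] is just an example master string):

    cd /usr/local/Cellar/spark-0.6.1
    # Pick up SCALA_HOME and friends, just in case
    . conf/spark-env.sh
    # Run the bundled SparkPi example locally on 2 cores
    ./run spark.examples.SparkPi local[2]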
And to run the Spark shell:
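One way to do this (a sketch; adjust the core count as needed) is via the MASTER environment variable:

    # Start the interactive Spark shell against a local master using 4 cores
    MASTER=local[4] ./spark-shell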
where local indicates standalone mode (vs. EC2, cluster, Mesos, etc.) and [x] is the number of cores.