Installing and Configuring Spark 0.8.0

lam99v 2014-01-03 13:02

 

1. Set environment variables in profile

# Scala and Spark install locations
export SCALA_HOME=/home/hadoop/scala-2.9.3
SPARK_080=/home/hadoop/spark-0.8.0
export SPARK_HOME=$SPARK_080
# Examples jar used by the run-example script
export SPARK_EXAMPLES_JAR=$SPARK_HOME/examples/target/spark-examples_2.9.3-0.8.0-incubating.jar
# Spark assembly jar (built for Scala 2.9.3 against CDH 4.2.0 MR1) on the classpath
export CLASSPATH=$CLASSPATH:$SPARK_HOME/assembly/target/scala-2.9.3:$SPARK_HOME/assembly/target/scala-2.9.3/spark-assembly_2.9.3-0.8.0-incubating-hadoop2.0.0-mr1-cdh4.2.0.jar
# In 0.8.0 the user scripts (spark-shell, run-example) sit in $SPARK_HOME itself,
# the cluster scripts (start-all.sh, ...) in $SPARK_HOME/bin, so both go on PATH
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME
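
After editing the file (whether this goes in /etc/profile or ~/.profile depends on your setup), reload it and sanity-check the result:

source /etc/profile
echo $SPARK_HOME     # should print /home/hadoop/spark-0.8.0
which run-example    # should resolve inside $SPARK_HOME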

2. Set up conf/slaves
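
conf/slaves lists one worker hostname per line; bin/start-all.sh reads it when launching the cluster. A minimal example, assuming workers named kit-b6 through kit-b8 (hostnames here are illustrative):

kit-b6
kit-b7
kit-b8

The same Spark directory and configuration should be present on every node listed here.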

3. Test Spark

Run locally:

run-example org.apache.spark.examples.SparkPi local
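
With the master string local, the example runs in a single worker thread; local[K] uses K threads, e.g.:

run-example org.apache.spark.examples.SparkPi local[4]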

Run on the cluster (after starting all nodes with bin/start-all.sh):

run-example org.apache.spark.examples.SparkPi spark://kit-b5:7077

run-example org.apache.spark.examples.SparkLR spark://kit-b5:7077

run-example org.apache.spark.examples.SparkKMeans spark://kit-b5:7077 ./kmeans_data.txt 2 1

run-example org.apache.spark.examples.SparkKMeans spark://kit-b5:7077 hdfs://kit-b5:8020/kmeans_data.txt 2 1 (same as above, but reading the input from HDFS)
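
In SparkKMeans the arguments after the master URL are the input file, the number of clusters k (2 here), and the convergence threshold (1 here). The input holds one space-separated point per line; a minimal kmeans_data.txt, shaped like the sample file shipped with Spark, could be:

0.0 0.0 0.0
0.1 0.1 0.1
0.2 0.2 0.2
9.0 9.0 9.0
9.1 9.1 9.1
9.2 9.2 9.2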

 

Read a file from HDFS and run WordCount (after starting Hadoop and Spark).
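
The input must first exist in HDFS; assuming Hadoop's bundled README.txt is used (the paths here are illustrative), it can be staged with:

hadoop fs -mkdir /input
hadoop fs -put $HADOOP_HOME/README.txt /input/

Then open a shell against the cluster: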

$ MASTER=spark://kit-b5:7077 spark-shell

scala> val file = sc.textFile("hdfs://kit-b5:8020/input/README.txt")

scala> file.count()  // number of lines in the file

Or run a full word count:

scala> val file = sc.textFile("hdfs://kit-b5:8020/input/README.txt")

scala> val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_+_)  // split lines into words, emit (word, 1), sum per word

scala> count.collect()
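
collect() brings the (word, count) pairs back to the driver and prints them. To persist the result instead, it can be written back to HDFS (the output directory is illustrative and must not already exist):

scala> count.saveAsTextFile("hdfs://kit-b5:8020/output/wordcount")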
