Add the following JVM arg when you launch spark-shell or spark-submit:
-Dspark.executor.memory=6g
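For example, assuming a standard Spark layout and an application jar named myapp.jar (both names here are illustrative), the property can be passed as a driver JVM option or via the equivalent built-in flag:

./bin/spark-shell --driver-java-options "-Dspark.executor.memory=6g"   # pass as a JVM system property
./bin/spark-submit --executor-memory 6g --class MyApp myapp.jar        # or use the built-in flag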
You may also consider explicitly setting the number of workers when you create an instance of SparkContext:
val sc = new SparkContext("local[4]", "MyApp")  // "local[4]" = run locally with 4 workers

Distributed Cluster
Set the slave host names in conf/slaves, one per line:
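For instance, conf/slaves for a two-node cluster might look like this (the host names are placeholders for your own machines):

# A Spark worker will be started on each machine listed below
worker1.example.com
worker2.example.com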