sparkR.init {SparkR}                                        R Documentation

Initialize a new SparkContext

Description:

This function initializes a new SparkContext.

Usage:

sparkR.init(master = "", appName = "SparkR",
            sparkHome = Sys.getenv("SPARK_HOME"),
            sparkEnvir = list(), sparkExecutorEnv = list(),
            sparkJars = "", sparkRLibDir = "")
Arguments:

master            The Spark master URL.

appName           Application name to register with the cluster manager.

sparkHome         Spark home directory.

sparkEnvir        Named list of environment variables to set on worker
                  nodes.

sparkExecutorEnv  Named list of environment variables to be used when
                  launching executors.

sparkJars         Character vector of jar files to pass to the worker
                  nodes.

sparkRLibDir      The path where R is installed on the worker nodes.

sparkRBackendPort The port to use for the SparkR JVM backend.
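Details:

sparkR.init is typically called once per R session; the returned context
is then passed to the other SparkR functions. A minimal sketch of a
typical session (the paths and memory setting are illustrative, and
sparkR.stop is the package's companion function for shutting the context
down):

library(SparkR)

# Point SPARK_HOME at a local Spark installation (illustrative path).
Sys.setenv(SPARK_HOME = "/home/spark")

# Start a local context with two worker threads; the sparkEnvir list
# carries per-application settings (here the executor memory, as in
# the Examples below).
sc <- sparkR.init(master = "local[2]", appName = "SparkR",
                  sparkHome = Sys.getenv("SPARK_HOME"),
                  sparkEnvir = list(spark.executor.memory = "1g"))

# ... use sc ...

# Shut the context down when finished.
sparkR.stop()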
Examples:

## Not run:
##D sc <- sparkR.init("local[2]", "SparkR", "/home/spark")
##D sc <- sparkR.init("local[2]", "SparkR", "/home/spark",
##D                   list(spark.executor.memory = "1g"))
##D sc <- sparkR.init("yarn-client", "SparkR", "/home/spark",
##D                   list(spark.executor.memory = "1g"),
##D                   list(LD_LIBRARY_PATH = "/directory of JVM libraries (libjvm.so) on workers/"),
##D                   c("jarfile1.jar", "jarfile2.jar"))
## End(Not run)
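A running context generally cannot be reconfigured in place; to change
settings, stop the current context and initialize a new one (a sketch
following the same conventions as above):

## Not run:
##D sparkR.stop()
##D sc <- sparkR.init("local[4]", "SparkR", "/home/spark")
## End(Not run)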