I have an RStudio driver instance which is connected to a Spark cluster. I wanted to know if there is any way to connect to the Spark cluster from RStudio using an external configuration file which can specify the number of executors, memory, and other Spark parameters. I know we can do it using the command below:
sparkR.session(sparkConfig = list(spark.cores.max = '2', spark.executor.memory = '8g'))
I am specifically looking for a method which takes the Spark parameters from an external file to start the SparkR session.
Spark uses a standardized configuration layout, with spark-defaults.conf used for specifying configuration options. This file should be located in one of the following directories:
SPARK_HOME/conf
SPARK_CONF_DIR
All you have to do is configure the SPARK_HOME or SPARK_CONF_DIR environment variable and put your configuration there.
Each Spark installation comes with template files you can use as inspiration.
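For example, assuming SPARK_CONF_DIR points at a directory containing a spark-defaults.conf with the settings from the question (the paths below are placeholders), a plain sparkR.session() call will pick them up without any sparkConfig argument. A minimal sketch:

# contents of /path/to/conf/spark-defaults.conf:
#   spark.cores.max        2
#   spark.executor.memory  8g

Sys.setenv(SPARK_CONF_DIR = "/path/to/conf")  # or set SPARK_HOME and use $SPARK_HOME/conf
library(SparkR)
sparkR.session()  # executor and memory settings come from spark-defaults.conf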
I have the following problem. I have a data pipeline at work that transforms raw data and loads it into a cloud database, for various projects. There are Python scripts for the project-based transformations, but everything must be done manually (defining the transformer's project-based inputs, running the transformer, loading the data).
I want to automate this process with Airflow. I created the above steps as tasks in Python. The Airflow instance is running on a machine which must reach a network drive where the raw data and the transformer scripts are located. The required connection type is Samba.
I managed to connect to the drive and create a SambaHook object:
samba_file_share: Final[object] = SambaHook(connection_id, file_share_name)
In one task, I need to call and run the transformer script. With a former solution (without Samba) I used Popen, which worked fine. However, I must use Samba now, and I face the following problem.
I get the path of the transformer script by reading the root folder of the file share from the Samba object and joining the transformer's path to it:
samba_file_share._join_path(transformer_path)
If I print this out, the path is correct and the network drive is available. But if I feed it to Popen as a string (or as a byte string or a path-like object), I get the error "No such file or directory".
Can anyone help with this? How can I feed the path to Popen to run the script, or should I use something other than Popen to run it? The Samba documentation is very sparse; I could not find anything about this there so far.
Thanks,
Marci
This automated Airflow solution works perfectly if I connect from a machine that can directly access the network drive.
However, that is only for development; in production it must run on another machine which has no direct access to the drive. I must use Samba to connect to it, and that breaks everything.
I am working with two Linux clusters which share the same file system. Because of that, when I install libraries on one of the clusters, they get installed in the same folder (/home/R), shared by both clusters, which causes conflicts if I later work on the other cluster.
Do you know if there is any environment variable or even any hidden R config I could use, so that, upon starting R (or RStudio) on one cluster, it could detect the cluster and the corresponding path for the libraries' location (for instance /home/R/cluster1 and /home/R/cluster2)?
Thanks.
Yes, it should be pretty straightforward. Create an Rprofile.site file (see the Initialization at Startup docs for where this goes). In that file, you can write R code to detect which cluster you're on.
Once you know which cluster you're on, use the .libPaths() function (see libPaths docs) to change the library path.
R will run the Rprofile.site file every time a new session starts up, so each session should get its library path adjusted appropriately for the cluster it's on.
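As a rough sketch of such an Rprofile.site, assuming the clusters can be distinguished by hostname (the hostname pattern and library paths below are placeholders):

# Rprofile.site -- sourced at the start of every R session
host <- Sys.info()[["nodename"]]

lib <- if (grepl("^cluster1", host)) {
  "/home/R/cluster1"
} else {
  "/home/R/cluster2"
}

# make sure the directory exists, then put it first on the library search path
dir.create(lib, recursive = TRUE, showWarnings = FALSE)
.libPaths(c(lib, .libPaths()))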
I'm pretty new to cluster computing, so not sure if this is even possible.
I am successfully creating a spark_context in RStudio (using sparklyr) to connect to our local Spark cluster. Using copy_to I can upload data frames from R to Spark, but I am trying to upload a locally stored CSV file directly to the Spark cluster using spark_read_csv, without importing it into the R environment first (it's a big 5 GB file). It's not working (even when prefixing the location with file:///), and it seems that it can only read files that are ALREADY stored on the cluster.
How do I upload a local file directly to Spark without loading it into R first?
Any tips appreciated.
You cannot. The file has to be reachable from each machine in your cluster, either as a local copy or placed on a distributed file system / object storage.
You can load the files into Spark by using the spark_read_csv() method; just make sure to pass the path properly.
Note: it is not necessary to load the data into the R environment first.
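A minimal sketch of that usage (master URL, table name, and path are placeholders); the key point is that the path must be visible to the cluster, so a file that exists only on your laptop still has to be copied to HDFS/object storage or to every worker first:

library(sparklyr)
sc <- spark_connect(master = "spark://mymaster:7077")

# the path must be reachable from the executors, e.g. HDFS or a shared mount
flights_tbl <- spark_read_csv(sc, name = "flights", path = "hdfs:///data/flights.csv")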
We have successfully gone through all the SparkR tutorials about setting it up and running basic programs in RStudio on an EC2 instance.
What we can't figure out now is how to then create a project with SparkR as a dependency, compile/jar it, and run any of the various R programs within it.
We're coming from Scala and Java, so we may be coming at this with the wrong mindset. Is this even possible in R, or is it done differently than Java's build files and jars, or do you just have to run each R script individually without a packaged jar?
do you just have to run each R script individually without a packaged jar?
More or less. While you can create an R package (or packages) to store reusable parts of your code (see for example devtools::create or R Packages) and optionally distribute it over the cluster (since the current public API is limited to high-level interactions with the JVM backend, it shouldn't be required), what you pass to spark-submit is simply a single R script which does the following (a minimal sketch follows the list):
creates a SparkContext - SparkR::sparkR.init
creates a SQLContext / HiveContext - SparkR::sparkRSQL.init / SparkR::sparkRHive.init
executes the rest of your code
stops SparkContext - SparkR::sparkR.stop
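Put together, such a driver script might look roughly like this (SparkR 1.x-style API, as referenced above; the app name and input path are placeholders):

# job.R -- submitted with: spark-submit job.R
library(SparkR)

sc <- sparkR.init(appName = "my-sparkr-job")  # master is taken from spark-submit
sqlContext <- sparkRSQL.init(sc)

# the actual work
df <- read.df(sqlContext, "hdfs:///data/input.json", source = "json")
printSchema(df)

sparkR.stop()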
Assuming that external dependencies are present on the workers, missing packages can be installed at runtime using an if-not-require pattern, for example:
if(!require("some_package")) install.packages("some_package")
or
if(!require("some_package")) devtools::install_github("some_user/some_package")
I have a small cluster with 3 machines, and another machine for developing and testing. When developing, I set SparkContext to local. When everything is OK, I want to deploy the jar file I build to every node. Basically, I manually move this jar to the cluster and copy it to HDFS, which is shared by the cluster. Then I could change the code to:
// standalone mode
val sc = new SparkContext(
  "spark://mymaster:7077",
  "Simple App",
  "/opt/spark-0.9.1-bin-cdh4",  // Spark home
  List("hdfs://namenode:8020/runnableJars/SimplyApp.jar")  // jar location
)
to run it in my IDE. My question: is there an easier way to move this jar to the cluster?
In Spark, the program creating the SparkContext is called 'the driver'. It's sufficient that the jar file with your job is available to the local file system of the driver in order for it to pick it up and ship it to the master/workers.
Concretely, your config will look like:
// favor using SparkConf to configure your SparkContext
val conf = new SparkConf()
  .setMaster("spark://mymaster:7077")
  .setAppName("SimpleApp")
  .set("spark.local.ip", "172.17.0.1")
  .setJars(Array("/local/dir/SimplyApp.jar"))
val sc = new SparkContext(conf)
Under the hood, the driver will start a server where the workers will download the jar file(s) from the driver. It's therefore important (and often an issue) that the workers have network access to the driver. This can often be ensured by setting 'spark.local.ip' on the driver in a network that's accessible/routable from the workers.