R connecting to EC2 instance for parallel processing
I am having trouble initialising a connection to an AWS EC2 instance from R, as I keep getting the error: Permission denied (publickey). I am currently using Mac OS X 10.6.8 as my OS.
The code that I try to run in the terminal ($) and then R (>) is as follows:
$ R --vanilla
> require(snowfall)
> sfInit(parallel=TRUE,socketHosts =list("ec2-xx-xxx-xx-xx.zone.compute.amazonaws.com"))
Permission denied (publickey)
But weirdly, when trying to ssh into the instance I don't need a password, as I had already imported the public key into the instance upon initialization (I think).
So from my normal terminal, when running
$ ssh ubuntu@ec2-xx-xxx-xx-xx.zone.compute.amazonaws.com
it automatically connects... (so I'm not 100% sure if it's a password-less issue like in Using snow (and snowfall) with AWS for parallel processing in R).
I have tried looking through a fair amount of the material on keys etc., but none of it seems to be making much of a difference. Also, my ~/.ssh/authorized_keys is a folder rather than a file for some reason, and I can't access it even when trying sudo cd .ssh/authorized_keys; in terms of permissions it has drw-------.
The end goal is to connect to a lot of EC2 instances and use foreach to carry out some parallel processing... but connecting to one for now would be nice. Also, I would like to use my own AMI, so StarCluster isn't really what I am looking for (unless I am able to use private AMIs and run all commands privately...).
Also, if doRedis is better, then if someone could show me how one would connect to the EC2 instance from a local machine, that would be good too...
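For reference, the kind of doRedis setup I imagine (going from the doRedis docs; the queue name and host are placeholders, and it assumes a redis-server is already running on the instance with port 6379 reachable) would be roughly:
# on the EC2 instance, start one or more workers pointed at the local redis-server:
#   R -e 'doRedis::redisWorker("jobs", host = "localhost")'
# then from the local machine:
library(foreach)
library(doRedis)
registerDoRedis("jobs", host = "ec2-xx-xxx-xx-xx.zone.compute.amazonaws.com")
res <- foreach(i = 1:4) %dopar% system("ifconfig", intern = TRUE)[2]
removeQueue("jobs")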
EDIT
I have managed to deal with the ssh password-less login using the parallel package's makePSOCKcluster, as shown in R and makePSOCKcluster EC2 socketConnection... but I am now coming across socketConnection issues, as shown in the question in the link...
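For reference, this is roughly the kind of call I am attempting (a sketch only: the hostname, key path, and master address are placeholders, and the chosen port has to be reachable from the instance back to my machine):
library(parallel)
cl1 <- makePSOCKcluster(
  "ec2-xx-xxx-xx-xx.zone.compute.amazonaws.com",
  user    = "ubuntu",
  rshcmd  = "ssh -i ~/.ssh/my-ec2-key.pem",  # the same key that gives the password-less ssh login
  master  = "xx.xx.xx.xx",                   # public address of my local machine, as seen from EC2
  port    = 10187,                           # this TCP port must be open from the instance back to the master
  outfile = ""                               # echo worker output locally, useful for debugging
)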
Any ideas how to connect to it?
Also, proof that everything is working would, I guess, mean that the following command/function works to get all the different IP addresses:
d <- parLapply(cl1, 1:length(cl1), function(x) system("ifconfig", intern = TRUE)[2])
where cl1 is the output of the make*cluster function
NOTE: since the bounty is really for the question in the link, I don't mind which question you post an answer to... but so long as something is written on this question that links it to the correct answer on the linked question, I will award the points accordingly.
I had quite a few issues with parallel EC2 setup too when trying to keep the master node local. Using StarCluster to set up the pool helped greatly, but the real improvement came from using StarCluster and having the master node within the EC2 private IP pool.
StarCluster sets up all of the key handling for all the nodes, as well as any mounts used. Dynamic node allocation wasn't doable, but unless spot instances are going to be used long term and your bidding strategy doesn't 'keep' your instances, dynamic allocation shouldn't be an issue.
Some other lessons learned:
Create a variable containing the private IPs to pass to createCluster, and export it, so that when you need to restart with the same nodes it is easier (see the sketch after this list).
Have the master node run byobu and set it up for R session logging.
Running RStudio server on the master can be very helpful at times, but should be a different AMI than the slave nodes. :)
Have the control script offload data (.rda files) to a path that is remotely monitored for new files, which are then downloaded automatically.
Use htop to monitor the slaves so you can easily see the instances and determine script requirements (memory/cpu/scalability).
Make use of processor hyper-threading enable/disable scripts.
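For the first point above, a minimal sketch of what I mean (the node names follow the StarCluster nodeXXX convention used further down; the save path on the shared volume is just an example):
# keep the node list in a variable and persist it on the shared volume, so a
# restarted master session can rebuild the same cluster without retyping IPs
cluster.nodes <- c("localhost", paste("node00", 1:7, sep = ""))  # or the private IPs
workers.list  <- rep(cluster.nodes, 8)                           # one entry per worker slot
save(workers.list, file = "/rdata/workers.list.rda")             # load() this after a restart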
I had quite a bit of an issue with the slave connections and serialize/unserialize. One of the culprits was the connection limit, and the connection limit needed to be reduced by the number of nodes. When the control script was stopped, the easiest method of cleanup was restarting the master R session and using a script to kill the slave processes instead of waiting for the timeout.
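A rough sketch of the kind of cleanup script I mean (a hypothetical helper; the pkill pattern is an assumption and depends on how the workers were launched):
# ssh to each node and kill leftover R worker processes rather than waiting
# for the socket timeout; adjust the pattern to match your worker processes
kill.slaves <- function(nodes) {
  for (host in setdiff(unique(nodes), "localhost")) {
    system(paste("ssh", host, "'pkill -f Rscript'"), wait = TRUE)
  }
}
# e.g. kill.slaves(workers.list)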
It did take a bit of work to set up, but hopefully these thoughts help...
Although it was 8 months ago, and both StarCluster and R have changed since, here's some of how it was set up... You'll find 90% of this in the StarCluster docs.
Set up the .starcluster/config AWS and key-pair sections based on the security info from the AWS console.
Define the [smallcluster]
key-name
availability-zone
Define a cluster template extending [smallcluster], using AMIs based on the StarCluster 64-bit HVM AMI. Instead of creating new public AMI instances, I just saved a configured instance (with all the tools I needed) and used that as the AMI.
Here's an example of one...
[cluster Rnodes2]
EXTENDS = smallcluster
MASTER_INSTANCE_TYPE = cc1.4xlarge
MASTER_IMAGE_ID = ami-7621f91f
NODE_INSTANCE_TYPE = cc2.8xlarge
NODE_IMAGE_ID = ami-7621f91f
CLUSTER_SIZE = 8
VOLUMES = rdata
PLUGINS = pkginstaller
SPOT_BID = 1.00
Set up the shared volume; this is where the screen/byobu logs, the main .R script checkpoint output, shared R data, and the source for the production package live. It was monitored for new files in a child path called export, so that if the cluster or control script died/abended, at most a bounded number of records would be lost and need to be re-calculated.
After creating the shared volume, the definition was simply:
[volume rdata]
VOLUME_ID = vol-1145497c
MOUNT_PATH = /rdata
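As a sketch of the monitoring just described (a hypothetical helper, not part of the original setup; paths and the master host are placeholders), something along these lines can pull new export files down to a local machine:
# poll the cluster's export path and copy new .rda checkpoint files locally;
# relies on rsync/ssh already being configured for the master host
watch.exports <- function(master, remote = "/rdata/export/", local = "~/rdata-export/",
                          interval = 60) {
  repeat {
    system(paste0("rsync -az -e ssh ubuntu@", master, ":", remote, " ", local))
    Sys.sleep(interval)
  }
}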
The package installer plugin ensured the latest (and identical) R version on all nodes:
[plugin pkginstaller]
setup_class = starcluster.plugins.pkginstaller.PackageInstaller
packages = r-base, r-base-dev, r-recommended
Lastly, access permissions for both ssh and RStudio server. HTTPS via a proxy would be safer, but since RStudio was only used for the control script setup...
[permission ssh]
# protocol can be: tcp, udp, or icmp
protocol = tcp
from_port = 22
to_port = 22
[permission http]
protocol = tcp
from_port = 8787
to_port = 8787
Then start up a cluster using the StarCluster interface. It handles all of the access controls, system names, shares, etc. Once the cluster was running, I opened an ssh session into each node from my local system and ran a script to stop hyper-threading:
#!/bin/sh
# disable hyperthreading
for cpunum in $(
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list |
cut -s -d, -f2- | tr ',' '\n' | sort -un); do
echo 0 > /sys/devices/system/cpu/cpu$cpunum/online
done
Then I started an htop session on each node for monitoring scalability against the exported checkpoint logs.
Then I logged into the master, started a screen session (I've since come to prefer byobu), and fired up R from within the StarCluster-mounted volume. That way, when the cluster stopped for some reason, I could easily set up again just by starting R. Once in R, the first thing was to create a workers.list variable using the nodeXXX names, which was simply something along the lines of:
cluster.nodes <- c("localhost", paste("node00", 1:7, sep='' ) )
workers.list <- rep( cluster.nodes, 8 )
Then I loaded the control script, quit, and saved the workspace. The control script handled all of the table output for exporting and checkpointing, and the par-wrapped calls to the production package. The main function of the script also took a cpus argument, which is where the workers list was placed; it was then passed as cores to the cluster initializer.
initialize.cluster <- function( cores )
{
if( exists( 'cl' ) ) stopCluster( cl )
print("Creating Cluster")
cl <- makePSOCKcluster( cores )
print("Cluster created.")
assign( 'cl', cl, envir=.GlobalEnv )
print( cl )
# All workers need to have the bounds generator functions...
clusterEvalQ( cl, require('scoreTarget') )
# All workers need to have the production script and package.
clusterExport( cl, varlist=list('RScoreTarget', 'scoreTarget'))
return ( cl )
}
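For completeness, a usage sketch of the initializer with the workers list built earlier (this assumes the scoreTarget package and the RScoreTarget object are already loaded on the master, as they are in the control script); the parLapply call mirrors the check from the question:
cl1 <- initialize.cluster(workers.list)
d   <- parLapply(cl1, seq_along(cl1), function(x) system("ifconfig", intern = TRUE)[2])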
Once the R session was restarted (after initially creating the workers.list), the control script was sourced and the main function called. That was it. With this setup, if the cluster ever stopped, I'd just quit the R session on the main host, stop the slave processes via htop on each of the slaves, and start up again.
Here's an example of it in action:
R
R version 2.15.0 (2012-03-30)
Copyright (C) 2012 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
[Previously saved workspace restored]
> source('/rdata/buildSatisfactionRangeTable.R')
Loading required package: data.table
data.table 1.7.7 For help type: help("data.table")
Loading required package: parallel
Loading required package: scoreTarget
Loading required package: Rcpp
> ls()
[1] "build.satisfaction.range.table" "initialize.cluster"
[3] "initialize.table" "parallel.choices.threshold"
[5] "rolled.lower" "rolled.upper"
[7] "RScoreTarget" "satisfaction.range.table"
[9] "satisfaction.search.targets" "search.range.bound.offsets"
[11] "search.range.bounds" "search.range.center"
[13] "Search.Satisfaction.Range" "update.bound.offset"
[15] "workers.list"
> workers.list
[1] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost"
[7] "localhost" "localhost" "node001" "node002" "node003" "node004"
[13] "node005" "node006" "node007" "node001" "node002" "node003"
[19] "node004" "node005" "node006" "node007" "node001" "node002"
[25] "node003" "node004" "node005" "node006" "node007" "node001"
[31] "node002" "node003" "node004" "node005" "node006" "node007"
[37] "node001" "node002" "node003" "node004" "node005" "node006"
[43] "node007" "node001" "node002" "node003" "node004" "node005"
[49] "node006" "node007" "node001" "node002" "node003" "node004"
[55] "node005" "node006" "node007" "node001" "node002" "node003"
[61] "node004" "node005" "node006" "node007" "node001" "node002"
[67] "node003" "node004" "node005" "node006" "node007" "node001"
[73] "node002" "node003" "node004" "node005" "node006" "node007"
[79] "node001" "node002" "node003" "node004" "node005" "node006"
[85] "node007" "node001" "node002" "node003" "node004" "node005"
[91] "node006" "node007" "node001" "node002" "node003" "node004"
[97] "node005" "node006" "node007" "node001" "node002" "node003"
[103] "node004" "node005" "node006" "node007" "node001" "node002"
[109] "node003" "node004" "node005" "node006" "node007" "node001"
[115] "node002" "node003" "node004" "node005" "node006" "node007"
> build.satisfaction.range.table(500000, FALSE, workers.list )
[1] "Creating Cluster"
[1] "Cluster created."
socket cluster with 120 nodes on hosts ‘localhost’, ‘node001’, ‘node002’, ‘node003’, ‘node004’, ‘node005’, ‘node006’, ‘node007’
Parallel threshold set to: 11000
Starting at: 2 running to: 5e+05 :: Sat Apr 14 22:21:05 2012
If you have read down to here, then you may be interested to know that I tested each cluster setup I could (including openMPI) and found that there wasn't a speed difference; perhaps that is because my calculations were so CPU-bound, perhaps not.
Also, don't give up even though it can be a pain to get going with HPC; it can be totally worth it. I would still be waiting to complete the first 100,000 iterations of the calculations I was running had I stuck with a naive implementation in base R on a commodity workstation (well, not really, as I would never have stuck with R :D ). With the cluster, 384,000 iterations completed in under a week. Totally worth the time (and it took a lot of it) to set up.
Related
How to make Julia PkgServer.jl work offline
I work mostly on offline machines and really want to begin to migrate from Python to Julia. Currently the biggest problem I face is how can I setup a package server on my own network that does not have access to the internet. I can copy files to the offline computer/network and want to be able to just cache a good percentage of the Julia Package Ecosystem and copy it to my network, so that I and others can install packages as needed. I have experimented with PkgSever.jl by using the deployment docker-compose script they have, then just installing a long list of packages so that the PkgServer instance would cache everything. Next took the PkgServer machine offline and attempted to install packages from it. This worked well, however when I restarted the docker container the server was running in, everything fell apart quickly. It seems that maybe the PkgServer needs to be able to talk to the Storage Server at least once before being able to serve packages. I tried setting: JULIA_PKG_SERVER_STORAGE_SERVERS from: "https://us-east.storage.juliahub.com,https://kr.storage.juliahub.com" to: "" but that failed miserably. Can someone please point me in the right direction. TIA It looks like the PkgServer is actually trying to contact the Registry before it starts. I don't know enough about the registry stuff enough to know if there is a way to hack this to look locally or just ignore this.. pkgserver_1 | ERROR: LoadError: DNSError: kr.storage.juliahub.com, temporary failure (EAI_AGAIN) pkgserver_1 | Stacktrace: pkgserver_1 | [1] getalladdrinfo(::String) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/Sockets/src/addrinfo.jl:112 pkgserver_1 | [2] getalladdrinfo at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/Sockets/src/addrinfo.jl:121 [inlined] pkgserver_1 | [3] getconnection(::Type{TCPSocket}, ::SubString{String}, ::String; keepalive::Bool, connect_timeout::Int64, kw::Base.Iterators.Pairs{Symbol,Union{Nothing, Bool},NTuple{4,Symbol},NamedTuple{(:require_ssl_verification, :iofunction, :reached_redirect_limit, :status_exception),Tuple{Bool,Nothing,Bool,Bool}}}) at /depot/packages/HTTP/IAI92/src/ConnectionPool.jl:630 pkgserver_1 | [4] #getconnection#29 at /depot/packages/HTTP/IAI92/src/ConnectionPool.jl:682 [inlined] pkgserver_1 | [5] newconnection(::HTTP.ConnectionPool.Pod, ::Type{T} where T, ::SubString{String}, ::SubString{String}, ::Int64, ::Bool, ::Int64; kw::Base.Iterators.Pairs{Symbol,Union{Nothing, Bool},Tuple{Symbol,Symbol,Symbol},NamedTuple{(:iofunction, :reached_redirect_limit, :status_exception),Tuple{Nothing,Bool,Bool}}}) at /depot/packages/HTTP/IAI92/src/ConnectionPool.jl:597 pkgserver_1 | [6] getconnection(::Type{HTTP.ConnectionPool.Transaction{MbedTLS.SSLContext}}, ::SubString{String}, ::SubString{String}; connection_limit::Int64, pipeline_limit::Int64, idle_timeout::Int64, reuse_limit::Int64, require_ssl_verification::Bool, kw::Base.Iterators.Pairs{Symbol,Union{Nothing, Bool},Tuple{Symbol,Symbol,Symbol},NamedTuple{(:iofunction, :reached_redirect_limit, :status_exception),Tuple{Nothing,Bool,Bool}}}) at /depot/packages/HTTP/IAI92/src/ConnectionPool.jl:541 pkgserver_1 | [7] request(::Type{HTTP.ConnectionRequest.ConnectionPoolLayer{HTTP.StreamRequest.StreamLayer{Union{}}}}, ::HTTP.URIs.URI, ::HTTP.Messages.Request, ::Array{UInt8,1}; proxy::Nothing, socket_type::Type{T} where T, reuse_limit::Int64, kw::Base.Iterators.Pairs{Symbol,Union{Nothing, Bool},Tuple{Symbol,Symbol,Symbol},NamedTuple{(:iofunction, :reached_redirect_limit, 
:status_exception),Tuple{Nothing,Bool,Bool}}}) at /depot/packages/HTTP/IAI92/src/ConnectionRequest.jl:73 pkgserver_1 | [8] (::Base.var"#56#58"{Base.var"#56#57#59"{ExponentialBackOff,HTTP.RetryRequest.var"#2#3"{Bool,HTTP.Messages.Request},typeof(HTTP.request)}})(::Type{T} where T, ::Vararg{Any,N} where N; kwargs::Base.Iterators.Pairs{Symbol,Union{Nothing, Bool},Tuple{Symbol,Symbol,Symbol},NamedTuple{(:iofunction, :reached_redirect_limit, :status_exception),Tuple{Nothing,Bool,Bool}}}) at ./error.jl:301 pkgserver_1 | [9] #request#1 at /depot/packages/HTTP/IAI92/src/RetryRequest.jl:44 [inlined] pkgserver_1 | [10] request(::Type{HTTP.MessageRequest.MessageLayer{HTTP.RetryRequest.RetryLayer{HTTP.ConnectionRequest.ConnectionPoolLayer{HTTP.StreamRequest.StreamLayer{Union{}}}}}}, ::String, ::HTTP.URIs.URI, ::Array{Pair{SubString{String},SubString{String}},1}, ::Array{UInt8,1}; http_version::VersionNumber, target::String, parent::Nothing, iofunction::Nothing, kw::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol,Symbol},NamedTuple{(:reached_redirect_limit, :status_exception),Tuple{Bool,Bool}}}) at /depot/packages/HTTP/IAI92/src/MessageRequest.jl:51 pkgserver_1 | [11] request(::Type{HTTP.BasicAuthRequest.BasicAuthLayer{HTTP.MessageRequest.MessageLayer{HTTP.RetryRequest.RetryLayer{HTTP.ConnectionRequest.ConnectionPoolLayer{HTTP.StreamRequest.StreamLayer{Union{}}}}}}}, ::String, ::HTTP.URIs.URI, ::Array{Pair{SubString{String},SubString{String}},1}, ::Array{UInt8,1}; kw::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol,Symbol},NamedTuple{(:reached_redirect_limit, :status_exception),Tuple{Bool,Bool}}}) at /depot/packages/HTTP/IAI92/src/BasicAuthRequest.jl:28 pkgserver_1 | [12] request(::Type{HTTP.RedirectRequest.RedirectLayer{HTTP.BasicAuthRequest.BasicAuthLayer{HTTP.MessageRequest.MessageLayer{HTTP.RetryRequest.RetryLayer{HTTP.ConnectionRequest.ConnectionPoolLayer{HTTP.StreamRequest.StreamLayer{Union{}}}}}}}}, ::String, ::HTTP.URIs.URI, ::Array{Pair{SubString{String},SubString{String}},1}, ::Array{UInt8,1}; redirect_limit::Int64, forwardheaders::Bool, kw::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol},NamedTuple{(:status_exception,),Tuple{Bool}}}) at /depot/packages/HTTP/IAI92/src/RedirectRequest.jl:24 pkgserver_1 | [13] request(::String, ::String, ::Array{Pair{SubString{String},SubString{String}},1}, ::Array{UInt8,1}; headers::Array{Pair{SubString{String},SubString{String}},1}, body::Array{UInt8,1}, query::Nothing, kw::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol},NamedTuple{(:status_exception,),Tuple{Bool}}}) at /depot/packages/HTTP/IAI92/src/HTTP.jl:314 pkgserver_1 | [14] #get#12 at /depot/packages/HTTP/IAI92/src/HTTP.jl:391 [inlined] pkgserver_1 | [15] get_registries(::String) at /app/src/resource.jl:21 pkgserver_1 | [16] update_registries() at /app/src/resource.jl:130 pkgserver_1 | [17] start(; kwargs::Base.Iterators.Pairs{Symbol,Any,Tuple{Symbol,Symbol,Symbol},NamedTuple{(:listen_addr, :storage_root, :storage_servers),Tuple{Sockets.InetAddr{IPv4},String,Array{SubString{String},1}}}}) at /app/src/PkgServer.jl:88 pkgserver_1 | [18] top-level scope at /app/bin/run_server.jl:43 pkgserver_1 | [19] include(::Function, ::Module, ::String) at ./Base.jl:380 pkgserver_1 | [20] include(::Module, ::String) at ./Base.jl:368 pkgserver_1 | [21] exec_options(::Base.JLOptions) at ./client.jl:296 pkgserver_1 | [22] _start() at ./client.jl:506 pkgserver_1 | in expression starting at /app/bin/run_server.jl:43 This might be helpful but I'm not sure, yet, how to get it started LocalRegistry.jl
Here is a solution that seems to work, based on LocalPackageServer. Preliminary steps Install all required packages. You can either put them in your default environment (e.g. #v1.5) or in a dedicated project. LocalRegistry LocalPackageServer In order to use LocalPackageServer, we'll need to set up a local registry, even though we won't really use it (but it can still be handy if you also have to serve local packages). Something like this should create an empty local registry as local-registry.gitt in the current folder: # Create an empty (bare) git repository to host the registry run(`git init --bare local-registry.git`) # Populate the repository with a new, empty local registry using LocalRegistry Base.Filesystem.mktempdir() do dir create_registry(joinpath(dir, "local-registry"), abspath("local-registry.git"), description = "(unused) local registry", push=true) end Step 1a - run the local package server (online) A script like the following should run a local package server listening on http://localhost:8000. ################# # run_server.jl # ################# using LocalPackageServer config = LocalPackageServer.Config(Dict( # Server parameters "host" => "127.0.0.1", "port" => 8000, "pkg_server" => "https://pkg.julialang.org", # This is where persistent data will be stored # (I use the current directory here; adjust to your constraints) "local_registry" => abspath("local-registry.git"), # In accordance with the preliminary step above "cache_dir" => abspath("cache"), "git_clones_dir" => abspath("data"), )) # The tricky part: arrange for the server to never update its registries # when it is offline if get(ENV, "LOCAL_PKG_SERVER_OFFLINE", "0") == "1" #info "Running offline => no registry updates" config.min_time_between_registry_updates = typemax(Int) end # Start the server LocalPackageServer.start(config) Use this script to run the server online first: shell$ julia --project run_server.jl Step 1b - cache some packages (online) Configure a Julia process to use your local server, and install the packages you want to cache: # Take care to specify the "http://" protocol # (otherwise https might be used by the client, and the server won't answer) shell$ JULIA_PKG_SERVER=http://localhost:8000 julia julia> using Pkg julia> pkg"add Example" [...] At this point, the server should be caching things. Step 2 - run the package server (offline) When you're offline, simply restart the server, ensuring it won't try to update the registries: shell$ LOCAL_PKG_SERVER_OFFLINE=1 julia --project run_server.jl As before, set JULIA_PKG_SERVER as needed for Julia clients to use the local server; they should now be able to install packages, provided that the project environments resolve to the exact same dependency versions that were cached. (You might want to resolve and instantiate your project environments online, and then transfer the relevant manifests to offline systems: this might help guarantee the consistency between the package server cache and what clients ask for.)
Multi Node H2O cluster in R not detecting other EC2 instances
I have been struggling to get a Multi Node H2O cluster up and running using AWS EC2 instances. I have followed the advice from this thread, but still struggle with the nodes not seeing each other. The EC2 instances all use the same AMI that I have pre-built, so the same h2o.jar file is on all of them, I have also tried the following troubleshooting advice: Name cluster -name Rather use -network flag Open port 54321 on security group as 0.0.0.0 Here are my steps: 1) Start AWS EC2 in same availability zone and get private IPs and network cidr (172.31.0.0/20). Put ip addresses into flatfile.txt 172.31.8.210:54321 172.31.9.207:54321 172.31.13.136:54321 2) Copy the flatfile.txt to all servers to which I want to connect as nodes and start H2O # cluster_run library(h2oEnsemble) library(ssh) ips <- gsub("(.*):.*", "\\1", readLines("flatfile.txt")) start_cluster <- function(ip){ # Copy flatfile across session <- ssh_connect(paste0("ubuntu#", ip), keyfile = "mykey.pem") scp_upload(session, "flatfile.txt") # Ensure no h2o instance is already running out <- ssh_exec_wait(session, "sudo pkill java") # Start H2O cluster cmd <- gsub("\\s+", " ", paste0("ssh -i mykey.pem -o 'StrictHostKeyChecking no' ubuntu#", ip, " 'java -Xmx20g -jar /home/rstudio/R/x86_64-pc-linux-gnu-library/3.5/h2o/java/h2o.jar -name mycluster -network 172.31.0.0/20 -flatfile flatfile.txt -port 54321 &'")) system(cmd, wait = FALSE) } start_cluster(ips[3]) start_cluster(ips[2]) start_cluster(ips[1]) 3) Once this has been done, I now want to connect R to my new Multi Node cluster h2o.init(startH2O = F) h2o.shutdown(prompt = FALSE) This is where I see that the nodes aren't being picked up: I have also seen that when I start the H2O cluster on the different nodes, it isnt picking up the other machines within the network:
You need to add port 54321+1 (so 54322) to the security group as well; the internal communication goes through 54322. (I would also specify /16 for -network because it's easier for other people to understand. For example, even if you are sure /20 is technically correct for your network setup, I can't easily be sure. :-) ) Depending on the actual network setup, you probably don't need the -network flag at all. Your instances probably only have one interface.
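As a hedged illustration (not from the original answer; the IP is one of the flatfile addresses above), once both ports are open you can point an R session that can reach the private network at the already-running cluster instead of starting a new local instance:
# connect to the existing cluster and check that all three nodes joined the cloud
library(h2o)
h2o.init(ip = "172.31.8.210", port = 54321, startH2O = FALSE)
h2o.clusterStatus()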
Meaning of Julia error: ERROR (unhandled task failure): EOFError: read end of file?
Trying to understand following error in a pmap execution of some embarrassingly parallel tasks. Running on Linux server. might be occurring while writing to HDF (part of parallel call), but I don't think so given stacktrace doesn't point to line in the user-function being executed, and reference to TCP suggests it's part of parallel calls. Has happened in several sequential runs, so not one-time fluke. Worker 139 terminated. ERROR (unhandled task failure): EOFError: read end of file Stacktrace: [1] unsafe_read(::Base.AbstractIOBuffer{Array{UInt8,1}}, ::Ptr{UInt8}, ::UInt64) at ./iobuffer.jl:105 [2] unsafe_read(::TCPSocket, ::Ptr{UInt8}, ::UInt64) at ./stream.jl:752 [3] unsafe_read(::TCPSocket, ::Base.RefValue{NTuple{4,Int64}}, ::Int64) at ./io.jl:361 [4] read at ./io.jl:363 [inlined] [5] deserialize_hdr_raw at ./distributed/messages.jl:170 [inlined] [6] message_handler_loop(::TCPSocket, ::TCPSocket, ::Bool) at ./distributed/process_messages.jl:157 [7] process_tcp_streams(::TCPSocket, ::TCPSocket, ::Bool) at ./distributed/process_messages.jl:118 [8] (::Base.Distributed.##99#100{TCPSocket,TCPSocket,Bool})() at ./event.jl:73 Julia info: julia> versioninfo() Julia Version 0.6.0 Commit 9036443 (2017-06-19 13:05 UTC) Platform Info: OS: Linux (x86_64-pc-linux-gnu) CPU: Intel(R) Xeon(R) CPU E5620 # 2.40GHz WORD_SIZE: 64 BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem) LAPACK: libopenblas64_ LIBM: libopenlibm LLVM: libLLVM-3.9.1 (ORCJIT, westmere) [EDIT: more info] Also, if it's helpful, this appears to be happening well into the run -- output from the first set of parallel runs looks like it's being saved to disk, so this is not an immediate crash, but something that's happening at the end of a run or start of the second set of executions.
OK, so I did finally figure out at a high level what this means: this is the error you get when one of the parallelized workers hits an error. The specific error language (EOFError: read end of file) doesn't really mean anything, and the references to read and io in the stack trace just relate to messaging between the overview task and the workers. In my case, the error was a memory overflow leading to the task manager terminating the worker.
h2o from R on Windows gives curl error: Protocol "'http" not supported or disabled in libcurl
I've successfully run h2o from R on a linux machine and wanted to install it in Windows too. h2o will not initialise for me. The full output is pasted below but the key seems to be the line [1] "Failed to connect to 127.0.0.1 port 54321: Connection refused" curl: (1) Protocol "'http" not supported or disabled in libcurl Judging from this and this experience it might be something to do with single quotes v double quotes somewhere; but this seems unlikely because then no-one would be able to get h2o / R / Windows combination working and I gather that some people are. On the other hand, this question seems to suggest the problem will be that my curl installation may not have ssl enabled. So I downloaded curl from scratch from this wizard as recommended on the h2o page, selecting the 64 bit version, generic, and selected the version with both SSL and SSH enabled; downloaded it and added the folder it ended up in to my Windows PATH. But no difference. I've just noticed my Java runtime environment is old and will update that as well. But on the face of it it's not obvious that that could be the problem. Any suggestions welcomed. > library(h2o) > h2o.init() H2O is not running yet, starting it now... Note: In case of errors look at the following log files: C:\Users\PETERE~1\AppData\Local\Temp\Rtmpa6G3WA/h2o_Peter_Ellis_started_from_r.out C:\Users\PETERE~1\AppData\Local\Temp\Rtmpa6G3WA/h2o_Peter_Ellis_started_from_r.err java version "1.7.0_75" Java(TM) SE Runtime Environment (build 1.7.0_75-b13) Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode) ............................................................ ERROR: Unknown argument (Ellis_cns773) Usage: java [-Xmx<size>] -jar h2o.jar [options] (Note that every option has a default and is optional.) -h | -help Print this help. -version Print version info and exit. -name <h2oCloudName> Cloud name used for discovery of other nodes. Nodes with the same cloud name will form an H2O cloud (also known as an H2O cluster). -flatfile <flatFileName> Configuration file explicitly listing H2O cloud node members. -ip <ipAddressOfNode> IP address of this node. -port <port> Port number for this node (note: port+1 is also used). (The default port is 0.) -network <IPv4network1Specification>[,<IPv4network2Specification> ...] The IP address discovery code will bind to the first interface that matches one of the networks in the comma-separated list. Use instead of -ip when a broad range of addresses is legal. (Example network specification: '10.1.2.0/24' allows 256 legal possibilities.) -ice_root <fileSystemPath> The directory where H2O spills temporary data to disk. -log_dir <fileSystemPath> The directory where H2O writes logs to disk. (This usually has a good default that you need not change.) -log_level <TRACE,DEBUG,INFO,WARN,ERRR,FATAL> Write messages at this logging level, or above. Default is INFO. -flow_dir <server side directory or HDFS directory> The directory where H2O stores saved flows. (The default is 'C:\Users\Peter Ellis\h2oflows'.) -nthreads <#threads> Maximum number of threads in the low priority batch-work queue. (The default is 99.) -client Launch H2O node in client mode. Cloud formation behavior: New H2O nodes join together to form a cloud at startup time. Once a cloud is given work to perform, it locks out new members from joining. 
Examples: Start an H2O node with 4GB of memory and a default cloud name: $ java -Xmx4g -jar h2o.jar Start an H2O node with 6GB of memory and a specify the cloud name: $ java -Xmx6g -jar h2o.jar -name MyCloud Start an H2O cloud with three 2GB nodes and a default cloud name: $ java -Xmx2g -jar h2o.jar & $ java -Xmx2g -jar h2o.jar & $ java -Xmx2g -jar h2o.jar & [1] "127.0.0.1" [1] 54321 [1] TRUE [1] -1 [1] "Failed to connect to 127.0.0.1 port 54321: Connection refused" curl: (1) Protocol "'http" not supported or disabled in libcurl [1] 1 Error in h2o.init() : H2O failed to start, stopping execution. In addition: Warning message: running command 'curl 'http://localhost:54321'' had status 1 > sessionInfo() R version 3.2.3 (2015-12-10) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 7 x64 (build 7601) Service Pack 1 locale: [1] LC_COLLATE=English_New Zealand.1252 LC_CTYPE=English_New Zealand.1252 LC_MONETARY=English_New Zealand.1252 [4] LC_NUMERIC=C LC_TIME=English_New Zealand.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] h2o_3.6.0.8 statmod_1.4.22 loaded via a namespace (and not attached): [1] tools_3.2.3 RCurl_1.95-4.7 jsonlite_0.9.19 bitops_1.0-6
We pushed a fix to master for this issue: https://0xdata.atlassian.net/browse/PUBDEV-2526
If you want to try it out now, you can build from master as follows:
git clone https://github.com/h2oai/h2o-3
cd h2o-3
./gradlew build -x test
R CMD INSTALL ./h2o-r/R/src/contrib/h2o_3.7.0.99999.tar.gz
Or download the next nightly release tomorrow.
R Shiny breaks PostgreSQL authentication with .pgpass
I have my password to database stored in pgpass.conf file. I am connecting to database from R with RPostgres, without specifying password so it is read from pgpass.conf, like this: con <- dbConnect(RPostgres::Postgres(), dbname = "dbname", user = "username", host = "localhost", port = "5432") It usually works perfectly, however when I try to connect to database from shiny app it doesn't work. Connection definition is exactly the same as above and placed in server.R script. When I run Shiny app with default arguments I get an error: FATAL: password authentication failed for user "username" password retrieved from file "C:\Users\...\AppData\Roaming/postgresql/pgpass.conf" When password is explicitly given in connection definition: con <- dbConnect(RPostgres::Postgres(), dbname = "dbname", user = "username", host = "localhost", password = "mypass", port = "5432") everything works. To make things stranger, when port for shiny is set to some value, for example: shiny::runApp(port = 4000), connection is established without specifying password, but ONLY for the first time - that means when app is closed and reopened in the same R session, the error occurs again. I've tested package 'RPostgreSQL' - it doesn't work neither, only error message is different: Error in postgresqlNewConnection(drv, ...) : RS-DBI driver: (could not connect postgres#localhost on dbname "dbname") I use 32-bit R but I've tested it on 64-bit and it was the same. Shiny app was run both in browser (Chrome) and in Rstudio Viewer. Here my session info: R version 3.2.2 (2015-08-14) Platform: i386-w64-mingw32/i386 (32-bit) Running under: Windows 7 x64 (build 7601) Service Pack 1 locale: [1] LC_COLLATE=Polish_Poland.1250 LC_CTYPE=Polish_Poland.1250 LC_MONETARY=Polish_Poland.1250 [4] LC_NUMERIC=C LC_TIME=Polish_Poland.1250 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] RPostgres_0.1 DBI_0.3.1.9008 shiny_0.12.2 loaded via a namespace (and not attached): [1] R6_2.1.1 htmltools_0.2.6 tools_3.2.2 rstudioapi_0.3.1 Rcpp_0.12.1.3 jsonlite_0.9.17 digest_0.6.8 [8] xtable_1.7-4 httpuv_1.3.3 mime_0.4 RPostgreSQL_0.4
There's likely something different about the environment in which the command is run between Shiny and your system R GUI. I get around this by storing my credentials in an Renviron file:
readRenviron("~/.Renviron")
con <- dbConnect(RPostgres::Postgres(),
                 dbname = Sys.getenv('pg_db'),
                 user = Sys.getenv('api_user'),
                 ...)
The thing about that is you could maintain separate Renvirons for staging and production environments. This allows your script to take a commandArgs() value to specify which DB credentials it should use:
#!/usr/bin/env Rscript
environ_path <- switch(commandArgs(trailingOnly = TRUE)[1],
                       'staging' = "~/staging.Renviron",
                       'production' = "~/production.Renviron")
readRenviron(environ_path)
Then from BASH:
Rscript analysis.R staging
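Since the original problem is the password lookup, a hedged extension of the same idea (the pg_pass variable name is my own assumption, not something from the answer) is to keep the password in the Renviron file too:
# assumed ~/.Renviron contents (plain key=value lines):
#   pg_db=dbname
#   api_user=username
#   pg_pass=mypass
readRenviron("~/.Renviron")
con <- DBI::dbConnect(RPostgres::Postgres(),
                      dbname   = Sys.getenv("pg_db"),
                      user     = Sys.getenv("api_user"),
                      password = Sys.getenv("pg_pass"),
                      host     = "localhost",
                      port     = 5432)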
The error is in the PostgreSQL path C:\Users\...\AppData\Roaming/postgresql/pgpass.conf: the file path contains "/" instead of "\".