I am posting a similar question a second time because I believe I now have a far more precise view of the problem.
Environment: Hadoop 2.2.0 running as a single-node cluster on an Ubuntu 14.04 laptop. RStudio 0.98.507, R 3.0.2 (2013-09-25), Java 1.7.0_55.
Any R (or Python) program works perfectly with the Hadoop Streaming utility located at /usr/local/hadoop220/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar
Errors appear when we use the package "rmr2" (part of RHadoop) and call mapreduce() from inside an R program run in RStudio.
To simplify this post, I am showing a very simple program that fails (other, bigger programs fail with identical error messages):
Sys.setenv(HADOOP_CMD="/usr/local/hadoop220/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop220/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar")
library('rhdfs')
library('rmr2')
hdfs.init()
hdfs.ls("/user/hduser")
small.ints = to.dfs(1:1000)
mapreduce(
input = small.ints,
map = function(k, v) cbind(v, v^2))
The errors that show up on the RStudio console are:
> Sys.setenv(HADOOP_CMD="/usr/local/hadoop220/bin/hadoop")
> Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop220/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar")
> library('rhdfs')
Loading required package: rJava
HADOOP_CMD=/usr/local/hadoop220/bin/hadoop
Be sure to run hdfs.init()
> library('rmr2')
Loading required package: Rcpp
Loading required package: RJSONIO
Loading required package: bitops
Loading required package: digest
Loading required package: functional
Loading required package: reshape2
Loading required package: stringr
Loading required package: plyr
Loading required package: caTools
> hdfs.init()
14/05/10 14:20:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> hdfs.ls("/user/hduser")
permission owner group size modtime file
1 drwxr-xr-x hduser supergroup 0 2014-05-07 17:44 /user/hduser/BT
2 drwxr-xr-x hduser supergroup 0 2014-05-09 07:14 /user/hduser/BT-out
3 drwxr-xr-x hduser supergroup 0 2014-05-09 20:30 /user/hduser/BTR-out
4 drwxr-xr-x hduser supergroup 0 2014-05-07 17:44 /user/hduser/BTj-in
> small.ints = to.dfs(1:1000)
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop220/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/05/10 14:20:50 WARN util.NativeCodeLoader: ... using builtin-java classes where applicable
[ these two messages repeat multiple times ]
> mapreduce(
+ input = small.ints,
+ map = function(k, v) cbind(v, v^2))
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop220/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/05/10 14:21:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/10 14:21:20 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [/tmp/RtmpYCerEW/rmr-local-env282d4c7a3b53, /tmp/RtmpYCerEW/rmr-global-env282d77c9da92, /tmp/RtmpYCerEW/rmr-streaming-map282d4225651a, /tmp/hadoop-hduser/hadoop-unjar678942474363050554/] [] /tmp/streamjob8073315154972274831.jar tmpDir=null
14/05/10 14:21:21 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/10 14:21:21 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/10 14:21:22 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/10 14:21:22 INFO mapreduce.JobSubmitter: number of splits:2
14/05/10 14:21:22 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
14/05/10 14:21:22 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/05/10 14:21:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1399709731242_0003
14/05/10 14:21:23 INFO impl.YarnClientImpl: Submitted application application_1399709731242_0003 to ResourceManager at /0.0.0.0:8032
14/05/10 14:21:23 INFO mapreduce.Job: The url to track the job: http://yantrajaal:8088/proxy/application_1399709731242_0003/
14/05/10 14:21:23 INFO mapreduce.Job: Running job: job_1399709731242_0003
14/05/10 14:21:30 INFO mapreduce.Job: Job job_1399709731242_0003 running in uber mode : false
14/05/10 14:21:30 INFO mapreduce.Job: map 0% reduce 0%
14/05/10 14:21:43 INFO mapreduce.Job: Task Id : attempt_1399709731242_0003_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
[ the identical stack trace repeats for five more task attempts (m_000001_0, m_000000_1, m_000001_1, m_000001_2, m_000000_2), together with "Container killed by the ApplicationMaster. Container killed on request. Exit code is 143" ]
14/05/10 14:22:26 INFO mapreduce.Job: map 100% reduce 0%
14/05/10 14:22:26 INFO mapreduce.Job: Job job_1399709731242_0003 failed with state FAILED due to: Task failed task_1399709731242_0003_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/05/10 14:22:26 INFO mapreduce.Job: Counters: 10
Job Counters
Failed map tasks=7
Killed map tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=91997
Total time spent by all reduces in occupied slots (ms)=0
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
14/05/10 14:22:26 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
>
I have Googled the two irritating warnings:
(a) "disabled stack guard": I have discovered from this link that there is "nothing to worry about", it is just a warning.
(b) "Unable to load native-hadoop library for your platform... using builtin-java classes where applicable": this is also a warning and, as per this link, nothing to worry about.
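For completeness, the stack-guard warning can be silenced with exactly the command the JVM suggests (run as root; the library path is taken from the logs above), for example from an R session:
# Clears the executable-stack flag on the native Hadoop library, as the JVM
# warning itself recommends; per the links above this is purely cosmetic.
system("execstack -c /usr/local/hadoop220/lib/native/libhadoop.so.1.0.0")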
After discounting these two warnings as not being the cause, the main error that I find is this:
14/05/10 14:21:43 INFO mapreduce.Job: Task Id : attempt_1399709731242_0003_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
I have reinstalled the RHadoop packages rmr2 and rhdfs, and have also reinstalled rJava. I have in the past tried with Hadoop 1.3 as well, but the errors are the same.
I would be really grateful if someone could suggest a way forward on this.
I resolved the problem by changing the installation directory of the rmr2, rhdfs, ... packages.
Basically, you need to install all the packages in a system folder instead of a custom folder.
There seems to be a problem with the location of the installation. Initially I installed the packages in a custom folder:
/home/user/R/3.1
By re-installing the packages at:
/usr/lib/R/library
I got the code working.
I hope this will be of some help.
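To make the check and the fix concrete, here is a minimal sketch of what worked for me (the paths and the tarball name are examples from my setup; adjust them to yours):
.libPaths()              # the personal library usually comes first
find.package("rmr2")     # e.g. /home/user/R/3.1/rmr2 shows the problem
# Re-install into the system-wide library (start R as root for this), so the
# Hadoop task processes, which run as a different user, can also load it:
install.packages("/path/to/rmr2.tar.gz", repos = NULL, type = "source",
                 lib = "/usr/lib/R/library")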
Related
I'm trying to connect from R to a remote Spark cluster.
The Spark cluster is built on Debian Jessie, and the R version I can install on it is at most 3.3, but I need 3.4 to be able to run FactoMineR. So I installed R on another machine and tried to connect to the cluster using sparklyr 0.8.4:
> sc <- spark_connect(master = "spark://spark-cluster-m:7077", spark_home="/usr/lib/spark/", version="2.2.1")
Error in start_shell(master = master, spark_home = spark_home, spark_version = version, :
SPARK_HOME directory '/usr/lib/spark/' not found
Spark isn't installed on the local machine, but it is on spark-cluster-m:
jc#spark-cluster-m:/usr/lib/spark$ ls
bin conf data examples external jars LICENSE licenses NOTICE python R README.md RELEASE sbin work yarn
Have I missed something?
The Spark cluster is on Google Cloud (test account) and so is the VM with R. How do I verify which port Spark can be connected to?
Thanks for your clues
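One simple way to test whether the master port is reachable from the R machine is a raw socket probe (a sketch; the host and port are taken from the question):
# Try to open a plain TCP connection to the Spark master; a failure here
# points at a firewall/network rule rather than at sparklyr itself.
con <- try(socketConnection("spark-cluster-m", port = 7077,
                            blocking = TRUE, timeout = 5), silent = TRUE)
if (!inherits(con, "try-error")) { close(con); cat("port 7077 is reachable\n") }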
#user16... You're right, this particular problem seems to be solved, but I'm not done yet.
I installed the same Spark version (2.2.1 with Hadoop > 2.7).
Here is my new error message:
Error in force(code) :
Failed during initialize_connection: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
at org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:137)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:123)
at sparklyr.StreamHandler.read(stream.scala:66)
at sparklyr.BackendHandler.channelRead0(handler.scala:51)
at sparklyr.BackendHandler.channelRead0(handler.scala:4)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Log: /tmp/RtmpTUh0z6/file5d231368db0_spark.log
---- Output Log ----
at io.netty.channel.nio.NioEventLoop.processS
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:24:59 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-cluster-m:7077...
18/07/21 18:24:59 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:108)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to connect to spark-cluster-m/10.142.0.3:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:232)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
... 4 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: spark-cluster-m/10.142.0.3:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:631)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:25:19 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
18/07/21 18:25:19 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
18/07/21 18:25:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46811.
18/07/21 18:25:19 INFO NettyBlockTransferService: Server created on 10.142.0.5:46811
18/07/21 18:25:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/07/21 18:25:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMasterEndpoint: Registering block manager 10.142.0.5:46811 with 366.3 MB RAM, BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO SparkUI: Stopped Spark web UI at http://10.142.0.5:4040
18/07/21 18:25:19 INFO StandaloneSchedulerBackend: Shutting down all executors
18/07/21 18:25:19 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
18/07/21 18:25:19 WARN StandaloneAppClient$ClientEndpoint: Drop Unregist
I can see it can resolve the name (=> 10.142.0.3).
It also seems to be the right port: if I use port 7000 instead, I get this error:
18/07/21 18:32:54 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from spark-cluster-m/10.142.0.3:7000 is closed
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Could not connect to spark-cluster-m:7000: java.io.IOException: Connection reset by peer
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7000
But I can't figure out what this means.
You say my configuration is "particular". If there is a better (and simpler) approach, I would be glad to use it.
Here is how I proceeded in my tests:
I created a Google Dataproc cluster with Spark (2.2.1)
I added Cassandra on each node
At this stage, everything works OK.
Then I need to install FactoMineR, as I'd like to try HMFA. It is said to run with R > 3.0.0, so that seems OK, but it depends on nlme, which can't be installed on R < 3.4.0 (and the one in the Debian Jessie backports is 3.3).
So, what can I do?
I must admit that I'm not very enthusiastic about restarting a full Spark / Cassandra cluster install from scratch...
I am working with RHadoop using the following code:
Sys.setenv(HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib/native")
Sys.setenv(HADOOP_HOME="/usr/local/hadoop")
Sys.setenv(HADOOP_CMD="/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-3.0.0.jar")
Sys.setenv(JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64")
library(rJava)
library(rhdfs)
library(rmr2)
hdfs.init()
mapper = function(., X) {
  n = nrow(X)
  ones = matrix(rep(1, n), nrow = n, ncol = 1)
  # aggregate a column of ones (row counts) plus the 79 feature columns, grouped by column 80
  ag = aggregate(cbind(ones, X[, 1:79]), by = list(X[, 80]), FUN = "sum")
  key = factor(ag[, 1])
  keyval(key, split(ag[, -1], key))
}
reducer = function(k, A) {
  # element-wise sum of the partial aggregates for each key
  keyval(k, list(Reduce('+', A)))
}
GroupSums <- from.dfs( mapreduce(input = "/ISCXFlowMeter.csv", map = mapper, reduce = reducer, combine = T))
When I run this code, I get an error as:
packageJobJar: [/tmp/hadoop-unjar7138506441946536619/] [] /tmp/streamjob6099552934186757596.jar tmpDir=null
2018-06-12 22:40:04,651 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2018-06-12 22:40:04,945 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2018-06-12 22:40:05,201 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/uel/.staging/job_1528838017005_0012
2018-06-12 22:40:06,158 INFO mapred.FileInputFormat: Total input files to process : 1
2018-06-12 22:40:06,171 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:9866
2018-06-12 22:40:06,233 INFO mapreduce.JobSubmitter: number of splits:2
2018-06-12 22:40:06,348 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-06-12 22:40:06,608 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528838017005_0012
2018-06-12 22:40:06,610 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-06-12 22:40:06,945 INFO conf.Configuration: resource-types.xml not found
2018-06-12 22:40:06,945 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-06-12 22:40:07,022 INFO impl.YarnClientImpl: Submitted application application_1528838017005_0012
2018-06-12 22:40:07,249 INFO mapreduce.Job: The url to track the job: http://uel-Deskop-VM:8088/proxy/application_1528838017005_0012/
2018-06-12 22:40:07,251 INFO mapreduce.Job: Running job: job_1528838017005_0012
2018-06-12 22:40:09,301 INFO mapreduce.Job: Job job_1528838017005_0012 running in uber mode : false
2018-06-12 22:40:09,305 INFO mapreduce.Job: map 0% reduce 0%
2018-06-12 22:40:09,337 INFO mapreduce.Job: Job job_1528838017005_0012 failed with state FAILED due to: Application application_1528838017005_0012 failed 2 times due to AM Container for appattempt_1528838017005_0012_000002 exited with exitCode: 127
Failing this attempt. Diagnostics: [2018-06-12 22:40:08.734] Exception from container-launch.
Container id: container_1528838017005_0012_02_000001
Exit code: 127
[2018-06-12 22:40:08.736] Container exited with a non-zero exit code 127. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
/bin/bash: /bin/java: No such file or directory
[2018-06-12 22:40:08.736] Container exited with a non-zero exit code 127. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
/bin/bash: /bin/java: No such file or directory
For more detailed output, check the application tracking page: http://uel-Deskop-VM:8088/cluster/app/application_1528838017005_0012 Then click on links to logs of each attempt. Failing the application.
2018-06-12 22:40:09,368 INFO mapreduce.Job: Counters: 0
2018-06-12 22:40:09,369 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
>
The ISCXFlowMeter.csv file in Hadoop is available here: https://www.dropbox.com/s/rbppzg6x2slzcjz/ISCXFlowMeter.csv?dl=1
Could you please guide me on how to rectify this issue?
After a while, by adding the following properties into mapred-site.xml, I could rectify the error.
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
But the issue now is that the key-value result is NULL after the map-reduce completes. Any help is appreciated.
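One thing worth checking (an assumption on my part, since the log no longer shows task failures): the job reads /ISCXFlowMeter.csv with rmr2's default "native" input format, while the file is plain CSV, so the mapper may never receive usable rows. Declaring a CSV input format and inspecting the returned keyval is a reasonable next step:
# Sketch only: the separator and read options are guesses about the file.
csv.fmt <- make.input.format("csv", sep = ",")
GroupSums <- from.dfs(mapreduce(input = "/ISCXFlowMeter.csv",
                                input.format = csv.fmt,
                                map = mapper, reduce = reducer, combine = TRUE))
str(keys(GroupSums))     # inspect the actual result rather than assuming NULL
str(values(GroupSums))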
I just started integrating RHadoop. RStudio Server is integrated with Hadoop, but I am getting errors while running map-reduce jobs when I run the following lines of code:
library(rmr2)
a <- to.dfs(seq(from=1, to=500, by=3), output="/user/hduser/num")
b <- mapreduce(input=a, map=function(k,v){keyval(v,v*v)})
StackTrace:
15/03/24 21:13:47 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
packageJobJar: [] [/usr/lib/hadoop-mapreduce/hadoop-streaming-2.5.0-cdh5.2.0.jar] /tmp/streamjob4788227373090541042.jar tmpDir=null
15/03/24 21:13:48 INFO client.RMProxy: Connecting to ResourceManager at tungsten10/192.168.0.123:8032
15/03/24 21:13:48 INFO client.RMProxy: Connecting to ResourceManager at tungsten10/192.168.0.123:8032
15/03/24 21:13:49 INFO mapred.FileInputFormat: Total input paths to process : 1
15/03/24 21:13:50 INFO mapreduce.JobSubmitter: number of splits:2
15/03/24 21:13:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1427104115974_0009
15/03/24 21:13:50 INFO impl.YarnClientImpl: Submitted application application_1427104115974_0009
15/03/24 21:13:50 INFO mapreduce.Job: The url to track the job: http://XXX.XXX.XXX.XXX:8088/proxy/application_1427104115974_0009/
15/03/24 21:13:50 INFO mapreduce.Job: Running job: job_1427104115974_0009
15/03/24 21:14:02 INFO mapreduce.Job: Job job_1427104115974_0009 running in uber mode : false
15/03/24 21:14:03 INFO mapreduce.Job: map 0% reduce 0%
15/03/24 21:14:07 INFO mapreduce.Job: Task Id : attempt_1427104115974_0009_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
[ the identical stack trace repeats for five more task attempts (m_000001_0, m_000001_1, m_000000_1, m_000001_2, m_000000_2) ]
15/03/24 21:14:25 INFO mapreduce.Job: map 100% reduce 0%
15/03/24 21:14:26 INFO mapreduce.Job: Job job_1427104115974_0009 failed with state FAILED due to: Task failed task_1427104115974_0009_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
15/03/24 21:14:26 INFO mapreduce.Job: Counters: 13
Job Counters
Failed map tasks=7
Killed map tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=27095
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=27095
Total vcore-seconds taken by all map tasks=27095
Total megabyte-seconds taken by all map tasks=27745280
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
15/03/24 21:14:26 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
15/03/24 21:14:30 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://XXX.XXX.XXX.XXX:8020/tmp/file10076f272b9a' to trash at: hdfs://XXX.XXX.XXX.XXX:8020/user/hduser/.Trash/Current
I have searched a lot to solve this problem, but have not found a solution yet.
As I am new to RHadoop, I am stuck with this problem.
Can anyone please help me resolve it? I will be very thankful.
The error is caused because the HADOOP_STREAMING environment variable is not set in your code. You should specify the full path to the streaming jar, including the jar file name. The R code below seems to work fine for me.
R code (I'm using Hadoop 2.4.0 on Ubuntu):
Sys.setenv("HADOOP_CMD"="/usr/local/hadoop/bin/hadoop")
Sys.setenv("HADOOP_STREAMING"="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar")
library(rJava)
library(rhdfs)
# Initialise
hdfs.init()
library(rmr2)
a <- to.dfs(seq(from=1, to=500, by=3), output="/user/hduser/num")
b <- mapreduce(input=a, map=function(k,v){keyval(v,v*v)})
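As a quick sanity check before calling mapreduce(), you can also confirm from R that both variables point at real files (base R only; TRUE means the jar path is correct):
# file.exists() on the streaming jar catches version or path typos early.
Sys.getenv(c("HADOOP_CMD", "HADOOP_STREAMING"))
file.exists(Sys.getenv("HADOOP_STREAMING"))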
Hope this helps.
Apologies in advance for being a newbie and perhaps asking stupid questions.
I have installed Hadoop on a Single Machine Cluster (Ubuntu 14.04) and successfully tested the very basic program specified in the Apache installation guide. Subsequently I installed R, RStudio, and the packages rhdfs, rmr2 and all dependencies.
Then I tried to run the following program:
Sys.setenv(HADOOP_CMD="/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar")
library('rhdfs')
library('rmr2')
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
  input = small.ints,
  map = function(k, v) {
    lapply(seq_along(v), function(r) {
      x <- runif(v[[r]])
      keyval(r, c(max(x), min(x)))
    })
  })
The job fails and the output on the console is as follows:
packageJobJar: [/tmp/RtmprPBBS1/rmr-local-env242520fb4125, /tmp/RtmprPBBS1/rmr-global-env24252518202b, /tmp/RtmprPBBS1/rmr-streaming-map24255b97931e, /tmp/hadoop-hduser/hadoop-unjar4430970496737933525/] [] /tmp/streamjob6651310557292596411.jar tmpDir=null
14/05/05 09:16:08 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/05 09:16:08 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-hduser/mapred/local]
14/05/05 09:16:08 INFO streaming.StreamJob: Running job: job_201405050557_0013
14/05/05 09:16:08 INFO streaming.StreamJob: To kill this job, run:
14/05/05 09:16:08 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201405050557_0013
14/05/05 09:16:08 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201405050557_0013
14/05/05 09:16:09 INFO streaming.StreamJob: map 0% reduce 0%
14/05/05 09:16:41 INFO streaming.StreamJob: map 100% reduce 100%
14/05/05 09:16:41 INFO streaming.StreamJob: To kill this job, run:
14/05/05 09:16:41 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201405050557_0013
14/05/05 09:16:41 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201405050557_0013
14/05/05 09:16:41 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201405050557_0013_m_000001
14/05/05 09:16:41 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
The stderr log is as follows:
Error in library(functional) : there is no package called ‘functional’
No traceback available
Error during wrapup:
Execution halted
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
I have tried a few other simple demo programs and the result is the same, so it seems that the problem lies with my configuration.
The 'functional' package was already installed and was getting loaded automatically. Even loading it manually does not help, so that is most probably not the problem.
Any help or suggestions would be gratefully accepted.
I am running Hadoop 1.2.1, R 3.0.5 and RStudio 0.98.507 on Ubuntu 14.04 in single-node cluster mode. Java is Oracle Java 7, version 1.7.0_55.
Hadoop installation seems to be OK since my regular wordcount program is working fine.
I am getting identical results with even the simplest RHadoop demo.
Could this be a problem with my machine capacity? I am running on a slightly high-end laptop: 2.8 GiB memory and an Intel® Core™ i3-2310M CPU @ 2.10GHz × 4.
I have now moved to Hadoop 2.2.0 and managed to install it using this tutorial. The demo program for calculating Pi executed without errors.
Then I executed this very simple MR program:
Sys.setenv(HADOOP_CMD="/usr/local/hadoop220/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop220/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar")
library('rhdfs')
library('rmr2')
library('functional')
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
input = small.ints,
map = function(k, v) cbind(v, v^2))
The program executed up to line 7 but failed at the all-important MR step with the following error [ only the last part of the error is shown ]:
14/05/06 13:53:36 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/06 13:53:36 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/06 13:53:37 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/06 13:53:37 INFO mapreduce.JobSubmitter: number of splits:2
14/05/06 13:53:37 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/05/06 13:53:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1399363749415_0002
14/05/06 13:53:38 INFO impl.YarnClientImpl: Submitted application application_1399363749415_0002 to ResourceManager at /0.0.0.0:8032
14/05/06 13:53:38 INFO mapreduce.Job: The url to track the job: http://yantrajaal:8088/proxy/application_1399363749415_0002/
14/05/06 13:53:38 INFO mapreduce.Job: Running job: job_1399363749415_0002
14/05/06 13:53:45 INFO mapreduce.Job: Job job_1399363749415_0002 running in uber mode : false
14/05/06 13:53:45 INFO mapreduce.Job: map 0% reduce 0%
14/05/06 13:53:57 INFO mapreduce.Job: map 100% reduce 0%
14/05/06 13:53:57 INFO mapreduce.Job: Task Id : attempt_1399363749415_0002_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
[ ... ]
14/05/06 13:54:31 INFO mapreduce.Job: map 100% reduce 0%
14/05/06 13:54:32 INFO mapreduce.Job: Job job_1399363749415_0002 failed with state FAILED due to: Task failed task_1399363749415_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/05/06 13:54:32 INFO mapreduce.Job: Counters: 10
Job Counters
Failed map tasks=7
Killed map tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=72476
Total time spent by all reduces in occupied slots (ms)=0
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
14/05/06 13:54:32 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
I am really at my wits' end on what to do next!
Any suggestions on the way forward would be gratefully received and acknowledged. My suspicion is that RHadoop is perhaps not yet comfortable with Ubuntu 14.04, but that is a guess.
Start your terminal and log in as super user or root using
sudo su root
then start R in the terminal and install the RHadoop prerequisite packages using the following commands:
install.packages(c("codetools", "Rcpp", "RJSONIO", "bitops",
                   "digest", "functional", "stringr", "plyr", "reshape2", "rJava"))
install.packages(c("dplyr", "R.methodsS3"))
install.packages(c("Hmisc"))
install.packages(c("caTools"))
Sys.setenv(HADOOP_HOME="/usr/local/hadoop")
Sys.setenv(HADOOP_CMD="/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-<your version>.jar")
After that, download the rmr2 and rhdfs packages here, and install the downloaded source files using this command:
install.packages(path_to_file, repos = NULL, type="source")
After installing, close R and the terminal, then open RStudio and run the R code: the streaming error will be solved, because the steps above install the R libraries in global folders.
Optionally, if you want, you can install R itself as super user to be on the safer side. Hope this helps.
It seems to be an error with the R setup on your single-machine cluster.
Is the R package functional installed on the cluster?
I solved a problem similar to yours with the method below.
Have a look at your R libraries:
.libPaths()
Check which library the package functional was installed to with the command below:
system.file(package="functional")
If it is installed in a personal library, instead of in a library common to all users, jobs will fail with an error saying the package cannot be loaded.
Hope this will help.
Cheers
Yanchang Zhao
RDataMining.com
The problem is that when you install packages as a non-root user, they end up in a private directory; this is the cause of all the problems. The solution is to log in as root or super-user and then install the packages so that they end up in the system-wide R library, which in my case is /usr/lib64/R/library. After this, there are no more problems: programs will work!
I have a Hadoop cluster set up with the rmr2 and rhdfs packages installed. I've been able to run some sample MR jobs through the CLI and through R scripts. For example, this works:
#!/usr/bin/env Rscript
require('rmr2')
small.ints = to.dfs(1:1000)
out = mapreduce( input = small.ints, map = function(k, v) keyval(v, v^2))
df = as.data.frame( from.dfs( out) )
colnames(df) = c('n', 'n2')
str(df)
Final Output:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
'data.frame': 1000 obs. of 2 variables:
$ n : int 1 2 3 4 5 6 7 8 9 10 ...
$ n2: num 1 4 9 16 25 36 49 64 81 100 ...
I'm now trying to move on to the next step of writing my own MR job. I have a file (/user/michael/batsmall.csv) with some batting statistics:
aardsda01,2004,1,SFN,NL,11,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11
aardsda01,2006,1,CHN,NL,45,43,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,45
aardsda01,2007,1,CHA,AL,25,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2
aardsda01,2008,1,BOS,AL,47,5,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,5
aardsda01,2009,1,SEA,AL,73,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
aardsda01,2010,1,SEA,AL,53,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
(batsmall.csv is an extract of a much larger file, but really I'm just trying to prove I can read and analyze a file from HDFS)
Here's the script I have:
#!/usr/bin/env Rscript
require('rmr2');
require('rhdfs');
hdfs.init()
hdfs.rmr("/user/michael/rMean")
findMean = function (input, output) {
mapreduce(input = input,
output = output,
input.format = 'csv',
map = function(k, fields) {
myField <- fields[[5]]
keyval(fields[[0]], myField)
},
reduce = function(key, vv) {
keyval(key, mean(as.numeric(vv)))
}
)
}
from.dfs(findMean("/home/michael/r/Batting.csv", "/home/michael/r/rMean"))
print(hdfs.read.text.file("/user/michael/batsmall.csv"))
This fails every time, and looking at the Hadoop logs it seems to be a broken-pipe error. I cannot figure out what's causing this. As other jobs work, I would think it's an issue with my script rather than my configuration, but I can't figure it out. I am admittedly an R novice and relatively new to Hadoop.
Here's the job output:
[michael#hadoop01 r]$ ./rtest.r
Loading required package: rmr2
Loading required package: Rcpp
Loading required package: RJSONIO
Loading required package: methods
Loading required package: digest
Loading required package: functional
Loading required package: stringr
Loading required package: plyr
Loading required package: rhdfs
Loading required package: rJava
HADOOP_CMD=/usr/bin/hadoop
Be sure to run hdfs.init()
Deleted hdfs://hadoop01.dev.terapeak.com/user/michael/rMean
[1] TRUE
packageJobJar: [/tmp/Rtmp2XnCL3/rmr-local-env55d1533355d7, /tmp/Rtmp2XnCL3/rmr-global-env55d119877dd3, /tmp/Rtmp2XnCL3/rmr-streaming-map55d13c0228b7, /tmp/Rtmp2XnCL3/rmr-streaming-reduce55d150f7ffa8, /tmp/hadoop-michael/hadoop-unjar5464463427878425265/] [] /tmp/streamjob4293464845863138032.jar tmpDir=null
12/12/19 11:09:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/12/19 11:09:41 INFO mapred.FileInputFormat: Total input paths to process : 1
12/12/19 11:09:42 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-michael/mapred/local]
12/12/19 11:09:42 INFO streaming.StreamJob: Running job: job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:09:42 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:09:43 INFO streaming.StreamJob: map 0% reduce 0%
12/12/19 11:10:15 INFO streaming.StreamJob: map 100% reduce 100%
12/12/19 11:10:15 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:10:15 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:10:15 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:10:15 ERROR streaming.StreamJob: Job not successful. Error: NA
12/12/19 11:10:15 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { :
hadoop streaming failed with error code 1
Calls: findMean -> mapreduce -> mr
Execution halted
And a sample exception from the job tracker:
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
You need to inspect the stderr of the failed attempt; the jobtracker web UI is the easiest way to get there. An educated guess is that fields is a data frame and you are accessing it like a list, which is possible but unusual. Errors may follow indirectly from that.
We also have a debugging document on the RHadoop wiki with this and many more suggestions.
Finally, we have a dedicated RHadoop Google group where you can interact with a large number of enthusiastic users. Or you can be on your own on SO.
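To make that guess concrete, here is a hedged sketch of how the map function might look once fields is treated as a data frame (the column positions are guesses from batsmall.csv, and note that R indexing is 1-based, so fields[[0]] in the original script is itself an error):
map = function(k, fields) {
  # with input.format = 'csv', fields arrives as a data frame, one column per CSV field
  keyval(fields[[1]], as.numeric(fields[[5]]))   # player id -> value of the 5th column
}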