hadoop streaming failed with error code 1 in RHadoop

I am working with RHadoop, using the following code:
Sys.setenv(HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib/native")
Sys.setenv(HADOOP_HOME="/usr/local/hadoop")
Sys.setenv(HADOOP_CMD="/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-3.0.0.jar")
Sys.setenv(JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64")
library(rJava)
library(rhdfs)
library(rmr2)
hdfs.init()
mapper = function(., X) {
  n    = nrow(X)
  ones = matrix(rep(1, n), nrow = n, ncol = 1)
  ag   = aggregate(cbind(ones, X[, 1:79]), by = list(X[, 80]), FUN = "sum")
  key  = factor(ag[, 1])
  keyval(key, split(ag[, -1], key))
}
reducer = function(k, A) {
  keyval(k, list(Reduce('+', A)))
}
GroupSums <- from.dfs( mapreduce(input = "/ISCXFlowMeter.csv", map = mapper, reduce = reducer, combine = T))
When I run this code, I get the following error:
packageJobJar: [/tmp/hadoop-unjar7138506441946536619/] [] /tmp/streamjob6099552934186757596.jar tmpDir=null
2018-06-12 22:40:04,651 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2018-06-12 22:40:04,945 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2018-06-12 22:40:05,201 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/uel/.staging/job_1528838017005_0012
2018-06-12 22:40:06,158 INFO mapred.FileInputFormat: Total input files to process : 1
2018-06-12 22:40:06,171 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:9866
2018-06-12 22:40:06,233 INFO mapreduce.JobSubmitter: number of splits:2
2018-06-12 22:40:06,348 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-06-12 22:40:06,608 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528838017005_0012
2018-06-12 22:40:06,610 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-06-12 22:40:06,945 INFO conf.Configuration: resource-types.xml not found
2018-06-12 22:40:06,945 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-06-12 22:40:07,022 INFO impl.YarnClientImpl: Submitted application application_1528838017005_0012
2018-06-12 22:40:07,249 INFO mapreduce.Job: The url to track the job: http://uel-Deskop-VM:8088/proxy/application_1528838017005_0012/
2018-06-12 22:40:07,251 INFO mapreduce.Job: Running job: job_1528838017005_0012
2018-06-12 22:40:09,301 INFO mapreduce.Job: Job job_1528838017005_0012 running in uber mode : false
2018-06-12 22:40:09,305 INFO mapreduce.Job: map 0% reduce 0%
2018-06-12 22:40:09,337 INFO mapreduce.Job: Job job_1528838017005_0012 failed with state FAILED due to: Application application_1528838017005_0012 failed 2 times due to AM Container for appattempt_1528838017005_0012_000002 exited with exitCode: 127
Failing this attempt. Diagnostics: [2018-06-12 22:40:08.734] Exception from container-launch.
Container id: container_1528838017005_0012_02_000001
Exit code: 127
[2018-06-12 22:40:08.736] Container exited with a non-zero exit code 127. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
/bin/bash: /bin/java: No such file or directory
[2018-06-12 22:40:08.736] Container exited with a non-zero exit code 127. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
/bin/bash: /bin/java: No such file or directory
For more detailed output, check the application tracking page: http://uel-Deskop-VM:8088/cluster/app/application_1528838017005_0012 Then click on links to logs of each attempt. Failing the application.
2018-06-12 22:40:09,368 INFO mapreduce.Job: Counters: 0
2018-06-12 22:40:09,369 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
  hadoop streaming failed with error code 1
ISCXFlowMeter.csv file in hadoop is available here: https://www.dropbox.com/s/rbppzg6x2slzcjz/ISCXFlowMeter.csv?dl=1
Could you please guide me on how to rectify this issue?

After a while, I was able to rectify the error by adding the following properties to mapred-site.xml:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
But the issue now is that the key-value result is NULL after the map-reduce completes. Any help is appreciated.
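A debugging sketch that may help narrow this down (this assumes the same mapper and reducer defined above and a local copy of the CSV; it is not a confirmed fix): run the job on rmr2's local backend first, where the map and reduce functions execute in-process and errors surface directly in the R console.

# Debugging sketch, assuming 'mapper' and 'reducer' from the code above.
library(rmr2)
rmr.options(backend = "local")          # run map/reduce in-process, no Hadoop involved

local.out <- from.dfs(
  mapreduce(input   = "ISCXFlowMeter.csv",   # with the local backend this is a local file path
            map     = mapper,
            reduce  = reducer,
            combine = TRUE))
str(local.out)                          # inspect the keys and values before rerunning on the cluster

rmr.options(backend = "hadoop")         # switch back to the cluster backend afterwards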

Related

Sqoop Hcatalog import job completed but data is not present in the table

I was trying to integrate HCatalog with Sqoop in order to import data from an RDBMS (Oracle) into the data lake (Hive).
sqoop-import --connect connection-string --username username --password pass --table --hcatalog-database data_extraction --hcatalog-table --hcatalog-storage-stanza 'stored as orcfile' -m1 --verbose
The job executed successfully, but I am not able to find the data.
I also checked the location of the table created in HCatalog; no directory had been created for it, and only a 0-byte file _$folder$ was found.
Please find the stack trace below:
19/09/25 17:53:37 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
19/09/25 17:54:02 DEBUG db.DBConfiguration: Fetching password from job credentials store
19/09/25 17:54:03 INFO db.DBInputFormat: Using read commited transaction isolation
19/09/25 17:54:03 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '1=1' and upper bound '1=1'
19/09/25 17:54:03 INFO mapreduce.JobSubmitter: number of splits:1
19/09/25 17:54:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1569355854349_1231
19/09/25 17:54:04 INFO impl.YarnClientImpl: Submitted application application_1569355854349_1231
19/09/25 17:54:04 INFO mapreduce.Job: The url to track the job: http://<PII-removed-by-me>/application_1569355854349_1231/
19/09/25 17:54:04 INFO mapreduce.Job: Running job: job_1569355854349_1231
19/09/25 17:57:34 INFO hive.metastore: Closed a connection to metastore, current connections: 1
19/09/25 18:02:59 INFO mapreduce.Job: Job job_1569355854349_1231 running in uber mode : false
19/09/25 18:02:59 INFO mapreduce.Job: map 0% reduce 0%
19/09/25 18:03:16 INFO mapreduce.Job: map 100% reduce 0%
19/09/25 18:03:18 INFO mapreduce.Job: Job job_1569355854349_1231 completed successfully
19/09/25 18:03:18 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=425637
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=0
HDFS: Number of read operations=1
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
S3: Number of bytes read=0
S3: Number of bytes written=310154
S3: Number of read operations=0
S3: Number of large read operations=0
S3: Number of write operations=0
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=29274
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=14637
Total vcore-milliseconds taken by all map tasks=14637
Total megabyte-milliseconds taken by all map tasks=52459008
Map-Reduce Framework
Map input records=145608
Map output records=145608
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=199
CPU time spent (ms)=4390
Physical memory (bytes) snapshot=681046016
Virtual memory (bytes) snapshot=5230788608
Total committed heap usage (bytes)=1483210752
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
19/09/25 18:03:18 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 582.8069 seconds (0 bytes/sec)
19/09/25 18:03:18 INFO mapreduce.ImportJobBase: Retrieved 145608 records.
19/09/25 18:03:18 INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table null
19/09/25 18:03:19 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@1d548a08
Solved it.
As we are using AWS EMR (a managed Hadoop service), this is already mentioned on their site:
When you use Sqoop to write output to an HCatalog table in Amazon S3, disable Amazon EMR direct write by setting the mapred.output.direct.NativeS3FileSystem and mapred.output.direct.EmrFileSystem properties to false. For more information, see Using HCatalog. You can use the Hadoop -D mapred.output.direct.NativeS3FileSystem=false and -D mapred.output.direct.EmrFileSystem=false commands.
If you don't disable direct write, no error occurs, but the table is created in Amazon S3 and no data is written.
This can be found at https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-sqoop-considerations.html

Could not find a version that satisfies the requirement pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)

I just followed the OpenStack Rally quick start guide to create a Tempest verifier with Rally v0.9.1 in an OpenStack Ocata/stable deployment. The command failed:
(rally-15.1.2) root@infra1-utility-container-f31faeb0:~/.rally/verification# rally verify create-verifier --type tempest --name tempest-verifier
2017-05-21 07:53:13.410 11422 INFO rally.api [-] Creating verifier 'tempest-verifier'.
2017-05-21 07:53:13.528 11422 INFO rally.verification.manager [-] Cloning verifier repo from https://git.openstack.org/openstack/tempest.
2017-05-21 07:53:37.174 11422 INFO rally.verification.manager [-] Creating virtual environment. It may take a few minutes.
2017-05-21 07:53:42.323 11422 ERROR rally.verification.utils [-] Failed cmd: '['pip', 'install', '-e', './']'
2017-05-21 07:53:42.324 11422 ERROR rally.verification.utils [-] Error output: 'Obtaining file:///root/.rally/verification/verifier-091a49ab-1241-40a3-bc9b-531d7f091e37/repo
Collecting pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)
Could not find a version that satisfies the requirement pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178) (from versions: 1.10.0)
No matching distribution found for pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)
'
Command failed, please check log for more info
As the current version of pbr is 2.0.0, I'm not sure why pbr installation failed.
(rally-15.1.2) root@infra1-utility-container-f31faeb0:~/.rally/verification# pip freeze|grep pbr
pbr==2.0.0
The question is: how can I adjust the requirement check for pbr? Or is it possible to choose an older version of Tempest?
Thanks.
Solved it.
After uploading the two missing Python packages, os_testr-0.8.2-py2-none-any.whl and testrepository-0.0.19.tar.gz, into the local repo (an LXC container that had been created by openstack-ansible), the Tempest plugin was finally installed.

R + Hadoop with RHadoop job fails on Single Machine Cluster

Apologies in advance for being a newbie and perhaps asking stupid questions.
I have installed Hadoop on a Single Machine Cluster (Ubuntu 14.04) and successfully tested the very basic program specified in the Apache installation guide. Subsequently I installed R, RStudio, and the packages rhdfs, rmr2 and all dependencies.
Then I tried to run the following program:
Sys.setenv(HADOOP_CMD="/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar")
library('rhdfs')
library('rmr2')
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
  input = small.ints,
  map = function(k, v) {
    lapply(seq_along(v), function(r) {
      x <- runif(v[[r]])
      keyval(r, c(max(x), min(x)))
    })
  })
The job fails and the output on the console is as follows:
packageJobJar: [/tmp/RtmprPBBS1/rmr-local-env242520fb4125, /tmp/RtmprPBBS1/rmr-global-env24252518202b, /tmp/RtmprPBBS1/rmr-streaming-map24255b97931e, /tmp/hadoop-hduser/hadoop-unjar4430970496737933525/] [] /tmp/streamjob6651310557292596411.jar tmpDir=null
14/05/05 09:16:08 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/05 09:16:08 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-hduser/mapred/local]
14/05/05 09:16:08 INFO streaming.StreamJob: Running job: job_201405050557_0013
14/05/05 09:16:08 INFO streaming.StreamJob: To kill this job, run:
14/05/05 09:16:08 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201405050557_0013
14/05/05 09:16:08 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201405050557_0013
14/05/05 09:16:09 INFO streaming.StreamJob: map 0% reduce 0%
14/05/05 09:16:41 INFO streaming.StreamJob: map 100% reduce 100%
14/05/05 09:16:41 INFO streaming.StreamJob: To kill this job, run:
14/05/05 09:16:41 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201405050557_0013
14/05/05 09:16:41 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201405050557_0013
14/05/05 09:16:41 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201405050557_0013_m_000001
14/05/05 09:16:41 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
The stderr log is as follows:
Error in library(functional) : there is no package called ‘functional’
No traceback available
Error during wrapup:
Execution halted
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
I have tried a few other simple demo programs and the result is the same, so it seems that the problem lies with my configuration.
The 'functional' package was already installed and was getting loaded automatically; even loading it manually does not help, so that is most probably not the problem.
Any help or suggestions would be gratefully accepted.
I am running Hadoop 1.2.1, R 3.0.5 and RStudio 0.98.507 on Ubuntu 14.04 in single-machine cluster mode. Java is Oracle Java 7, version 1.7.0_55.
The Hadoop installation seems to be OK, since my regular wordcount program works fine.
I am getting identical results with even the simplest RHadoop demo.
Could this be a problem with my machine capacity? I am running on a slightly high-end laptop: 2.8 GiB memory and an Intel® Core™ i3-2310M CPU @ 2.10GHz × 4.
I have now moved to Hadoop 2.2.0 and managed to install it using this tutorial. The demo program for calculating Pi executed without errors.
Then I executed this very simple MR program
Sys.setenv(HADOOP_CMD="/usr/local/hadoop220/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop220/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar")
library('rhdfs')
library('rmr2')
library('functional')
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
  input = small.ints,
  map = function(k, v) cbind(v, v^2))
The program executed up to line 7 but failed at the all-important MR step with the following error (only the last part of the error is shown):
14/05/06 13:53:36 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/06 13:53:36 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/06 13:53:37 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/06 13:53:37 INFO mapreduce.JobSubmitter: number of splits:2
14/05/06 13:53:37 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
14/05/06 13:53:37 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/05/06 13:53:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1399363749415_0002
14/05/06 13:53:38 INFO impl.YarnClientImpl: Submitted application application_1399363749415_0002 to ResourceManager at /0.0.0.0:8032
14/05/06 13:53:38 INFO mapreduce.Job: The url to track the job: http://yantrajaal:8088/proxy/application_1399363749415_0002/
14/05/06 13:53:38 INFO mapreduce.Job: Running job: job_1399363749415_0002
14/05/06 13:53:45 INFO mapreduce.Job: Job job_1399363749415_0002 running in uber mode : false
14/05/06 13:53:45 INFO mapreduce.Job: map 0% reduce 0%
14/05/06 13:53:57 INFO mapreduce.Job: map 100% reduce 0%
14/05/06 13:53:57 INFO mapreduce.Job: Task Id : attempt_1399363749415_0002_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
14/05/06 13:54:31 INFO mapreduce.Job: map 100% reduce 0%
14/05/06 13:54:32 INFO mapreduce.Job: Job job_1399363749415_0002 failed with state FAILED due to: Task failed task_1399363749415_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/05/06 13:54:32 INFO mapreduce.Job: Counters: 10
Job Counters
Failed map tasks=7
Killed map tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=72476
Total time spent by all reduces in occupied slots (ms)=0
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
14/05/06 13:54:32 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
I am really at my wits' end about what to do next!
Any suggestions on the way forward would be gratefully received and acknowledged. My suspicion is that RHadoop is perhaps not yet comfortable with Ubuntu 14.04, but that is a guess.
Start your terminal and log in as superuser or root using
sudo su root
Then start R in the terminal and install the RHadoop dependency packages using the following commands:
install.packages(c("codetools", "R", "Rcpp", "RJSONIO", "bitops",
"digest", "functional", "stringr", "plyr", "reshape2", "rJava"))
install.packages(c("dplyr","R.methodsS3"))
install.packages(c("Hmisc")) install.packages(c("caTools"))
Sys.setenv(HADOOP_HOME="/usr/local/hadoop")
Sys.setenv(HADOOP_CMD="/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop/share/hadoop/tools/lib/hadoopversiomentionhere.jar")
After that, download the rmr2 and rhdfs source packages, and then install the downloaded source files using this command (a concrete example follows below):
install.packages(path_to_file, repos = NULL, type = "source")
After installing, close R and the terminal, then open RStudio and run the R streaming code; the error will be solved, because the steps above install the R libraries in global folders.
Optionally, you can install R itself as the superuser, to be on the safer side. Hope this helps.
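For instance, a hypothetical pair of calls might look like this (the file names, versions, and paths are placeholders, not a prescription; use the rmr2 and rhdfs archives you actually downloaded):

# Placeholder paths and versions -- adjust to the archives you downloaded.
install.packages("/home/user/Downloads/rhdfs_1.0.8.tar.gz", repos = NULL, type = "source")
install.packages("/home/user/Downloads/rmr2_3.3.1.tar.gz", repos = NULL, type = "source")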
It seems to be an error with the R setup on your single machine cluster.
Is the R package functional installed on the cluster?
I solved a problem similar to yours with the method below.
Have a look at your R library paths:
.libPaths()
Check which library the functional package was installed to with the command below:
system.file(package="functional")
If it is installed in a personal library, instead of in a library common to all users, jobs will fail with an error saying the package cannot be loaded.
Hope this will help.
Cheers
Yanchang Zhao
RDataMining.com
The problem is that when you install packages as a non-root user, they end up in a private directory; this is the cause of all the trouble. The solution is to log in as root (or superuser) and then install the packages so that they end up in the system-wide R library, which in my case is /usr/lib64/R/library. After this, there is no more problem and the programs work!
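As a quick check, a sketch along these lines (run in an R session started as root; .Library is only an example target, any library visible on every user's .libPaths() works) can confirm where the package lives and reinstall it system-wide:

.libPaths()                    # library paths R searches; a personal library is usually listed first
find.package("functional")     # where 'functional' is currently installed

# If the path above is under your home directory, reinstall into R's default
# (system-wide) library so the Hadoop task processes can load it as well:
install.packages("functional", lib = .Library)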

Running a R script using hadoop streaming Job Failing : PipeMapRed.waitOutputThreads(): subprocess failed with code 1

I have an R script which works perfectly fine in the R console, but when I run it with Hadoop streaming it fails with the error below in the map phase. Find the task attempt logs below.
The Hadoop streaming command I have:
/home/Bibhu/hadoop-0.20.2/bin/hadoop jar \
/home/Bibhu/hadoop-0.20.2/contrib/streaming/*.jar \
-input hdfs://localhost:54310/user/Bibhu/BookTE1.csv \
-output outsid -mapper `pwd`/code1.sh
stderr logs
Loading required package: class
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
no lines available in input
Calls: read.csv -> read.table
Execution halted
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
syslog logs
2013-07-03 19:32:36,080 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-07-03 19:32:36,654 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2013-07-03 19:32:36,675 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2013-07-03 19:32:36,835 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2013-07-03 19:32:36,835 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2013-07-03 19:32:36,899 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed exec [/home/Bibhu/Downloads/SentimentAnalysis/Sid/smallFile/code1.sh]
2013-07-03 19:32:37,256 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=0/1
2013-07-03 19:32:38,509 INFO org.apache.hadoop.streaming.PipeMapRed: MRErrorThread done
2013-07-03 19:32:38,509 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed failed!
2013-07-03 19:32:38,557 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
2013-07-03 19:32:38,631 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task
Write the Hadoop streaming jar with its full version, e.g. hadoop-streaming-1.0.4.jar.
Specify a separate file path for the mapper and reducer with the -file option.
Tell Hadoop which scripts are your mapper and reducer with the -mapper and -reducer options.
For more, see Running WordCount on Hadoop using R script.
You need to find the logs from your mappers and reducers, since this is where the job is failing (as indicated by java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1). This says that your R script crashed.
If you are using the Hortonworks Hadoop distribution, the easiest way is to open the JobHistory UI. It should be at http://127.0.0.1:19888/jobhistory. It should also be possible to find the logs in the filesystem using the command line, but I haven't yet found where.
Open http://127.0.0.1:19888/jobhistory in your web browser
Click on the Job ID of the failed job
Click the number indicating the failed job count
Click an attempt link
Click the logs link
You should see a page which looks something like
Log Type: stderr
Log Length: 418
Traceback (most recent call last):
File "/hadoop/yarn/local/usercache/root/appcache/application_1404203309115_0003/container_1404203309115_0003_01_000002/./mapper.py", line 45, in <module>
mapper()
File "/hadoop/yarn/local/usercache/root/appcache/application_1404203309115_0003/container_1404203309115_0003_01_000002/./mapper.py", line 37, in mapper
for record in reader:
_csv.Error: newline inside string
This is an error from my Python script, the errors from R look a bit different.
source: http://hortonworks.com/community/forums/topic/map-reduce-job-log-files/
I received this same error tonight, while also developing Map Reduce Streaming jobs with R.
I was working on a 10 node cluster, each with 12 cores, and tried to supply at submission time:
-D mapred.map.tasks=200\
-D mapred.reduce.tasks=200
The job completed successfully though when I changed these to
-D mapred.map.tasks=10\
-D mapred.reduce.tasks=10
This was a mysterious fix, and perhaps more context will arise this evening. But if any readers can elucidate, please do!

Problems running simple rhadoop jobs - broken pipe error

I have a Hadoop cluster set up with the rmr2 and rhdfs packages installed. I've been able to run some sample MR jobs through the CLI and through R scripts. For example, this works:
#!/usr/bin/env Rscript
require('rmr2')
small.ints = to.dfs(1:1000)
out = mapreduce( input = small.ints, map = function(k, v) keyval(v, v^2))
df = as.data.frame( from.dfs( out) )
colnames(df) = c('n', 'n2')
str(df)
Final Output:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
'data.frame': 1000 obs. of 2 variables:
$ n : int 1 2 3 4 5 6 7 8 9 10 ...
$ n2: num 1 4 9 16 25 36 49 64 81 100 ...
I'm now trying to move on to the next step of writing my own MR job. I have a file ('/user/michael/batsmall.csv') with some batting statistics:
aardsda01,2004,1,SFN,NL,11,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11
aardsda01,2006,1,CHN,NL,45,43,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,45
aardsda01,2007,1,CHA,AL,25,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2
aardsda01,2008,1,BOS,AL,47,5,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,5
aardsda01,2009,1,SEA,AL,73,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
aardsda01,2010,1,SEA,AL,53,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
(batsmall.csv is an extract of a much larger file, but really I'm just trying to prove I can read and analyze a file from hdfs)
Here's the script I have:
#!/usr/bin/env Rscript
require('rmr2');
require('rhdfs');
hdfs.init()
hdfs.rmr("/user/michael/rMean")
findMean = function (input, output) {
mapreduce(input = input,
output = output,
input.format = 'csv',
map = function(k, fields) {
myField <- fields[[5]]
keyval(fields[[0]], myField)
},
reduce = function(key, vv) {
keyval(key, mean(as.numeric(vv)))
}
)
}
from.dfs(findMean("/home/michael/r/Batting.csv", "/home/michael/r/rMean"))
print(hdfs.read.text.file("/user/michael/batsmall.csv"))
This fails every time, and looking at the Hadoop logs it seems to be a broken pipe error. I cannot figure out what's causing this. As other jobs work, I would think it's an issue with my script, not my configuration, but I can't figure it out. I am admittedly an R novice and relatively new to Hadoop.
Here's the job output:
[michael#hadoop01 r]$ ./rtest.r
Loading required package: rmr2
Loading required package: Rcpp
Loading required package: RJSONIO
Loading required package: methods
Loading required package: digest
Loading required package: functional
Loading required package: stringr
Loading required package: plyr
Loading required package: rhdfs
Loading required package: rJava
HADOOP_CMD=/usr/bin/hadoop
Be sure to run hdfs.init()
Deleted hdfs://hadoop01.dev.terapeak.com/user/michael/rMean
[1] TRUE
packageJobJar: [/tmp/Rtmp2XnCL3/rmr-local-env55d1533355d7, /tmp/Rtmp2XnCL3/rmr-global-env55d119877dd3, /tmp/Rtmp2XnCL3/rmr-streaming-map55d13c0228b7, /tmp/Rtmp2XnCL3/rmr-streaming-reduce55d150f7ffa8, /tmp/hadoop-michael/hadoop-unjar5464463427878425265/] [] /tmp/streamjob4293464845863138032.jar tmpDir=null
12/12/19 11:09:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/12/19 11:09:41 INFO mapred.FileInputFormat: Total input paths to process : 1
12/12/19 11:09:42 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-michael/mapred/local]
12/12/19 11:09:42 INFO streaming.StreamJob: Running job: job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:09:42 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:09:42 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:09:43 INFO streaming.StreamJob: map 0% reduce 0%
12/12/19 11:10:15 INFO streaming.StreamJob: map 100% reduce 100%
12/12/19 11:10:15 INFO streaming.StreamJob: To kill this job, run:
12/12/19 11:10:15 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039
12/12/19 11:10:15 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039
12/12/19 11:10:15 ERROR streaming.StreamJob: Job not successful. Error: NA
12/12/19 11:10:15 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { :
hadoop streaming failed with error code 1
Calls: findMean -> mapreduce -> mr
Execution halted
And a sample exception from the job tracker:
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
You need to inspect the stderr of the failed attempt; the JobTracker web UI is the easiest way to get there. An educated guess is that fields is a data frame and you are accessing it like a list, which is possible but unusual, and errors may follow indirectly from that.
Also, we have a debugging document on the RHadoop wiki with this and many more suggestions.
Finally, we have a dedicated RHadoop Google group where you can interact with a large number of enthusiastic users. Or you can be on your own on SO.
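For illustration, a hedged sketch of what the map function might look like once fields is treated as a data frame (the column numbers are assumptions based on the question, not the poster's confirmed fix; also remember that R indexing is 1-based, so fields[[0]] is never valid):

library(rmr2)

findMean <- function(input, output) {
  mapreduce(input        = input,
            output       = output,
            input.format = "csv",
            map = function(k, fields) {
              # assumed layout: column 1 = player id (key), column 5 = value to average
              keyval(fields[, 1], as.numeric(fields[, 5]))
            },
            reduce = function(key, vv) {
              keyval(key, mean(vv, na.rm = TRUE))
            })
}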
