Insertion into Hive table from shell script using Oozie

I am using Oozie to automate the process. When I run a simple "load data" statement from a shell script through Oozie, all the data gets inserted into the Hive table. But when I insert values from one table into another table via a shell script, the Oozie job gets killed automatically, and I don't know why.
Error logs:
[18/Mar/2015 08:01:44 +0000] access WARNING 10.23.227.121 hdfs - "GET /logs HTTP/1.1"
[18/Mar/2015 08:01:42 +0000] middleware INFO Processing exception: Could not find job application_1426676094899_0051.: Traceback (most recent call last): File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/hue/apps/jobbrowser/src/jobbrowser/views.py", line 61, in decorate raise PopupException(_('Could not find job %s.') % jobid, detail=e) PopupException: Could not find job application_1426676094899_0051.
[18/Mar/2015 08:01:42 +0000] resource DEBUG GET Got response: {"job":{"submitTime":1426690455158,"startTime":-1,"finishTime":1426690462073,"id":"job_1426676094899_0051","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","user":"hdfs","state":"FAILED","mapsTotal":0,"mapsCompleted":0,"reducesTotal":0,"reducesCompleted":0,"uberized":false,"diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyicbivm021.techmahindra.com:8020/user/hdfs/.staging/job_1426676...
[18/Mar/2015 08:01:42 +0000] resource DEBUG GET Got response: {"app":{"id":"application_1426676094899_0051","user":"hdfs","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","state":"FINISHED","finalStatus":"FAILED","progress":100.0,"trackingUI":"History","trackingUrl":"http://inhyicbivm021.techmahindra.com:8088/proxy/application_1426676094899_0051/jobhistory/job/job_1426676094899_0051","diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyicbivm021.t...
[18/Mar/2015 08:01:42 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/jobs/application_1426676094899_0051 HTTP/1.1"
[18/Mar/2015 08:01:41 +0000] resource DEBUG GET Got response: {"apps":{"app":[{"id":"application_1426676094899_0043","user":"hdfs","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","state":"FINISHED","finalStatus":"FAILED","progress":100.0,"trackingUI":"History","trackingUrl":"http://inhyicbivm021.techmahindra.com:8088/proxy/application_1426676094899_0043/jobhistory/job/job_1426676094899_0043","diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyic...
[18/Mar/2015 08:01:41 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/ HTTP/1.1"
[18/Mar/2015 08:01:40 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/ HTTP/1.1"
[18/Mar/2015 08:01:40 +0000] middleware DEBUG No desktop_app known for request.
[18/Mar/2015 08:01:40 +0000] thrift_util DEBUG Thrift call <class 'TCLIService.TCLIService.Client'>.CloseOperation returned in 0ms: TCloseOperationResp(status=TStatus(errorCode=None, errorMessage=None, sqlState=None, infoMessages=None, statusCode=0))
[18/Mar/2015 08:01:40 +0000] thrift_util DEBUG Thrift call: <class 'TCLIService.TCLIService.Client'>.CloseOperation(args=(TCloseOperationReq(operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=4, operationId=THandleIdentifier(secret='\x89w\xccf\x84:G\xda\xa9GR0\x00\xc8G\x96', guid='J\xcb\xa7\xba|\xfaH\x93\x93\xba$\x02\t\xc0IE'))),), kwargs={})
Any help would be appreciated.
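For context, a shell action of the kind described is usually wired into an Oozie workflow like this. This is only a sketch: the script name insert.sh, the property names, and the overall workflow structure are assumptions, not taken from the question.

```xml
<workflow-app name="hive-insert-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="shell-node"/>
    <action name="shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <!-- insert.sh would call hive, e.g.:
                 hive -e "INSERT INTO TABLE target SELECT * FROM source" -->
            <exec>insert.sh</exec>
            <file>${workflowPath}/insert.sh#insert.sh</file>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Shell action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

One thing worth checking, given the FileNotFoundException on /user/hdfs/.staging, is which user the action actually runs as: an Oozie shell action executes as the Unix user of the YARN container, not necessarily as hdfs, so the Hive job it launches may be staging files under a different user's directory.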

Related

My VM cannot be resized, but the flavor does exist

I cannot resize the VM, and get the error below:
In the /var/log/nova-api.log:
2017-10-11 16:53:07.796 24060 INFO nova.osapi_compute.wsgi.server [-] 118.113.57.187 "GET / HTTP/1.1" status: 200 len: 502 time: 0.0013940
2017-10-11 16:53:08.129 24060 INFO nova.api.openstack.wsgi [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] HTTP Exception Thrown :***Cloud hosting type flavor: 2Core1GB40GB1M did not found*** 。
2017-10-11 16:53:08.131 24060 INFO nova.osapi_compute.wsgi.server [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors/flavor:%202Core1GB40GB1M HTTP/1.1" status: 404 len: 485 time: 0.2736869
2017-10-11 16:53:08.248 24060 INFO nova.osapi_compute.wsgi.server [req-b7d1d426-b110-4931-90aa-f9cceeddb187 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 1913 time: 0.0570610
2017-10-11 16:53:08.565 24060 INFO nova.osapi_compute.wsgi.server [req-8cbc33a5-1a78-4ba7-8869-cf01536f784b - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 875 time: 0.2515521
2017-10-11 16:53:10.433 24059 INFO nova.api.openstack.wsgi [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] HTTP Exception Thrown:***Can not find valid host,....***。
2017-10-11 16:53:10.435 24059 INFO nova.osapi_compute.wsgi.server [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/servers/f9bef431-0635-4c74-9af5-cf61ed4d3ae4/action HTTP/1.1" status: 400 len: 564 time: 1.6831121
No matter which flavor I use, the VM cannot be resized, even though the flavor exists.
Because my OpenStack is an all-in-one server, I cannot migrate the VM (a resize is essentially a migration), so I added this line to the [DEFAULT] section of my nova.conf:
allow_resize_to_same_host=True
After restarting the Nova-related services, it succeeded:
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl restart openstack-nova-compute.service
For a two-node configuration (controller and compute), I needed to include a couple more parameters, as explained here:
Update the nova.conf file on both the controller and compute nodes with the following lines:
allow_migrate_to_same_host = True
scheduler_default_filters = AllHostsFilter
allow_resize_to_same_host = True
Restart the following services on the controller node:
• nova-api
• nova-cert
• nova-consoleauth
• nova-scheduler
• nova-conductor
• nova-novncproxy
Restart the following service on the compute node:
• nova-compute
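Collecting the settings above, the relevant fragment of nova.conf (the same on controller and compute, per the description) would look like this sketch:

```ini
[DEFAULT]
# Allow a resize (implemented internally as a migration) to land on the same host
allow_resize_to_same_host = True
allow_migrate_to_same_host = True
# Let the scheduler consider every host (reasonable only in a small lab setup)
scheduler_default_filters = AllHostsFilter
```

Note that AllHostsFilter disables all scheduler filtering, so it is appropriate for a small test deployment like this one, not for production.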

Nginx: connect() to unix:/var/run/hhvm/sock failed, No such file

I am trying to install FB-CTF, which uses HHVM and Nginx. Everything is set up entirely by the shell script itself, but now the error log shows:
2017/01/18 21:48:17 [crit] 15143#0:
***6 connect() to unix:/var/run/hhvm/sock failed**
(2: No such file or directory) while connecting to upstream,
client: 127.0.0.1, server: ,
request: "GET / HTTP/1.1",
upstream: "fastcgi://unix:/var/run/hhvm/sock:",
host: "localhost"
Actually, /var/run/hhvm/ contains only hhvm.hhbc, and I am getting 502 BAD GATEWAY.
I reinstalled HHVM; the missing files were reinstalled and all the dependent files were correctly restored.
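The error means nginx is pointing its FastCGI upstream at a socket that HHVM never created (hhvm.hhbc alone is just the bytecode cache). The fix is to make HHVM listen on the path nginx expects; a sketch, assuming the paths from the error message (the server.ini location may differ per install):

```ini
; /etc/hhvm/server.ini
; Make HHVM create the Unix socket nginx's upstream points at
hhvm.server.file_socket = /var/run/hhvm/sock
```

The nginx side must then use the exact same path (fastcgi_pass unix:/var/run/hhvm/sock;), and HHVM has to be restarted so it actually creates the socket.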

How to figure out Nginx status code 499

My Setup :
Curl --> AWS ELB --> Nginx --> NodeJS
I face this issue when the number of requests is high.
Nginx access logs
xx.xx.xx.xx - - [30/Oct/2014:13:23:40 +0000] "POST /some/calls/object HTTP/1.1" 499 0 "-" "curl/7.27.0"
xx.xx.xx.xx - - [30/Oct/2014:13:23:40 +0000] "POST /some/calls/object HTTP/1.1" 499 0 "-" "curl/7.27.0"
Nginx error logs
2014/10/30 13:23:40 [info] 11181#0: *17811 client xx.xx.xx.xx closed keepalive connection
2014/10/30 13:23:40 [info] 11181#0: *17631 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: xx.xx.xx.xx, server: example.com, request: "POST /some/calls/objects HTTP/1.1", upstream: "http://xx.xx.xx.xx:xx/some/calls/object", host: "example.com"
NodeJS logs
2014-10-30T13:23:40.074Z : info : req.method GET
2014-10-30T13:23:40.074Z : info : req.url /some/calls/objects
2014-10-30T13:23:40.074Z : info : error { message: 'Request Timed Out!' }
2014-10-30T13:23:40.075Z : info : req.method GET
2014-10-30T13:23:40.075Z : info : req.url /some/calls/objects
2014-10-30T13:23:40.075Z : info : error { message: 'Request Timed Out!' }
2014-10-30T13:23:40.075Z : info : error.stack undefined
2014-10-30T13:23:40.076Z : info : error.stack undefined
Question
As per this link, 499 means the client closed the connection, but my question is: who is the client in this scenario? Is it the ELB, curl, or Node.js that closes the connection?
I've also observed that Node.js takes 60+ seconds to respond to calls, and once Node.js hits the timed-out issue, it keeps occurring again and again.
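In this chain, nginx's "client" is whatever sits directly in front of it: the ELB here, or curl when testing nginx directly. Nginx logs 499 when that side gives up before the upstream answers, and since the ELB's idle timeout defaults to 60 seconds while Node.js takes 60+ seconds, every hop's timeout would need to exceed the app's worst-case latency. A sketch of the nginx side (the upstream name and values are illustrative, not from the question):

```nginx
location /some/calls/ {
    proxy_pass http://nodejs_backend;   # hypothetical upstream name
    # must exceed the slowest expected Node.js response
    proxy_read_timeout 120s;
    proxy_send_timeout 120s;
}
```

The ELB idle timeout (and curl's --max-time, if set) would need raising to match; that said, fixing whatever makes Node.js take 60+ seconds is the real cure.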

OpenShift WordPress Service Temporarily Unavailable. Can't restart application

I'm having a problem with a WordPress instance I need to show to a client.
I'm using a free gear; the app is not intended for production yet. Everything was working fine: I made a child theme and have been working on this page for about a month. Since yesterday morning I'm getting Service Temporarily Unavailable when I try to access the page. It isn't the first time this has happened, but it never lasted more than a few hours; now it has been almost 48 hours. I need to show her the demo, but I can't make it work.
Here is the output of the tail command:
C:\Users\Joao Paulo\Projetos\GibbInsurance\sources\demo>rhc tail demo
DL is deprecated, please use Fiddle
==> app-root/logs/mysql.log <==
140820 22:03:29 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: ready for connections.
Version: '5.5.37' socket: '/var/lib/openshift/539c92755973caa1f000044c/mysql//socket/mysql.sock' port: 3306 MySQL Community Server (GPL)
140823 18:05:36 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: Normal shutdown
140823 18:05:36 [Note] Event Scheduler: Purging the queue. 0 events
140823 18:05:36 InnoDB: Starting shutdown...
140823 18:05:39 InnoDB: Shutdown completed; log sequence number 9866622
140823 18:05:39 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: Shutdown complete
chown: changing ownership of `/var/lib/openshift/539c92755973caa1f000044c/mysql//stdout.err': Operation not permitted
140823 18:05:39 mysqld_safe mysqld from pid file /var/lib/openshift/539c92755973caa1f000044c/mysql/pid/mysql.pid ended
140823 18:05:39 mysqld_safe mysqld from pid file /var/lib/openshift/539c92755973caa1f000044c/mysql/pid/mysql.pid ended
==> app-root/logs/php.log <==
10.6.135.27 - - [23/Aug/2014:16:10:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:16:10:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:17:10:38 -0400] "POST /wp-cron.php?doing_wp_cron=1408828238.7940719127655029296875 HTTP/1.1" 200 - "-" "WordPress/3.9.2; http://demo-gibbinsurance.rhcloud.com"
10.6.135.27 - - [23/Aug/2014:17:10:38 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:17:10:39 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
- - - [23/Aug/2014:17:10:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
[Sat Aug 23 18:05:41 2014] [notice] caught SIGWINCH, shutting down gracefully
Interrupted
Terminate batch job (Y/N)? Y
When I try to restart the server, this is what I'm getting:
C:\Users\Joao Paulo\Projetos\GibbInsurance\sources\demo>rhc app restart -a demo
DL is deprecated, please use Fiddle
Failed to execute: 'control restart' for /var/lib/openshift/539c92755973caa1f000044c/mysql
Failed to execute: 'control restart' for /var/lib/openshift/539c92755973caa1f000044c/php
I appreciate any help.
Thanks a lot!
Try doing a force-stop on your application and then a start, and see if that helps. You should also check your quota and make sure that you are not out of disk space on your gear.
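In rhc terms, that advice translates to something like the following (the app name demo comes from the question; exact flags depend on your rhc version):

```shell
rhc app force-stop -a demo
rhc app start -a demo
# Then, from an SSH session on the gear (rhc ssh -a demo),
# check disk usage against your quota:
#   quota -s
```

If the gear is out of disk space, clearing old logs or temporary files from app-root/logs usually frees enough room for MySQL and PHP to start again.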

502 Bad Gateway error with nginx and unicorn

I have nginx 1.4.1 and Unicorn set up on CentOS. I am getting a 502 Bad Gateway error, and my nginx log shows this:
1 connect() to unix:/tmp/unicorn.pantry.sock failed (2: No such file or directory)
while connecting to upstream, client: 192.168.1.192, server: , request: "GET / HTTP/1.1",
upstream: "http://unix:/tmp/unicorn.pantry.sock:/", host: "192.168.1.30"
There is no /tmp/unicorn.pantry.sock file or directory. I am thinking it might be a permission error, so the file can't be written; if so, who requires what permission? I have also read that I could use a TCP connection instead.
Also, I don't understand where 192.168.1.192 comes from.
I just want to make it work. How can I do it?
OK, I figured this out. I had unicorn.sock in my shared directory, so I needed to point unix: to it.
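For anyone hitting the same thing: the path in nginx's upstream must match the listen path in the Unicorn config exactly. A sketch, with an illustrative shared-directory path (not the asker's actual layout):

```nginx
# unicorn.rb on the Ruby side would contain the matching line:
#   listen "/srv/pantry/shared/unicorn.sock", backlog: 64
upstream pantry_app {
    # must be the exact socket path Unicorn listens on
    server unix:/srv/pantry/shared/unicorn.sock fail_timeout=0;
}
server {
    listen 80;
    location / {
        proxy_pass http://pantry_app;
    }
}
```

As for 192.168.1.192: in an nginx error log, "client:" is simply the IP address of the machine that made the request (here, whoever issued GET /), not anything to do with the socket.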
