I'm having a problem with a WordPress instance I need to show to a client.
I'm using a free gear; the app is not intended for production yet. Everything was working fine: I made a child theme and have been working on this page for about a month. Since yesterday morning I'm getting "Service Temporarily Unavailable" when I try to access the page. It isn't the first time this has happened, but it never lasted more than a few hours; now it has been almost 48 hours. I need to show the client the demo, but I can't get it to work.
Here is the output of the tail command:
C:\Users\Joao Paulo\Projetos\GibbInsurance\sources\demo>rhc tail demo
DL is deprecated, please use Fiddle
==> app-root/logs/mysql.log <==
140820 22:03:29 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: ready for connections.
Version: '5.5.37' socket: '/var/lib/openshift/539c92755973caa1f000044c/mysql//socket/mysql.sock' port: 3306 MySQL Community Server (GPL)
140823 18:05:36 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: Normal shutdown
140823 18:05:36 [Note] Event Scheduler: Purging the queue. 0 events
140823 18:05:36 InnoDB: Starting shutdown...
140823 18:05:39 InnoDB: Shutdown completed; log sequence number 9866622
140823 18:05:39 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: Shutdown complete
chown: changing ownership of `/var/lib/openshift/539c92755973caa1f000044c/mysql//stdout.err': Operation not permitted
140823 18:05:39 mysqld_safe mysqld from pid file /var/lib/openshift/539c92755973caa1f000044c/mysql/pid/mysql.pid ended
140823 18:05:39 mysqld_safe mysqld from pid file /var/lib/openshift/539c92755973caa1f000044c/mysql/pid/mysql.pid ended
==> app-root/logs/php.log <==
10.6.135.27 - - [23/Aug/2014:16:10:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:16:10:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:17:10:38 -0400] "POST /wp-cron.php?doing_wp_cron=1408828238.7940719127655029296875 HTTP/1.1" 200 - "-" "WordPress/3.9.2; http://demo-gibbinsurance.rhcloud.com"
10.6.135.27 - - [23/Aug/2014:17:10:38 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:17:10:39 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
- - - [23/Aug/2014:17:10:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
[Sat Aug 23 18:05:41 2014] [notice] caught SIGWINCH, shutting down gracefully
Interrupted
Terminate batch job (Y/N)? Y
When I try to restart the server, this is what I'm getting:
C:\Users\Joao Paulo\Projetos\GibbInsurance\sources\demo>rhc app restart -a demo
DL is deprecated, please use Fiddle
Failed to execute: 'control restart' for /var/lib/openshift/539c92755973caa1f000044c/mysql
Failed to execute: 'control restart' for /var/lib/openshift/539c92755973caa1f000044c/php
I appreciate any help.
Thanks a lot!
Try doing a force-stop on your application and then a start, and see if that helps. You should also check your quota and make sure that you are not out of disk space on your gear.
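With the rhc client that would look roughly like the lines below (a sketch only, assuming the app name demo from the logs above; the exact syntax can vary by rhc version):
rhc app force-stop -a demo
rhc app start -a demo
rhc ssh demo
Once SSHed into the gear, running quota -s shows whether the gear's disk quota is exhausted.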
How do I configure the visualizer so it does not hang after sending a picture to the browser?
displaCy sends the picture to the browser and it is drawn there, but after that neither the console nor the browser responds to keystrokes and buttons. In the browser I can only close the page with the picture, but that is not communicated back to spaCy. The program waits for a response it never gets, and I have to interrupt execution to continue. After interrupting the kernel, execution continues normally. I am running this in the Spyder environment.
My steps: start the browser and Spyder, run the visualization program, and once "Serving on http://0.0.0.0:5000 ..." appears in the console, enter the address in the browser. A picture appears and the console logs lines like: 127.0.0.1 - - [09/Feb/2023 19:06:28] "GET / HTTP/1.1" 200 3400. After that, all I can do is close the page.
import spacy
from spacy import displacy
print('\n begin')
nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence.")
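# displacy.serve() starts a local web server and blocks here until that server is shut down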
displacy.serve(doc, style="dep", options={"compact": True})
print('\n end')
console content:
begin
Using the 'dep' visualizer
Serving on http://0.0.0.0:5000 ...
127.0.0.1 - - [09/Feb/2023 19:06:28] "GET / HTTP/1.1" 200 3400
127.0.0.1 - - [09/Feb/2023 19:06:29] "GET /favicon.ico HTTP/1.1" 200 3400
127.0.0.1 - - [09/Feb/2023 19:06:29] "GET /favicon.ico HTTP/1.1" 200 3400
Shutting down server on port 5000.
end
I cannot resize the VM, and I get the below error:
(error screenshot; source: openstack.org)
In the /var/log/nova-api.log:
2017-10-11 16:53:07.796 24060 INFO nova.osapi_compute.wsgi.server [-] 118.113.57.187 "GET / HTTP/1.1" status: 200 len: 502 time: 0.0013940
2017-10-11 16:53:08.129 24060 INFO nova.api.openstack.wsgi [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] HTTP Exception Thrown: ***Cloud hosting type flavor: 2Core1GB40GB1M did not found***
2017-10-11 16:53:08.131 24060 INFO nova.osapi_compute.wsgi.server [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors/flavor:%202Core1GB40GB1M HTTP/1.1" status: 404 len: 485 time: 0.2736869
2017-10-11 16:53:08.248 24060 INFO nova.osapi_compute.wsgi.server [req-b7d1d426-b110-4931-90aa-f9cceeddb187 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 1913 time: 0.0570610
2017-10-11 16:53:08.565 24060 INFO nova.osapi_compute.wsgi.server [req-8cbc33a5-1a78-4ba7-8869-cf01536f784b - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 875 time: 0.2515521
2017-10-11 16:53:10.433 24059 INFO nova.api.openstack.wsgi [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] HTTP Exception Thrown: ***Can not find valid host,....***
2017-10-11 16:53:10.435 24059 INFO nova.osapi_compute.wsgi.server [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/servers/f9bef431-0635-4c74-9af5-cf61ed4d3ae4/action HTTP/1.1" status: 400 len: 564 time: 1.6831121
No matter which flavor I use, the VM cannot be resized, even though the flavor exists.
Because my OpenStack is an all-in-one server, I cannot migrate the VM (a resize is essentially a migration), so I added this line to the [DEFAULT] section of my nova.conf:
allow_resize_to_same_host=True
Then I restarted the nova-related services, and it succeeded:
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl restart openstack-nova-compute.service
For the two-node configuration (controller and compute), I needed to include a couple more parameters, as explained here:
Update the nova.conf file on both the controller and compute nodes with the following lines:
allow_migrate_to_same_host = True
scheduler_default_filters = AllHostsFilter
allow_resize_to_same_host = True
Restart the following services on the controller node:
• nova-api
• nova-cert
• nova-consoleauth
• nova-scheduler
• nova-conductor
• nova-novncproxy
Restart the following service on the compute node:
• nova-compute
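After the services are back up, the resize can be retried and confirmed from the CLI. A sketch with the nova client, reusing the server UUID and flavor name that appear in the logs above:
nova resize f9bef431-0635-4c74-9af5-cf61ed4d3ae4 2Core1GB40GB1M --poll
nova resize-confirm f9bef431-0635-4c74-9af5-cf61ed4d3ae4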
I have four single node installs for IBM Cloudant. All IBM Cloudant instances are installed on RHEL 6.5. Three of the four installs are working just fine. On the fourth, I am having issues with dashboard.html, haproxy, and connecting to databases. The following are the details from my debugging effort:
I used the same install instructions on all 4 machines.
I also verified that all RPMs are at the same levels.
I made sure /etc/hosts, /etc/resolv.conf, and /etc/sysconfig/network are all set correctly.
I disabled iptables for some of my tests, with no luck.
haproxy is set to run on port 10080
nginx is set to run on port 5657
From the end of a weatherreport run:
['cloudant@prdpcrdlp01.w3-969.ibm.com'] [warning] Cluster member cloudant@localhost is not connected to this node. Please check whether it is down.
From haproxy.log: 500s for all of my databases from dashboard.html
Mar 2 12:41:38 localhost.localdomain haproxy[26792]: 9.72.190.182:49510 [02/Mar/2016:12:41:38.166] dbfarm dbfarm/prdpcrdlp01.w3-969.ibm.com 181/0/0/2/183 500 312 - - ---- 5/5/0/1/0 0/0 "GET /stats HTTP/1.1"
Mar 2 12:41:38 localhost.localdomain haproxy[26792]: 9.72.190.182:49516 [02/Mar/2016:12:41:34.963] dbfarm dbfarm/prdpcrdlp01.w3-969.ibm.com 3417/0/0/2/3419 500 312 - - ---- 5/5/0/1/0 0/0 "GET /_replicator HTTP/1.1"
Mar 2 12:41:38 localhost.localdomain haproxy[26792]: 9.72.190.182:49517 [02/Mar/2016:12:41:34.964] dbfarm dbfarm/prdpcrdlp01.w3-969.ibm.com 3425/0/0/3/3428 500 312 - - ---- 5/5/1/2/0 0/0 "GET /metrics HTTP/1.1"
Mar 2 12:41:38 localhost.localdomain haproxy[26792]: 9.72.190.182:49518 [02/Mar/2016:12:41:34.968] dbfarm dbfarm/prdpcrdlp01.w3-969.ibm.com 3422/0/0/3/3425 500 312 - - ---- 5/5/0/1/0 0/0 "GET /ray HTTP/1.1"
Mar 2 12:41:38 localhost.localdomain haproxy[26792]: 9.72.190.182:49515 [02/Mar/2016:12:41:34.925] dbfarm dbfarm/prdpcrdlp01.w3-969.ibm.com 3726/0/0/2/3728 500 312 - - ---- 5/5/0/1/0 0/0 "GET /test2 HTTP/1.1"
From cloudant.log:
2016-03-02 12:55:52.245 [error] cloudant@prdpcrdlp01.w3-969.ibm.com <0.10284.0> Missing IOQ stats db:
2016-03-02 12:56:04.066 [error] cloudant@prdpcrdlp01.w3-969.ibm.com <0.10127.0> httpd 500 error response:
{"error":"nodedown","reason":"progress not possible"}
From Firebug:
I see 500s when attempting to access all DBs.
Example: http://prdpcrdlp01.w3-969.ibm.com:10080/test2
I have performed a clean install of IBM Cloudant twice and the issue persists.
I would guess the system has had multiple node names over its lifetime, i.e. previously it was brought up with the node name cloudant@localhost and now it has the node name cloudant@prdpcrdlp01.w3-969.ibm.com. Any databases created when the node name was cloudant@localhost will therefore be unavailable now.
What does the output of curl -X GET http://prdpcrdlp01.w3-969.ibm.com:10080/_membership look like?
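For reference, _membership lists the node names the cluster knows about, so a stale name should be visible directly in that output. An illustrative (not actual) response, using the two node names from the logs above:
{"all_nodes":["cloudant@prdpcrdlp01.w3-969.ibm.com"],"cluster_nodes":["cloudant@localhost","cloudant@prdpcrdlp01.w3-969.ibm.com"]}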
Solved: the DELETE command worked, and then I just needed to rerun "configure.sh -D", which deleted all of the DBs and recreated them.
I am using Oozie to automate the process. When I load data into Hive tables with a simple shell script run from Oozie, all the data gets inserted. But when I insert values from one table into another table using a shell script, the Oozie job gets killed automatically. I don't know why...
Error logs:
[18/Mar/2015 08:01:44 +0000] access WARNING 10.23.227.121 hdfs - "GET /logs HTTP/1.1"
[18/Mar/2015 08:01:42 +0000] middleware INFO Processing exception: Could not find job application_1426676094899_0051.: Traceback (most recent call last): File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/hue/apps/jobbrowser/src/jobbrowser/views.py", line 61, in decorate raise PopupException(_('Could not find job %s.') % jobid, detail=e) PopupException: Could not find job application_1426676094899_0051.
[18/Mar/2015 08:01:42 +0000] resource DEBUG GET Got response: {"job":{"submitTime":1426690455158,"startTime":-1,"finishTime":1426690462073,"id":"job_1426676094899_0051","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","user":"hdfs","state":"FAILED","mapsTotal":0,"mapsCompleted":0,"reducesTotal":0,"reducesCompleted":0,"uberized":false,"diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyicbivm021.techmahindra.com:8020/user/hdfs/.staging/job_1426676...
[18/Mar/2015 08:01:42 +0000] resource DEBUG GET Got response: {"app":{"id":"application_1426676094899_0051","user":"hdfs","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","state":"FINISHED","finalStatus":"FAILED","progress":100.0,"trackingUI":"History","trackingUrl":"http://inhyicbivm021.techmahindra.com:8088/proxy/application_1426676094899_0051/jobhistory/job/job_1426676094899_0051","diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyicbivm021.t...
[18/Mar/2015 08:01:42 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/jobs/application_1426676094899_0051 HTTP/1.1"
[18/Mar/2015 08:01:41 +0000] resource DEBUG GET Got response: {"apps":{"app":[{"id":"application_1426676094899_0043","user":"hdfs","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","state":"FINISHED","finalStatus":"FAILED","progress":100.0,"trackingUI":"History","trackingUrl":"http://inhyicbivm021.techmahindra.com:8088/proxy/application_1426676094899_0043/jobhistory/job/job_1426676094899_0043","diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyic...
[18/Mar/2015 08:01:41 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/ HTTP/1.1"
[18/Mar/2015 08:01:40 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/ HTTP/1.1"
[18/Mar/2015 08:01:40 +0000] middleware DEBUG No desktop_app known for request.
[18/Mar/2015 08:01:40 +0000] thrift_util DEBUG Thrift call <class 'TCLIService.TCLIService.Client'>.CloseOperation returned in 0ms: TCloseOperationResp(status=TStatus(errorCode=None, errorMessage=None, sqlState=None, infoMessages=None, statusCode=0))
[18/Mar/2015 08:01:40 +0000] thrift_util DEBUG Thrift call: <class 'TCLIService.TCLIService.Client'>.CloseOperation(args=(TCloseOperationReq(operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=4, operationId=THandleIdentifier(secret='\x89w\xccf\x84:G\xda\xa9GR0\x00\xc8G\x96', guid='J\xcb\xa7\xba|\xfaH\x93\x93\xba$\x02\t\xc0IE'))),), kwargs={})
Need help.
I am using goaccess to analyze my nginx log. The problem is that the same URL appears with different parameters.
115.*.*.115 - - [01/Nov/2013:06:15:29 +0000] "GET /this/is/example/test.html?ver=53&q=aaaaaa HTTP/1.1" 200 64 "-" "-"
115.*.*.115 - - [01/Nov/2013:06:15:29 +0000] "GET /this/is/example/test.html?ver=53&q=bbbbbb HTTP/1.1" 200 64 "-" "-"
I want to ignore the parameters and just count the URL itself, like "/this/is/example/test.html".
How can I do that, or is there another tool that can?
Thanks.
perl -p -e 's/\?.*(\sHTTP)/$1/' log | goaccess
should do it.
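For instance, the first log line above becomes:
115.*.*.115 - - [01/Nov/2013:06:15:29 +0000] "GET /this/is/example/test.html HTTP/1.1" 200 64 "-" "-"
so goaccess only sees the query-less URL and both example requests are counted under /this/is/example/test.html.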