My VM cannot be resized, but the flavor does exist - openstack

I cannot resize the VM, and I get the below error:
(source: openstack.org)
In the /var/log/nova-api.log:
2017-10-11 16:53:07.796 24060 INFO nova.osapi_compute.wsgi.server [-] 118.113.57.187 "GET / HTTP/1.1" status: 200 len: 502 time: 0.0013940
2017-10-11 16:53:08.129 24060 INFO nova.api.openstack.wsgi [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] HTTP Exception Thrown :***Cloud hosting type flavor: 2Core1GB40GB1M did not found*** 。
2017-10-11 16:53:08.131 24060 INFO nova.osapi_compute.wsgi.server [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors/flavor:%202Core1GB40GB1M HTTP/1.1" status: 404 len: 485 time: 0.2736869
2017-10-11 16:53:08.248 24060 INFO nova.osapi_compute.wsgi.server [req-b7d1d426-b110-4931-90aa-f9cceeddb187 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 1913 time: 0.0570610
2017-10-11 16:53:08.565 24060 INFO nova.osapi_compute.wsgi.server [req-8cbc33a5-1a78-4ba7-8869-cf01536f784b - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 875 time: 0.2515521
2017-10-11 16:53:10.433 24059 INFO nova.api.openstack.wsgi [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] HTTP Exception Thrown:***Can not find valid host,....***。
2017-10-11 16:53:10.435 24059 INFO nova.osapi_compute.wsgi.server [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/servers/f9bef431-0635-4c74-9af5-cf61ed4d3ae4/action HTTP/1.1" status: 400 len: 564 time: 1.6831121
No matter which flavor I choose, the VM cannot be resized, even though the flavor does exist.
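Before touching the config, it is worth confirming how Nova actually sees the flavor, since the 404 above shows the literal string `flavor: 2Core1GB40GB1M` being sent as the flavor reference. A quick sanity check with the standard CLIs (flavor name and server UUID taken from the logs above):

```shell
# List all flavors and confirm the exact name/ID Nova knows about
openstack flavor list
openstack flavor show 2Core1GB40GB1M

# Resize using the name/ID exactly as listed, then confirm
# once the server reaches VERIFY_RESIZE
nova resize f9bef431-0635-4c74-9af5-cf61ed4d3ae4 2Core1GB40GB1M
nova resize-confirm f9bef431-0635-4c74-9af5-cf61ed4d3ae4
```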

Because my OpenStack is an all-in-one deployment, the VM cannot be migrated (a resize is essentially a migration). So I added this line to the [DEFAULT] section of my nova.conf:
allow_resize_to_same_host=True
After restarting the Nova-related services, the resize succeeded:
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl restart openstack-nova-compute.service

For a two-node configuration (controller and compute), I needed to include a couple of additional parameters, as explained here:
Update the nova.conf file on both the controller and compute nodes with the following lines:
allow_migrate_to_same_host = True
scheduler_default_filters = AllHostsFilter
allow_resize_to_same_host = True
Restart the following services on the controller node:
• nova-api
• nova-cert
• nova-consoleauth
• nova-scheduler
• nova-conductor
• nova-novncproxy
Restart the following service on the compute node:
• nova-compute

Related

Openstack unable to create instance due neutron connection error

I am trying to do a manual install of OpenStack. I am not able to create an instance. I followed the documentation but I am still getting errors. Anyone willing to help me get OpenStack started? Much appreciated.
If I reload the website, it does show up.
Apache error.log:
Timeout when reading response headers from daemon process 'horizon': /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py, referer: http://192.168.1.100/horizon/project/instances/
neutron-server.log:
2018-11-07 18:07:52.655 31285 DEBUG neutron.wsgi [-] (31285) accepted ('192.168.1.100', 35802) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:956
2018-11-07 18:07:52.799 31285 DEBUG neutron.pecan_wsgi.hooks.policy_enforcement [req-9b499140-3f8f-471c-8bdf-2c377170e782 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] Attributes excluded by policy engine: [u'is_default', u'vlan_transparent'] _exclude_attributes_by_policy /usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256
2018-11-07 18:07:52.801 31285 INFO neutron.wsgi [req-9b499140-3f8f-471c-8bdf-2c377170e782 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "GET /v2.0/networks?shared=True HTTP/1.1" status: 200 len: 866 time: 0.1453359
2018-11-07 18:07:52.817 31286 DEBUG neutron.wsgi [-] (31286) accepted ('192.168.1.100', 35808) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:956
2018-11-07 18:07:52.959 31286 INFO neutron.wsgi [req-044501ca-4e30-4871-81f1-5b979b859277 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "GET /v2.0/ports?network_id=fb280ae3-bc89-4922-bd85-391c79967ae8 HTTP/1.1" status: 200 len: 1804 time: 0.1413569
2018-11-07 18:08:03.763 31286 INFO neutron.wsgi [req-b48c285d-e875-4a93-ad2e-d810e407d46c b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "GET /v2.0/security-groups?fields=id&id=26ee0adb-06a9-49a1-a134-7058131ad216 HTTP/1.1" status: 200 len: 267 time: 0.0607591
2018-11-07 18:08:03.819 31286 INFO neutron.wsgi [req-959eb0d0-b302-4aa0-a57f-226133b7c9d3 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "GET /v2.0/security-groups/26ee0adb-06a9-49a1-a134-7058131ad216 HTTP/1.1" status: 200 len: 2631 time: 0.0522490
2018-11-07 18:08:03.989 31286 DEBUG neutron.pecan_wsgi.hooks.policy_enforcement [req-514f65e7-6aed-42c5-aef8-9863650f8253 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] Attributes excluded by policy engine: [u'is_default', u'vlan_transparent'] _exclude_attributes_by_policy /usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256
2018-11-07 18:08:03.991 31286 INFO neutron.wsgi [req-514f65e7-6aed-42c5-aef8-9863650f8253 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "GET /v2.0/networks?id=fb280ae3-bc89-4922-bd85-391c79967ae8 HTTP/1.1" status: 200 len: 866 time: 0.1661508
2018-11-07 18:08:04.007 31286 INFO neutron.wsgi [req-ff26e06a-3ec7-4232-a1f8-0ddbcdafead1 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "GET /v2.0/quotas/85ee25734f664d6d822379674c93da44 HTTP/1.1" status: 200 len: 341 time: 0.0136049
2018-11-07 18:08:04.085 31286 INFO neutron.wsgi [req-b5797013-aab1-440f-b7c4-f0e8f8571a30 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "GET /v2.0/ports?fields=id&tenant_id=85ee25734f664d6d822379674c93da44 HTTP/1.1" status: 200 len: 255 time: 0.0731540
2018-11-07 18:08:04.998 31286 DEBUG neutron.wsgi [req-f3e3f7d8-daa4-4464-ab4a-430cbf2dee58 5673e5fcc9ff4029b9a72e32310570fb 6e60623ffbbe49fb94cc0eade1d4f6e3 - default default] http://controller:9696/v2.0/extensions returned with HTTP 200 __call__ /usr/lib/python2.7/dist-packages/neutron/wsgi.py:715
2018-11-07 18:08:05.000 31286 INFO neutron.wsgi [req-f3e3f7d8-daa4-4464-ab4a-430cbf2dee58 5673e5fcc9ff4029b9a72e32310570fb 6e60623ffbbe49fb94cc0eade1d4f6e3 - default default] 192.168.1.100 "GET /v2.0/extensions HTTP/1.1" status: 200 len: 7807 time: 0.3759091
2018-11-07 18:08:05.118 31286 INFO neutron.wsgi [req-c8c02b81-6a39-4a1a-813c-8eeaffee2d53 5673e5fcc9ff4029b9a72e32310570fb 6e60623ffbbe49fb94cc0eade1d4f6e3 - default default] 192.168.1.100 "GET /v2.0/networks/fb280ae3-bc89-4922-bd85-391c79967ae8?fields=segments HTTP/1.1" status: 200 len: 212 time: 0.1159072
2018-11-07 18:08:05.243 31286 INFO neutron.wsgi [req-17890f41-07cc-4e53-998d-feb6443e3515 5673e5fcc9ff4029b9a72e32310570fb 6e60623ffbbe49fb94cc0eade1d4f6e3 - default default] 192.168.1.100 "GET /v2.0/networks/fb280ae3-bc89-4922-bd85-391c79967ae8?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1" status: 200 len: 281 time: 0.1205151
2018-11-07 18:08:19.218 31289 DEBUG neutron.db.agents_db [req-68b39ef4-218e-442f-ab21-f2b8442bdb65 - - - - -] Agent healthcheck: found 0 active agents agent_health_check /usr/lib/python2.7/dist-packages/neutron/db/agents_db.py:326
2018-11-07 18:08:56.222 31289 DEBUG neutron.db.agents_db [req-68b39ef4-218e-442f-ab21-f2b8442bdb65 - - - - -] Agent healthcheck: found 0 active agents agent_health_check /usr/lib/python2.7/dist-packages/neutron/db/agents_db.py:326
Neutron log file after trying to create a floating IP:
0dfe9c3 HTTP/1.1" status: 200 len: 825 time: 0.0563171
2018-11-07 18:20:54.655 31285 DEBUG neutron.wsgi [-] (31285) accepted ('192.168.1.100', 37282) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:956
2018-11-07 18:20:54.660 31285 WARNING neutron.pecan_wsgi.controllers.root [req-6081c61f-abf0-4f97-9f00-5a2a4870fca4 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] No controller found for: floatingips - returning response code 404: PecanNotFound
2018-11-07 18:20:54.661 31285 INFO neutron.pecan_wsgi.hooks.translation [req-6081c61f-abf0-4f97-9f00-5a2a4870fca4 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] POST failed (client error): The resource could not be found.
2018-11-07 18:20:54.661 31285 DEBUG neutron.pecan_wsgi.hooks.notifier [req-6081c61f-abf0-4f97-9f00-5a2a4870fca4 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] Skipping NotifierHook processing as there was no resource associated with the request after /usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/hooks/notifier.py:74
2018-11-07 18:20:54.662 31285 INFO neutron.wsgi [req-6081c61f-abf0-4f97-9f00-5a2a4870fca4 b2e32e162a664ad0afd1a0c34643cd0c 85ee25734f664d6d822379674c93da44 - default default] 192.168.1.100 "POST /v2.0/floatingips HTTP/1.1" status: 404
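The `No controller found for: floatingips ... PecanNotFound` warning typically means neutron-server has no L3 (router) service plugin loaded, so the /v2.0/floatingips resource does not exist at all; the `found 0 active agents` healthcheck lines above also suggest no agents are registered. A minimal neutron.conf sketch to enable the router extension (plugin names assumed from the standard ML2/L3 setup; verify against your release), followed by a restart of neutron-server and the L3 agent:

```ini
[DEFAULT]
core_plugin = ml2
service_plugins = router
```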

Appium iOS - Unknown device or simulator UDID

I am facing the following issue while running a test script on a real device (iPhone 7).
After seeing that log, I tried to solve the issue, but I am stuck here (check the screenshot).
Environment -
Appium - 1.8.0-beta3
Device - iPhone 7 (OS - 10.3.1)
macOS - 10.13.2
Appium Log -
[Appium] Welcome to Appium v1.8.0-beta3
[Appium] Non-default server args:
[Appium] port: 5488
[Appium] Appium REST http interface listener started on 0.0.0.0:5488
[HTTP] --> GET /wd/hub/status {}
[debug] [MJSONWP] Calling AppiumDriver.getStatus() with args: []
[debug] [MJSONWP] Responding to client with driver.getStatus() result: {"build":{"version":"1.8.0-beta3","revision":null}}
[HTTP] <-- GET /wd/hub/status 200 23 ms - 78
Attempt[1] to start appium server
[HTTP] --> GET /wd/hub/status {}
[debug] [MJSONWP] Calling AppiumDriver.getStatus() with args: []
[debug] [MJSONWP] Responding to client with driver.getStatus() result: {"build":{"version":"1.8.0-beta3","revision":null}}
[HTTP] <-- GET /wd/hub/status 200 15 ms - 78
[HTTP] --> GET /wd/hub/status {}
[debug] [MJSONWP] Calling AppiumDriver.getStatus() with args: []
[debug] [MJSONWP] Responding to client with driver.getStatus() result: {"build":{"version":"1.8.0-beta3","revision":null}}
[HTTP] <-- GET /wd/hub/status 200 15 ms - 78
[HTTP] --> POST /wd/hub/session {"desiredCapabilities":{"noReset":false,"clearSystemFiles":true,"newCommandTimeout":1200,"platformVersion":"10.3.1","automationName":"XCuiTest","bundleId":"--------","udid":"457187374caf18----------------------------------","platformName":"iOS","deviceName":"iOS"},"requiredCapabilities":{},"capabilities":{"desiredCapabilities":{"noReset":false,"clearSystemFiles":true,"newCommandTimeout":1200,"platformVersion":"10.3.1","automationName":"XCuiTest","bundleId":"-----------","udid":"457187374caf18----------------------------------","platformName":"iOS","deviceName":"iOS"},"requiredCapabilities":{},"alwaysMatch":{"platformName":"iOS"},"firstMatch":[]}}
[debug] [MJSONWP] Calling AppiumDriver.createSession() with args: [{"noReset":false,"clearSystemFiles":true,"newCommandTimeout":1200,"platformVersion":"10.3.1","automationName":"XCuiTest","bundleId":"-----------","udid":"457187374caf18----------------------------------","platformName":"iOS","deviceName":"iOS"},{},{"desiredCapabilities":{"noReset":false,"clearSystemFiles":true,"newCommandTimeout":1200,"platformVersion":"10.3.1","automationName":"XCuiTest","bundleId":"-----------","udid":"457187374caf18----------------------------------","platformName":"iOS","deviceName":"iOS"},"requiredCapabilities":{},"alwaysMatch":{"platformName":"iOS"},"firstMatch":[]}]
[debug] [BaseDriver] Event 'newSessionRequested' logged at 1519049570375 (19:42:50 GMT+0530 (IST))
[Appium] Could not parse W3C capabilities: 'deviceName' can't be blank. Falling back to JSONWP protocol.
[Appium] Creating new XCUITestDriver (v2.68.0) session
[Appium] Capabilities:
[Appium] noReset: false
[Appium] clearSystemFiles: true
[Appium] newCommandTimeout: 1200
[Appium] platformVersion: 10.3.1
[Appium] automationName: XCuiTest
[Appium] bundleId: -----------
[Appium] udid: 457187374caf18----------------------------------
[Appium] platformName: iOS
[Appium] deviceName: iOS
[debug] [BaseDriver]
[debug] [BaseDriver] Creating session with MJSONWP desired capabilities: {"noReset":false,"clearSyst...
[BaseDriver] Session created with session id: 431df073-2efd-4f7f-b9c6-9868cc1d87d6
[debug] [XCUITest] Current user: 'test'
[debug] [XCUITest] Xcode version set to '9.2'
[debug] [XCUITest] iOS SDK Version set to '11.2'
[debug] [BaseDriver] Event 'xcodeDetailsRetrieved' logged at 1519049570723 (19:42:50 GMT+0530 (IST))
[XCUITest] The 'idevice_id' program is not installed. If you are running a real device test it is necessary. Install with 'brew install libimobiledevice --HEAD'
[debug] [XCUITest] Available devices:
[XCUITest] Error: Unknown device or simulator UDID: '457187374caf18----------------------------------'
On the terminal, run these commands:
brew uninstall ideviceinstaller
brew uninstall --ignore-dependencies libimobiledevice
brew install --HEAD libimobiledevice
brew unlink libimobiledevice && brew link libimobiledevice
brew install --HEAD ideviceinstaller
brew unlink ideviceinstaller && brew link ideviceinstaller
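After relinking, it is worth verifying that the tooling can actually see the device before rerunning Appium, since the error came from the device-listing step. A quick check, assuming the iPhone is plugged in and trusted:

```shell
# Prints the UDID of every connected real device; the UDID passed in the
# Appium 'udid' capability must appear in this list
idevice_id -l

# Optionally confirm ideviceinstaller can talk to the device as well
ideviceinstaller -l
```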

Insertion into Hive table from shell script using Oozie

I am using Oozie to automate the process. When I run a simple LOAD DATA from a shell script in Oozie, all the data gets inserted into the Hive tables. But when I insert values from one table into another table using the shell script, the Oozie job gets killed automatically. I don't know why.
Error logs:
[18/Mar/2015 08:01:44 +0000] access WARNING 10.23.227.121 hdfs - "GET /logs HTTP/1.1"
[18/Mar/2015 08:01:42 +0000] middleware INFO Processing exception: Could not find job application_1426676094899_0051.: Traceback (most recent call last): File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/hue/apps/jobbrowser/src/jobbrowser/views.py", line 61, in decorate raise PopupException(_('Could not find job %s.') % jobid, detail=e) PopupException: Could not find job application_1426676094899_0051.
[18/Mar/2015 08:01:42 +0000] resource DEBUG GET Got response: {"job":{"submitTime":1426690455158,"startTime":-1,"finishTime":1426690462073,"id":"job_1426676094899_0051","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","user":"hdfs","state":"FAILED","mapsTotal":0,"mapsCompleted":0,"reducesTotal":0,"reducesCompleted":0,"uberized":false,"diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyicbivm021.techmahindra.com:8020/user/hdfs/.staging/job_1426676...
[18/Mar/2015 08:01:42 +0000] resource DEBUG GET Got response: {"app":{"id":"application_1426676094899_0051","user":"hdfs","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","state":"FINISHED","finalStatus":"FAILED","progress":100.0,"trackingUI":"History","trackingUrl":"http://inhyicbivm021.techmahindra.com:8088/proxy/application_1426676094899_0051/jobhistory/job/job_1426676094899_0051","diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyicbivm021.t...
[18/Mar/2015 08:01:42 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/jobs/application_1426676094899_0051 HTTP/1.1"
[18/Mar/2015 08:01:41 +0000] resource DEBUG GET Got response: {"apps":{"app":[{"id":"application_1426676094899_0043","user":"hdfs","name":"insert into table ide...ecision.currency_dim(Stage-1)","queue":"default","state":"FINISHED","finalStatus":"FAILED","progress":100.0,"trackingUI":"History","trackingUrl":"http://inhyicbivm021.techmahindra.com:8088/proxy/application_1426676094899_0043/jobhistory/job/job_1426676094899_0043","diagnostics":"Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://inhyic...
[18/Mar/2015 08:01:41 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/ HTTP/1.1"
[18/Mar/2015 08:01:40 +0000] access INFO 10.23.227.121 hdfs - "GET /jobbrowser/ HTTP/1.1"
[18/Mar/2015 08:01:40 +0000] middleware DEBUG No desktop_app known for request.
[18/Mar/2015 08:01:40 +0000] thrift_util DEBUG Thrift call <class 'TCLIService.TCLIService.Client'>.CloseOperation returned in 0ms: TCloseOperationResp(status=TStatus(errorCode=None, errorMessage=None, sqlState=None, infoMessages=None, statusCode=0))
[18/Mar/2015 08:01:40 +0000] thrift_util DEBUG Thrift call: <class 'TCLIService.TCLIService.Client'>.CloseOperation(args=(TCloseOperationReq(operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=4, operationId=THandleIdentifier(secret='\x89w\xccf\x84:G\xda\xa9GR0\x00\xc8G\x96', guid='J\xcb\xa7\xba|\xfaH\x93\x93\xba$\x02\t\xc0IE'))),), kwargs={})
Any help would be appreciated.
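One pattern that often avoids this class of failure is to have the shell action submit the INSERT through HiveServer2 via beeline rather than the local hive CLI, so the statement does not depend on a client-side hive-site.xml being present on whichever node Oozie schedules the action. This is a sketch only; the host, port, user, and table names below are assumptions, not taken from the logs:

```shell
#!/bin/bash
# Runs the INSERT ... SELECT through HiveServer2 instead of a local hive CLI.
# The JDBC URL, user, and table names are placeholders to replace.
beeline -u "jdbc:hive2://hiveserver-host:10000/default" \
        -n hdfs \
        -e "INSERT INTO TABLE currency_dim SELECT * FROM currency_stage;"
```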

How to figure out Nginx status code 499

My setup:
Curl --> AWS ELB --> Nginx --> NodeJS
When the number of requests is high, I face this issue.
Nginx access logs
xx.xx.xx.xx - - [30/Oct/2014:13:23:40 +0000] "POST /some/calls/object HTTP/1.1" 499 0 "-" "curl/7.27.0"
xx.xx.xx.xx - - [30/Oct/2014:13:23:40 +0000] "POST /some/calls/object HTTP/1.1" 499 0 "-" "curl/7.27.0"
Nginx error logs
2014/10/30 13:23:40 [info] 11181#0: *17811 client xx.xx.xx.xx closed keepalive connection
2014/10/30 13:23:40 [info] 11181#0: *17631 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: xx.xx.xx.xx, server: example.com, request: "POST /some/calls/objects HTTP/1.1", upstream: "http://xx.xx.xx.xx:xx/some/calls/object", host: "example.com"
NodeJS logs
2014-10-30T13:23:40.074Z : info : req.method GET
2014-10-30T13:23:40.074Z : info : req.url /some/calls/objects
2014-10-30T13:23:40.074Z : info : error { message: 'Request Timed Out!' }
2014-10-30T13:23:40.075Z : info : req.method GET
2014-10-30T13:23:40.075Z : info : req.url /some/calls/objects
2014-10-30T13:23:40.075Z : info : error { message: 'Request Timed Out!' }
2014-10-30T13:23:40.075Z : info : error.stack undefined
2014-10-30T13:23:40.076Z : info : error.stack undefined
Question
As per this link, 499 means the client closed the connection. But my question is: who is the client in this scenario? Is the ELB closing the connection, or curl, or NodeJS?
I've also observed that NodeJS takes 60+ seconds to respond to calls; once NodeJS hits the timed-out issue, it keeps occurring again and again.
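For what it's worth, Nginx logs 499 when *its* client closes the connection before a response is sent; in this chain Nginx's direct client is the ELB, and the classic ELB has a 60-second idle timeout by default, so a NodeJS call taking 60+ seconds will make the ELB (or curl, if its own timeout is shorter) give up first. While the slow calls are being fixed, one mitigation is to raise the ELB idle timeout and keep Nginx's proxy timeouts above it; a sketch of the Nginx side (location, upstream name, and values are illustrative, not from the original config):

```nginx
# Keep proxy_read_timeout above the ELB idle timeout so Nginx is not
# the first hop to give up on a slow NodeJS response.
location /some/calls/ {
    proxy_pass http://nodejs_upstream;
    proxy_connect_timeout 5s;
    proxy_send_timeout    120s;
    proxy_read_timeout    120s;
}
```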

OpenShift WordPress Service Temporarily Unavailable. Can't restart application

I'm having a problem with a WordPress instance I need to show to a client.
I'm using the free gear. The app is not intended for production yet. Everything was working fine; I made a child theme and have been working on this page for about a month. Since yesterday morning I'm getting "Service Temporarily Unavailable" when I try to access the page. It isn't the first time this has happened, but it never lasted more than a few hours. Now it has been almost 48 hours. I need to show the client the demo, but I can't make it work.
Here is the output of the tail command:
C:\Users\Joao Paulo\Projetos\GibbInsurance\sources\demo>rhc tail demo
DL is deprecated, please use Fiddle
==> app-root/logs/mysql.log <==
140820 22:03:29 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: ready for connections.
Version: '5.5.37' socket: '/var/lib/openshift/539c92755973caa1f000044c/mysql//socket/mysql.sock' port: 3306 MySQL Community Server (GPL)
140823 18:05:36 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: Normal shutdown
140823 18:05:36 [Note] Event Scheduler: Purging the queue. 0 events
140823 18:05:36 InnoDB: Starting shutdown...
140823 18:05:39 InnoDB: Shutdown completed; log sequence number 9866622
140823 18:05:39 [Note] /opt/rh/mysql55/root/usr/libexec/mysqld: Shutdown complete
chown: changing ownership of `/var/lib/openshift/539c92755973caa1f000044c/mysql//stdout.err': Operation not permitted
140823 18:05:39 mysqld_safe mysqld from pid file /var/lib/openshift/539c92755973caa1f000044c/mysql/pid/mysql.pid ended
140823 18:05:39 mysqld_safe mysqld from pid file /var/lib/openshift/539c92755973caa1f000044c/mysql/pid/mysql.pid ended
==> app-root/logs/php.log <==
10.6.135.27 - - [23/Aug/2014:16:10:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:16:10:37 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:17:10:38 -0400] "POST /wp-cron.php?doing_wp_cron=1408828238.7940719127655029296875 HTTP/1.1" 200 - "-" "WordPress/3.9.2; http://demo-gibbinsurance.rhcloud.com"
10.6.135.27 - - [23/Aug/2014:17:10:38 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
10.6.135.27 - - [23/Aug/2014:17:10:39 -0400] "HEAD / HTTP/1.1" 200 - "-" "Ruby"
- - - [23/Aug/2014:17:10:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
- - - [23/Aug/2014:18:05:41 -0400] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.2.15 (Red Hat) (internal dummy connection)"
[Sat Aug 23 18:05:41 2014] [notice] caught SIGWINCH, shutting down gracefully
[Sat Aug 23 18:05:41 2014] [notice] caught SIGWINCH, shutting down gracefully
Interrupted
Terminate batch job (Y/N)? Y
When I try to restart the server, this is what I'm getting:
C:\Users\Joao Paulo\Projetos\GibbInsurance\sources\demo>rhc app restart -a demo
DL is deprecated, please use Fiddle
Failed to execute: 'control restart' for /var/lib/openshift/539c92755973caa1f000044c/mysql
Failed to execute: 'control restart' for /var/lib/openshift/539c92755973caa1f000044c/php
I appreciate any help. Thanks a lot!
Try doing a force-stop on your application and then a start, and see if that helps. You should also check your quota and make sure that you are not out of disk space on your gear.
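A sketch of those steps with the rhc client (app name taken from the question; the quota flag is worth double-checking against your rhc version):

```shell
# Force-stop, then start, instead of a plain restart
rhc app force-stop -a demo
rhc app start -a demo

# Inspect gear disk usage to rule out a full quota
rhc app show -a demo --gears quota
```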
