I want to debug why the Horizon Dashboard is so slow and takes too long to load. Any hints or pointers to the relevant code would be very helpful.
Looking forward to your guidance on this.
Thanks!
I ran into this problem as well: not only does the OpenStack dashboard respond slowly, the CLI commands do too (the dashboard is just slower than the CLI).
The results are below:
[root@kolla ~]# time neutron port-list | wc -l
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
98
real 0m16.366s
user 0m0.735s
sys 0m0.164s
[root@kolla ~]# time nova list | wc -l
25
real 0m10.767s
user 0m0.798s
sys 0m0.121s
[root@kolla ~]# time port list | wc -l
From the dashboard it takes about 18 seconds to list all the instances of a particular tenant.
I was in riak-shell when ssh lost its connection to the server. After reconnecting, I do the following:
sudo riak-shell
and get:
An instance of riak-shell is already running
So I restarted the riak node in question, but that did not seem to solve the problem. I do not see anything to kill in ps -aux. According to the docs, only one instance can run at a time. That makes sense, but when I run riak-shell from another node and try to connect to any node, I now get the following:
Error: invalid function call : connection_EXT:connect ["riak#<<<ip_address_elided>>>"]
You can connect to a specific node (whether in your riak_shell.config
or not) by typing 'connect "dev1#127.0.0.1";' substituting your
node name for dev1.
You may need to change the Erlang cookie to do this.
See also the 'reconnect' command.
Unhandled message received is {#Ref<0.0.0.135>,disconnected}
riak-shell(3)>
I have not changed the cookies during this process, and the cookie appears to be the same (at least in /etc/riak/riak_shell.config). (I am running the Riak TS AMI on AWS.)
riak-shell runs in its own Erlang VM - entirely separate from the riak node
(You don't need to run riak-shell from the machine your node is on - it uses the normal riak-erlang-client to talk to riak)
If you are on Linux, run ps aux | grep riak_shell_app; it will give you the process number you need to kill that instance:
08:30:45:~ $ ps aux | grep riak_shell_app
vagrant 4671 0.0 0.3 493260 34884 pts/4 Sl+ Aug17 0:03 /home/vagrant/riak_ee/dev/dev1/erts-5.10.3/bin/beam.smp -- -root /home/vagrant/riak_ee/dev/dev1 -progname erl -- -home /home/vagrant -- -boot /home/vagrant/riak_ee/dev/dev1/releases/2.1.1/start_clean -run riak_shell_app boot debug_off /home/vagrant/riak_ee/dev/dev1/bin/../log/riak_shell/riak_shell -noshell -config /home/vagrant/riak_ee/dev/dev1/bin/../etc/riak
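You can then kill that PID directly (4671 in the sample output above); the bracketed grep pattern below is just a trick to keep grep itself out of the results:
# find the stray riak-shell VM and kill it (the PID is the second column)
ps aux | grep [r]iak_shell_app
kill 4671    # replace 4671 with the PID from your own output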
I wrote a good chunk of it so let me know how you got on:
https://github.com/basho/riak_shell/graphs/contributors
All,
I have a question: how do I run a BOSH errand on existing VMs?
I have a VM with MongoDB deployed on it, and I want to run an errand command on that VM, but I don't know how.
Does anyone know how to do this?
Director task 2121
Task 2121 done
+---------+---------+---------+--------------+
| VM | State | VM Type | IPs |
+---------+---------+---------+--------------+
| app-p/0 | stopped | app | 10.62.90.171 |
+---------+---------+---------+--------------+
How to run an errand:
bosh run errand ERRAND_NAME [--download-logs] [--logs-dir DESTINATION_DIRECTORY]
ref: http://bosh.io/docs/sysadmin-commands.html#errand
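For example (assuming an errand job named smoke-tests exists in your deployment):
# run the errand and pull its logs into a local directory
bosh run errand smoke-tests --download-logs --logs-dir /tmp/errand-logs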
This assumes you have already set the deployment job's 'lifecycle' to 'errand' in your manifest.
ref: Jobs Block section of http://bosh.io/docs/deployment-manifest.html
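A minimal sketch of such a jobs block entry (the job, template, and release names here are just placeholders):
jobs:
- name: my-errand            # placeholder errand job name
  lifecycle: errand          # run only on demand via 'bosh run errand', not as a long-running service
  instances: 1
  templates:
  - {name: my-errand, release: my-release}
  resource_pool: default
  networks:
  - name: default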
As Ben Moss mentioned in his comment, a new VM is created for running an errand job. The errand VM does not stay around after the errand completes. An errand is a short-lived job and can be run multiple times.
What is the proper way to run Kibana 4.5 as a service on CentOS 7?
When I run it as ./kibana, I can connect to it from another machine without any problem. When I run it with systemctl start kibana and check with ps -ef | grep '.*node/bin/node.*src/cli', it looks like it is running but refuses connections, and then it goes down. What can be the problem? Thanks in advance.
Here is the content of the kibana.service file:
[Unit]
Description=no description given
[Service]
Type=simple
User=kibana
Group=root
Environment=CONFIG_PATH=/opt/kibana/config/kibana.yml
ExecStart=/opt/kibana/bin/kibana
Restart=always
[Install]
WantedBy=multi-user.target
I am not much of a Linux expert, but I recently installed Kibana using yum (https://www.elastic.co/guide/en/kibana/4.5/setup.html#kibana-yum) on a minimal installation of CentOS 7 and did not face any issues whatsoever.
To get some debug logs and find out what is wrong in your case, edit the Kibana configuration file
/opt/kibana/config/kibana.yml
and set a filename for the logging.dest property.
logging.dest: /var/log/kibana.log
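Then restart the service and watch both the Kibana log and the systemd journal to see why it exits, for example:
sudo systemctl restart kibana
sudo tail -f /var/log/kibana.log
sudo journalctl -u kibana -f    # systemd's own view of the service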
Good luck
Igor,
I noticed a few questions you posted on Kafka, so it sounds like you need to set up a cluster that can ingest data and pass it to Elasticsearch. Kibana would be just the user interface.
In my experience, components like ELK, Kafka, Zookeeper, etc should be managed by a watchdog process. I highly recommend looking at something like supervisord. http://supervisord.org/
You should run supervisord itself as a service and let it manage the rest. It guarantees that components start at boot, but more importantly it restarts them on failure and collects their logs. In the case of Kibana, it is a NodeJS app that writes to stdout/stderr, so to know what fails you need to collect those streams.
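A minimal supervisord program stanza for Kibana might look like this (paths assume the same /opt/kibana install as above; the log directory is just an example):
[program:kibana]
command=/opt/kibana/bin/kibana
user=kibana
autostart=true
autorestart=true
stdout_logfile=/var/log/kibana/stdout.log
stderr_logfile=/var/log/kibana/stderr.log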
Recently I have been working on the LBaaS service. When I set up a pool and it starts serving,
the haproxy process randomly returns 503:
503 Service Unavailable
No server is available to handle this request
And I am pretty sure that when this problem happens, the member servers are up.
Can anyone help me with this problem?
PS: when I first created the members in the load balancer, their status was ACTIVE; however, it turned to INACTIVE after a few minutes. I found a way to work around this by executing
echo -e 'HTTP/1.0 200 OK\r\n\r\n<serverX>' | nc -l -p
after which the status of the members turns back to ACTIVE.
Ahaaaa...
Finally I got it; I was really careless.
I should use a while true loop when I execute
echo -e 'HTTP/1.0 200 OK\r\n\r\n' | nc -l -p
otherwise this command is only executed once.
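Something like this keeps the fake member answering every health-monitor probe (port 80 here is only an example; use whatever port the monitor actually checks):
# answer each probe with a 200 instead of exiting after the first connection
while true; do echo -e 'HTTP/1.0 200 OK\r\n\r\n' | nc -l -p 80; done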
I have an OBIEE 11g installation on a Red Hat machine, but I'm having problems getting it running. I can start WebLogic and its services, so I'm able to enter the WebLogic console and Enterprise Manager, but the problems come when I try to start the OBIEE components with the opmnctl command.
The steps I’m performing are the following:
1) Start WebLogic
cd /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/bin/
./startWebLogic.sh
2) Start NodeManager
cd /home/Oracle/Middleware/wlserver_10.3/server/bin/
./startNodeManager.sh
3) Start Managed WebLogic
cd /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/bin/
./startManagedWebLogic.sh bi_server1
4) Set up OBIEE Components
cd /home/Oracle/Middleware/instances/instance1/bin/
./opmnctl startall
The result is:
opmnctl startall: starting opmn and all managed processes...
================================================================================
opmn id=JustiziaInf.mmmmm.mmmmm.9999
Response: 4 of 5 processes started.
ias-instance id=instance1
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ias-component/process-type/process-set:
coreapplication_obisch1/OracleBISchedulerComponent/coreapplication_obisch1/
Error
--> Process (index=1,uid=1064189424,pid=4396)
failed to start a managed process after the maximum retry limit
Log:
/home/Oracle/Middleware/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/
coreapplication_obisch1/console~coreapplication_obisch1~1.log
5) Check the status of components
cd /home/Oracle/Middleware/instances/instance1/bin/
./opmnctl status
Processes in Instance: instance1
---------------------------------+--------------------+---------+---------
ias-component | process-type | pid | status
---------------------------------+--------------------+---------+---------
coreapplication_obiccs1 | OracleBIClusterCo~ | 8221 | Alive
coreapplication_obisch1 | OracleBIScheduler~ | N/A | Down
coreapplication_obijh1 | OracleBIJavaHostC~ | 8726 | Alive
coreapplication_obips1 | OracleBIPresentat~ | 6921 | Alive
coreapplication_obis1 | OracleBIServerCom~ | 7348 | Alive
Read the log file at /home/Oracle/Middleware/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/
coreapplication_obisch1/console~coreapplication_obisch1~1.log.
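For example:
tail -n 200 /home/Oracle/Middleware/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1/console~coreapplication_obisch1~1.log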
I would recommend trying the steps in the link below, as this is a common issue when upgrading OBIEE.
http://www.askjohnobiee.com/2012/11/fyi-opmnctl-failed-to-start-managed.html
Not sure what your log says, but try the steps below and check whether it works:
Log in as the superuser:
cd $ORACLE_HOME/Apache/Apache/bin
chmod 6750 .apachectl
Log out, log back in as the ORACLE user, and run:
opmnctl startproc process-type=OracleBIScheduler