Is there a way to monitor ActiveMQ Artemis queues deployed in Karaf?

Apache Karaf version: 4.1.1
ActiveMQ Artemis version: 2.0.0
I have followed these instructions to deploy the Artemis system into Karaf.
I can send and retrieve messages to/from the Artemis queues and everything works. But I was wondering whether there is some command in the Karaf shell that would allow me to list the queues, return the number of messages in each one, show connected clients, and so on.
I can change the Artemis configuration by editing the artemis.xml file in the Karaf etc folder, but that does not let me "ask" for information.
Is this possible, or is there a workaround? I am looking for something inside the Karaf shell.

It seems the Karaf shell does not provide any way to monitor Artemis queues. What we can do is install hawtio on Karaf and then monitor the Artemis queues from there.
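If you go the hawtio route, the installation can be done from the Karaf shell itself. A minimal sketch, assuming a hawtio release that publishes Karaf features (the repo name and feature names below are assumptions and vary between hawtio versions, so verify them with `feature:list` after adding the repo):

```
# From the Karaf shell -- repo/feature names are assumptions, check your hawtio release
feature:repo-add hawtio 1.5.11
feature:install hawtio

# Then browse to http://localhost:8181/hawtio; the JMX tab exposes the
# org.apache.activemq.artemis MBeans (queues, message counts, consumers).
```

The queue statistics you are after are exposed by Artemis as JMX MBeans, so anything that can browse JMX (hawtio, jconsole, a Jolokia HTTP endpoint) can read them even without dedicated Karaf shell commands.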

Related

How to send Airflow Metrics to datadog

We have a requirement to send Airflow metrics to Datadog. I tried to follow the steps mentioned here:
https://docs.datadoghq.com/integrations/airflow/?tab=host
Accordingly, I included StatsD in the Airflow installation and updated the Airflow configuration file (steps 1 and 2).
After this point, I am not able to figure out how to send my metrics to Datadog. Should I follow the Host configuration or the containerized configuration? For the Host configuration we would have to update the datadog.yaml file, which is not in our repo, and for the containerized version they only describe how to do it for Kubernetes, but we don't use Kubernetes.
We run Airflow by creating a Docker build and running it on Amazon ECS. We also have a Datadog agent running in parallel in the same task (not part of our repo). However, I am not able to figure out what configuration I need in order to send the StatsD metrics to Datadog. Please let me know if anyone has an answer.
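For the ECS sidecar setup described above, a rough sketch of the two halves of the wiring follows. The section name, host, and port values are assumptions for illustration; if the agent container shares a network namespace with the Airflow container (e.g. awsvpc mode), `localhost` reaches the sidecar.

On the Airflow side, point its StatsD emitter at the agent:

```ini
; airflow.cfg -- Datadog's agent speaks the StatsD protocol via DogStatsD.
; The section is [metrics] on newer Airflow versions, [scheduler] on older ones.
[metrics]
statsd_on = True
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
```

On the agent side, since datadog.yaml is not in your repo, the equivalent settings can usually be supplied as environment variables on the agent container in the ECS task definition:

```shell
# Environment for the Datadog agent container -- lets it accept StatsD
# traffic arriving from other containers rather than only from localhost.
DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
DD_DOGSTATSD_PORT=8125
```

This is a sketch, not a definitive recipe: confirm the variable names against the Datadog agent documentation for your agent version.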

Azure DevOps Pipeline Task to connect to Unix Server and execute commands

I am seeking to set up a Release Pipeline in Azure DevOps Services that will deploy an application to a Unix server and then execute some Unix commands as part of the deployment.
I would appreciate some guidance on which pipeline Task(s) I can set up to achieve the following objectives:
Connect to the Unix server.
Execute the required Unix commands.
By the way, the agents are currently installed on Windows hosts, but we are looking to extend that to Unix servers in due course, so a solution that fits both setups would be ideal, even though the former is the priority.
You can check out the SSH Deployment task.
Use this task to run shell commands or a script on a remote machine over SSH. It enables you to connect to a remote machine using SSH and run commands or a script there.
If you need to copy files to the remote Linux server, you can check out the Copy Files Over SSH task.
You will probably need to create an SSH service connection; see the steps here to create a service connection.
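As a sketch, a YAML pipeline step using the SSH task might look like the following. The service connection name `my-unix-server` is a placeholder you would create first under Project Settings > Service connections > SSH:

```yaml
# Hypothetical pipeline step -- the endpoint name and commands are examples.
steps:
- task: SSH@0
  inputs:
    sshEndpoint: 'my-unix-server'   # SSH service connection created beforehand
    runOptions: 'inline'            # run the inline script below on the remote host
    inline: |
      cd /opt/myapp
      ./deploy.sh
```

Because the SSH connection originates from the agent, this works whether the agent host is Windows or Unix, which fits both of your setups.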
In the end, due to concerns raised about installing private keys on the target server, which is part of the SSH Deployment setup, we opted to use Deployment Groups instead, which enabled us to set up a persistent connection to our Linux server.

Corda Nodes Monitoring

I have a Corda network (3 nodes + 1 notary node) running locally on my Windows system.
I am reading through this document: https://docs.corda.net/node-administration.html
Node statistics are exposed through JMX beans to a Jolokia agent that runs from the start of each node. I see a Jolokia agent starting for each node on a different port, e.g. Jolokia: Agent started with URL http://127.0.0.1:xxxx/jolokia/
I am using the hawtio dashboard to see the Corda node JVM statistics exposed through the Jolokia agents. While hawtio is smart enough to discover the Jolokia agents started on a different port for each Corda node, I am not able to see the required statistics on the dashboard.
I have tried setting jmxMonitoringHttpPort in each node.conf to the Jolokia port of that node, but the node then fails to start with a "Jolokia agent is not running at target port" message.
I have also downloaded the Jolokia agent binaries and ran the agent on an unused port, configuring each node.conf to point to this port, but I am still not seeing statistics for any of the nodes.
I think you can try the obligation CorDapp project (https://github.com/corda/obligation-cordapp#instructions-for-setting-up) with the runnodes script, because those nodes run with the Jolokia agent enabled.
Alternatively, you can run a single node with java -Dcapsule.jvm.args="-javaagent:drivers/jolokia-jvm-1.3.7-agent.jar=port=7033" -jar corda.jar and see if this works.
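Once a node is up with the agent attached (port 7033 in the command above; yours may differ), you can verify the agent is actually serving before involving hawtio, since the Jolokia HTTP endpoints are standard:

```
# Confirm the agent responds at all:
curl http://localhost:7033/jolokia/version

# Read an MBean attribute, e.g. JVM heap usage:
curl "http://localhost:7033/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"
```

If these return JSON, the agent is fine and the problem is on the hawtio/discovery side; if they fail, the agent was never attached to that node's JVM.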

Mesos/Marathon scalling a web application

I'm reading up on and doing a basic Mesos/Marathon installation. If I'm deploying a web app as a Marathon application, the instance(s) of my web app could run on any Mesos slave. How would I then configure my nginx upstream to point to the correct host(s)?
Should my web app register its host in ZooKeeper and reconfigure nginx periodically?
Are there any examples of how to do this?
Thanks
Should my web app register its host in ZooKeeper and reconfigure nginx periodically?
You don't need ZooKeeper. All the data required to configure nginx is available in Mesos or Marathon. You can periodically query Mesos/Marathon and generate the nginx configuration, like Nixy does.
To minimize unavailability time, you can use Marathon's SSE event stream to get information about instances starting and stopping, just as allegro/marathon-consul does.
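The query-and-generate approach can be sketched in a few lines. In a real setup you would fetch the task list from Marathon's REST API (`GET /v2/apps/<app>/tasks`), write the rendered block into your nginx config, and reload nginx; here the task list is inlined for illustration, with field names matching Marathon's task JSON:

```python
def render_upstream(name, tasks):
    """Build an nginx 'upstream' block pointing at every running task."""
    lines = [f"upstream {name} {{"]
    for task in tasks:
        # Marathon gives each task a host and one or more assigned service
        # ports; the first port is taken here as the web port.
        lines.append(f"    server {task['host']}:{task['ports'][0]};")
    lines.append("}")
    return "\n".join(lines)

# Example with two hypothetical task instances:
tasks = [
    {"host": "10.0.0.5", "ports": [31001]},
    {"host": "10.0.0.6", "ports": [31419]},
]
print(render_upstream("webapp", tasks))
```

Running this on a timer (or from a Marathon SSE event handler) and issuing `nginx -s reload` when the output changes is essentially what tools like Nixy automate for you.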

How to stop service without restart

I have deployed a "helloworld" service on Cloudify 2.7 and an OpenStack cloud. I would like to stop the Tomcat service without it being restarted.
So, in the Cloudify shell I executed:
cloudify#default> connect cloudify-manager-1_IP
Connected successfully
cloudify#default> use-application helloworld
Using application helloworld
cloudify#helloworld> invoke tomcat cloudify:start-maintenance-mode 60
Invocation results:
1: OK from instance #1#tomcat_IP, Result: agent failure detection disabled successfully for a period of 60 minutes
invocation completed successfully
At this point, I connected via SSH into the Tomcat VM and ran:
CATALINA_HOME/bin/catalina.sh stop
In CATALINA_HOME/logs/catalina.out I can see that the app server is stopped and then immediately restarted!
So, what should I do in order to stop the app server and restart it only when I decide to?
Maintenance mode in Cloudify 2.7 is used to prevent the system from starting a new VM when a service VM has failed.
What you are looking for is to prevent Cloudify from auto-healing a process: Cloudify checks the liveness of the configured process and, if it dies, executes the 'start' lifecycle again.
In your case the monitored process can change, since you will be restarting it manually, so you should not use the default process monitoring. There is a similar question here: cloudify 2.7 locator NO_PROCESS_LOCATORS
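As a rough sketch of what the linked answer suggests, the service recipe can declare that no process should be located, so Cloudify stops tying the service's health to a PID. The structure below is a hypothetical fragment, not your actual recipe; check it against the Cloudify 2.7 recipe DSL before use:

```groovy
// Fragment of a Cloudify 2.7 service recipe (service .groovy file).
// NO_PROCESS_LOCATORS disables the default process monitoring, so a
// manual "catalina.sh stop" no longer triggers the auto-heal 'start'.
service {
    name "tomcat"
    lifecycle {
        locator {
            NO_PROCESS_LOCATORS
        }
    }
}
```

With process monitoring disabled you would typically add your own custom command or monitor to report the service state, since Cloudify no longer detects the process on its own.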