I've been trying to implement a Datadog integration, more specifically Airflow's. I've been following this piece of documentation using the containerized approach (I've tried using pod annotations and adding the confd parameters to the agent's Helm values). I've only made progress when adding the airflow.yaml config to the confd section of the cluster agent. However, I get stuck when I try to validate the integration as specified in the documentation by running datadog-cluster-agent status. Under the "Running Checks" section, I see the following:
airflow
-------
Core Check Loader:
Check airflow not found in Catalog
On top of being extremely generic, this error message mentions a "Catalog" that is not referenced anywhere else in the DD documentation, and it gives me no hints about what could possibly be wrong with the integration. Has anyone had the same problem and knows how to solve it, or at least how I can get more info/details/verbosity to debug this issue?
You may need to add cluster_check: true to your airflow.yaml confd configuration.
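For reference, a minimal sketch of what that could look like in the cluster agent's Helm values, assuming cluster checks are enabled on the chart; the Airflow webserver URL below is a placeholder for your own service address:

```yaml
clusterAgent:
  confd:
    airflow.yaml: |-
      cluster_check: true    # lets the cluster agent dispatch this as a cluster check
      init_config:
      instances:
        - url: http://airflow-webserver.airflow.svc.cluster.local:8080   # placeholder URL
```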
I have not been able to find any reference for the options passed into the Node module version of firebase-tools. How does one turn on diagnostic logging or progress output? The GitHub README for firebase-tools only says:
The Firebase CLI can also be used programmatically as a standard Node module. Each command is exposed as a function that takes an options object and returns a Promise.
and gives only this example:
client.deploy({
project: 'myfirebase',
token: process.env.FIREBASE_TOKEN,
cwd: '/path/to/project/folder'
}).then(function() {...
It would be really nice to get complete docs. The source code wasn't much help.
There isn't a good way to see progress via the programmatic API for the Firebase CLI right now. Your best bet would be to instead use spawn or similar to run it as a process and simply capture the stdout.
We'd like to improve this in the future but there are no concrete plans of what it will look like yet.
To see the complete list of keys on the client object, see commands/index.js.
In terms of what options to pass in, that is definitely hard to figure out. This seems like a great chance to submit an issue requesting specific improvements to documentation, or take a shot at documenting it yourself and submit a PR.
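If you do go the child-process route mentioned above, a rough sketch (the project id and paths below are placeholders) could look like this:

```javascript
const { spawn } = require('child_process');

// Run the CLI as a normal process so its usual progress output is available,
// since the programmatic API doesn't expose it.
const deploy = spawn('firebase', ['deploy', '--project', 'myfirebase'], {
  cwd: '/path/to/project/folder',
  env: Object.assign({}, process.env, { FIREBASE_TOKEN: process.env.FIREBASE_TOKEN })
});

deploy.stdout.on('data', chunk => process.stdout.write(chunk)); // progress output
deploy.stderr.on('data', chunk => process.stderr.write(chunk)); // errors and warnings
deploy.on('close', code => console.log('firebase deploy exited with code ' + code));
```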
I'd like to make it so that a commit to our BitBucket repo (or S3 bucket) automatically deploys code (using CodeDeploy) to our EC2 instances. I'm not clear on what to use for the 'source' and 'destination' entries under the 'files' section in the appspec.yml file, and I'm also not clear on what to put in BeforeInstall and AfterInstall under the 'hooks' section. I've found some examples on Google and in the AWS documentation, but I'm still confused about what to put in those fields. The more I explore, the more confused I get.
Consider that I am new to AWS CodeDeploy.
It would also be very helpful if someone could provide a step-by-step link on how to configure CodeDeploy and how to automate it.
I was wondering if someone could help me out?
Thanks in advance for your help!
Thanks for using CodeDeploy. For new users, I'd like to recommend the following things to do:
Try running the First Run Wizard in the console; it will show you the general process of how a deployment goes. It also provides a default deployment bundle with an appspec file included.
Once you want to try a deployment yourself, the Get Started doc is a great place to help you with prerequisite settings like the IAM roles.
Then try some of the tutorials for a sample app too, which will give you an idea of deployment groups, deployment configurations, revisions and so on.
The next step should be creating a bundle for your own use case; the AppSpec file doc is a great place to refer to. As for your concerns about BeforeInstall and AfterInstall: if your application doesn't need to do anything there, those lifecycle events can be left empty. BeforeInstall can be used for pre-install tasks, such as decrypting files and creating a backup of the current version, while AfterInstall can be used for tasks such as configuring your application or changing file permissions.
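A rough sketch of how those pieces fit together in an appspec.yml; the destination path and script names below are placeholders, not something CodeDeploy requires:

```yaml
version: 0.0
os: linux
files:
  - source: /                      # path inside the revision bundle (relative to its root)
    destination: /var/www/myapp    # path on the EC2 instance where the files are copied
hooks:
  BeforeInstall:
    - location: scripts/backup_current_version.sh   # e.g. back up the running version
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/configure_app.sh            # e.g. set permissions, write config
      timeout: 300
      runas: root
```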
Now it comes to the fun part! This blog post goes into detail about how to integrate with GitHub (it's similar for Bitbucket). It's a little long but really useful, and it also covers how to deploy automatically whenever a new commit is pushed. Currently Jenkins and CodePipeline are really popular for auto-triggered deployments, but there are plenty of other ways to achieve the same purpose, like Lambda and so on.
I am trying to install OpenStack through Ansible on a single node using the All-In-One (AIO) setup.
When I run the setup-everything.yml playbook, I receive the following error:
ERROR: config_template is not a legal parameter in an Ansible task or handler
Can you please help with this issue?
I know this answer is a little late, but I found this and thought I'd try to help others out should they run into it.
It's very likely that the system was not bootstrapped. We see this error from time to time when the action plugin is not present on the system. With OpenStack-Ansible you will need to retrieve the roles and plugins from the ansible-role-requirements.txt file. After you've cloned the software the first step in deployment is usually running ./scripts/bootstrap-ansible.sh which would install Ansible into a venv, retrieve your roles, libraries, & plugins, and then create the openstack-ansible CLI wrapper. You can also simply use the ansible-galaxy command with the ansible-role-requirements.txt if you don't want to run that script. After you have the roles and libs you will likely no longer see that error. More documentation on getting started can be found here: https://docs.openstack.org/developer/openstack-ansible/developer-docs/quickstart-aio.html
You can get access to the config_template module source code here: https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py should you have issues specifically with the module, or jump into the #openstack-ansible channel on freenode, where there's generally someone available to help out.
config_template is a custom module developed by the OpenStack team. If you get ERROR: config_template is not a legal parameter in an Ansible task or handler, it might mean that Ansible cannot find the module; it could be an indentation/syntax error as well. Check whether the module config_template is on the path in your ANSIBLE_LIBRARY environment variable. You can also pass the path on the command line with --module-path.
Also note that the pull request for this module was closed by the Ansible developers, so it is likely that you can find similar functionality in a module supported by the Ansible developers.
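Putting both answers together, a rough sketch of the commands involved (the plugin path below is illustrative, not something fixed by OpenStack-Ansible):

```bash
# Option 1: let OpenStack-Ansible bootstrap everything (Ansible venv, roles, plugins)
./scripts/bootstrap-ansible.sh

# Option 2: only fetch the roles/plugins listed in the requirements file
ansible-galaxy install -r ansible-role-requirements.txt

# If Ansible still cannot find config_template, point it at the plugin/library path
export ANSIBLE_LIBRARY=/etc/ansible/plugins/library         # illustrative path
ansible-playbook setup-everything.yml --module-path /etc/ansible/plugins/library
```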
How can Meteor run on multiple ports? For example, if one Meteor app runs on 3000, I need another Meteor app to run from the same terminal. Please help me.
You can use the --port parameter:
`meteor run --port 3030`
To learn more about command line parameters, run meteor help <command>, e.g. meteor help run.
I see you've tagged your question meteor-up. If you're actually using mup, check out the env parameter in the config file.
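For the plain meteor case, a quick sketch of running two apps side by side (directory names are placeholders); keeping each app in its own project directory also gives each one its own local MongoDB:

```bash
cd ~/apps/first-app  && meteor run --port 3000 &    # first app in the background
cd ~/apps/second-app && meteor run --port 3030      # second app in the foreground
```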
I think the OP was referring to the exceptions caused by locks on the Mongo DB. I have only been on this platform for the last week and am learning as quickly as I can, but when I tried running my application from the same project directory as two different users on two different ports, I got an exception about MongoDB:
Error: EBUSY, unlink 'D:\test\.meteor\local\db\mongod.lock'
The root of the issue isn't running on different ports; it is the files shared between the two instances, specifically the database.
I don't think any of your answers actually helped him out, and neither can I yet.
I see two options.
First, I am going to experiment with links to see if I can get the two users to use a different folder for the .meteor\local tree, so both of us can work on the same code at the same time but not impact each other when testing. But I doubt that is what the OP was referring to either (different users, same app).
Second, I will try to work out whether I can inject into run-mongo.js some notion of the URL/port number I am running on, so the mongodb.lock (and the db, of course) are named something like mongodb.lock-3000.
I don't like the second option because then I am running my own version of the standard scripts.
B
No. Meteor mainly uses the default port of 3000 (or whatever you specify at startup), and the next port (+1) for Mongo.
That is, the next application should be run 2 ports away, e.g. on 3002; likewise, the previous one would be 2 ports down, at 2998.
Checking is very simple (Mac, Linux):
ps|grep meteor
I'm implementing a very lightweight (embedded) OSGi framework which runs on a target piece of hardware. To attach a console I'm using org.apache.felix.gogo.shell and org.apache.felix.shell.remote.
To date, I've logged all custom messages using System.out.println, which has worked fine, but now that I'm using the remote console I need something that will let me 'print' my messages to the OSGi console (and ideally have them appear both on the target's console and on the telnet console provided by felix.shell.remote).
I'm guessing there must be a way to get a handle to an OutputStream (or similar) to do this; my question is how. It seems that most people redirect their stdout etc. to solve problems like this.
I'm using declarative services, so I was hoping to be able to setup a component which attaches a referenced service (not important, but would make it nice and neat).
Any help is greatly appreciated.
The best way is to log custom messages using the OSGi Log Service. That way you can get recent logs from the LogReader service from inside your shell or web console. If you insist on using popular frameworks like log4j etc., then you can get a bridge with Pax Logging.
Alternatively, redirecting the output to a file in a known location works. You can then make a command in gogo that views that file or provide a tail function that continuously displays the new parts of the file.
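If you go the Log Service route, a minimal Declarative Services sketch (assuming the DS 1.3+ annotations) might look like the following; the component name and message are illustrative, and you'd still need a log reader (Gogo log commands, the Felix Web Console, or your own shell command) to view the entries:

```java
package com.example.logging;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

@Component
public class StatusReporter {

    // Injected by Declarative Services when a Log Service implementation is available
    @Reference
    private LogService log;

    @Activate
    void activate() {
        // Goes to the OSGi log instead of System.out; a LogReaderService (or a shell
        // command built on top of it) can then display the entries wherever you like.
        log.log(LogService.LOG_INFO, "StatusReporter activated");
    }
}
```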