I'm trying to set up email notifications on a Nagios server. The Nagios machine has been running okay for a little while now, except for the fact that it hasn't been sending emails. I've been using a Chrome plugin in its place until I get this resolved.
Anyhow, this is how I have my contacts file set up:
define contact{
contact_name nagiosadmin ; Short name of user
use generic-contact ; Inherit default values from generic-contact template (defined above)
alias Nagios Admin ; Full name of user
email admin@example.com ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
service_notification_period 24x7
host_notification_period 24x7
service_notification_options w,u,c,r,f
host_notification_options d,u,r,f
service_notification_commands notify-service-by-email
host_notification_commands notify-host-by-email
}
define contactgroup{
contactgroup_name admins
alias Nagios Administrators
members nagiosadmin
}
And I have my host and service definitions setup like this:
define host{
use linux-server ; Name of host template to use
; This host definition will inherit all variables that are defined
; in (or inherited by) the linux-server host template definition.
host_name web1
alias web1
address 10.10.10.6
contact_groups admins
}
define service{
use local-service ; Name of service template to use
host_name web1
service_description HTTP
contact_groups admins
check_command check_http
notifications_enabled 1
}
I've tested whether this works by shutting down HTTP on a web server it's monitoring. I waited a while, and no message was received on the mail server.
I've also telnetted to the mail server from the Nagios machine, and I'm able to send an email to the account I want via telnet.
I'd appreciate some help here!
Look at your service notification command definition and execute that command in a terminal for debugging.
Example:
define command {
command_name notify-service-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n $NOTIFICATIONCOMMENT$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
}
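For example, you can run a stripped-down version of that pipeline by hand, substituting the Nagios macros with literal values (the recipient address below is only a placeholder; use your own):
/usr/bin/printf "%b" "***** Nagios *****\n\nManual test notification\n" | /bin/mail -s "** TEST Service Alert **" admin@example.com
If that message never arrives, the problem is in the local mail setup on the Nagios host (e.g. no local MTA, or mail/mailx not installed) rather than in Nagios itself; the mail log (e.g. /var/log/maillog or /var/log/mail.log) should tell you more.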
I'm trying to send logs to Datadog using rsyslog. Ideally, I'd like to do this without having the logs stored on the server hosting rsyslog. I've run into an error in my config that I haven't been able to find out much about. The error occurs on startup of rsyslog.
omfwd: could not get addrinfo for hostname '(null)':'(null)': Name or service not known [v8.2001.0 try https://www.rsyslog.com/e/2007 ]
Here's the portion I've added to the default rsyslog.conf:
module(load="imudp")
input(type="imudp" port="514" ruleset="datadog")
ruleset(name="datadog"){
action(
type="omfwd"
action.resumeRetryCount="-1"
queue.type="linkedList"
queue.saveOnShutdown="on"
queue.maxDiskSpace="1g"
queue.fileName="fwdRule1"
)
$template DatadogFormat,"00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n "
$DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.logs.datadoghq.com
*.* ##intake.logs.datadoghq.com:10516;DatadogFormat
}
First things first.
The module imudp enables log reception over UDP.
The module omfwd enables log forwarding over TCP, UDP, etc.
So most probably - or at least as far as I can tell - you just want rsyslog to log the messages locally and then send them on to Datadog.
I don't know anything about the $ActionSendStreamDriver directives, so I can't help you there. But what jumps out is that in your action you haven't defined where the logs should be sent.
ruleset(name="datadog"){
action(
type="omfwd"
target="10.100.1.1"
port="514"
protocol="udp"
...
)
...
}
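If you want to stay entirely within the action() syntax, the legacy $ActionSendStreamDriver* lines can also be expressed as omfwd parameters. The following is only a sketch (the target and port are taken from your legacy forwarding line; double-check the StreamDriver* parameter names against your rsyslog version's omfwd documentation):
ruleset(name="datadog"){
    action(
        type="omfwd"
        target="intake.logs.datadoghq.com"
        port="10516"
        protocol="tcp"
        template="DatadogFormat"
        StreamDriver="gtls"
        StreamDriverMode="1"
        StreamDriverAuthMode="x509/name"
        StreamDriverPermittedPeers="*.logs.datadoghq.com"
        action.resumeRetryCount="-1"
        queue.type="linkedList"
        queue.saveOnShutdown="on"
        queue.maxDiskSpace="1g"
        queue.fileName="fwdRule1"
    )
}
The DatadogFormat template and the $DefaultNetstreamDriverCAFile setting would likely need to be defined globally (outside the ruleset), since templates and the netstream CA file are global settings.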
I'm currently running a number of servers, each running NGINX used as reverse proxies to other websites. However, if I need to change a backend IP address or change other variables within NGINX, I need to manually SSH into the server and change the configurations OR log onto NGINX Proxy Manager.
What I'm looking to do is create a central website that will enable me to edit NGINX variables such as 'proxy_pass' and send the updated value to the selected remote server, updating the NGINX config and reloading the service.
Is there any current way to do this and how could I implement that? What comes to mind is some kind of CURL request to the remote server, and then I'm not sure how I'd automatically rewrite the correct portion of NGINX config etc.
Any help would be appreciated!
If you have root access on those servers, all you need is a service or a script that will fill in the new values. The simplest way I can see is to do it with a bash script and a template for the config file.
Template config file: /home/user/nginx_config/nginx.config.sample:
-- your generic config settings
location /your/location {
proxy_pass {{proxy_pass}};
}
-- rest of standard file
The bash script for filling the template: /home/user/nginx_config/generator.sh
#!/bin/bash
# Fill the nginx template with a new upstream IP and install it as the live config.
new_ip=$1
template_path="/home/user/nginx_config/nginx.config.sample"
config_path="/etc/nginx/nginx.conf"
if [[ -z $1 ]]
then echo "Missing IP param"; exit 1;
fi
# Keep a backup of the current config before overwriting it
cp "$config_path" "${config_path}.bak"
sed "s/{{proxy_pass}}/$new_ip/g" "$template_path" > "$config_path"
echo "Done! Updated $config_path file to $1:"
cat "$config_path"
# Validate the new config and reload nginx so the change takes effect
# (assumes systemd; use "service nginx reload" or similar otherwise)
nginx -t && systemctl reload nginx
Then, all you need to do is make a local script that connects over ssh and runs the generator script (with 1.2.3.4 as your new IP address):
sshpass -p password ssh -oStrictHostKeyChecking=no -oCheckHostIP=no user@your_server "bash /home/user/nginx_config/generator.sh 1.2.3.4"
I have a virtual machine that is supposed to be the host, which can receive and send data. The first picture is the error that I'm getting on my main machine (from which I'm trying to send data). The second picture is the mosquitto log on my virtual machine. Also, I'm using the default config, which as far as I know can't cause these problems, at least from what I have seen in other examples. I have very little understanding of how all of this works, so any help is appreciated.
What I have tried on the host machine:
Disabling Windows defender
Adding firewall rules for "mosquitto.exe"
Installing mosquitto on a linux machine
Starting with the release of Mosquitto version 2.0.0 (you are running v2.0.2) the default config will only bind to localhost as a move to a more secure default posture.
If you want to be able to access the broker from other machines you will need to explicitly edit the config files to either add a new listener that binds to the external IP address (or 0.0.0.0) or add a bind entry for the default listener.
By default it will also only allow anonymous connections (without username/password) from localhost. To allow anonymous connections from remote clients, add:
allow_anonymous true
More details can be found in the 2.0 release notes here
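For example, a minimal config for this case might look like the following sketch (0.0.0.0 binds to all interfaces; narrow it to a specific address if you prefer, and pass the file to the broker with mosquitto -c <file>):
listener 1883 0.0.0.0
allow_anonymous true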
You have to run with
mosquitto -c mosquitto.conf
mosquitto.conf, which lives in the same folder as the executable (C:\Program Files\mosquitto etc.), has to include the following line:
listener 1883 ip_address_of_the_machine (192.168.1.1 etc.)
By default, the Mosquitto broker will only accept connections from clients on the local machine (the server hosting the broker).
Therefore, a custom configuration needs to be used with your instance of Mosquitto in order to accept connections from remote clients.
On your Windows machine, run a text editor as administrator and paste the following text:
listener 1883
allow_anonymous true
This creates a listener on port 1883 and allows anonymous connections. By default the number of connections is infinite. Save the file to "C:\Program Files\Mosquitto" using a file name with the ".conf" extension such as "your_conf_file.conf".
Open a terminal window and navigate to the mosquitto directory. Run the following command:
mosquitto -v -c your_conf_file.conf
where
-c : specify the broker config file.
-v : verbose mode - enable all logging types. This overrides
any logging options given in the config file.
I found I had to add not only bind_address ip_address but also allow_anonymous true before devices could connect successfully to MQTT. Of course, I understand that a better option would be to set a user and password on each device, but that's the next step after everything actually works in the minimum configuration.
For those who use mosquitto with homebrew on Mac.
Adding these two lines to /opt/homebrew/Cellar/mosquitto/2.0.15/etc/mosquitto/mosquitto.conf fixed my issue.
allow_anonymous true
listener 1883
You can run it with the included 'no-auth' config file like so:
mosquitto -c /mosquitto-no-auth.conf
I had the same problem while running it inside a Docker container (generated with docker-compose).
In the docker-compose.yml file this is done with:
command: mosquitto -c /mosquitto-no-auth.conf
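For reference, a minimal docker-compose.yml sketch for that setup might look like this (the image tag and port mapping are assumptions; the no-auth config file ships with the eclipse-mosquitto image):
services:
  mosquitto:
    image: eclipse-mosquitto:2
    command: mosquitto -c /mosquitto-no-auth.conf
    ports:
      - "1883:1883"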
I have a web app running on machine with ip : 172.10.10.10.
The basic API call exposed by this app is: GET http://172.10.10.10, and it returns a response of OK.
On another machine I added an entry in /etc/hosts file as below.
172.10.10.10 webserver1.com
With this, the ping command resolves successfully, e.g.: ping webserver1.com
Now I want to resolve the curl command as well.
e.g. : curl http://webserver1.com
Result : curl: (6) Could not resolve host: webserver1.com
How to achieve this for curl command with http url?
You can set up a DNS server and point to it in /etc/resolv.conf.
There are many options out there on the market (paid / free) for a local DNS server, both dockerized and non-dockerized.
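As a minimal sketch, assuming dnsmasq as the local DNS server (it serves entries from /etc/hosts by default; the package name and resolv.conf handling vary by distribution, and if /etc/resolv.conf is managed by another service you should configure that service instead):
sudo apt-get install dnsmasq
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
curl http://webserver1.com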
I am currently trying to deploy a Meteor project to an external server for the first time. The server is hosted by DigitalOcean, running ubuntu 16.04, and has an SSH key set up for password-free access.
The error I am getting from MUP is:
[159.203.165.13] - Setup Docker
events.js:165
throw er; // Unhandled 'error' event
^
Error: All configured authentication methods failed
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
at SSH2Stream.emit (events.js:180:13)
at parsePacket (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:185:10)
at SSH2Stream._read (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:173:12)
at doWrite (_stream_writable.js:410:12)
at writeOrBuffer (_stream_writable.js:396:5)
at SSH2Stream.Writable.write (_stream_writable.js:294:11)
at Socket.ondata (_stream_readable.js:651:20)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:274:12)
at readableAddChunk (_stream_readable.js:261:11)
at Socket.Readable.push (_stream_readable.js:218:10)
Emitted 'error' event at:
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:292:12)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
[... lines matching original stack trace ...]
at Socket.Readable.push (_stream_readable.js:218:10)
At this point I have tried several solutions involving the mup file as per other recommendations such as:
1) Adding in a password - Gives the exact same error as though the change didn't occur.
2) Adding in the same SSH key that I use for authentication to the server as per digital ocean - Says 'privateKey value does not contain a (valid) private key'. I have tried both the key that is used for authentication to the server and every other key I could find short of generating a new one just for Meteor's use.
3) Leaving both blank and allowing it to 'try' ssh-agent - pretends it doesn't know what ssh-agent is and throws an error saying the same thing as when I use a password.
I have looked through and followed the same instructions in the following article: http://meteortips.com/deployment-tutorial/digitalocean-part-1/
This article assumes that there are only two possible states: one where an SSH key has NOT been used or set up, so it needs to be generated, and a second where an SSH key exists and is set up exactly where they expect it. Unfortunately, I seem to be in a different situation. I generated a key using PuTTY prior to setting up the DigitalOcean server and created the droplet using that. After creation, the expected file did not exist. The only thing in the ~/.ssh/ directory was a single file named "authorized_keys" that held the key I would use to connect to the server. This file cannot be used, nor can any file on the server in the other SSH key locations. I also tried copying the file directly onto the server, to no avail.
In some vain hope of finding a solution I also tried running these same commands in both the Meteor build bundle and the source code folder. Neither worked. I should mention that although this is the only article I still have open to try for a solution, I have tried every one I could find using MUP.
If anyone can point me in the right direction with this so I can stop flailing wildly in the dark I would be incredibly grateful.
Edit: As requested, below is the current mup.js file with removed credentials
module.exports = {
servers: {
one: {
// TODO: set host address, username, and authentication method
host: '111.111.111.11',
username: 'root',
// ssh-agent: '/home/Meteor/MeteorKey.pem'
pem: '~/.ssh/id_rsa.pub'
// password: 'password1'
// or neither for authenticate from ssh-agent
}
},
app: {
// TODO: change app name and path
name: 'app-name',
path: '../',
servers: {
one: {},
},
buildOptions: {
serverOnly: true,
},
env: {
// TODO: Change to your app's url
// If you are using ssl, it needs to start with https://
ROOT_URL: 'http://www.app-name.com',
MONGO_URL: 'mongodb://mongodb/meteor',
MONGO_OPLOG_URL: 'mongodb://mongodb/local',
},
docker: {
// change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
image: 'abernix/meteord:node-8.4.0-base',
},
// Show progress bar while uploading bundle to server
// You might need to disable it on CI servers
enableUploadProgressBar: true
},
mongo: {
version: '3.4.1',
servers: {
one: {}
}
},
// (Optional)
// Use the proxy to setup ssl or to route requests to the correct
// app when there are several apps
// proxy: {
// domains: 'mywebsite.com,www.mywebsite.com',
The error message you are receiving:
Error: All configured authentication methods failed
means that the SSH connection is failing. So the credentials you are using (a pity you removed them from the config) are not working. Try a command-line ssh using these same credentials, and then troubleshoot that - once you can ssh into the server, mup should be able to do its work.
You can get more information out of ssh by specifying one or more -v parameters, e.g.:
ssh -v -v my_user@remote.com
and it will give you information about the authentication methods it is trying as it goes through them. This will help you narrow down the problem.
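For instance, to test with the same key your mup config points at (ssh -i, like mup's pem option, expects the private key, e.g. ~/.ssh/id_rsa, not the .pub file; the path and address below are just placeholders taken from the config above):
ssh -v -i ~/.ssh/id_rsa root@111.111.111.11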