Running Snort in packet dump mode with the command sudo snort -C snort.conf -A console -i eth0, the following problem occurred:
--== Initializing Snort ==--
Initializing Output Plugins!
Snort BPF option: snort.conf
pcap DAQ configured to passive.
The DAQ version does not support reload.
Acquiring network traffic from "eth0".
ERROR: Can't set DAQ BPF filter to 'snort.conf' (pcap_daq_set_filter: pcap_compile: syntax error)!
Fatal Error, Quitting..
Can someone please suggest a solution?
You're using the wrong option to load the configuration; it should be the lowercase '-c'. The uppercase '-C' takes no argument, so snort.conf was treated as a positional BPF filter expression, which is why pcap_compile reported a syntax error.
sudo snort -c snort.conf -A console -i eth0
Also, you can test your configuration with '-T' before running it:
sudo snort -T -c snort.conf
Just put "-i" before "eth0" in the command; it will solve the problem.
Try this:
sudo service snort start
ps ax | grep snort
The output I got was
/usr/sbin/snort -m 027 -D -d -l /var/log/snort -u snort -g snort -c /etc/snort/snort.conf -S HOME_NET=[192.168.0.0/16] -i enp4s0
The man page says
-D Run Snort in daemon mode. Alerts are sent to
/var/log/snort/alert unless otherwise specified.
So when I drop the -D and add -A console:
sudo /usr/sbin/snort -m 027 -d -l /var/log/snort -u snort -g snort -c /etc/snort/snort.conf -S HOME_NET=[192.168.0.0/16] -i enp4s0 -A console
This works for Snort version 2.9.7.0 GRE (Build 149).
I have a server with Suricata (169.69.1.11) installed and a specific rule:
drop ICMP any any -> 169.69.1.11 any (msg: "ping dropped";sid:10001;)
On another VM I execute:
ping 169.69.1.11 -c 5
At this point nothing works as expected: the pings get through and nothing is registered in fast.log, so on the Suricata machine I execute
sudo suricata -i enp0s8
and I ping again with the same command (5 pings).
On the other machine everything seems okay and the 5 pings appear to get through, but when I look at the Suricata log /var/log/suricata/fast.log it shows this line:
03/25/2022-11:11:05.231735 [wDrop] [**] [1:10001:0] ping dropped [**] [Classification: (null)] [Priority: 3] {ICMP} 169.69.1.10:8 -> 169.69.1.11:0
Why are the pings getting through instead of being blocked?
Why do I ping 5 times but only 1 is logged?
My first problem was that Suricata wasn't running in inline (IPS) mode. First, flush your iptables rules with
sudo iptables -F
sudo iptables -I INPUT -j NFQUEUE
sudo iptables -I OUTPUT -j NFQUEUE
sudo iptables -I FORWARD -j NFQUEUE
and run Suricata with -D so it stays in the background:
sudo suricata -q 0 -D
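To confirm that packets are actually entering the queue and the rule fires, you can watch the iptables packet counters and the log while pinging; this only re-uses the rules and the fast.log path from above:
sudo iptables -L INPUT -n -v
sudo tail -f /var/log/suricata/fast.log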
We're facing some issues when installing Varnish 6.0.8 on Ubuntu 18.04.6; it doesn't create the secret file inside the /etc/varnish dir, as shown below:
We use the following script for the installation:
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh | sudo bash
Can someone please help?
PS: we tried to install later versions (6.6 and 7.0.0) and we got the same issue.
From a security point of view, remote CLI access is not enabled by default. You can see this when looking at /lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target nss-lookup.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
There are no -T and -S parameters in the standard systemd configuration. However, you can enable this by modifying the systemd configuration yourself.
Just run sudo systemctl edit --full varnish to edit the runtime configuration and add a -T parameter to enable remote CLI access.
Be careful with this and make sure you restrict access to this endpoint via firewalling rules.
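For example, a minimal iptables sketch that only lets a trusted management host reach the CLI port; 192.0.2.10 is a placeholder, substitute your own address:
sudo iptables -A INPUT -p tcp --dport 6082 -s 192.0.2.10 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 6082 -j DROP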
Additionally you'll add -S /etc/varnish/secret as a varnishd runtime parameter in /lib/systemd/system/varnish.service.
You can use the following command to add a random unique value to the secret file:
uuidgen | sudo tee /etc/varnish/secret
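Since this secret is what grants CLI access, it's sensible to restrict the file's permissions as well, for example:
sudo chmod 600 /etc/varnish/secret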
This is what your runtime parameters would look like:
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-S /etc/varnish/secret \
-T :6082
When you're done just run the following command to restart Varnish:
sudo systemctl restart varnish
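Once Varnish is back up, you can verify that CLI access works by pointing varnishadm at the -T endpoint and secret file configured above:
sudo varnishadm -T localhost:6082 -S /etc/varnish/secret status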
I have installed Nagios on Redhat with the following configurations:
/usr/local/nagios/etc/static/commands.cfg
define command {
command_name check_service
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_service -a $ARG1$
}
When I try to run it manually with the following syntax, I get an error:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a check_http
NRPE: Unable to read output
Not using nrpe works:
/usr/local/nagios/libexec/check_http -H 10.111.55.92
HTTP OK: HTTP/1.1 200 OK - 4298 bytes in 0.024 second response time |time=0.024462s;;;0.000000 size=4298B;;;0
I am consistently getting Nagios Email notifications:
HOST: Proxy (Dev) i-01aa24242424d7
IP: 10.111.55.92
Service: Apache Running
Service State: UNKNOWN
Attempts: 3/3
Duration: 0d 9h 28m 49s
Command: check_service!httpd
More Details:
NRPE: Unable to read output
I'm not sure how I can use nrpe with check_service to check http.
Just running check_nrpe with check_http displays the version of the installed nrpe:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -a check_http
NRPE v3.2.1
/usr/local/nagios/etc/nrpe.cfg
command[check_users]=/usr/local/nagios/libexec/check_users -w 10 -c 15
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_root_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 10 -c 15 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 500 -c 750
command[check_ping]=/usr/local/nagios/libexec/check_ping $ARG1$
command[check_http]=/usr/local/nagios/libexec/check_http
# LINUX DEFAULT
command[check_service]=/bin/sudo -n /bin/systemctl status -l $ARG1$
# GLUSTER CHECKS
command[check_glusterdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /gluster
# GITLAB CHECKS
command[gitlab_ctl]=/bin/sudo -n /bin/gitlab-ctl status $ARG1$
command[gitlab_rake]=/bin/sudo -n /bin/gitlab-rake gitlab:check
command[check_gitlabdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /var/opt/gitlab
# OPENSHIFT CHECKS
command[check_openshift_pods]=/usr/local/nagios/libexec/check_pods
File: /usr/local/nagios/etc/nagios.cfg
cfg_dir=/usr/local/nagios/etc/static
You seem to be confusing two plugins. check_service will just check that a service is running locally. Try calling it like this:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a httpd
I'd hesitate to use the check_service command you have in there though. Giving nrpe access to run systemctl with sudo seems dangerous to me.
check_http is an http client. It will actually connect to an http server and check a given URI. It can check status codes and do all sorts of things.
It looks like in your nrpe.cfg you didn't include any arguments to check_http. Called like that, it will just print its help message; I don't think it will check the local machine.
Note that when you call check_http above manually, you supply -H. That -H is not passed through automatically, you need to provide arguments to your check_http command in nrpe.cfg.
Change the line:
command[check_http]=/usr/local/nagios/libexec/check_http
To something like:
command[check_http]=/usr/local/nagios/libexec/check_http -H 127.0.0.1
And it should work better assuming your http is listening on localhost.
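After changing nrpe.cfg, restart the nrpe daemon on the remote host (the service name may differ per distribution) and re-test from the Nagios server:
sudo systemctl restart nrpe
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_http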
You probably don't want to call check_http via nrpe like this though. Let your nagios server call check_http out to the remote machine.
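A minimal sketch of such a server-side check, assuming a host named proxy-dev (a placeholder) and a check_http command defined on the Nagios server as in the stock commands.cfg:
define service {
    use                   generic-service
    host_name             proxy-dev
    service_description   HTTP
    check_command         check_http
}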
I'm looking for some assistance, please, with creating a proper command line for syncing from a local machine to a remote server over ssh.
Here is a draft that is not working.
/usr/bin/rsync --dry-run --delete -arzh /Users/HOME/.0.data/ "--rsh=/Users/HOME/.0.data/.0.emacs/elpa/bin/sshpass -p 'alpine' ssh -p '2222' -l root localhost" -t "cd /var/mobile/Applications/F30B1574-5979-4764-8742-7F9DB2863094/Documents/.0.data && bash --login"
The following command line successfully logs in to my iPhone over ssh via USB. I'd like to incorporate that working command line into something that can be used with rsync, but I need some assistance in that regard.
/Users/HOME/.0.data/.0.emacs/elpa/bin/sshpass -p 'alpine' ssh -p '2222' -l root localhost -t "cd /var/mobile/Applications/F30B1574-5979-4764-8742-7F9DB2863094/Documents/.0.data && bash --login"
For anyone who is interested in learning how to ssh into an iPhone over USB, here is a link that discusses the method: http://iphonedevwiki.net/index.php/SSH_Over_USB
rsync must be installed in both locations. Cydia has an rsync binary that installs on the iPhone. Connecting with rsync works the same as with any regular ssh server.
Here is a bash script solution (includes --dry-run):
#!/bin/bash
HOST="localhost"
PORT="2222"
USER="root"
PASS="alpine"   # renamed from PWD, which is a shell builtin best left untouched
SOURCE="/Users/HOME/Desktop/test/"
TARGET="/private/var/mobile/Applications/F30B1574-5979-4764-8742-7F9DB2863094/Documents/test"
SSHPASS="/Users/HOME/.0.data/.0.emacs/elpa/bin/sshpass"
RSYNC="/Users/HOME/.0.data/.0.emacs/elpa/bin/rsync"
# Quote the variables so paths with spaces survive word splitting
"$RSYNC" --dry-run --progress --delete -arvzh --rsh="$SSHPASS -p $PASS ssh -p $PORT -l $USER" "$SOURCE" "$HOST:$TARGET"
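Once the dry-run output looks right, remove the --dry-run flag from the last line to perform the actual transfer:
"$RSYNC" --progress --delete -arvzh --rsh="$SSHPASS -p $PASS ssh -p $PORT -l $USER" "$SOURCE" "$HOST:$TARGET"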
For an example of how to use rsync in conjunction with Emacs, see the following thread: https://emacs.stackexchange.com/a/5844/2287
My site gives error 521 all the time.
Here is the error I found on my server:
$sudo service varnish reload
* Reloading HTTP accelerator varnishd
Connection failed (localhost:6082)
Error: vcl.load 8d6fb6be-9a0a-4896-be47-e2678e3c2617 /etc/varnish/default.vcl failed
Moreover,
varnishlog
shows nothing.
I am following this tutorial to set the server up, and I changed
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u www-data -g www-data \
-S /etc/varnish/secret \
-s malloc,256m"
The /etc/varnish/default.vcl file is copied from the tutorial. All &amp; entities have been corrected to &.
It is a fresh VPS. No firewall.
Any clue to resolve it?
Thanks!!!!
Three things come to mind:
Start Varnish in foreground mode and check what it says:
varnishd -F -a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u www-data -g www-data \
-S /etc/varnish/secret \
-s malloc,256m
Try changing -T localhost:6082 to -T 127.0.0.1:6082
Port 6082 might already be taken. Change it, or check whether it appears in the list of open ports with
netstat -tlnep
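On newer systems where netstat isn't installed, ss reports the same information:
sudo ss -tlnp | grep 6082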
Restart your varnish
sudo /etc/init.d/varnish restart
then
sudo /etc/init.d/varnish reload