I'm trying to use the "purge" utility that comes with squid 3.3.8 to purge some objects from the squid cache. With a ufs store, "purge" works fine: it can extract URLs from cached objects. But with a rock store, which has been available since squid 3.2, "purge" reports the following error:
no cache_dir or error accessing "/opt/squid/3.3.8/etc/squid.conf"
Here is the corresponding squid.conf:
pid_filename /var/run/squid.pid
cache_effective_user user
cache_effective_group user
http_port 3128
cache_mem 0 MB
#cache_dir ufs /var/squid/cache/ 500 16 256
cache_dir rock /var/squid/rock 5120 max-size=102400
acl my_machine src 192.168.2.22
http_access allow my_machine
acl localnet src 127.0.0.1
acl Purge method PURGE
http_access allow localnet Purge
http_access deny all Purge
Is there something wrong with my config file, or does squid's purge simply not work with the rock store?
I just read the source code of purge; in /purge/conffile.cc it uses the following regex to search for cache_dir lines in squid.conf:
^[ \t]*cache_dir([ \t]+([[:alpha:]]+))?[ \t]+([[:graph:]]+)[ \t]+([0-9]+)[ \t]+([0-9]+)[ \t]+([0-9]+)
This regex can only match ufs, aufs and diskd entries, which end in three numbers (the size followed by the L1/L2 directory counts); it cannot match a rock entry such as:
cache_dir rock /var/squid/rock 5120 max-size=102400
From this point of view, purge does not support the rock store.
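As a workaround that doesn't depend on the purge utility (and therefore works with rock as well), you can send a PURGE request through squidclient, which the config above already permits from localhost via the "localnet Purge" rule. This is only a sketch and assumes you already know the exact URL of the cached object, since squidclient cannot enumerate the cache the way purge does:
# run on the squid host itself; the URL is a placeholder for an object you know is cached
squidclient -h 127.0.0.1 -p 3128 -m PURGE http://example.com/some/object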
I have a server with multiple IPs configured on it (as virtual IPs on eth0). I'm using HAProxy for load balancing. Each IP is pointed at a different domain name, and all requests that come in on each IP address are forwarded to a different backend server by HAProxy.
The issue is that all outgoing traffic from HAProxy passes through the main interface IP by default. I just want to set the source IP for the backend connections.
I tried the config below, but it's not working. Any ideas?
backend web1
server ss2 10.11.12.13:80 source ${frontend_ip}
frontend new1
bind 10.11.13.15:8080
mode tcp
use_backend web1
You only show one IP in your question, so I can't say for sure. But if you have multiple virtual IPs and want to serve different backends, you need at least one frontend per IP. Like this:
frontend new1
bind 10.11.13.15:80
...
acl is_new1domain hdr(host) -i new1.domain.com
use_backend web1 if is_new1domain
frontend new2
bind 10.11.13.16:80
...
acl is_new2domain hdr(host) -i new2.domain.com
use_backend web2 if is_new2domain
backend web1
...
source 10.124.13.15
backend web2
...
source 10.124.13.16
Actually, if you don't have any other rules to evaluate, just proxy/balance at layer 4 with a listen section. Like this:
listen new1
bind 10.11.12.15:80
server ss1 10.11.12.90:8080 check
server ss2 10.11.12.91:8080 check
server ss3 10.11.12.92:8080 check
source 10.124.12.15
listen new2
bind 10.11.12.16:80
server ss4 10.11.12.80:8080 check
server ss5 10.11.12.81:8080 check
server ss6 10.11.12.82:8080 check
source 10.124.12.16
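Whichever variant you use, note that the source addresses (the 10.124.x.x ones above) have to exist on the HAProxy host, or the backend connections will fail to bind. A quick sanity check, assuming the usual config path, might be:
# check that the configuration parses before reloading
haproxy -c -f /etc/haproxy/haproxy.cfg
# after reloading, confirm outgoing backend connections use the expected source address
ss -tnp | grep haproxy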
Is there a way to check if the local host is making an ftp connection to another server?
The requirement is like this: local host -> serverA, remote server -> serverB.
I need to check whether serverA is making an ftp connection to serverB, and get notified whenever it does.
I tried ps -ef | grep -i ftp; however, the grep in the pipeline shows up in the results itself, so I can't use this in a shell script. Is there a better way to check whether serverA is making ftp connections to serverB and, if so, get notified / log it to a file?
Thanks
Your problem of "ps -ef | grep -i ftp" also reporting the 'ps' process is resulting from grep searching the string "ftp". This would also hit a lot of other processes which also have the word 'ftp' in it's command line.
To fix that check if you have the procps tools "pgrep" and "pkill" installed. They are very helpful for 'grepping' processes and running commandlines.
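For example (serverB here is just a placeholder for however your ftp client is actually invoked):
# match only processes whose name is exactly "ftp"
pgrep -x ftp
# or match against the full command line, e.g. an ftp client started with serverB's hostname
pgrep -fl 'ftp.*serverB'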
To solve your initial problem, check whether you have the ss command (socket statistics, from the iproute2 package) installed.
Its output might be useful (11.22.33.44 is your local IP, 130.133.3.130 the remote):
root:sigkill:~/# ss -p|cat
State Recv-Q Send-Q Local Address:Port Peer Address:Port
[...]
ESTAB 0 0 11.22.33.44:43681 130.133.3.130:ftp users:(("ftp",19729,4),("ftp",19729,3))
[...]
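If you also want the "log it to a file" part of your requirement, one crude approach is to poll ss and append a line to a log file of your choice whenever a match shows up. A minimal sketch, assuming serverB is 130.133.3.130 as in the output above and that the FTP control connection uses port 21:
#!/bin/sh
# poll every 10 seconds for an established connection to serverB's ftp control port
while true; do
    if ss -tn state established | grep -qE '130\.133\.3\.130:21[[:space:]]*$'; then
        echo "$(date): ftp connection to serverB detected" >> /var/log/ftp-watch.log
    fi
    sleep 10
done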
There are a few approaches that you could take:
You could poll running processes for ftp. This wouldn't catch other FTP clients (if you care about that), and it wouldn't catch very short ftp sessions that slip between polls.
If your system supports execution logging, you could log all executions of ftp. Again, this wouldn't catch other FTP clients.
You could watch for outbound connections on port 21/tcp using some mechanism provided by your system (for instance, on Linux, use an iptables rule that matches outbound FTP connections to any servers that you care about and logs them using the LOG target). This would catch all connections regardless of client, but tracking down the process and user would be a little more complicated.
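For the iptables option, a minimal sketch (the address below is a placeholder for serverB; the matches end up in the kernel log / syslog):
# log new outbound FTP control connections to serverB
iptables -A OUTPUT -p tcp -d 203.0.113.10 --dport 21 -m state --state NEW -j LOG --log-prefix "FTP-to-serverB: "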
You can use grep ftp /etc/services to list the ports assigned to FTP-related services, which you can then look for in the output of netstat or ss:
$ grep ftp /etc/services
ftp-data 20/tcp
ftp-data 20/udp
...
ftp 21/tcp
ftp 21/udp fsp fspd
...
sftp 115/tcp
sftp 115/udp
...
ftp-data 20/sctp # FTP
ftp 21/sctp # FTP
...
ftps-data 989/tcp # ftp protocol, data, over TLS/SSL
ftps-data 989/udp # ftp protocol, data, over TLS/SSL
ftps 990/tcp # ftp protocol, control, over TLS/SSL
ftps 990/udp # ftp protocol, control, over TLS/SSL
Use netstat to see the open connections, e.g. for plain FTP...
$ netstat -tan | grep \:21
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN
tcp 0 0 :::21 :::* LISTEN
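If you also want to know which local process owns the connection, netstat can show the PID (run it as root to see processes you don't own); matching on the foreign-address column avoids hitting the local listeners shown above:
$ sudo netstat -tnp | awk '$6 == "ESTABLISHED" && $5 ~ /:21$/'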
I am trying to set up basic auth on my Elastic Beanstalk instance running Node.js, but I cannot seem to get it working. I have followed this guide, Nginx Server on Amazon EC2, but HTTP traffic is still getting through the nginx instance. I think it's because the nginx server on the EC2 instance is not the one whose virtual.conf file I need to be altering. I think the nginx server is on another instance entirely, but I can't seem to find it; when I ping my site's domain name, the IP is that of the nginx server and not my Elastic IP. Any ideas on how to configure nginx to restrict HTTP and HTTPS traffic to my site on Elastic Beanstalk?
Although this isn't a direct answer to your question, I had a lot of trouble finding resources for HTTP Basic Authentication for AWS.
I ended up switching from Nginx to Apache and used this configuration in PROJECT_ROOT/.ebextensions/apache.conf:
files:
  "/etc/httpd/conf.d/allow_override.conf":
    mode: "000644"
    owner: ec2-user
    group: ec2-user
    encoding: plain
    content: |
      <Directory /var/app/current/>
        AllowOverride AuthConfig
      </Directory>

  "/etc/httpd/conf.d/auth.conf":
    mode: "000644"
    owner: ec2-user
    group: ec2-user
    encoding: plain
    content: |
      <Directory /var/app/current/>
        AuthType Basic
        AuthName "Myproject Prototype"
        AuthUserFile /etc/httpd/.htpasswd
        Require valid-user
      </Directory>

  "/etc/httpd/.htpasswd":
    mode: "000644"
    owner: ec2-user
    group: ec2-user
    encoding: plain
    content: |
      myusername:mypassword-generated-by-htpasswd
Note, this is not ideal, as you end up keeping the password in the source code of the repo... But I couldn't find a better way documented anywhere. I'm currently exploring baking the HTTP auth into the EC2 instance, saving the instance as an AMI, and using that AMI for the instances that are auto-generated in my Beanstalk environment.
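If you still need to generate the myusername:... line, the htpasswd tool from the httpd-tools/apache2-utils package can print it to stdout so you can paste it into the content: block above; a quick sketch:
# prints "myusername:<hash>" on stdout; paste that line into the .ebextensions file
htpasswd -nb myusername mysecretpassword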
Don't even get me started on HTTP auth in front of S3 buckets, which is not supported by AWS and requires you to point your DNS at a third-party service!
The key factor in managing Elastic Beanstalk's nginx basic auth is to recognize that the conf file is managed by Beanstalk, so when you modify it, you need to edit the copy under /tmp/deployment/config. All the files in there are copied to their destinations, and each destination path is derived by replacing the # characters in the filename with /. And since the /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf file has not yet been created during the commands step, you need to modify it in the container_commands step.
For me, the following worked.
files:
  /etc/nginx/.htpasswd:
    mode: "000755"
    owner: root
    group: root
    # the content of htpasswd.
    # Obtain it by `htpasswd -nb USER PASSWORD`
    content: "USER_NAME:HASHED_PASS"

container_commands:
  add-basic:
    command: |
      set -ex
      EB_CONFIG_STAGING_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k config_staging_dir)
      file_name="${EB_CONFIG_STAGING_DIR}/$(echo /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf | sed -e 's|/|#|g')"
      sed -i -e '
        /location \// {
          s|$|\nauth_basic "Restricted Area";\nauth_basic_user_file /etc/nginx/.htpasswd;|
          :loop
          n
          b loop
        }' "$file_name"
I am trying to mount an NFS share on my Linux machine.
My /etc/fstab entry is like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab entry is like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS on my NAS device.
When I run mount ("mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/"), I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the NFS server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so it is worth doing before looking for other problems. Note that if you do have to change any entries, the NFS server has to be stopped and restarted, as it reads the hosts file only when it is started.
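A quick way to sanity-check the mapping on the server (the client hostname is a placeholder, the IP is the client address from the mount output above; the service name varies by distro, e.g. nfs-kernel-server on Debian, nfs-server on RHEL/CentOS):
# on the NFS server: confirm the name and address resolve to what you expect
getent hosts clienthostname
getent hosts 192.168.1.1
# then restart the NFS server so it re-reads the hosts file
systemctl restart nfs-server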
Is there a config file on the NAS where you can list allowed clients? E.g. on Debian-based systems the config file is /etc/exports; you would put "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" in it and activate it with exportfs -a (your NAS may do this automatically if you update the config via its web interface, I guess). Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
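For reference, a minimal sketch of what that looks like on a stock Linux NFS server (your NAS's web UI may generate the equivalent for you); the addresses are taken from the question:
# /etc/exports on the NFS server
/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)
# on the server: re-read the exports file
exportfs -a
# from the client: confirm the export is now visible
showmount -e 192.168.0.5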
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might also restart the NFS config and NFS services on the NFS server and re-run the export:
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running on it. I wanted to access a physical partition from the hard drive in the VM, so I mounted the physical drive on the host and exported it. I was not able to mount it on the guest, continually getting an access denied error.
The solution after many hours was to add the no_all_squash option in the exports file. This is supposed to be the default but I needed to add it explicitly. As soon as I did that the problem went away and I could mount the file system. Unfortunately I could not see the files on the fs.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the guest I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the filesystem.
I saw this error, presumably due to an older NFS client, and adding -o nfsvers=3 fixed the issue for me, e.g. mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/
I am currently running OpenLDAP 2.4.31 on Ubuntu 12.04 in EC2. I am having an issue where I get random timeouts when running ldapsearch or ldapadd commands against the LDAP server.
There is really no load on the LDAP servers; I am using them for name resolution of EC2 internal hostnames and as an external node classifier for Puppet.
When the timeout happens I get the following error:
ldap_sasl_bind(SIMPLE): Can't contact LDAP server
If I rerun the command it works fine. This is causing some issues in my automation (and while I can put in error checking for this, it seems odd that it's happening in the first place).
Here is a copy of my slapd.conf (with some env specific info commented out) hopefully someone has some suggestions on what I am missing in the config to prevent the timeout issue:
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/core.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/collective.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/corba.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/cosine.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/duaconf.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/dyngroup.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/inetorgperson.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/java.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/misc.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/nis.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/openldap.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/ppolicy.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/puppet.schema
pidfile /opt/openldap/openldap-2.4.31/var/run/slapd.pid
argsfile /opt/openldap/openldap-2.4.31/var/run/slapd.args
loglevel 0
serverID 001
database bdb
suffix "dc=example,dc=local"
rootdn "cn=admin,dc=example,dc=local"
rootpw secret
directory /opt/openldap/openldap-2.4.31/var/openldap-data
idletimeout 120
timelimit 300
cachesize 2000
syncrepl rid=000
provider=ldap://10.10.10.10
type=refreshAndPersist
retry="5 5 10 +"
searchbase="dc=example,dc=local"
attrs="*,+"
bindmethod=simple
binddn="cn=admin,dc=example,dc=local"
credentials=secret
syncrepl rid=000
provider=ldap://10.10.10.20
type=refreshAndPersist
retry="5 5 10 +"
searchbase="dc=example,dc=local"
attrs="*,+"
bindmethod=simple
binddn="cn=admin,dc=example,dc=local"
credentials=secret
index entryCSN eq
index entryUUID eq
mirrormode TRUE
overlay syncprov
syncprov-checkpoint 100 10
Ignore this question. My self-healing automation was misconfigured and was restarting the slapd process every minute by accident.