BIND named service high CPU load [closed]

The named service on CentOS 6 is using more than 100% CPU across all 4 processors. I tried playing around with the configuration files. I currently host about 10 websites; below is a sample zone file:
$TTL 14400
@ IN SOA ns1.mynameserver.com. hostmaster.mydomain.com. (
        2012071300 ; serial
        14400      ; refresh
        3600       ; retry
        1209600    ; expire
        86400 )    ; negative-cache TTL
mydomain.com. 14400 IN NS ns1.mynameserver.com.
mydomain.com. 14400 IN NS ns2.mynameserver.com.
ftp 14400 IN A 123.218.168.8
localhost 14400 IN A 127.0.0.1
mail 14400 IN A 123.218.168.8
pop 14400 IN A 123.218.168.8
mydomain.com. 14400 IN A 123.218.168.8
smtp 14400 IN A 123.218.168.8
www 14400 IN A 123.218.168.8
mydomain.com. 14400 IN MX 10 mail
mydomain.com. 14400 IN TXT "v=spf1 a mx ip4:123.218.168.8 ~all"
localhost 14400 IN AAAA ::1
and for mynameserver.com:
$TTL 14400
@ IN SOA ns1.mynameserver.com. hostmaster.mynameserver.com. (
        2012081200 ; serial
        14400      ; refresh
        3600       ; retry
        1209600    ; expire
        86400 )    ; negative-cache TTL
mynameserver.com. 14400 IN NS ns1.mynameserver.com.
mynameserver.com. 14400 IN NS ns2.mynameserver.com.
ftp 14400 IN A 123.218.168.11
localhost 14400 IN A 127.0.0.1
mail 14400 IN A 123.218.168.11
ns1.mynameserver.com. 14400 IN A 123.218.168.10
ns1.mynameserver.com. 14400 IN A 123.218.168.11
ns2.mynameserver.com. 14400 IN A 123.218.168.11
ns2.mynameserver.com. 14400 IN A 123.218.168.11
pop 14400 IN A 123.218.168.11
s1 14400 IN A 123.218.168.11
smtp 14400 IN A 123.218.168.11
mynameserver.com. 14400 IN A 123.218.168.11
www 14400 IN A 123.218.168.11
mynameserver.com. 14400 IN MX 10 mail
mynameserver.com. 14400 IN TXT "v=spf1 a mx ip4:123.218.168.8 ~all"
localhost 14400 IN AAAA ::1
I changed the IPs and domains to make this a general question for everyone. The thing is, I don't use mail or SMTP at all; I might add MX records in the future and rely on Gmail, for example, for email. Is it safe to remove the mail/pop/smtp/MX records?
Based on your experience, what could be causing this huge CPU load, which has persisted for several months?

I had the same issue, and the information in the link posted by Starcalc above worked for me (though the post he linked is for Ubuntu). This is what I did for my CentOS 6.4 box:
In /etc/named.conf, ensure the following line is present in the options {} section:
managed-keys-directory "/var/named/dynamic";
Also, make sure the directories /var/named/dynamic and /var/named/chroot/var/named/dynamic are present, ensure all are owned by named:named (easy way: chown -R named:named /var/named), and if you run with SELinux, run: restorecon -R /var/named/
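For context, a minimal sketch of how that options {} block might look on a stock CentOS install (every line other than managed-keys-directory is an assumption based on the default layout):
options {
    directory "/var/named";                        // default zone directory (assumed)
    managed-keys-directory "/var/named/dynamic";   // the line this fix requires
    // ...the rest of your existing options...
};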

Check that your /etc/named.conf file has managed-keys-directory "/var/named/dynamic"; in the options scope.
Then check that your BIND files exist under both the /var/named and /var/named/chroot/var/named directories.
Stop the named service:
#service named stop
Check for or create the files:
#mkdir /var/named/data
#touch /var/named/data/named.run
#mkdir /var/named/dynamic
#touch /var/named/dynamic/managed-keys.bind
Then the chroot files:
#mkdir /var/named/chroot/data
#touch /var/named/chroot/data/named.run
#mkdir /var/named/chroot/var/named/dynamic
#touch /var/named/chroot/var/named/dynamic/managed-keys.bind
Don't forget to change the owner of the files.
#chown root:named -R /var/named/chroot/var/named/d*
Start the named daemon:
#service named start
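As a quick sanity check after the restart, something like the following should show the CPU usage settling back to normal (a sketch; exact fields vary by ps version):
# %CPU should drop to near zero once the managed-keys files are writable
ps -C named -o pid,%cpu,cmd
# the repeated managed-keys errors should stop appearing in the logs
tail -n 50 /var/log/messages | grep named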


Unable to reach Google Compute over port 9000

I have a Google Compute Engine instance running CentOS 7, and I wrote up a quick test to try to communicate with it over port 9000 (from my home PC), but I'm unexpectedly getting network errors.
This happens both with my test script (which attempts to send a payload) and even with plink.exe (which I'm just using to check the port's availability).
>plink.exe -v -raw -P 9000 <external_IP>
Connecting to <external_IP> port 9000
Failed to connect to <external_IP>: Network error: Connection refused
Network error: Connection refused
FATAL ERROR: Network error: Connection refused
I've added my external IP to Google's firewall (https://console.cloud.google.com/networking/firewalls) and set it to allow ingress traffic over port 9000 (it's at the lowest priority, 1000).
I also updated firewalld in CentOS to allow TCP traffic over the port:
Redirecting to /bin/systemctl start firewalld.service
[foo@bar ~]$ sudo firewall-cmd --zone=public --add-port=9000/tcp --permanent
success
[foo@bar ~]$ sudo firewall-cmd --reload
success
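For reference, the equivalent GCP rule can also be created with the gcloud CLI (a sketch; allow-tcp-9000 is an arbitrary name and <home_IP> is a placeholder like the ones above):
gcloud compute firewall-rules create allow-tcp-9000 \
    --direction=INGRESS \
    --allow=tcp:9000 \
    --source-ranges=<home_IP>/32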
I've confirmed my listener is running on port 9000
[foo@bar ~]$ netstat -npae | grep 9000
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1000 18381 1201/python3
By default, CentOS 7 doesn't use iptables (just to be sure, I confirmed it wasn't running)
Am I missing something?
NOTE: Actual external IP replaced with <external_IP> placeholder
Update:
If I nmap my listener on port 9000 from the CentOS 7 compute instance via a local IP like 127.0.0.1, I get results. Interestingly, if I make the same nmap call against the server's external IP: nada. So this has to be a firewall, right?
External call
[foo@bar~]$ nmap <external_IP> -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 00:33 UTC
Nmap scan report for <external_IP>.bc.googleusercontent.com (<external_IP>)
Host is up (0.00043s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp closed ms-wbt-server
Nmap done: 1 IP address (1 host up) scanned in 4.87 seconds
Internal call
[foo@bar~]$ nmap 127.0.0.1 -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 04:36 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
9000/tcp open cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
In this case, the software running on the backend VM must listen on all interfaces (0.0.0.0 or ::). Yours is listening on 127.0.0.1:9000, and it should be 0.0.0.0:9000.
The way to fix this is to change the service config to listen on 0.0.0.0 instead of 127.0.0.1.
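A minimal way to see the difference, using python3's built-in http.server as a hypothetical stand-in for the real service:
# listen on all interfaces instead of loopback only
python3 -m http.server 9000 --bind 0.0.0.0 &
# the local address should now show 0.0.0.0:9000 rather than 127.0.0.1:9000
netstat -npae | grep 9000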
Cheers.

az login returns error "Failed to establish a new connection: [Errno -3] Temporary failure in name resolution"

I was doing az login from WSL on my Windows machine. Then it gave an error:
Please ensure you have network connection. Error detail: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /common/oauth2/token (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f401d135630>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
I suspect this is a DNS issue.
So I checked /etc/resolv.conf of WSL:
# This file was automatically generated by WSL. To stop automatic generation of this file, remove this line.
nameserver 192.168.1.1
nameserver fcc0:0:0:ffff::1
nameserver fcc0:0:0:ffff::2
192.168.1.1 is the default gateway.
Here are the results of some commands I tried:
$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=128 time=0.351 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=128 time=0.888 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=128 time=0.883 ms
$ dig 192.168.1.1
; <<>> DiG 9.11.3-1ubuntu1.3-Ubuntu <<>> 192.168.1.1
;; global options: +cmd
;; connection timed out; no servers could be reached
$ nslookup 192.168.1.1
;; connection timed out; no servers could be reached
These commands also give output that indicates an issue:
ping google.com
dig google.com
All these commands (or their Windows equivalents) work correctly from the Windows command prompt.
I found a workaround here:
https://askubuntu.com/questions/91543/apt-get-update-fails-to-fetch-files-temporary-failure-resolving-error
It says that I should add the following 8.8.8.8 line to /etc/resolv.conf. If I try it like this, it works:
nameserver 192.168.1.1
nameserver 8.8.8.8
nameserver fcc0:0:0:ffff::1
nameserver fcc0:0:0:ffff::2
After this, ping google.com and dig google.com work fine. But I can see that the nameserver it uses to resolve is 8.8.8.8.
If I connect to the VPN, it adds our own nameservers to /etc/resolv.conf, and after that there is no problem resolving URLs. Once the VPN is disconnected, the issue arises again.
Note:
There were no issues like this before.
Yesterday we changed our router in order to use a new ISP's connection, and the issue started after that.
Other machines in the same network don't have this issue.
Why does this occur, and how can I properly fix this WSL issue?
Why can only one machine in our network ping the default gateway, but not dig it?
Update:
I can see that there are two entries marked as default in the routing table:
$ ip route show table all | grep default
none default via 192.168.0.1 dev wifi0 proto unspec metric 0
none default via 192.168.1.1 dev eth6 proto unspec metric 0
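One commonly used way to make the resolv.conf workaround persistent in WSL (a sketch based on the documented /etc/wsl.conf generateResolvConf switch; it works around the symptom rather than fixing the router problem):
# stop WSL from auto-generating /etc/resolv.conf
cat <<'EOF' | sudo tee /etc/wsl.conf
[network]
generateResolvConf = false
EOF
# restart WSL from Windows (wsl --shutdown), then write the resolver by hand
sudo rm -f /etc/resolv.conf
printf 'nameserver 8.8.8.8\n' | sudo tee /etc/resolv.conf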

Serving rtmp on port 1935

I've been trying to get ffmpeg to stream over RTMP, but the connection to port 1935 is always refused. I really don't know what else I can do to allow this connection.
Here are the specs I'm running:
Ubuntu 18.04 (tried 19.04 as well, same issue), which is why I think I've made a mistake somewhere
No Nginx installation at the moment
FFMPEG "ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)"
This is the script I run:
ffmpeg -i "test.mp4" -c:v copy -c:a copy -f flv "rtmp://127.0.0.1/stream/test"
Error I get is:
[tcp @ 0x55ff05ab8ce0] Connection to tcp://127.0.0.1:1935 failed: Connection refused
I've done some research and come across many posts about ffserver.conf, and I have made those changes, but still no luck. Here is my config file. I have also run ffserver once using this config.
HTTPPort 8090
HTTPBindAddress 127.0.0.1
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 200K
# Only allow connections from localhost to the feed.
ACL allow 127.0.0.1
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Feed>
<Stream test1.mpg>
# coming from live feed 'feed1'
Feed feed1.ffm
Format mpeg
AudioBitRate 32
# Number of audio channels: 1 = mono, 2 = stereo
AudioChannels 2
AudioSampleRate 44100
# Bitrate for the video stream
VideoBitRate 64
# Ratecontrol buffer size
VideoBufferSize 40
# Number of frames per second
VideoFrameRate 3
</Stream>
<Stream test.asf>
Feed feed1.ffm
Format asf
VideoFrameRate 15
VideoSize 352x240
VideoBitRate 256
VideoBufferSize 40
VideoGopSize 30
AudioBitRate 64
StartSendOnKey
</Stream>
# Special streams
# Server status
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 127.0.0.1
ACL allow 192.168.0.0 192.168.255.255
#FaviconURL http://pond1.gladstonefamily.net:8080/favicon.ico
</Stream>
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
Here is my ufw status:
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
22 ALLOW Anywhere
1935/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
22 (v6) ALLOW Anywhere (v6)
1935/tcp (v6) ALLOW Anywhere (v6)
but still nothing. I've also opened the ports in iptables, but no luck. Here is how that was done:
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 1935 -j ACCEPT
and
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 1935 -j ACCEPT
and still nothing; every time I run ffmpeg I get connection refused. I have previously installed nginx just to test, but no luck.
What am I doing wrong here? Isn't this port supposed to be open now?
Thanks
@JJ-the-Second, I have been using the nginx RTMP module on Ubuntu natively and it is working completely fine. But instead of sending the stream to 127.0.0.1, I send it either to localhost or 0.0.0.0.
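In other words, keeping the command from the question and only changing the target (a sketch; the stream path is the asker's):
ffmpeg -i "test.mp4" -c:v copy -c:a copy -f flv "rtmp://localhost/stream/test"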
I figured it out: I was using the Nginx RTMP module, and Nginx RTMP for some reason doesn't work well on Ubuntu but is fine with Alpine 3.8. As soon as I started an nginx RTMP docker container and exposed 1935 and 80, everything started working fine. Lesson learnt: never install the nginx RTMP module on Ubuntu natively.
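For example, with a prebuilt image (tiangolo/nginx-rtmp is one commonly used image; which exact image was used here is an assumption):
# run nginx-rtmp in a container, publishing the RTMP and HTTP ports
docker run -d -p 1935:1935 -p 80:80 tiangolo/nginx-rtmp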

Squid refuses all websites when creating proxy server

So I'm trying to create a proxy server for my crawler to use, and I'm unsure why I'm getting denied, even from my own machine. When I go to any website in a browser on the computer that I've installed Squid on, it gives me the following error message:
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL: http://www.whatismyipaddress.com/
The following error was encountered:
Access Denied.
Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.
Your cache administrator is webmaster.
Generated Sun, 08 Nov 2015 04:03:13 GMT by WIN-AIUOBK0JHPA (squid/2.7.STABLE8)
I've edited my LAN settings in Internet Options to allow a proxy server at the correct IP address (the IPv4 address when I run ipconfig), gave it the correct port, and I've also opened up the port in my Windows Firewall.
Below are segments of my squid.conf file:
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl localhost src 192.168.1.0/255.255.255.255
http_access allow localhost
(skip through some commented out segments....)
http_access allow manager localhost
http_access allow localnet
As you can tell, I've stripped out a lot of unnecessary commented parts. Down lower, I have my...
http_port ####
...line.
I have no idea why I'm getting blocked out. I will be constantly refreshing, so if you need any more information or have any questions, please let me know. Thank you so much!!
Your config should look somewhat like the below:
http_access allow localhost
http_access allow localnet
# And finally deny all other access to this proxy
http_access deny all
and remove the following line from your config
acl localhost src 192.168.1.0/255.255.255.255
localhost does not need to be specified as an ACL; it is just for accessing localhost pages. You have mixed up localhost with localnet, so modify that line like below:
acl localnet src 192.168.1.0/255.255.255.0
Your LAN clients' local IPs hitting the proxy should belong to the above-mentioned src range (or modify the range as you require); all requests from other IPs will be denied.
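Putting those pieces together, the relevant part of squid.conf would look something like this (a sketch using the asker's subnet; note the 255.255.255.0 mask so the whole 192.168.1.x range matches):
acl localnet src 192.168.1.0/255.255.255.0
http_access allow localhost
http_access allow localnet
# And finally deny all other access to this proxy
http_access deny all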
I just got rid of all the default config and used the following:
# cat /etc/squid/squid.conf
http_port 3128
acl vpc_no_internet src 10.130.0.0/255.255.0.0
http_access allow vpc_no_internet
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
refresh_pattern . 0 20% 4320
Note: The above config allows access for the specified subnet only.
I had a similar situation and the solution was to use the following commands:
unset https_proxy
unset http_proxy
unset ftp_proxy
I placed these into a script to run every time I logged in to my server.
This may not be the correct solution for you but it worked in my situation as I wasn't using the proxy and was connecting via vpn.
The only reason I am posting this is I haven't seen this answer posted anywhere else. As usual, YMMV.
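For example, the unsets can be appended to the shell profile so they run at every login (a sketch; adjust the file for your shell):
cat >> ~/.bashrc <<'EOF'
# clear any inherited proxy settings
unset http_proxy https_proxy ftp_proxy
EOF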

Cannot divert packets to pf when configuring transparent Squid 3.4.13 on OpenBSD 5.7

I am trying to build a transparent proxy with Squid on OpenBSD 5.7 with the pf firewall.
I compiled Squid from source with the options below:
$ squid -v
Squid Cache: Version 3.4.13
configure options:
--prefix=/usr/local/squid
--with-default-user=squid
--enable-icmp
--enable-storeio=ufs,aufs
--enable-removal-policies=lru,heap
--disable-snmp
--disable-wccp
--disable-wccpv2
--enable-pf-transparent
--enable-ipv6
--enable-referer-log
--with-nat-devpf
--enable-debug-cbdata
--enable-useragent-log
--enable-refererlog
--enable-cache-digests
--with-large-files
--with-pthreads
--without-mit-krb5
--without-heimdal-krb5
--without-gnugss
--disable-eui
--disable-auth
--enable-ltdl-convenience
$ uname -a
OpenBSD dns.localdomain 5.7 GENERIC#825 amd64
My squid.conf:
visible_hostname dns.local
acl localnet src 192.168.1.0/24 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# allow
http_access allow localnet
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
http_port 3128
http_port 127.0.0.1:3129 intercept
# disk cache directory.
cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256
# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache/squid
#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
I enabled forwarding so the gateway can reach the Internet:
net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1
pf.conf:
int_if = "vic1"
ext_if = "vic0"
lan_net = "192.168.1.0/24"
# Settings
set block-policy return
set loginterface egress
set skip on lo
# NAT
match out on egress inet from !(egress:network) to any nat-to (egress:0)
pass in quick log on $ext_if inet proto tcp from 192.168.1.0/24 to port www divert-to 127.0.0.1 port 3129
pass out quick log inet from 192.168.1.0/24 divert-reply
#
# Rules
#
block all
# allow dns
pass quick on {$int_if, $ext_if} inet proto udp from {self, $lan_net} to any port 53
# allow local access to web
pass quick on $ext_if inet proto tcp from {self} to any port 80
# allow icmp
pass quick on $int_if inet proto icmp from $lan_net to any
# allow ssh from $ext_if
pass quick on $ext_if inet proto tcp from any to ($ext_if) port 22
I think the problem is in the pf rules. Could it be that pf cannot divert packets to port 3129? I've tested with the command:
nc -l 3129
but it didn't show any HTTP headers.
The rule in the Squid wiki cannot be applied to pf because of a syntax error.
Thank you in advance.
Maybe you mean $int_if instead of $ext_if in this rule?
pass in quick log on $ext_if inet proto tcp from 192.168.1.0/24 to port www divert-to 127.0.0.1 port 3129
As I understand it, you want to divert traffic coming from the internal network to local port 3129.
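That is, the divert rule would become (a sketch; only the interface changes):
pass in quick log on $int_if inet proto tcp from 192.168.1.0/24 to port www divert-to 127.0.0.1 port 3129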
