Postfix: Allow a certain sender only from a certain IP address - postfix-mta

I have a client who authenticates with SASL, and they keep getting their password stolen. Their legitimate mail is sent from one IP address, so I'd like to block all mail sent from their email unless it comes from that IP. Is this possible? I've tried to find a solution but I'm fairly new to postfix and inherited this setup. I can see how to do one or the other but not how to combine them without affecting other users. We're using postfix with amavis as content filter. Thank you.
Edited to add things I've tried: I created a local.cf rule for amavis/spamassassin, like this:
header __LOCAL_FROM_USER From =~ /me\@domain\.com/i
header __LOCAL_IP_USER Received =~ /11\.22\.33\.44/
meta LOCAL_EMPIRE_RULE (__LOCAL_FROM_USER && !__LOCAL_IP_USER)
score LOCAL_EMPIRE_RULE 20.0
However, that doesn't work because the originating IP isn't in the Received headers. I guess that's due to the way it's passed through from postfix to amavis. So that blocks all mail from their address, even if it's from their IP.
So I tried adding their IP to mynetworks in postfix's main.cf. I'd be satisfied short-term with relaying all email from their IP and blocking everything from their address otherwise. I can't seem to stop it from getting spam filtered, though, even though I have this in amavisd.conf:
$policy_bank{'MYNETS'} = { # mail originating from @mynetworks
originating => 1, # is true in MYNETS by default, but let's make it explicit
os_fingerprint_method => undef, # don't query p0f for internal clients
};
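What I'm now looking at, after reading Postfix's RESTRICTION_CLASS_README, is keying a client check off the sender address in main.cf. A rough, untested sketch, reusing the placeholder sender and IP from my SpamAssassin rule above:
# main.cf
smtpd_restriction_classes = fixed_ip_sender
fixed_ip_sender = check_client_access hash:/etc/postfix/fixed_ip_client, reject
smtpd_sender_restrictions =
    check_sender_access hash:/etc/postfix/restricted_senders,
    permit
# /etc/postfix/restricted_senders
me@domain.com    fixed_ip_sender
# /etc/postfix/fixed_ip_client
11.22.33.44      OK
# then: postmap /etc/postfix/restricted_senders /etc/postfix/fixed_ip_client
#       postfix reload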

Related

How to recycle (or flush) an IPv6 node's global address via RA (Router Advertisement)?

I'm developing on a Linux router that assigns global IPv6 addresses to attached nodes. The node I am testing with is a Windows PC.
I managed to assign a global address by sending a Router Advertisement per RFC 4861.
07:14:07,632,019 ETHER
|0 |33|33|00|00|00|01|ce|74|19|9a|07|a2|86|dd|60|00|00|00|00|38|3a|ff|fe|80|00|00|00|00|00|00|cc|74|19|ff|fe|94|01|9c|ff|02|00|00|00|00|00|00|00|00|00|00|00|00|00|01|86|00|a1|25|40|40|ff|ff|00|00|00|00|00|00|00|00|03|04|40|c0|ff|ff|ff|ff|ff|ff|ff|ff|00|00|00|00|fc|01|ab|ab|cd|cd|ef|e0|00|00|00|00|00|00|00|00|05|01|00|00|00|00|05|dc|
After sending this RA from the router (link-local address fe80::cc74:19ff:fe94:19c), the PC under test auto-configures the global addresses fc01:abab:cdcd:efe0:e1fb:2297:51db:af84 and fc01:abab:cdcd:efe0:29e9:52fd:2527:dbca.
That's the background.
But how can I recycle (or flush) the global address on the PC under test? I have tried sending an RA with M=0, O=0, and Router Lifetime=0, which to my understanding of RFC 4861 should do it, but it doesn't work: afterwards the global address is still assigned, as shown by ipconfig.
RFC 4862 answers this question: an RA with a short preferred lifetime (e.g., 1 s) can deprecate the old IPv6 address, but the address may still count as valid. As for the question here, the address is not easily flushed with a short valid lifetime because of the protection against denial-of-service attacks.
If you do need to flush the old address, refer to RFC 4862, section 5.5.3:
If RemainingLifetime is less than or equal to 2 hours, ignore the Prefix Information option with regards to the valid lifetime, unless the Router Advertisement from which this option was obtained has been authenticated (e.g., via Secure Neighbor Discovery [RFC3971]). If the Router Advertisement was authenticated, the valid lifetime of the corresponding address should be set to the Valid Lifetime in the received option.
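If the router runs radvd, a deprecation attempt along those lines might look like the sketch below (the interface name eth0 is an assumption; the prefix is the one from the question). An unauthenticated RA cannot push the valid lifetime below the two-hour floor, but zeroing the preferred lifetime deprecates the addresses so hosts stop using them for new connections:
# /etc/radvd.conf (sketch)
interface eth0 {
    AdvSendAdvert on;
    prefix fc01:abab:cdcd:efe0::/64 {
        AdvValidLifetime 7200;       # two hours, the floor an unauthenticated RA can enforce
        AdvPreferredLifetime 0;      # deprecate immediately
    };
};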

Building SPF record with existing SenderID in place

I'm evaluating setting up SPF for a domain that already has SenderID. We're considering removing the SenderID record entirely and replacing it with just an SPF record, rather than trying to write a single SenderID record that covers both SPF and SenderID.
We have two outbound servers, and two inbound servers which relay to internal Exchange machines.
Any bounces received by the two inbound servers are delivered directly to the sender. I wasn't sure whether that counts as sending and should therefore be covered by the SPF record.
There is also a POP/IMAP server on a subdomain of our main domain.
For privacy, I'm using 192.168.1.10 and 192.168.2.10 for our outbound servers and mail1.mydomain.com and mail2.mydomain.com for the hostnames here.
I believe the following would be the proper SPF record for our domain:
mydomain.com. 3600 IN TXT "v=spf1 ipv4:192.168.1.10 ipv4:192.168.2.10 ptr:subhost.mydomain.com mx:mail1.mydomain.com mx:mail2.mydomain.com a:subhost.mydomain.com include:constant-contact.com -all"
Are the ptr and a mechanisms correct for the POP/IMAP host on the network, which also sends mail as user@subhost.mydomain.com?
If the marketing folks frequently change who they use to send marketing emails as "user@mydomain.com", would you recommend ~all instead of -all?
We currently have "spf2.0/mfrom,pra ..." as our SenderID record. I would be interested in input on how to adapt that to properly support SPF as well.
It appears that even microsoft.com and live.com don't include "spf2.0" records, but instead just spf. Is anyone even using it anymore?
Thanks,
Alex
I think the simplest SPF record that would fulfill your needs is:
"v=spf1 ip4:192.168.1.10 ip4:192.168.2.10 include:constant-contact.com -all"
You should not use the ptr mechanism, since it can require a lot of DNS queries on the receiving end, and it should not be needed in this case. The use of ptr is also discouraged by RFC 7208.
I'll guess that mail1 and mail2 are the names of the receiving mail servers. If you want them listed too (I don't think they are needed, since bounces are normally delivered internally), just use the mx mechanism, which matches the connecting IP address against all MX records for the mydomain.com domain.
Regarding sending as user@subhost.mydomain.com: this SPF record will not be used for that domain, since the sending domain is subhost.mydomain.com. Instead you will have to add an SPF record for the subdomain. If subhost.mydomain.com is one of the listed sending servers (the ip4 addresses), that record could simply point to the record for mydomain.com:
"v=spf1 redirect=mydomain.com"
And regarding -all versus ~all: I would recommend ~all if there is a real risk of the marketing department switching mailing list providers without informing IT (it would not be the first time). Just be aware that some receivers may still block mail when SPF returns SoftFail (from ~all), especially if the mailing provider is listed on some of the blacklists.
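Whichever you choose, it is worth checking what resolvers actually see once the records are published, e.g. with dig (hypothetical output for the records suggested above):
$ dig +short TXT mydomain.com
"v=spf1 ip4:192.168.1.10 ip4:192.168.2.10 include:constant-contact.com -all"
$ dig +short TXT subhost.mydomain.com
"v=spf1 redirect=mydomain.com"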

How can I prove my IP address?

If I connect directly to another computer, I prove my IP. But what if I want to receive a message on paper that proves someone's IP?
For example, the client contacts Google for a JSON Web Signature, prints it out on paper, and gives the paper to me; I can then verify the signature of the message containing their IP without ever connecting to the client (or to Google) over TCP.
Is there a simpler or better scheme possible?
If you use encryption, consider using HMAC. If not, then a simple hash or random number is fine for trivial use, as long as it is unique for a given period of time and then expires. Either way, you can send the generated value across both transports so they can be matched to each other. Preferably the server should generate the value to ensure its authenticity, eg:
"Hello TCP client, send me XXX and your IP over the other transport".
"Hello transport client, I see you sent me value XXX and IP YYY, I have a matching TCP client".
Also keep in mind that if your TCP client is behind a router, the other party is going to see your router's public IP, not your client's private IP behind the router. So your client will have to send the router's IP, and maybe also send its private IP as well. Depends on your actual needs.
I don't really see the need to validate the IP, though. Just dealing with the router situation, let alone trying to avoid IP spoofing, makes it almost not worth doing. Just having an authenticated token should be good enough.
Update: If that is not what you want, then you have to include the IP as part of the encryption/hash. The client takes some seed values (sometimes known as nonce values) and its IP and hashes them all together, then the result is given to the other party. That party uses the same seed/nonce values and the IP it wants to validate and hashes them together and sees if it comes up with the same result. If so, the IPs match.
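A minimal sketch of that last scheme in Perl, assuming HMAC-SHA256 with a secret held only by the verifying server; the nonce and IP here are placeholder values:
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

my $secret = 'server-side-secret';   # known only to the verifier
my $nonce  = 'XXX';                  # the value the server asked the client to echo
my $ip     = '203.0.113.7';          # the IP the server saw on the TCP connection

# This tag is what ends up on paper / on the other transport.
my $tag = hmac_sha256_hex("$nonce|$ip", $secret);

# Verification: recompute from the claimed IP and nonce, then compare.
print "IP matches\n" if $tag eq hmac_sha256_hex("$nonce|$ip", $secret);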

Determining when to try an IPv6 connection and when to use IPv4

I'm working on a network client program that connects to public servers, specified by the user. If the user gives me a hostname to connect to that has both IPv4 and IPv6 addresses (commonly, a DNS name with both A and AAAA records), I'm not sure how I should decide which address I should connect to.
The problem is that it's quite common for machines to support both IPv4 and IPv6, but only to have global connectivity over IPv4. The most common case of this is when only IPv6 link-local addresses are configured. At the moment the best alternatives I can come up with are:
Try the IPv6 address(es) first - if the connection fails, try the IPv4 address(es); or
Just let the user specify it as a config setting ("prefer_ipv6" versus "prefer_ipv4").
The problem I can see with option 1 is that the connection might not fail straight away - it might take quite a while to time out.
Please do try IPv6. In the significant majority of installations, trying to create an IPv6 connection will fail right away if it can't succeed for some reason:
if the system doesn't support IPv6 sockets, creating the socket will fail
if the system does support IPv6 but has only link-local addresses configured, there won't be any routing table entry for global IPv6 addresses; again, the local kernel will report failure without sending any packets.
if the system does have a global IP address, but some link necessary for routing is missing, the source should be getting an ICMPv6 error message, indicating that the destination cannot be reached; likewise if the destination has an IPv6 address, but the service isn't listening on it.
There are of course cases where things can break, e.g. if a global (or tunnel) address is configured and something falsely filters out ICMPv6 error messages. You shouldn't worry about this case - IPv4 connectivity could just as well be broken in some similar way.
Of course, it's debatable whether you really need to try the IPv6 addresses first - you might just as well try them second. In general, you should try addresses in the order in which they are returned by getaddrinfo. Today, systems support configuration options that let administrators decide in what order addresses should be returned from getaddrinfo.
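For example, a small Perl sketch using IO::Socket::IP, which simply walks the getaddrinfo result list in order until a connect() succeeds (the host, port, and timeout are placeholders):
use strict;
use warnings;
use IO::Socket::IP;
use Socket qw(AF_INET6);

my $sock = IO::Socket::IP->new(
    PeerHost => 'www.example.com',
    PeerPort => 'http',
    Timeout  => 2,    # keep a dead address family from stalling the client too long
) or die "cannot connect: $@";

printf "connected over %s\n", $sock->sockdomain == AF_INET6 ? 'IPv6' : 'IPv4';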
Subsequent to this question being asked, the IETF proposed an answer in RFC 6555, a.k.a. Happy Eyeballs.
The pertinent point is that the client and server may both have IPv4 and IPv6, but a hop in between may not, so it is impossible to reliably predict which path will work.
You should let the system-wide configuration decide, via getaddrinfo(), just like Java does. Asking every single application to cater for every possible IPv6 (mis)configuration is really not scalable! In the case of a misconfiguration it is much more intuitive to the user if either all applications break or none do.
On the other hand, you should log annoying delays and time-outs profusely, so users can quickly identify what to blame; ideally do the same for every other delay, including (very common) DNS time-outs.
This talk has the solution. To summarize:
Sometimes there are problems with either DNS lookups or the subsequent connection to the resolved address
You don't want to wait for connecting to an IPv6 address to timeout before connecting to the IPv4 address, or vice versa
You don't want to wait for a lookup for an AAAA record to timeout before looking for an A record or vice versa
You don't want to stall while waiting for both AAAA and A records before attempting to connect with whichever record you get back first.
The solution is to lookup AAAA and A records simultaneously and independently, and to connect independently to the resolved addresses. Use whatever connection succeeds first.
The easiest way to do this is to let the networking API do it for you, using connect-by-name APIs. For example, in Java:
InetSocketAddress socketAddress = new InetSocketAddress("www.example.com", 80);
SocketChannel channel = SocketChannel.open(socketAddress);
channel.write(buffer);
The slide notes say at this point:
Here we make an opaque object called an InetSocketAddress from a host
and port, and then when we open that SocketChannel, that can complete
under the covers, doing whatever is necessary, without the
application ever seeing an IP address.
Windows also has connect-by-name APIs. I don’t have code fragments for
those here.
Now, I’m not saying that all implementations of these APIs necessarily
do the right thing today, but if applications are using these APIs,
then the implementations can be improved over time.
The difference with getaddrinfo() and similar APIs is that they
fundamentally can’t be improved over time. The API definition is that
they return you a full list of addresses, so they have to wait until
they have that full list to give you. There’s no way getaddrinfo can
return you a partial list and then later give you some more.
Some ideas:
Allow the user to specify the preference on a per-site basis.
Try IPv4 first.
Attempt IPv6 in parallel upon the first connection.
On subsequent connections, use IPv6 if the connection was successful previously.
I say to try IPv4 first because that is the protocol which is better established and tested.

How to prevent unauthorized spidering

I want to prevent automated html scraping from one of our sites while not affecting legitimate spidering (googlebot, etc.). Is there something that already exists to accomplish this? Am I even using the correct terminology?
EDIT: I'm mainly looking to prevent people that would be doing this maliciously. I.e. they aren't going to abide by robots.txt
EDIT2: What about limiting by "rate of use" ... i.e. requiring a CAPTCHA to continue browsing if automation is detected and the traffic isn't from a legitimate (Google, Yahoo, MSN, etc.) IP?
This is difficult if not impossible to accomplish. Many "rogue" spiders/crawlers do not identify themselves via the user agent string, so it is difficult to identify them. You can try to block them via their IP address, but it is difficult to keep up with adding new IP addresses to your block list. It is also possible to block legitimate users if IP addresses are used since proxies make many different clients appear as a single IP address.
The problem with using robots.txt in this situation is that the spider can just choose to ignore it.
EDIT: Rate limiting is a possibility, but it suffers from some of the same problems of identifying (and keeping track of) "good" and "bad" user agents/IPs. In a system we wrote to do some internal page view/session counting, we eliminate sessions based on page view rate, but we also don't worry about eliminating "good" spiders since we don't want them counted in the data either. We don't do anything about preventing any client from actually viewing the pages.
One approach is to set up an HTTP tar pit: embed a link that will only be visible to automated crawlers. The link should go to a page stuffed with random text and links back to itself (with varying page names: /tarpit/foo.html, /tarpit/bar.html, /tarpit/baz.html - but have the script at /tarpit/ answer all requests with a 200 result).
To keep the good guys out of the pit, generate a 302 redirect to your home page if the user agent is google or yahoo.
It isn't perfect, but it will at least slow down the naive ones.
EDIT: As suggested by Constantin, you could mark the tar pit as off-limits in robots.txt. Good guys using web spiders that honor this protocol will stay out of the tar pit. That would probably remove the need to generate redirects for known good user agents.
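A rough CGI sketch of such a /tarpit/ handler in Perl (the good-bot pattern, word list, and link count are all placeholders):
#!/usr/bin/perl
# Every request under /tarpit/ gets a 200 page of random filler plus links
# deeper into the pit; known good crawlers get a 302 back to the home page.
use strict;
use warnings;

my $ua = $ENV{HTTP_USER_AGENT} // '';
if ($ua =~ /Googlebot|Slurp|bingbot/i) {
    print "Status: 302 Found\r\nLocation: /\r\n\r\n";
    exit;
}

my @words = qw(lorem ipsum dolor sit amet consectetur adipiscing elit);
my $body  = join ' ', map { $words[ int rand @words ] } 1 .. 200;
my @links = map { sprintf '<a href="/tarpit/%x.html">more</a>', int rand 0xffffff } 1 .. 5;

print "Content-Type: text/html\r\n\r\n";
print "<html><body><p>$body</p>", join(' ', @links), "</body></html>\n";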
If you want to protect yourself from generic crawlers, use a honeypot.
See, for example, http://www.sqlite.org/cvstrac/honeypot. A good spider will not open this page because the site's robots.txt disallows it explicitly. A human may open it, but is not supposed to click the "I am a spider" link. A bad spider will certainly follow both links and so betray its true identity.
If the crawler is created specifically for your site, you can (in theory) create a moving honeypot.
I agree with the honeypot approach generally. However, I put the ONLY link to the honeypot page/resource on a page that is itself blocked by robots.txt, and block the honeypot the same way. That way, a malicious robot has to violate the "disallow" rule(s) TWICE to ban itself. A typical user manually following an unclickable link is likely to do this only once and may not find the page containing the honeypot URL.
The honeypot resource logs the offending IP address of the malicious client into a file which is used as an IP ban list elsewhere in the web server configuration. This way, once listed, the web server blocks all further access by that client IP address until the list is cleared. Others may have some sort of automatic expiration, but I believe only in manual removal from a ban list.
Aside: I also do the same thing with spam and my mail server: Sites which send me spam as their first message get banned from sending any further messages until I clear the log file. Although I implement these ban lists at the application level, I also have firewall level dynamic ban lists. My mail and web servers also share banned IP information between them. For an unsophisticated spammer, I figured that the same IP address may host both a malicious spider and a spam spewer. Of course, that was pre-BotNet, but I never removed it.
robots.txt only works if the spider honors it. You can create an HttpModule to filter out spiders that you don't want crawling your site.
You should do what good firewalls do when they detect malicious use - let them keep going but don't give them anything else. If you start throwing 403 or 404 they'll know something is wrong. If you return random data they'll go about their business.
For detecting malicious use, though, try adding a trap link on the search results page (or whatever page they are using as your site map) and hiding it with CSS; see the sketch below. You do need to check whether they are claiming to be a valid bot and let those through, though. You can store their IP for future use and a quick ARIN WHOIS lookup.
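A minimal sketch of such a trap link (the /trap/ path is a placeholder), together with the robots.txt rule that keeps well-behaved crawlers away from it:
<!-- hidden from humans; only clients that ignore CSS will follow it -->
<a href="/trap/do-not-follow" style="display:none" rel="nofollow">sitemap</a>
# robots.txt
User-agent: *
Disallow: /trap/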
1. Install iptables and tcpdump (on Linux).
2. Detect and authorize good traffic, for example Googlebot.
In Perl:
$auth = "no";
$host = `host $ip`;    # reverse (PTR) lookup of the client IP
if ($host =~ /\.googlebot\.com\.$/) { $auth = "yes"; }
if ($host =~ /\.google\.com\.$/)    { $auth = "yes"; }
if ($host =~ /\.yandex\.com\.$/)    { $auth = "yes"; }
if ($host =~ /\.aspiegel\.com\.$/)  { $auth = "yes"; }
if ($host =~ /\.msn\.com\.$/)       { $auth = "yes"; }
Note: for Googlebot, host 66.249.66.55 returns "55.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-55.googlebot.com."
3. Create a scheduled job or service that captures traffic and counts packets per host, storing the counts in a database; alternatively, add an SQL query to your site that records each client IP to count its traffic.
For example, in Perl:
use DBI;

$ip = $ENV{'REMOTE_ADDR'};
if ($ip !~ /^66\.249\./) {    # skip the 66.249.x.x range (Google crawlers)
    my $dbh = DBI->connect('DBI:mysql:database:localhost', 'user', 'password')
        or die "not connected: $DBI::errstr";
    # totale/oggi are the total and per-day hit counters for this IP
    my $sth = $dbh->prepare("UPDATE `ip` SET totale=totale+1, oggi=oggi+1, dataUltimo=NOW() WHERE ip = ?");
    $sth->execute($ip);
    if ($sth->rows < 1) {
        # no row for this IP yet: create one with both counters at 1
        my $ins = $dbh->prepare("INSERT INTO `ip` VALUES (NULL, ?, 'host', '1', '1', 'no', 'no', 'no', NOW(), 'inelenco.com', '1')");
        $ins->execute($ip);
    }
    $dbh->disconnect();
}
Or sniff the traffic from a service, for example in Perl:
$tout  = 10;      # capture window in seconds
$totpk = 3000;    # maximum packets per window
$porta = 80;      # port to watch (example value)
$tr = `timeout $tout tcpdump port $porta -nn -c $totpk`;
@trSplit = split(/\n/, $tr);
undef %conta;     # per-IP packet counters
foreach $trSplit (@trSplit) {
    if ($trSplit =~ /IP (.+?)\.(.+?)\.(.+?)\.(.+?)\.(.+?) > (.+?)\.(.+?)\.(.+?)\.(.+?)\.(.+?): Flags/) {
        $ipA = "$1.$2.$3.$4";    # source address (capture 5 is the source port)
        $ipB = "$6.$7.$8.$9";    # destination address (capture 10 is the destination port)
        if ($ipA eq "<SERVER_IP>") { $ipA = "127.0.0.1"; }
        if ($ipB eq "<SERVER_IP>") { $ipB = "127.0.0.1"; }
        $conta{$ipA}++;
        $conta{$ipB}++;
    }
}
4. Block a host if its traffic exceeds $max_traffic.
For example, in Perl:
if ($conta{$ip} > $max_traffic) { block($ip); }
sub block {
    my $ipX = shift;
    if ($ipX =~ /:/) {    # IPv6 addresses contain colons
        $tr = `ip6tables -A INPUT -s $ipX -j DROP`;
        $tr = `ip6tables -A OUTPUT -s $ipX -j DROP`;
        print "IPv6 $ipX blocked\n";
        print $tr . "\n";
    }
    else {
        $tr = `iptables -A INPUT -s $ipX -j DROP`;
        $tr = `iptables -A OUTPUT -s $ipX -j DROP`;
        print "iptables -A INPUT -s $ipX -j DROP\n";
        print "IPv4 $ipX blocked\n";
        print $tr . "\n";
    }
}
Another method is to read the server's traffic logs.
For example, on Linux, /var/log/apache2/*error.log
contains all request errors, and
/var/log/apache2/*access.log contains all web traffic.
Create a Bash script that reads the logs and blocks bad spiders.
To block other attacks, read all the relevant logs; for example, to block SSH attacks, read the SSH error log and block the offending IPs with iptables -A INPUT -s $ip -j DROP.
