I have Postfix set up to deliver all incoming email to 〈any_random_address〉@mydomain.com to myname@mydomain.com. I've recently noticed that a large percentage of spam is going to the same non-existent username, and I'd like to block incoming email to that username, while still sending all other emails to my inbox. What is the best way to accomplish that?
Aside from the fact that a catch-all doesn't really make sense:
In your virtual aliases map (e.g. /etc/postfix/virtual_alias_maps), add the following line:
john.doe@example.com devnull
In /etc/aliases, add the following line:
devnull: /dev/null
This defines an alias named devnull whose mail is delivered to /dev/null, i.e. discarded.
Don't forget to update the alias caches and restart Postfix, for example:
sudo postmap /etc/postfix/virtual_alias_maps
sudo newaliases
sudo service postfix restart
Now you should be fine.
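If you want to verify the mapping before real mail arrives, postmap can query the compiled table directly (using the example address from above):

postmap -q john.doe@example.com hash:/etc/postfix/virtual_alias_maps
devnull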
With this line in Postfix's main.cf:
smtp_sasl_mechanism_filter = xoauth2
I can send email through Gmail but not through DreamHost.
If I delete that line, I can send through DreamHost but not through Gmail.
The Postfix documentation for smtp_sasl_mechanism_filter implies I can have a type:table lookup for the desired mechanisms, but I'm not sure what that table should look like. I've tried a file with lines like
[relayhost]:port mech1, mech2
but it doesn't work. Has anyone created such a lookup table successfully?
One answer, courtesy of the Postfix mailing list and Postfix's author, is to use different SMTP delivery transports in master.cf, e.g.:
Add to master.cf:
google unix - - n - - smtp
  -o smtp_sasl_mechanism_filter=xoauth2
dreamhost unix - - n - - smtp
  -o smtp_sasl_mechanism_filter=login
Add to main.cf:
sender_dependent_default_transport_maps=hash:/etc/postfix/sender_transport
Create sender_transport with:
@domain1 google:[gmail-smtp.l.google.com]:587
@domain2 dreamhost:[smtp.dreamhost.com]:587
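Since the map is declared with hash:, it needs to be compiled with postmap and Postfix reloaded before it takes effect:

sudo postmap /etc/postfix/sender_transport
sudo postfix reload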
I am hosting my WCF services in Service Fabric. One of the WCF services starts an HttpSelfHostServer on a port after startup. I sometimes get this error:
A registration already exists for URI 'http://localhost:10503/'.
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Channels.CommunicationObject.EndOpen(IAsyncResult result)
at System.Web.Http.SelfHost.HttpSelfHostServer.OpenListenerComplete(IAsyncResult result)
In Service Fabric, I think there could be multiple services spun up by the framework. I am wondering if there is any way to programmatically check the port, detect that a service is registered with it, and remove it automatically?
https://learn.microsoft.com/en-us/windows/win32/http/show-urlacl
The netsh http show urlacl command can help us detect the reserved URLs. But as you know, we need to elevate permissions to an administrator to delete or add entries for specific users and accounts (an administrator account can directly remove the reservation and use the reserved URL).
netsh http add urlacl url=https://+:80/MyUri user=DOMAIN\user
netsh http delete urlacl url=https://+:80/MyUri
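For example, to check whether a reservation already exists for the port from the error message above (10503), something like this should list it (the exact URL may differ depending on how the reservation was made):

netsh http show urlacl url=http://+:10503/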
To accomplish this programmatically, we would still need to execute the delete command as an administrator, who can directly remove the reservation and use the reserved URL. Therefore, I don't think it is feasible.
Feel free to let me know if there is anything I can help with.
I have gone through some tools like Nagios and collectd, but they didn't fit our needs, as we need to monitor requests/sec for each virtual host, with all response statuses and response times.
I'm using the ELK stack:
Separate the access logs for each server block for better visibility, or separate the charts via URLs.
Then use the ELK stack:
Feed the logs to Logstash via Filebeat.
Create a grok pattern for your log format (see the sketch below).
Create charts via Kibana and monitor in real time.
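As a starting point for the grok step, nginx's default "combined" access log format can be parsed with the stock COMBINEDAPACHELOG pattern; a minimal Logstash filter sketch (adjust if you use a custom log_format):

filter {
  grok {
    # nginx's default "combined" format matches the Apache combined layout
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}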
For real-time monitoring:
Try Netdata, it's amazing. Please note it's not a replacement for Nagios or Zabbix.
After some quick research, I found this: check_nginx_status.pl. I think defining something like:
define command {
    command_name check-nginx
    command_line $USER1$/check_nginx_status.pl -H $HOSTADDRESS$ -s $ARG1$ -u $ARG2$ $ARG3$ $ARG4$ $ARG5$ $ARG6$
}
is probably just what you're looking for.
The -s flag ($ARG1$) would be the hostname of the virtual host
The -u flag ($ARG2$) would be the specific url (/something/status)
And then the rest of the args would be used if you needed to add any additional flags.
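For instance, a hypothetical service definition using that command (host name, vhost, and status URL are placeholders):

define service {
    use                   generic-service
    host_name             web01
    service_description   nginx status for example.com
    check_command         check-nginx!example.com!/nginx_status
}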
Hope this helps!
Can't get to root on a Juniper SSG5
After I enter my login username and password, I'm stuck at this prompt:
my-fw->
my-fw-> copy
^------unknown keyword copy
my-fw-> show
^------unknown keyword show
my-fw-> configure
^-----------unknown keyword configure
Why can't I get to the root@my-fw-> or root@my-fw-# prompt? What can I do to get to root? I'm using PuTTY to console into the Juniper SSG5.
[Note: I'm trying to back up the config to a TFTP server, for which I believe I need root access.]
You don't need root access. root is a special account, and most commands work without it. Any account that has privileges to perform configuration changes can apply the command to archive the configuration to a given site.
See junos-os-login-classes-overview for user privileges.
The prompt we get is in this format: user@hostname>
If there is no hostname defined, then it is just: user>
Once you make sure that you have logged in with the correct user, i.e. one that has the required permissions, you should be able to execute those commands and apply the archival configuration.
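For illustration, a minimal Junos archival sketch, assuming an SCP-reachable backup host (host, path, and password are placeholders; Junos archival typically targets FTP/SCP rather than TFTP):

set system archival configuration transfer-on-commit
set system archival configuration archive-sites "scp://backup@192.168.0.10:/config-archive" password "secret"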
I have a working solution; let me know if the above doesn't help.
When I ssh to my DataPower node like so: ssh user@192.168.0.1, I receive this response:
ssh user@192.168.0.1
(unknown)
Unauthorized access prohibited.
login:
I then enter the same username and am also prompted for a password. I type in my credentials and it works! Why didn't it just read my username the first time?
This is hampering my ability to automate a few basic tasks with shell scripts such as fetching logs for processing.
I agree with @Ken and @Stefan that the XML Management interface is a more appropriate tool for long-term automation; however, sometimes we need something quick or temporary (or both), and for that a CLI automation is easier and faster to develop.
An easy way to push commands to the CLI from a shell script is to redirect the input and output, as in this quick sample:
#!/bin/ksh
# Target appliance and credentials (sample values)
DPHOST=datapower.device.company.com
DP_USER_ID="myuser"
DP_PASSWORD="mypassword"
TMPFILE=/tmp/tempfile.dp
OUTFILE=/tmp/outfile.dp
TS=`date +%Y%m%d%H%M%S`
# Build the command file: username, password, domain, then the CLI commands.
# The unquoted EOF lets the shell expand $DP_USER_ID and $DP_PASSWORD here.
cat << EOF > $TMPFILE
$DP_USER_ID
$DP_PASSWORD
default
echo show cpu
show cpu
echo show memory
show memory
EOF
# Feed the command file to a non-interactive ssh session (-T: no pseudo-tty)
ssh -T $DPHOST < $TMPFILE > $OUTFILE.$TS
rm $TMPFILE
Note that if you do not have any application domains defined, you may omit the "default" line after the password.
And of course, for security reasons you may want to prompt for the user and password at run time rather than have them saved in a plain text file, but that is up to you. The relevant piece here is that you can redirect a file of commands into a regular ssh session.
If you prefer, something like cat $TMPFILE | ssh -T $DPHOST > $OUTFILE.$TS would also work.
That is because DataPower isn't really an SSH server; it only uses the protocol.
What I do in my scripts is open the connection, wait for the response, and then send the username as the second command and the password as the third:
ssh [datapower ip]
(unknown)
Unauthorized access prohibited.
login:
your-username
password:
your-password
xi52#
DataPower ignores the passed-in username.
Will using the XML Management interface meet your needs? I probably have some scripts lying around.
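For reference, a minimal sketch of such a call against the XML Management (SOMA) interface, assuming the default port 5550 and hypothetical credentials:

# Query CPU usage in the default domain via the SOMA endpoint
curl -k -u myuser:mypassword https://datapower.device.company.com:5550/service/mgmt/current \
  --data-binary '<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <dp:request xmlns:dp="http://www.datapower.com/schemas/management" domain="default">
      <dp:get-status class="CPUUsage"/>
    </dp:request>
  </env:Body>
</env:Envelope>'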
Ken