Postfix Dovecot Local Delivery and Relay - postfix-mta

I have a large aliases file and don't want to move it into a DB table. The ask: can we have some aliases entries delivered locally to Dovecot and others forwarded?
Played with virtual_alias_maps, etc. etc.
Can it be done?

If the MTA expands an address to an alias that points to a local address, then local delivery (LDA) occurs. If the address is expanded to an alias that points to an external domain, then the mail is redirected. Both types of aliases have exactly the same semantics and syntax, which is why they are traditionally stored in the single file /etc/aliases.
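As a minimal sketch (the names are hypothetical; run newaliases after editing), a single /etc/aliases file can mix both kinds of entries:
# expands to a local user, so the message is delivered locally (here via Dovecot, assuming it is your local delivery agent)
support: bob
# expands to an external address, so Postfix relays the message
sales: partner@example.com
virtual_alias_maps behaves the same way for virtual domains: an entry can point at a local user or at a remote address, and Postfix decides between local delivery and relaying based on the domain of the expanded address.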

Related

How to access dcm4chee data from another dcm4chee

Let's consider a scenario:
I have two systems, A and B.
IP address of A - 192.168.0.1; the database IP for PACS is 192.168.0.1
IP address of B - 192.168.0.2; the database IP for PACS is 192.168.0.2
I have sent a DICOM image to A using the dcmsnd command.
How do I access system A's data from system B?
So what do I need to configure on system A or system B to access system A's DICOM data from system B?
I can recommend two options depending on your needs.
Option 1
The first option assumes you do actually want redundant data (ie: two separate storage locations and two separate databases) and not just two dcm4chee instances.
In this case you can set up DICOM forwarding from A to B. This is configured in the Forward Service bean of dcm4chee (via the jmx-console or via the JBoss twiddle.sh script). More complex forwarding (ie: based on modalities) can be configured in the Forward Service2 bean.
The official docs are here:
Forward Service
Forward Service2
If you need more details, I have written a blog post that goes into more depth about using and setting up Forward Service here:
DCM4CHEE: PACS SYNCHRONIZATION VIA DICOM FORWARDING
Option 2
The second option assumes you don't really need data redundancy, but you do need two separate dcm4chee instances.
No problem. You can set up two dcm4chee instances on separate boxes that share the same database (which lives either at 192.168.0.1 or 192.168.0.2 or perhaps somewhere else) and the same storage device.
For this to really work, you will need to configure both dcm4chee instances to not only connect to the same db, but to also store their archives on the same shared network storage device which you mount on each box.
The storage directory is configured via the DefaultStorageDirectory property of the FileSystemMgt group=ONLINE_STORAGE bean in the jmx-console.
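For example (the hostnames and paths here are placeholders, not taken from your setup), both boxes could mount the same NFS export:
# /etc/fstab on both A and B
nas.example.com:/export/pacs  /var/dcm4chee/archive  nfs  defaults  0 0
Then set the DefaultStorageDirectory property mentioned above to that mount point on both instances, so both archives read and write the same files.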
Note: My answer assumes the dcm4chee-2.x series, and not the successor arc-light series (though the steps should be conceptually similar in either case - ie: setup forwarding or shared storage).

zsh tab completion for ssh using IP Address

I SSH into several machines that are just IP addresses; however, I noticed a while back that tab completion stopped working when trying to SSH to them. I use zsh and I can tab-complete a regular domain name with ssh, but none of the IP machines that I use tab-complete any more. Did something break here, or what's the deal?
OS X - 10.9.3
zsh - 5.0.2
Have you set the use-ip style?
zstyle ':completion:*' use-ip true
The documentation says that IP addresses are stripped from the host databases by default; use-ip allows completion of them.
http://zsh.sourceforge.net/Doc/Release/Completion-System.html#index-use_002dip_002c-completion-style
Your ssh might be hashing the entries in ~/.ssh/known_hosts?
The best usability solution for ssh in general, IMO, is to create ssh host aliases and then just use the alias on the command line. E.g. add something like this to your ~/.ssh/config:
Host foo
# HostName also accepts numeric IP addresses
HostName XXX.ZZZ.YYY.BBB
Then you just use scp backup.tar foo:
Check man ssh_config for more info. From the manual:
HashKnownHosts
Indicates that ssh(1) should hash host names and addresses when they are added to ~/.ssh/known_hosts. These hashed names may be used normally by ssh(1) and sshd(8), but they do not reveal identifying information should the file's contents be disclosed. The default is “no”. Note that existing names and addresses in known hosts files will not be converted automatically, but may be manually hashed using ssh-keygen(1). Use of this option may break facilities such as tab-completion that rely on being able to read unhashed host names from ~/.ssh/known_hosts.
OK, ignore the above; I see in a comment that this is not the case. I will leave it there for reference, though.
PS: you can always manually set the hosts to be completed by zsh using something along the lines of:
hosts=(foo.bar.com faa.bar.com fee.bar.com)
zstyle ':completion:*:hosts' hosts $hosts
Or do a much more complicated version of it, such as described here https://www.maze.io/2008/08/03/remote-tabcompletion-using-openssh-and-zsh/index.html

Is using the IP address faster than using the domain name?

Assuming that the IP address the domain maps to is known, are there any advantages to using this known IP address rather than the domain? What makes the trace-routing decision? Because DNS servers translate domain names to IP addresses, I am compelled to say that using an IP address is quicker, albeit unnoticeably so. However, because DNS servers process these requests at high volume and presumably cache the most popular sites, I am also compelled to say that a DNS server might know the fastest route to the server, which would make the domain slightly quicker. I understand that the difference in question may be on the nanosecond or microsecond scale.
Technically, yes. At least the first time. The first time your computer asks the internet "Where is this domain name located?" and some machine out there responds with its IP address.
However, when it gets this response back it keeps a copy (called caching) so it doesn't have to ask again for a while (these things CAN change, but rarely do).
So, if your computer currently has the IP cached, then they are equal. If you don't have it cached, the IP is faster, but only for the first request in a while, and only by a small amount of time.
As for how the fastest route is picked: there are several routing protocols, most of which take into account several different factors including load on a connection, bandwidth, latency, jitter, and distance. Several others are also possible. Long story short, the routers of the internet are constantly telling each other that such-and-such a link is down, or that a new address has just been connected, and they run algorithms to figure out which way is best.
N.B. A side note: an IP address won't always give you access to a certain website. Take, for instance, a site hosted on a hosting service: such sites rarely have their own dedicated IP address; instead, requests for lots of different sites can come in to one IP. In this case the domain name being requested is used to determine which site to return to the requester.
Both of the examples that you gave are correct. Inputting an IP address directly will bypass the need for a DNS lookup, but the advantage you gain by doing this could be pointless if you use an IP address to a popular website which brings you halfway around the world instead of to a server nearby. Ultimately, you wouldn't benefit enough to make it worth your while, especially since your computer will cache the response you receive from the DNS lookup, making the difference 0.
This question was answered pretty well by PsychoData, but I think there are a few things worth noting and restating here:
When using an IP, you bypass DNS, which saves you the DNS resolution time on the first call until the TTL (Time To Live) expires; a TTL of an hour is common. The difference is usually not worth noticing in most applications: if you're only making one call, you won't notice the milliseconds of delay, and if you make multiple calls, all calls after the first won't have the delay (see the short lookup sketch after this list).
When entering a name vs an IP you can be calling several different networking services, including NetBIOS (\\ServerX), DNS FQDN (\\ServerX.domain.com), and DNS short name (\\ServerX, which MAY get automatically expanded to the FQDN \\ServerX.domain.com by your OS or DNS server).
Microsoft has two primary Authentication Mechanisms in play with SMB shares: NTLMv2 (NTLMv1 and CHAP are insecure) and Kerberos. Depending on lots of configurations on your client, the server, and the authentication server (Active Directory if in play) and the way you called the name, you may get one or the other. Kerberos is generally faster than NTLMv2, at least for repeated calls, as it gets and keeps an authentication token and doesn't need to reauthenticate via password hash each time.
NetBIOS uses different ports than DNS which can play into network latency due to ACLs/routers/Firewalls.
NetBIOS can actually give you a different answer than DNS because it's a different resolution system. Generally the first PC to boot on a subnet will act as the NetBIOS server, and a new machine can randomly declare itself to the network as the new NetBIOS master. Also, \\FileShareServer.domain.com wouldn't come back in a NetBIOS lookup, as it's not the machine name (ServerX) but a DNS alias.
There's probably even more that I'm missing here but I think you get the idea that a lot of factors can be in play here.
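To make the DNS-bypass point above concrete, here is a small C sketch (the hostname and literal address are placeholders): getaddrinfo() may go out to a DNS server for a name, while the AI_NUMERICHOST flag forces a purely local parse of an IP literal.
/* Hypothetical example: a name lookup may hit DNS; a numeric literal never does. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

static void lookup(const char *host, int numeric_only)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (numeric_only)
        hints.ai_flags = AI_NUMERICHOST;   /* fail rather than consult DNS */
    int rc = getaddrinfo(host, "80", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "%s: %s\n", host, gai_strerror(rc));
        return;
    }
    printf("%s resolved %s DNS\n", host, numeric_only ? "without" : "possibly via");
    freeaddrinfo(res);
}

int main(void)
{
    lookup("example.com", 0);    /* first call may pay the DNS round trip */
    lookup("192.0.2.10", 1);     /* placeholder literal: parsed locally, no DNS */
    return 0;
}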

How to perform file / directory manipulation with user privileges in mind?

I have a server application that will be running under a system account because at any given time, it will be processing requests on behalf of any user on the system. These requests consist of instructions for manipulating the filesystem.
Here's the catch: the program needs to keep that particular user's privileges in mind when performing the actions. For example, joe should not be able to modify /home/larry if its permissions are 755.
Currently my strategy is this:
Get the owner / group of the file
Compare it to the user ID / group ID of the user trying to perform the action
If either matches (or if neither matches), use the corresponding part of the file's permission bits to allow or deny the action (a rough sketch of this check follows below)
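A rough sketch of that check in C (it deliberately ignores root, POSIX ACLs and supplementary groups, which is one reason the answers below suggest letting the kernel enforce permissions instead):
/* Sketch of the manual check described above: decide from the permission
 * bits whether a given uid/gid may write a path. */
#include <stdbool.h>
#include <sys/types.h>
#include <sys/stat.h>

bool may_write(const char *path, uid_t uid, gid_t gid)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return false;
    if (st.st_uid == uid)
        return st.st_mode & S_IWUSR;   /* owner class */
    if (st.st_gid == gid)
        return st.st_mode & S_IWGRP;   /* group class */
    return st.st_mode & S_IWOTH;       /* "other" class */
}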
Is this wise? Is there an easier way to do this?
At first, I was thinking of having multiple instances of the app running under the user's accounts - but this is not an option because then only one of the instances can listen on a given TCP port.
Take a look at Samba for an example of how this can be done. The Samba daemon runs as root but forks and assumes the credentials of a normal user as soon as possible.
Unix systems have two separate sets of credentials: the real user/group ids and the effective user/group ids. The real set identifies who you actually are, and the effective set defines what you can access. You can change the effective uid/gid as you please if you are root—including to an ordinary user and back again—as your real user/group ids remain root during the transition. So an alternative way to do this in a single process is to use seteuid/gid to apply the permissions of different users back and forth as needed. If your server daemon runs as root or has CAP_SETUID then this will be permitted.
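A minimal sketch of that switching (assuming the daemon's real and saved uid are root; supplementary groups are not handled here):
/* Temporarily act as uid/gid, then return to root. Error handling is minimal. */
#include <sys/types.h>
#include <unistd.h>

int with_user_privileges(uid_t uid, gid_t gid)
{
    /* change the group first, while the process is still privileged */
    if (setegid(gid) != 0 || seteuid(uid) != 0)
        return -1;
    /* ... perform the requested filesystem operation as uid/gid ... */
    /* switching back is allowed because the real and saved uid are still root */
    if (seteuid(0) != 0 || setegid(0) != 0)
        return -1;
    return 0;
}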
However, notice that if you have the ability to switch the effective uid/gid at whim and your application is subverted, then that subversion could for example switch the effective uid/gid back to 0 and you could have a serious security vulnerability. This is why it is prudent to drop all privileges permanently as soon as possible, including your real user uid/gid.
For this reason it is normal and safer to have a single listening socket running as root, then fork off and change both the real and effective user ids by calling setuid. Then it cannot change back. Your forked process would have the socket that was accept()ed as it is a fork. Each process just closes the file descriptors they don't need; the sockets stay alive as they are referenced by the file descriptors in the opposite processes.
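A sketch of that fork-and-drop pattern (client_fd is the accept()ed connection, and uid/gid are assumed to come from your own request handling; error handling is trimmed):
/* Child keeps only the client socket and permanently drops to the user. */
#include <sys/types.h>
#include <unistd.h>
#include <grp.h>

void handle_as_user(int listen_fd, int client_fd, uid_t uid, gid_t gid)
{
    pid_t pid = fork();
    if (pid == 0) {                            /* child */
        close(listen_fd);                      /* only the parent listens */
        if (setgroups(0, NULL) != 0 ||         /* drop supplementary groups */
            setgid(gid) != 0 ||                /* group before user */
            setuid(uid) != 0)                  /* permanent: no way back to root */
            _exit(1);                          /* refuse to run as root */
        /* ... service the request on client_fd with the user's privileges ... */
        close(client_fd);
        _exit(0);
    }
    close(client_fd);                          /* parent keeps listening on listen_fd */
}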
You could also try to enforce the permissions by examining them individually yourself, but I hope it is obvious that this is potentially error-prone, has lots of edge cases, and is more likely to go wrong (eg. it won't work with POSIX ACLs unless you specifically implement that too).
So, you have three options:
Fork and setgid()/setuid() to the user you want. If communication is required, use pipe(2) or socketpair(2) before you fork.
Don't fork and seteuid()/setegid() around as needed (less secure: more likely to compromise your server by accident).
Don't mess with system credentials; do permission enforcement manually (less secure: more likely to get authorisation wrong).
If you need to communicate with the daemon, then although it might be harder to do it down a socket or a pipe, the first option really is the proper, secure way to go about it. See how ssh does privilege separation, for example. You might also consider whether you can change your architecture so that, instead of any communication, the processes just share some memory or disk space.
You mention that you considered having a separate process run for each user, but need a single listening TCP port. You can still do this. Just have a master daemon listen on the TCP port and dispatch requests to each user daemon and communicate as required (eg. via Unix domain sockets). This would actually be almost the same as having a forking master daemon; I think the latter would turn out to be easier to implement.
Further reading: the credentials(7) manpage. Also note that Linux has file system uid/gids; this is almost the same as effective uid/gids except for other stuff like sending signals. If your users don't have shell access and cannot run arbitrary code then you don't need to worry about the difference.
I would have my server fork() and immediately setuid(uid) to give up root privileges. Then any file manipulation would be on behalf of the user you've become. Since you're a child of the server you'd still hold the accept()ed child socket that the request (and I assume response) would go on. This (obviously) requires root privilege on the daemon.
Passing file descriptors between processes seems unnecessarily complicated in this case, as the child already has the "request" descriptor.
Let one server run on the privileged server port, and spawn child processes for users that log into the system. The child processes should drop privileges and impersonate the user that logged in. Now the children cannot do any harm.

What was the motivation for adding the IPV6_V6ONLY flag?

In IPv6 networking, the IPV6_V6ONLY flag is used to ensure that a socket will only use IPv6, and in particular that IPv4-to-IPv6 mapping won't be used for that socket. On many OSes, the IPV6_V6ONLY flag is not set by default, but on some OSes (e.g. Windows 7), it is set by default.
My question is: What was the motivation for introducing this flag? Is there something about IPv4-to-IPv6 mapping that was causing problems, and thus people needed a way to disable it? It would seem to me that if someone didn't want to use IPv4-to-IPv6 mapping, they could simply not specify an IPv4-mapped IPv6 address. What am I missing here?
Not all IPv6-capable platforms support dual-stack sockets, so the question becomes: how do applications needing to maximize IPv6 compatibility either know that dual-stack is supported, or bind separately when it's not? The only universal answer is IPV6_V6ONLY.
An application ignoring IPV6_V6ONLY, or written before dual-stack capable IP stacks existed, may find that binding separately to IPv4 fails in a dual-stack environment, as the IPv6 dual-stack socket binds to IPv4 and prevents the IPv4 socket from binding. The application may also not be expecting IPv4 over IPv6, due to protocol or application-level addressing concerns or IP access controls.
This or similar situations most likely prompted MS et al. to default to 1, even though RFC 3493 declares 0 to be the default; 1 theoretically maximizes backwards compatibility. Specifically, Windows XP/2003 does not support dual-stack sockets.
There is also no shortage of applications which unfortunately need to pass lower-layer information to operate correctly, so this option can be quite useful for planning an IPv4/IPv6 compatibility strategy that best fits the requirements and existing codebases.
The reason most often mentioned is for the case where the server has some form of ACL (Access Control List). For instance, imagine a server with rules like:
Allow 192.0.2.4
Deny all
It runs on IPv4. Now, someone runs it on a machine with IPv6 and, depending on some parameters, IPv4 requests are accepted on the IPv6 socket, mapped as ::ffff:192.0.2.4, and then no longer matched by the first ACL. Suddenly, access would be denied.
Being explicit in your application (using IPV6_V6ONLY) would solve the problem, whatever default the operating system has.
I don't know why it would be the default, but it's the kind of flag that I would always set explicitly, no matter what the default is.
As for why it exists in the first place, I guess that it allows you to keep existing IPv4-only servers and just run new ones on the same port but only for IPv6 connections. Or maybe the new server can simply proxy clients to the old one, making the IPv6 functionality easy and painless to add to old services.
For Linux, when writing a service that listens on both IPv4 and IPv6 sockets on the same service port, e.g. port 2001, you MUST call setsockopt(s, SOL_IPV6, IPV6_V6ONLY, &one, sizeof(one)); on the IPv6 socket. If you do not, the bind() operation for the IPv4 socket fails with "Address already in use".
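For illustration, a sketch of that pattern with two listening sockets on the same port (error handling omitted; IPPROTO_IPV6 is the portable spelling of SOL_IPV6):
/* Bind an IPv6-only listener and an IPv4 listener on the same port. */
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int one = 1;
    /* IPv6 listener, restricted to IPv6 traffic only */
    int s6 = socket(AF_INET6, SOCK_STREAM, 0);
    setsockopt(s6, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));
    struct sockaddr_in6 a6;
    memset(&a6, 0, sizeof(a6));
    a6.sin6_family = AF_INET6;
    a6.sin6_addr = in6addr_any;
    a6.sin6_port = htons(2001);
    bind(s6, (struct sockaddr *)&a6, sizeof(a6));
    listen(s6, 16);
    /* IPv4 listener on the same port; without IPV6_V6ONLY above, this
     * bind() fails with "Address already in use" on a dual-stack host */
    int s4 = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a4;
    memset(&a4, 0, sizeof(a4));
    a4.sin_family = AF_INET;
    a4.sin_addr.s_addr = htonl(INADDR_ANY);
    a4.sin_port = htons(2001);
    bind(s4, (struct sockaddr *)&a4, sizeof(a4));
    listen(s4, 16);
    /* ... accept() on both sockets ... */
    return 0;
}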
There are plausible ways in which the (poorly named) "IPv4-mapped" addresses can be used to circumvent poorly configured systems, or bad stacks, or even in a well configured system might just require onerous amounts of bugproofing. A developer might wish to use this flag to make their application more secure by not utilizing this part of the API.
See: http://ipv6samurais.com/ipv6samurais/openbsd-audit/draft-cmetz-v6ops-v4mapped-api-harmful-01.txt
Imagine a protocol that includes a network address in the conversation, e.g. the data channel for FTP. When using IPv6 you are going to send the IPv6 address; if the recipient happens to be an IPv4-mapped address, it will have no way of connecting to that address.
There's one very common example where the duality of behavior is a problem. The standard getaddrinfo() call with AI_PASSIVE flag offers the possibility to pass a nodename parameter and returns a list of addresses to listen on. A special value in form of a NULL string is accepted for nodename and implies listening on wildcard addresses.
On some systems 0.0.0.0 and :: are returned in this order. When dual-stack sockets are enabled by default and you don't set IPV6_V6ONLY on the socket, the server binds to 0.0.0.0 and then fails to bind to the dual-stack ::, and therefore (1) only works on IPv4 and (2) reports an error.
I would consider the order wrong as IPv6 is expected to be preferred. But even when you first attempt dual-stack :: and then IPv4-only 0.0.0.0, the server still reports an error for the second call.
I personally consider the whole idea of a dual-stack socket a mistake. In my project I would rather always explicitly set IPV6_V6ONLY to avoid that. Some people apparently saw it as a good idea but in that case I would probably explicitly unset IPV6_V6ONLY and translate NULL directly to 0.0.0.0 bypassing the getaddrinfo() mechanism.
