The qmail MTA, which can both send and receive mail, has two starting points, namely qmail-smtpd and qmail-inject.
Why do we have two such different interfaces for mail delivery?
I am going through the qmail tutorial referenced from: http://www.nrg4u.com/qmail/the-big-qmail-picture-103-p1.gif
The two different starting points/interfaces that you are asking about serve distinct purposes:
qmail-smtpd: is responsible for accepting mail from the outside world. It listens on port 25 and accepts mail following the SMTP protocol.
whereas
qmail-inject: is responsible for injecting mail generated on the local host, whether destined for the same domain or another domain.
(this is the interface a local MUA or script uses to hand mail to qmail)
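A minimal Python sketch of the two entry points, assuming the default qmail install path /var/qmail/bin; the hostname and addresses are hypothetical:

import smtplib
import subprocess
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"    # hypothetical addresses
msg["To"] = "bob@example.net"
msg["Subject"] = "hello"
msg.set_content("test message")

# Path 1: network submission -- this is what qmail-smtpd accepts on port 25.
with smtplib.SMTP("mail.example.org", 25) as s:
    s.send_message(msg)

# Path 2: local injection -- pipe the raw message to qmail-inject, which
# reads it from stdin and takes the recipients from the headers.
subprocess.run(["/var/qmail/bin/qmail-inject"],
               input=msg.as_bytes(), check=True)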
My team wants to build a chat app, so we are researching all the technologies available in our arsenal. I am looking at XMPP. I was reading O'Reilly's "XMPP: The Definitive Guide" and came across these lines, which I quote:
In XMPP, messages are delivered as fast as possible over the network. Let’s say that Alice sends a message from her new account on the wonderland.lit server to her sister on the realworld.lit server. Her client effectively “uploads” the message to wonderland.lit by pushing a message stanza over a client-to-server XML stream. The wonderland.lit server then stamps a from address on the stanza and checks the to address in order to see how the stanza needs to be handled (without performing any deep packet inspection or XML parsing, since that would eat into the delivery time). Seeing that the message stanza is bound for the realworld.lit server, the wonderland.lit server then immediately routes the message to realworld.lit over a server-to-server XML stream (with no intermediate hops). (page 45)
Like email, but unlike the Web, XMPP systems involve a great deal of inter-domain connections. However, when you send an XMPP message to one of your contacts at a different domain, your client connects to your “home” server, which then connects directly to your contact’s server without intermediate hops (see Figure 2-4). (page 13)
Can anyone please help me understand how there can be no intermediate hops (unlike email)?
Email (SMTP) also has no intermediate hops. I assume you are confusing the OSI application layer, where XMPP, SMTP and so on live, with the network layer (IP). At the application layer, the sending mail server looks up the recipient domain's MX record in DNS and connects to that server directly; the hops you may be thinking of happen at the IP layer, below both protocols.
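To illustrate the direct connection, here is a small sketch of the MX lookup a sending MTA performs, using the third-party dnspython package; realworld.lit is the fictional domain from the quoted book, so substitute a real domain to actually run it:

import dns.resolver  # third-party package: pip install dnspython

# Find the mail exchangers for the destination domain -- the server the
# sending MTA will connect to directly, with no application-layer relays.
answers = dns.resolver.resolve("realworld.lit", "MX")
for rr in sorted(answers, key=lambda r: r.preference):
    print(rr.preference, rr.exchange)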
I'm using ForcebindIP to point an app at a specific network adapter, like this:
forcebindip -i 192.168.0.5 MyCSharpApp.exe
This works fine and the app isn't aware of (and doesn't access) any of the other network adapters on the PC.
Is it possible to restrict ForceBindIP to outbound traffic only leaving the app to receive data from any local network adapter? Or even to specify a network adapter for outbound and another for inbound traffic?
I can't find an extra startup parameter for ForceBindIP that does this.
I'd appreciate any help with this.
If I understand your problem correctly, you want to bind your application to listen for packets on all available interfaces but send return packets through only one given interface. I also assume it's a server application and that you have neither its source code nor control over its behaviour.
Disclosure: I do not know how ForceBindIP works internally; I'm basing my understanding of it on this passage from the website:
it will then inject a DLL (BindIP.dll) which loads WS2_32.DLL into memory and intercepts the bind(), connect(), sendto(), WSAConnect() and WSASendTo() functions, redirecting them to code in the DLL which verifies which interface they will be bound to and if not the one specified, (re)binds the socket
Problems to overcome
I don't believe your desired configuration is possible with just an application-level DLL injector. I'll list a few issues that ForceBindIP would have to overcome to make it work (a small socket-level sketch follows the list):
To listen on a socket, an application first has to bind() it to a unique protocol-address-port combination. An application can bind to either a specific address or a wildcard (i.e. listen on all interfaces). Apparently, one can bind to a wildcard and a specific address simultaneously, as outlined in this SO question. These will, however, be two different sockets from the application's standpoint, so your application would have to know how to handle this sort of traffic.
When accepting a client connection, accept() creates a new socket whose parameters are managed by Windows; I don't believe there's an API to intercept binding here, because by this time the connection is considered established.
Now imagine we somehow got a magic socket that can receive packets on one interface and send them out of another. The client (and all routing equipment on the way) would have to be aware that packets originating from two different source IP addresses are actually part of the same connection, and be able to assemble the TCP session (or correctly merge the UDP streams).
You can have multiple default gateways with different priorities and rules (which is a whole different topic to explore), but as far as I'm aware that's not going to solve your particular issue: the majority of routing protocols assume links are symmetric and expect packets to stay on the same interface. There are special cases like asymmetric routing and network interface teaming, but they have to be implemented at the per-interface level.
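Here is a minimal Python sketch of the underlying socket behaviour, assuming 192.168.0.5 from the question is a local adapter address; it shows why the outbound and inbound sides are separate, per-socket decisions:

import socket

# Outbound: binding to a specific local address before connect() fixes the
# source IP for this socket -- essentially what ForceBindIP's hooked bind()
# and connect() achieve.
out = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
out.bind(("192.168.0.5", 0))       # pick the source interface, ephemeral port
out.connect(("example.com", 80))   # purely illustrative destination

# Inbound: a wildcard bind accepts connections arriving on any interface,
# but each accepted connection replies from the address the client reached,
# and cannot be moved to another interface afterwards.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))
srv.listen()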
One potential solution
One way to achieve what you're after (I don't know enough about your environment to claim it will work) would be to create a virtual interface, put it into yet another IP network, bind your application to it, then use a firewall (to, say, allow multicast packets into the "virtual" network) and routing from that network to the required default gateway with the metric set to 1. I also suspect that not just any Windows edition will be that flexible, so you might need a Server edition.
I am sorry this didn't turn out to be a ready-to-fly solution; I hope, however, that it gives you more context on the problem you are facing and points you toward other directions to explore.
You can use the Set-NetAdapterAdvancedProperty cmdlet in PowerShell to set the flow control of your specified adapter.
To get the names and properties of all the network adapters:
Get-NetAdapterAdvancedProperty -Name "*"
Suppose you want the network adapter named "Ethernet 2" to be used only to receive data; then type:
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Flow Control" -DisplayValue "Rx Enabled"
You can find more at:
https://learn.microsoft.com/en-us/powershell/module/netadapter/set-netadapteradvancedproperty?view=win10-ps
Microsoft's Winsock documentation has an example showing how to limit a socket to send-only or receive-only mode. It might help.
https://learn.microsoft.com/en-us/windows/win32/winsock/complete-client-code
Outbound and inbound limits are not imposed while binding, but later, once the connection is established.
The line of code pertaining to this in the client code is toward the end:
// shutdown the connection since no more data will be sent
iResult = shutdown(ConnectSocket, SD_SEND);
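The same half-close idea in a short Python sketch (socket.SHUT_WR mirrors Winsock's SD_SEND); the host and request are illustrative:

import socket

# Open a connection, send a request, then shut down the sending direction
# only -- the socket can still receive the reply.
sock = socket.create_connection(("example.com", 80))
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
sock.shutdown(socket.SHUT_WR)   # no more data will be sent (like SD_SEND)
reply = sock.recv(4096)         # receiving still works
sock.close()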
I have been writing my own version of the 802.11 protocol along with a network stack. This is mostly a learning experience to see in more depth how networks work.
My question is, is there a standard for replying to client devices that a certain protocol is unsupported?
I have an Android device connecting to my custom wifi device and immediately sending a TON of requests to the DNS port over my UDP implementation. Since I would like to test out other protocols, I would very much like a way for my wifi device to tell the Android device that DNS is not available and get it to quiet down a little.
Thanks in advance!
I don't see a way to send a reply saying that a service is not available; I can't find anything about this case in the UDP specification.
One part of the DNS specification assumes that there are multiple DNS servers and defines how to handle communication with them. This explains part of the behavior in your network, but does not provide much information on how to handle it.
4.2.1 Messages - Transport - UDP usage
The optimal UDP retransmission policy will vary with performance of the
Internet and the needs of the client, but the following are recommended:
The client should try other servers and server addresses
before repeating a query to a specific address of a server.
The retransmission interval should be based on prior
statistics if possible. Too aggressive retransmission can
easily slow responses for the community at large. Depending
on how well connected the client is to its expected servers,
the minimum retransmission interval should be 2-5 seconds.
7.2 Resolver Implementation - sending the queries
If a resolver gets a server error or other bizarre response
from a name server, it should remove it from SLIST, and may
wish to schedule an immediate transmission to the next
candidate server address.
According to this, you could try to send an error back to the client, but this is rather a hack; and what does such an error look like? Such a solution assumes that you have knowledge about a service that you don't actually support.
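As an illustration of what such an error could look like, here is a hedged Python sketch that answers every DNS query with RCODE 5 ("refused", per RFC 1035), which a well-behaved resolver treats as a server failure and moves on from; binding to port 53 normally requires elevated privileges:

import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))

while True:
    query, client = sock.recvfrom(512)
    if len(query) < 12:              # too short to be a DNS header
        continue
    txid, flags = struct.unpack("!HH", query[:4])
    # Set the QR (response) bit, keep opcode and RD, set RCODE to 5 (refused).
    flags = 0x8000 | (flags & 0x7900) | 5
    reply = struct.pack("!HH", txid, flags) + query[4:]  # echo the question
    sock.sendto(reply, client)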
I believe that the DNS requests can be avoided by using DHCP. DHCP allows you to specify DNS servers, as listed in the linked page. This is the usual way that I know of for a DNS resolver in a LAN to get its initial DNS servers, although I don't find anything about this in the DNS specification. You can give the Android device a DNS server via DHCP so that it does not need to query your device. Querying your device could then be a fallback.
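For reference, the DHCP side of this is option 6 ("Domain Name Server", RFC 2132), which is just a tag, a length, and a list of IPv4 addresses; a tiny sketch of its encoding, with a hypothetical server address:

import socket
import struct

# Encode DHCP option 6 pointing clients at 192.168.0.1 as their DNS server.
servers = ["192.168.0.1"]
payload = b"".join(socket.inet_aton(ip) for ip in servers)
option6 = struct.pack("!BB", 6, len(payload)) + payload
print(option6.hex())  # 0604c0a80001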
In addition to DNS there is mDNS, which uses multicast on the network to send queries. This does not seem to be the protocol you are dealing with, because it uses the special port 5353.
It is not possible to stop DNS in the way you intend. However, just for your tests, you can inspect the UDP messages and find out which names the device is looking for. Then you update the hosts file (google how to do it: http://www.howtogeek.com/140576/how-to-edit-the-hosts-file-on-android-and-block-web-sites/) and map those names to a loopback IP address. That might work for your test.
Another possibility is to change the DNS server to a loopback IP address: http://xslab.com/2013/08/how-to-change-dns-settings-on-android/
Again, this is only to avoid having all the DNS messages go through the wifi connection.
In BizTalk Server we can configure many receive locations for one receive port. In the same way, if I want to send the same message to many destinations, I have to create a send port group containing the collection of send ports that will deliver the message to my desired locations. This is fine; I have understood the concept of a send port group. But why doesn't one send port support more than one send location?
But why one send port is not supporting more than one send location
I guess the main reason for this is "because it is what it is". The difference between send port groups with multiple ports and receive ports with multiple locations is mainly semantics, rather than any technical difference.
However, in the interests of argument, I could argue that a send port, which is by its nature a subscriber in BizTalk, should only do a single thing. In this case that thing is to send to one transport channel (and one backup channel). If you introduced multiple "send locations", the send port would be responsible for more than one thing.
Furthermore, the introduction of send locations would introduce complexity:
Which send locations would be invoked to send the message? Would it be the same each time?
How would a mixture of synchronous and asynchronous transports on a single send port be handled?
etc...
Another ordered delivery problem.
We have an orchestration bound to a send port that has Ordered Delivery set to true. Another send port also picks up these messages through filtering; this port also has Ordered Delivery enabled.
Now, for some reason, when there are multiple ports subscribing to the message and one of them is directly bound, only one of the ports is being used. I mean that not both ports produce output.
If I unenlist one of the ports, the other one always produces output; this works both ways.
We used to have this with two ports that both used filters instead; that worked, but we had to change one to a directly bound port, and the problem has occurred since then. Also, BizTalk's choice of port is pretty random: on our server it chooses, for example, port A, and when I recreate the same situation on my local machine it chooses port B.
It's kind of a weird problem and we have no idea what could be the cause.
David Hall: I recreated this on my BizTalk 2010 box and never faced the problem you are mentioning! You have to set Allow Multiple Responses to True. It is under the Hosts tab in the BizTalk Settings Dashboard.
So, I've got 4 send ports. Each has Ordered Delivery turned ON. All send ports are on the same subscription, i.e. BTS.ReceivePortName. I have an MLLP receive location for receiving the message into the BizTalk box.
Test case: does BizTalk maintain order?
I sent 5 different messages in this order: 1,1,2,2,3,3,4,4,5,5. All 4 send ports sent the messages out in the same order: 1,1,2,2,3,3,4,4,5,5.
Result: yes, it does.
Forgot to mention: everything is running under the single default host, BizTalkServerApplication.
Hi, this is a bug in the BizTalk messaging engine worker thread: it does not execute all ordered-delivery send ports at the same time when they run under the same host. At most it runs only two ordered-delivery send ports at one time, so if you have four or five ordered-delivery send ports, only two will execute at any given moment. To make all the ordered-delivery send ports work at the same time, you need to put them under different hosts.
Did you try adding the filter to a port group instead, and keeping all the other properties on the specific ports?