One account is migrating. How do I stop local delivery for that single domain and use a different mail server? - qmail

I have one account, domainABC.com, that is moving to another provider. The same users have another domain, domainXYZ.com, that is remaining on the server. These accounts email back and forth.
To avoid local deliveries, do I need to do anything more than remove domainABC.com from /var/qmail/control/virtualdomains and /var/qmail/control/rcpthosts ?
Do I also need to add an entry in /var/qmail/control/smtproutes?
Many Thanks!

I'm assuming you're using a fairly vanilla qmail or netqmail system. What you propose is basically enough, but note the following:
Send qmail-send a HUP signal to tell it to re-read virtualdomains once you've changed it.
Be aware that there may be one or more entries for the target of the virtualdomains line (domain:target) in the qmail-users database (see the qmail-users man page); you may want to remove that line, if it's not used by any other line in virtualdomains, to keep things tidy. If there's no entry, then target will be the username configured for that virtual domain.
In particular I'd advise against an entry in smtproutes since the DNS should be sufficient; it creates an extra bit of unnecessary configuration that could cause confusion in future.
Once the domain is absent from virtualdomains and rcpthosts and qmail-send has re-read its configuration, qmail is no longer configured for local delivery of the domain, so it will consider it a remote domain and act accordingly (DNS lookup, remote delivery, etc.).
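For concreteness, here is roughly what those steps look like on a typical netqmail install run under daemontools (an untested sketch; the paths, the daemontools service directory, and GNU sed are assumptions, and the domain is the one from the question):

# Back up the control files first.
sed -i '/^domainabc\.com:/d' /var/qmail/control/virtualdomains
sed -i '/^domainabc\.com$/d' /var/qmail/control/rcpthosts

# Tell qmail-send to re-read virtualdomains:
svc -h /service/qmail-send
# or, without daemontools:
# kill -HUP "$(pgrep -x qmail-send)"

Under the usual tcpserver setup a fresh qmail-smtpd is spawned per connection, so the rcpthosts change takes effect on its own; only qmail-send needs the HUP.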

Related

Whitelisting Problems?

I have a huge issue that has to do with whitelisting. I have been doing C++ for about 6 months now and I can't seem to figure out how to pinpoint my targets to limit who can open and use my application with a whitelist.
For example, if the user is not on the whitelist, the program would tell them as it loads. I would like to see this done with IDs: if a specific ID matches the whitelist, then that person can use my program.
I have tried approaches such as checking IPs, but that is so vulnerable if the IP changes. Also, multiple programs could be opened under different IDs on that IP, which I don't want.
Sorry if this is very confusing; I have just been STRUGGLING with this whitelist. I have less hair than I did before I started making it.
Thanks if you can help, tried to explain the best I could! :)
The general strategy is pretty simple (a minimal code sketch follows after the list below).
First, specify what criteria a user should meet to be on the whitelist.
Second, specify how data about users on the whitelist will be stored.
Third, when the program starts, gather information about the user that can be compared against the criteria on the whitelist.
Fourth, when comparing data about the user with stored whitelist data, start by assuming the user is NOT on the whitelist and only permit access if a match is found. If there are multiple criteria, you need to decide how to combine them to find a match (e.g. restrict a user to a specific IP, or allow a user only when using an IP in a given range - which would prevent them starting the program from home - etc.).
Fifth, take steps to ensure your program can access the stored whitelist data, but users cannot modify it.
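To make the deny-by-default check concrete, here is a minimal sketch in C# (the asker mentions C++, but the logic ports directly); the ID type, the hashing, and the file name are illustrative assumptions, not a finished design:

// Deny-by-default whitelist check (sketch).
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class Whitelist
{
    // Store hashes rather than raw IDs so a leaked list is less useful.
    static string Hash(string userId)
    {
        using (var sha = SHA256.Create())
            return BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(userId))).Replace("-", "");
    }

    public static bool IsAllowed(string userId, string listPath)
    {
        if (!File.Exists(listPath))
            return false;                                  // deny by default
        var allowedHashes = File.ReadAllLines(listPath)
                                .Select(l => l.Trim())
                                .Where(l => l.Length > 0);
        return allowedHashes.Contains(Hash(userId), StringComparer.OrdinalIgnoreCase);
    }
}

Note that the fifth point still applies: a purely local file can be edited by the user, so in practice the check (or at least the list) belongs on a server you control.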
There are many ways to target specific users, but first I need some extra information: how can you identify a single user? Does your program connect to a server? In that case, does the user provide an id and a password, or is it an anonymous connection?

How to identify who is calling my web services?

I have some web services which are called by some clients, including mobile and web clients. I have no control over the clients' code.
But, I need to identify who is calling my web services, via the IP address or something else.
Is there any way to identify that?
A better approach to tracking this sort of thing is to introduce the notion of an API key. That way you know exactly who is using your service and you can track their usage etc.
On every call to your service the user would have to provide their key as a means of authorisation (not authentication). This sort of approach can generally help avoid misuse of an API; however, it can't eradicate it completely. At least with this approach, if you do find a malicious user, it's as simple as disabling that particular API key.
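As a sketch of what "provide their key on every call" can look like on the service side in ASP.NET (the header name and the in-memory key set are assumptions; real keys would live in a database):

// Reject any request that doesn't carry a known API key (sketch).
using System;
using System.Collections.Generic;
using System.Web;

public class ApiKeyModule : IHttpModule
{
    static readonly HashSet<string> KnownKeys = new HashSet<string>
    {
        "key-for-client-a",   // hypothetical keys, one per consumer
        "key-for-client-b",
    };

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (s, e) =>
        {
            string key = app.Request.Headers["X-Api-Key"];
            if (key == null || !KnownKeys.Contains(key))
            {
                app.Response.StatusCode = 403;   // disable a client by removing its key
                app.CompleteRequest();
            }
        };
    }

    public void Dispose() { }
}

Register the module in web.config and every call is forced to identify itself before it reaches your service code.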
You should check your IIS logs; these will list all the requests made to your server (if you have logging turned on, which it is by default).
So search through the log for the URL of the service, check the entries around the time of the requests you are having issues with, and it will list the IP address.
Your logs can generally be found at: C:\inetpub\logs\LogFiles
If the folder is empty then you are out of luck for now; you will need to turn logging on in IIS, and then after a few hours you will be able to check the logs and start seeing where requests are coming from.
E.g. a sample from a log:
2012-10-29 04:49:44 129.35.250.132 GET /favicon.ico/sign-in returnUrl=%252ffavicon.ico 82 - 27.x.x.x Mozilla/5.0+(Windows+NT+6.1;+rv:16.0)+Gecko/20100101+Firefox/16.0 200 0 0 514
The first item is the date and time, and the second IP (27.x.x.x, redacted as it's from a real log) is the client's address.
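If you'd rather pull those fields out programmatically than by eye, a small sketch (this assumes the default W3C field order shown above, where the client IP is the ninth field; robust code should read the #Fields: header line instead):

// Extract date/time and client IP from the sample W3C log line above.
using System;

class LogLineDemo
{
    static void Main()
    {
        string line = "2012-10-29 04:49:44 129.35.250.132 GET /favicon.ico/sign-in returnUrl=%252ffavicon.ico 82 - 27.x.x.x Mozilla/5.0+(Windows+NT+6.1;+rv:16.0)+Gecko/20100101+Firefox/16.0 200 0 0 514";
        string[] f = line.Split(' ');     // IIS replaces spaces within fields with '+'
        string when = f[0] + " " + f[1];  // date + time
        string clientIp = f[8];           // c-ip in this field order
        Console.WriteLine(when + "  " + clientIp);
    }
}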

Bandwidth Monitoring in asp.net

Hi, we are developing a multi-tenant application in ASP.NET, with a separate database for each tenant, in which one of the requirements is to monitor the bandwidth usage for each tenant.
I have tried to search but have not found much help on the topic. We want to monitor exactly how much bandwidth is being used by each tenant, while each tenant can have its own top-level domain, a subdomain, or a combination of both.
So what are the available options? The ones I can think of are:
IIS log monitoring, i.e. a separate application which will calculate the bandwidth for each tenant.
Log each request and response for a tenant from within the application, and then calculate the total bandwidth usage based on that.
Use some third-party components, if available.
So what do you think would be the best approach? And is there any other way to do this?
OK, here is an idea (that I have not tested; I leave that to you).
In Global.asax,
use one of these handlers (find the one that sees a valid final size):
Application_PostRequestHandlerExecute
Application_ReleaseRequestState
and get the size that you have sent with
Response.Filter.Length
Needless to say, you get the path of the call using
HttpContext.Current.Request.Path
These handlers are called on every single request, so you can capture the size there and do the rest.
I must note that you first need to test this idea to see if it works, and maybe improve it. Keep in mind that if you compress the pages on the server, the length will not be correct, and you may need to do the compression in Global.asax to get the actual length.
Hope this helps.
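One way to firm this idea up (equally untested here, and it sidesteps the compression caveat, since it counts the bytes actually written) is to install a counting Response.Filter instead of reading a length back. A sketch; the tenant lookup by host name is an assumption:

// Count the bytes each response actually sends (sketch).
using System;
using System.IO;
using System.Web;

public class CountingFilter : Stream
{
    private readonly Stream _inner;
    public long BytesWritten { get; private set; }
    public CountingFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        BytesWritten += count;               // tally what actually goes out
        _inner.Write(buffer, offset, count);
    }

    // Required Stream plumbing:
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override void Flush() { _inner.Flush(); }
    public override long Length { get { return BytesWritten; } }
    public override long Position { get { return BytesWritten; } set { throw new NotSupportedException(); } }
    public override int Read(byte[] b, int o, int c) { throw new NotSupportedException(); }
    public override long Seek(long o, SeekOrigin s) { throw new NotSupportedException(); }
    public override void SetLength(long v) { throw new NotSupportedException(); }
}

// In Global.asax:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    var counter = new CountingFilter(Response.Filter);
    Response.Filter = counter;
    HttpContext.Current.Items["byteCounter"] = counter;
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    var counter = (CountingFilter)HttpContext.Current.Items["byteCounter"];
    // Record counter.BytesWritten against the tenant, e.g. keyed by
    // Request.Url.Host. Caveat: with buffering, the final flush can happen
    // after EndRequest, so a real implementation might tally in the
    // filter's Flush/Close instead.
}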
Well, since the IIS logs already contain the request size and response size, it doesn't seem like too much trouble to develop a small tool to parse them and calculate the total per day/week/month/whatever.
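A minimal version of such a tool might look like this (it reads the #Fields: header to find sc-bytes, which you may need to enable in the IIS logging configuration; per-tenant grouping would key on cs-host or the site ID instead of the date):

// Total response bytes per day from a W3C-format IIS log (sketch).
using System;
using System.Collections.Generic;
using System.IO;

class LogTotals
{
    static void Main(string[] args)
    {
        var perDay = new Dictionary<string, long>();
        int scBytes = -1;                                // column index of sc-bytes
        foreach (string line in File.ReadLines(args[0]))
        {
            if (line.StartsWith("#Fields:"))
            {
                string[] names = line.Substring(8).Trim().Split(' ');
                scBytes = Array.IndexOf(names, "sc-bytes");
                continue;
            }
            if (line.StartsWith("#") || scBytes < 0) continue;
            string[] f = line.Split(' ');
            long n;
            if (scBytes < f.Length && long.TryParse(f[scBytes], out n))
            {
                string day = f[0];                       // first field is the date
                if (!perDay.ContainsKey(day)) perDay[day] = 0;
                perDay[day] += n;
            }
        }
        foreach (KeyValuePair<string, long> kv in perDay)
            Console.WriteLine(kv.Key + "  " + kv.Value + " bytes");
    }
}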
Trying to segment traffic based on host is difficult in my experience. Instead, if you give each tenant their own IP(s) for the applications you should be able to find programs that will monitor bandwidth based on IP.
ADDITION: Is the structure of IIS such that you have one website to rule them all for all tenants, and on login the system forks to the proper database? If so, this may create problems with respect to versioning, in that all tenants' sites will have to have exactly the same schema and would all need to be updated simultaneously whenever an application update requires a schema change.
Another structure, which sounds like what you may have, is that each tenant has their own website like so:
tenant1_site/appvirtualdir
tenant2_site/appvirtualdir
...
Where appvirtualdir points to the same physical path for all tenants' sites. When all clients are on the same application version, they are all literally using the same code. If you have this scenario and some sort of authentication, then you will need one IP per tenant anyway because of SSL: SSL will only bind to IP and port, unlike non-SSL, which binds to IP, port and host. If that is the case, then monitoring traffic based on IP will still be simpler and more accurate, as it can be done at the router or via a network monitor.

How can I implement an IRC Server with 'owned' nicknames?

Recently, I've been reading up on the IRC protocol (RFCs 1459, 2810-2813), and I was thinking of implementing my own server.
I'm not necessarily looking into adhering religiously to the IRC protocol (I'm doing this for fun, after all), but one of the things I do like about it is that a network can consist of multiple servers transparently.
There are a number of things I don't like about the protocol or the IRC specification. The first is that nicknames aren't owned. While services like NickServ exist, they're not part of the official protocol. On the other hand, implementing something like NickServ properly kind of defeats the purpose of distribution (i.e. there'd be one place where NickServ is running, and one data store for it).
I was hoping there'd be a way to manage nicknames on a per-server basis. The problem with this is that if you have two servers that have some registered nicknames, and they then link up, you can have collisions.
Is there a way to avoid this, without using one central data store? That is: is it possible to keep the servers loosely connected (such that they each exist as an independent entity, but can also connect to one another) and maintain uniqueness amongst nicknames?
I realize this question is vague, but I can't think of a better way of wording it. I'm looking more for suggestions than I am for actual yes/no answers. So if anyone has any ideas as to how to accomplish nickname uniqueness in a network while still maintaining server independence, I'd be interested in hearing it. Note that adhering strictly to the IRC protocol isn't at all necessary; I've got no problem changing things to suit my purposes. :)
There's a simple solution if you don't care about strictly implementing an IRC server, but rather implementing a distributed message system that's like IRC, but not exactly IRC.
The simple solution is to use nicknames in the form "nick#host", much like email. So instead of merely being "mipadi", my nickname could be "mipadi#free-memorys-server.net". So I register with just your server, but when your server links up with others to form a big ole' chat network, you can easily union all the usernames together. There might be a "mipadi" on otherserver.net, but then our nicknames become "mipadi#free-memorys-server.net" and "mipadi#otherserver.net", and everything is cool.
Of course, this deviates a good deal from IRC. :)
The servers have to be aware of each other; if they are not, you cannot prevent the sharing of nicknames. If they are, you simply need to transfer updates on the back-end. To prevent simultaneous registrations, you need a transaction system that blocks, requests permission from all the other servers, and then responds.
To prevent simultaneous registrations during outages, you have no choice but to timestamp each registration and remove all but the last (or, for truly simultaneous ones, a random) registered copy of the nick.
It's not very pretty, considering these servers weren't merged in the first place.
You could still implement nick ownership without a central instance, if your server instances trust each other (a sketch of the conflict-resolution step follows after this list):
When a user registers a nick, it is registered with the current server he's connected with
When a server receives a registration that it didn't know of, it forwards that information to all other servers that don't know it yet (might need a smart algorithm to avoid spamming the network)
When a server re-connects to another server then it tries to synchronize the list of registered nicks and which server handles which nick
If there is a collision during that sync, then the older registration is used, and the newer one marked as invalid
If you can't trust your servers, then it'll get a lot harder, as a server could easily claim every username and even claim the oldest registration for each one.
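To illustrate the sync step, here is a sketch of an "older registration wins" merge (the record shape is an assumption, and C# 9 record syntax is used for brevity):

// Reconcile two servers' nick registrations on (re)connect (sketch).
using System;
using System.Collections.Generic;

record NickRegistration(string Nick, string HomeServer, DateTime RegisteredUtc);

static class NickSync
{
    public static Dictionary<string, NickRegistration> Merge(
        IEnumerable<NickRegistration> ours, IEnumerable<NickRegistration> theirs)
    {
        var merged = new Dictionary<string, NickRegistration>(StringComparer.OrdinalIgnoreCase);
        foreach (var reg in ours)
            merged[reg.Nick] = reg;
        foreach (var reg in theirs)
        {
            NickRegistration existing;
            if (!merged.TryGetValue(reg.Nick, out existing) ||
                reg.RegisteredUtc < existing.RegisteredUtc)
                merged[reg.Nick] = reg;     // keep the older registration
            // else: the newer duplicate is dropped; a real server would
            // mark it invalid and notify the affected user.
        }
        return merged;
    }
}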
Since you are trying to come up with something new, the idea that springs to mind is simply to include something unique about the server as part of the nick name when communicating outside of that server. So if you want to message a user on a different server, you might have something like user#server.
If you don't need the servers to be completely separate, you might want to consider creating some kind of multiple-master replicated database of accounts, where each server stores a complete copy of the account database and can create new accounts, which will be replicated to the other servers when possible. You'll probably still have to deal with collisions on occasion, though.
While services like NickServ exist, they're not part of the official protocol.
Services are not part of the official protocol because they've nothing to do with the protocol. They're bots with permissions. There's no reason why you couldn't have one running on each server but it does make them harder to maintain.
If you were to go down that path, I would probably suggest the commonly used "multiple master" database replication technique. If one node receives a write (in your case, a new user is created or updated, etc.), it sends the data to all the other nodes. You'll have to be careful, though: if one node is offline when the others get an update, it will need to know to resync on reconnection.
Another technique would be as above, but in reverse: data is only exchanged between nodes when it's needed. E.g. if a user tries to log in on a node that has no data for them, it queries the others and issues a move order to get all of that user's data onto that one node. This is potentially less painful than the replication version, but there could be severe problems during netsplits if somebody signs up for a duplicate nick on a node disconnected from the pack.
One technique to nullify the problems of netsplits would be to make chat nodes and their bots netsplit-aware: when they're split, they probably shouldn't allow any write actions. But this could impact your network if you're splitting a lot.
You've also got to ask how secure this might or might not be. IRC network nodes are distributed for performance, but they're not "secure". Because of this, service bots are usually run centrally, to keep ultimate control over their running. If you distributed the bots and a remote node got hacked, the attacker would potentially have access to the whole user database (depending on the model).

How can I share a session across multiple subdomains in ASP.NET?

I have an application where, in the course of using the application, a user might click from
virginia.usa.com
to
newyork.usa.com
Since I'd rather not create a new session each time a user crosses from one subdomain to another, what's a good way to share session info across multiple subdomains?
You tagged this with ASP.NET and IIS, so I will assume that is your environment. Make sure you have this in your web.config:
<httpCookies domain=".usa.com"/>
If your two subdomains map to the same application, then you are done. However, if they are different applications, you will need to do some additional work, like using SQL Server-based session storage (and hacking the stored procedures to make sure all applications share the same session data) or an HttpModule to intercept the application name, since even with shared cookies and the same machine key, two applications will still use two different stores for their session data.
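For reference, the web.config side of the SQL Server option looks roughly like this (the connection string is a placeholder, and as noted, the stored procedures need tweaking before the applications will actually share session data):

<sessionState mode="SQLServer"
    sqlConnectionString="Data Source=YOUR_SQL_SERVER;Integrated Security=SSPI"
    cookieless="false" timeout="20" />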
Track your own sessions and use a cookie with an appropriate domain setting, i.e. .usa.com.
Alternatively, if you're using PHP, I believe there's a setting to change the default domain setting of the session cookie it uses, that may be useful too.
The settings you're looking for are:
session.use_cookies = 1
session.use_only_cookies = 1
session.cookie_domain = .usa.com
I recently went through this and learned the hard way: localhost is actually considered a TLD. Cookie domains require at least a second-level domain - test.com. If you want cookies to work for a domain and all its subdomains, prefix it with a '.' - .test.com.
When running/debugging locally, setting a domain of localhost will fail, and it will fail even if the domain is set properly, because Visual Studio uses localhost by default.
This default can be changed in the project properties so that the project will actually run at the cookie domain test.com. Essentially, if the address in the browser matches the cookie domain, you can get it to work.
My issue is documented here: Setting ServiceStack Cookie Domain in Web.Config Causes Session Id to Change on Every Request
Hope this helps.
If you're using PHP, one hack would be to make a little include script (or two) to do the following:
1. Serialize your $_SESSION array.
2. Pass that string as a hidden input, turning all the relevant links into buttons in separate forms that use POST.
3. Also include a boolean hidden input to let your script know whether it needs to use the current session or to unserialize $_POST['session'].
4. Deploy this across your site, calling things where appropriate.
I wouldn't do this if there's actually a sanctioned way to transfer a session. I hope you've at least considered using cookies.
Matt's answer is definitely the way to go if you have multiple subdomains pointing at the same IIS app (which is exactly the situation I have right now, using wildcard DNS and then doing subdomain 'sniffing' on the receiving end).
However, I wanted to add something that I experienced, in case anyone finds that this is not working for them. Setting the httpCookies line alone didn't do it for me; I had to add a machineKey entry to my web.config file:
<machineKey decryptionKey="12...D1" validationKey="D7..8B" />
Particularly odd, since I am not in a web farm setup (unless AWS/EC2 is effectively acting as such). As soon as I did this, it worked like a champ.
