ASP.NET Authentication to LDAP

We have a web site that we have moved to LDAP authentication, but we are getting many "server not available" LDAP errors even though the LDAP server remains in service. Do I need to worry about multiple users, each in their own thread, making concurrent authentication requests to the LDAP server, or generating too much authentication traffic for it? Does an (OSI) application accept multiple connections on the same incoming port at the same time, or does it have to process them sequentially? Does it accept multiple connections from the same client (my web server) at the same time? If either of these is a concern, how do I architect my solution to overcome it? Should I be creating a single authentication object attached to the application object, or is it OK to create one in each individual session/thread?

The FirstClass email system can provide LDAP services on port 389 out the 'front' of the application, and out the back it can retrieve data from a different LDAP server.
What I would suggest is to find out whether there is a back-end LDAP server, and ask for permission to use that directly instead of proxying through FirstClass.
Man, it's been a while since I last saw FirstClass! Good to see they are still around!
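On the concurrency side of the question: an LDAP server will happily accept many simultaneous connections on port 389, including several from the same client machine, so the usual ASP.NET pattern is a short-lived connection per authentication attempt rather than a single shared object hung off the application. A minimal sketch, assuming System.DirectoryServices.Protocols is available (the host name is a placeholder):

```csharp
using System;
using System.DirectoryServices.Protocols;
using System.Net;

public static class LdapAuthenticator
{
    // Placeholder host; replace with your directory server.
    private const string LdapHost = "ldap.example.org:389";

    public static bool Authenticate(string username, string password)
    {
        // A new, short-lived connection per request avoids sharing
        // connection state across ASP.NET worker threads.
        using (var connection = new LdapConnection(LdapHost))
        {
            connection.AuthType = AuthType.Basic;
            connection.Timeout = TimeSpan.FromSeconds(10);
            try
            {
                // Bind performs the authentication; it throws on failure.
                connection.Bind(new NetworkCredential(username, password));
                return true;
            }
            catch (LdapException)
            {
                // Raised for both bad credentials and an unreachable server;
                // inspect ErrorCode to tell them apart and log accordingly.
                return false;
            }
        }
    }
}
```

If the "server not available" errors persist with per-request connections like this, that points at the server (or the FirstClass proxying described above) rather than at concurrency on your end.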

Related

About HTTP POST from a desktop application

My question is: what can stop an HTTP POST from a desktop application?
E.g., I have a desktop application that, before it starts, asks users for some information,
like a username and email, then takes this information and posts it to a PHP web page, which inserts it into a MySQL server. The problem now is that, say,
6 of 16 downloads are registered and the others are not. So what can make the HTTP POST not run correctly?
Note:
The software was tested on every Windows OS and runs OK.
The software runs OK alongside all antivirus programs.
The software adds a port exception through Windows Firewall OK.
So what can make the HTTP POST not run correctly?
Regards
There are many things that could stop communication between your application and your database:
If the client has a firewall that requires authorisation for outbound requests.
If the client has to connect via a proxy server, and your application is not proxy aware.
If your website fails to process your request (perhaps the MySQL server is too busy to allow connections, etc.).
For example, consider an end user behind a WebSense proxy that additionally allows administrators to filter out unwanted traffic. If your application is not proxy aware, it will fail to connect; if your application is proxy aware, and whatever WebSense category you fall into is filtered for that client, it will also fail to connect.
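To make the proxy point concrete, and assuming the desktop application is .NET, it can be made proxy aware by picking up the system proxy settings. A rough sketch (the URL and form field names are placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class RegistrationClient
{
    public static async Task<bool> PostRegistrationAsync(string username, string email)
    {
        // Use the machine's configured proxy (if any), so the request
        // also works for clients that can only reach the web via a proxy.
        IWebProxy proxy = WebRequest.GetSystemWebProxy();
        proxy.Credentials = CredentialCache.DefaultCredentials;
        var handler = new HttpClientHandler { UseProxy = true, Proxy = proxy };

        using (var client = new HttpClient(handler))
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["username"] = username,
                ["email"] = email
            });

            try
            {
                // Placeholder URL for the PHP registration page.
                var response = await client.PostAsync("https://example.org/register.php", form);
                return response.IsSuccessStatusCode;
            }
            catch (HttpRequestException)
            {
                // Network, proxy, or server failure; log the reason so you
                // can see why some registrations never arrive.
                return false;
            }
        }
    }
}
```

Logging the failure reason on the client side is the quickest way to tell a firewall block from a proxy problem from a server-side error.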

Where to host SignalR when long-running service via WCF is backend

I'm sure that was a confusing enough title.
I have a long-running Windows service dealing with things happening in the world. This service is my canonical source of truth for the rest of my system. Now I want to slap a web interface onto this so the clients can see what is actually going on. At first this would simply be an MVC5 application with some Web API stuff. Then I plan to use SignalR 2.0 and Ember.js to make this application more interactive and "realtime".
The client communicates with the Windows service over named pipes using WCF. A client (such as a web app) could request an instance of, for example, IEventService, would be given a WCF proxy client, and could read about events through this interface. Simple enough.
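(For concreteness, acquiring such a proxy over named pipes presumably looks roughly like this; the contract shape and pipe address here are hypothetical:)

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract, standing in for the IEventService mentioned above.
[ServiceContract]
public interface IEventService
{
    [OperationContract]
    string[] GetRecentEvents();
}

class WcfClientExample
{
    static void Main()
    {
        // Named-pipe proxy to the long-running Windows service
        // (the pipe address is a placeholder).
        var factory = new ChannelFactory<IEventService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/MyService/Events"));

        IEventService proxy = factory.CreateChannel();
        foreach (string evt in proxy.GetRecentEvents())
        {
            Console.WriteLine(evt);
        }
        factory.Close();
    }
}
```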
However, a web application basically just exists in the sense that it responds to requests from the user. The way I understand it, this is not the optimal environment for a long-lived WCF client proxy to raise events in, and thus I wonder how to host my SignalR stuff. Keep in mind that a user would log in to the MVC5 site, but through the magic of SignalR, they will keep interacting with the service without necessarily making further requests to the website.
The way I see it, there are two options:
1) Host SignalR stuff as part of the web app. Find a way to keep it "long-running" while it has active clients, so that it can react to events on the WCF client proxy by passing information out to the connected web users.
2) Host SignalR stuff as part of my Windows service. This is already long-running, but I know nada about OWIN and what this would mean for my project. Also the SignalR client will have to connect to a different port than where the web app was served from, I assume.
Any advice on which is the right direction to go in? Keep in mind that in extreme cases, a web user would log in when they get to work in the morning and then have only SignalR traffic going back and forth (i.e. no web requests) for a full work day before logging out. I need them to keep up with realtime events all that time.
Any takers? :)
The benefit of self-hosting as part of your Windows service is that you can integrate the calls to clients directly with your existing code and events. If you host the SignalR server separately, you'd have another layer of communication between your service and the SignalR server.
If you've already decided on using WCF named pipes for that, then it probably won't make a difference whether you self-host or host in IIS (as long as it's on the same machine). The SignalR server itself is always "long-running" in the sense that as long as a client is connected, it will receive updates. It doesn't require manual requests from the user.
In any case, you'll probably need a web server to serve the HTML, scripts and images.
Having clients connected for a day shouldn't be a problem either way, as far as I can see.
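If you go the self-host route, the OWIN part is small. A minimal sketch, assuming the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages (the port and hub name are placeholders):

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

// Hypothetical hub that pushes service events to connected browsers.
public class EventsHub : Hub
{
}

public class SignalRHost
{
    private IDisposable webApp;

    public void Start()
    {
        // Web clients connect here, on a different port from the MVC site.
        this.webApp = WebApp.Start("http://localhost:8089", app => app.MapSignalR());
    }

    public void BroadcastEvent(string message)
    {
        // Call this from the service's existing event handlers; every
        // connected client receives the "newEvent" callback.
        GlobalHost.ConnectionManager
                  .GetHubContext<EventsHub>()
                  .Clients.All.newEvent(message);
    }

    public void Stop()
    {
        if (this.webApp != null)
        {
            this.webApp.Dispose();
        }
    }
}
```

The browser side would then point its hub connection at this endpoint explicitly, which is the "different port" caveat from option 2.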

Is NLB a good way to keep a website available while deploying new code?

I want to be able to deploy a new version of my asp.net/mvc website without losing client session state or causing any downtime. The way I'm thinking of accomplishing this is by creating a Windows Network Load Balancing server so that clients can reach it via a single URL such as https://mysite.org/. It would then redirect traffic to one of two other sites (A.mysite.org or B.mysite.org). I'll set the NLB's affinity to Single, and disable site B so that all sessions are directed to site A. When I need to deploy a new version of the website, I'll deploy to site B, enable site B, and disable site A. So everybody that was on site A can stay there (using version 1) until they log off, while all new sessions connect to site B and run version 2. The next time I deploy, I'll do the reverse.
I've never used NLB. Is this appropriate? Is there a simpler, easier way?
How does NLB know when a request from client X already has a session on A or B? I.e., when they log off the website and try to log in again, will the NLB send them to the same site they were on before?
There are quite a few considerations here.
Firstly, rather than juggling the affinity on your NLB, you will probably be better off storing your ASP.NET sessions in StateServer or SQL-based session management, so that web clients (or web service clients) can access your site without 'sticky' affinity. Once you've set up the StateServer or created the SQL session DB, it should be a simple change to your app's web.config.
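For reference, the web.config change for the SQL option is roughly the following (the connection string is a placeholder; StateServer mode uses the same element with mode="StateServer" and a stateConnectionString instead):

```xml
<!-- Move session state out of process so any NLB node
     can serve any request without sticky affinity. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=MySessionDb;Integrated Security=SSPI;"
                cookieless="false"
                timeout="20" />
</system.web>
```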
NLB itself works great for keeping your site up while you upgrade it. You would typically drainstop a server in the cluster before reinstalling your app on it, test it, and then bring it back into the NLB cluster, before repeating the process with the next server, and so on.
AFAIK, NLB Single affinity works at the TCP/IP level and does not interrogate ASP.NET sessions. Basically, any connection from the same client IP to the same server IP:port combination will be directed to the same server. Also AFAIK, both servers will share the NLB IP (in addition to any existing IPs they have).
Since your site uses SSL, also note that without affinity the SSL session keys will need to be renegotiated on each request, which could have performance implications.

What's Enterprise SSO for in BizTalk Server?

Microsoft's Enterprise SSO server is bundled with BizTalk Server - I'm fairly familiar with how to configure it, make sure it's working, etc. My question is, what exactly does it do, and how does it do it?
My best understanding is that it is used to securely store configuration for things like ports and adapters, because configuration items often include things like credentials, passwords, connection strings, etc. In terms of "how it works", my best guess is that the configuration values are stored encrypted in an SSO database, and the "master secret" is simply the encryption key that only privileged credentials (like the one running the BizTalk hosts) have access to, so they can use it to access the encrypted configuration.
Can someone shine some light on this and point out where this is right/wrong?
You're pretty close overall. EntSSO is used by BizTalk internally to store any sensitive data. This includes particularly the adapter-specific part of any send port/receive location configuration.
But that's not all EntSSO does; it can also be used to provide credential mapping services between Windows and non-Windows systems, by storing sets of encrypted credentials for other applications and mapping between them. Basically, this can be used to provide single sign-on services when building BizTalk solutions, so that BizTalk can "act as" a specific user when doing stuff on their behalf.
For example, you could have BizTalk receive a message over an HTTP/SOAP receive location set up with Windows Integrated authentication, and then let BizTalk flow that authentication information over to an FTP send port where the Windows user credential is mapped to a specific username/password combination associated with it, so that BizTalk can authenticate as said user to the FTP server. With this, different Windows users sending messages to BizTalk would result in separate FTP connections created with different credentials on the other end (this is different from the default BizTalk behavior of using a single credential for all operations on a send port).
Obviously EntSSO offers a bunch of other options beyond this, but that's kinda the big deal.
BTW, the BizTalk docs actually contain a fairly extensive section on EntSSO that is pretty useful.
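To make the master-secret idea from the question concrete, here is a deliberately simplified sketch of that kind of scheme; this illustrates the concept only and is not EntSSO's actual implementation:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Conceptual illustration only -- not EntSSO's actual code. The "master
// secret" plays the role of the symmetric key; only privileged accounts
// (e.g. the BizTalk host's service account) would be able to obtain it.
public static class SecretConfigStore
{
    public static byte[] Encrypt(string configValue, byte[] masterSecret)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = masterSecret;   // e.g. 32 bytes for AES-256
            aes.GenerateIV();
            using (var enc = aes.CreateEncryptor())
            {
                byte[] plain = Encoding.UTF8.GetBytes(configValue);
                byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                // Store IV + ciphertext; this is what would sit in the SSO DB.
                byte[] stored = new byte[aes.IV.Length + cipher.Length];
                Buffer.BlockCopy(aes.IV, 0, stored, 0, aes.IV.Length);
                Buffer.BlockCopy(cipher, 0, stored, aes.IV.Length, cipher.Length);
                return stored;
            }
        }
    }

    public static string Decrypt(byte[] stored, byte[] masterSecret)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = masterSecret;
            byte[] iv = new byte[aes.BlockSize / 8];
            Buffer.BlockCopy(stored, 0, iv, 0, iv.Length);
            aes.IV = iv;
            using (var dec = aes.CreateDecryptor())
            {
                byte[] plain = dec.TransformFinalBlock(stored, iv.Length, stored.Length - iv.Length);
                return Encoding.UTF8.GetString(plain);
            }
        }
    }
}
```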

Windows authentication problems using asp.net

I have an asp.net application that should access data from two SQL Servers. One of the SQL Servers is present on the same machine as IIS (let us call it SQLSERVER1) whereas the other SQL server is present on another machine (SQLSERVER2).
The connection strings are trusted for both the SQL servers. Impersonation has been set to true in my web.config file. I am using Windows authentication in both IIS and web.config.
When I try to access data from SQLSERVER2, I get a "login failed for user (null)" error. The user through which I have logged in through Windows exists as a SQL Server account on SQLSERVER2.
What could be the possible reason?
NOTE: This is a newbie question IMHO.
NOTE: The IIS used is 6.0 (Windows 2003). It is not set to IIS 5.0 isolation mode.
EDIT: The user getting impersonated is a domain user.
Addition:
I also want to state that I get this error message when I access it as a client of the server where IIS is running. In other words, let me say I am working on machine A, the IIS and SQLSERVER1 are on machine B, and SQLSERVER2 is on machine C.
I do not get this error message when I am working on machine B. This is stumping me more.
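For context, the setup described boils down to web.config entries like these (server and database names are placeholders):

```xml
<configuration>
  <system.web>
    <!-- Windows authentication with impersonation, as described above. -->
    <authentication mode="Windows" />
    <identity impersonate="true" />
  </system.web>
  <connectionStrings>
    <!-- Trusted connections to both SQL Servers. -->
    <add name="Server1" connectionString="Data Source=SQLSERVER1;Initial Catalog=MyDb;Integrated Security=SSPI;" />
    <add name="Server2" connectionString="Data Source=SQLSERVER2;Initial Catalog=MyDb;Integrated Security=SSPI;" />
  </connectionStrings>
</configuration>
```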
This is absolutely a delegation problem. As one person pointed out, you need to make sure Kerberos authentication is being used. The old-style NTLM isn't going to cut it. Here's more on Kerberos vs. NTLM.
In a nutshell, if you have a web server and a database, and you want the web server to impersonate the user when making database requests (so that you can set up permissions on the database directly on a per-user or per-group basis), you're performing a double hop. Credentials must pass first from the user's computer to the web server, and again to the database. As you can imagine, the database has to trust the web server to "do no evil", or this could be an extremely dangerous security hole. As a result, you have to set up what is called in the Windows Server world "delegation"...
Microsoft has a good article about all this here. Further, you can look over an article like this to get an idea of how to set it all up. We've run into this frequently, and it can be a pain at first, especially since as a developer you're probably not in control of the servers directly (especially production ones) and you'll have to spend a lot of time with the server guys down the hall.
You're probably running into this problem because non-Kerberos based impersonation (NTLM) is only valid on the local machine (the webserver). If you want to be able to use those credentials to access another machine, you're going to need to make sure you're using Kerberos.
Try this: http://support.microsoft.com/kb/810572
Your authentication to the web server is not being passed through to the SQL Server. The web server is authenticating to the SQL Server using the account that your application pool is running under.
You should check that the machine account for SQLSERVER1 has trusted for delegation enabled. Otherwise SQLSERVER2 won't trust the impersonation running on SQLSERVER1. This is in addition to confirming that Kerberos is used to set up the impersonation in the first place. This also assumes that the servers and the users are all members of the same domain.
BTW, are you sure you want to do things this way? You end up creating a lot more connections, because they end up being unique to each user.
Have you tried to access the database on server2 using SQL Server administration tools from server1, and made a successful connection?
If not, this could be because SQL Server installs itself with TCP/IP turned off by default.
You will need to make sure that it is turned on for server2 to allow server1 to connect.
Server1 has no problem connecting to its local instance because it can use the shared memory connection.
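A quick way to test that from code on server1 (the tcp: prefix forces the TCP/IP protocol instead of shared memory, so this reproduces what a remote client actually does; server and database names are placeholders):

```csharp
using System;
using System.Data.SqlClient;

class ConnectionTest
{
    static void Main()
    {
        // "tcp:" forces the TCP/IP protocol, bypassing shared memory.
        const string connStr =
            "Data Source=tcp:SQLSERVER2;Initial Catalog=master;Integrated Security=SSPI;";

        try
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                Console.WriteLine("Connected, server version: " + conn.ServerVersion);
            }
        }
        catch (SqlException ex)
        {
            // Typically a "network-related or instance-specific error"
            // when TCP/IP is disabled on the target instance.
            Console.WriteLine("Failed: " + ex.Message);
        }
    }
}
```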
