I'm looking for how to add hostname bindings to ADFS like you would add additional hostname bindings for a website in IIS. e.g. adfs.mydomain.com is the domain used for ADFS. In addition I'd like to add server1.adfs.mydomain.com. This has nothing to do with SSL certs. I know this can be done as I did it on the ADFS server I'm retiring.
For those asking why I want to do this: there is a farm of ADFS servers behind a load balancer, all using the adfs.mydomain.com hostname. I'd like a specific binding for each server, e.g. server1.adfs.mydomain.com, so I can probe the service on a specific server from our monitoring system to verify the ADFS service is online.
The old ADFS server runs whatever role service comes with Win2k16. The new ADFS server is the role service on Win2k22. This used to be easier to find by searching Google, but now ADFS-related postings have become littered with references to Azure and O365 implementations. Does anyone know how to add the additional binding? I feel like previously it was a PowerShell or netsh command, but I could be wrong.
Since the ADFS servers in your farm sit behind a load balancer with the hostname 'adfs.mydomain.com', the ADFS servers are also domain-joined machines whose DNS records are hosted on your environment's local DNS server. To identify those ADFS servers by additional hostnames beyond the ones assigned when they joined the domain, you can add these additional hostnames to the local hosts file on the ADFS servers serviced by the load balancer, as follows:
Go to 'C:\Windows\System32\drivers\etc\hosts', open the hosts file with Notepad, and add an entry mapping each additional hostname to the IP address of the respective ADFS server (see the example entries below).
This way the monitoring server will be able to find a specific ADFS server and query the ADFS service to check that it is functioning properly. It will also be able to resolve the servers through the load balancer if it has to pass through it to check service availability.
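A minimal sketch of what the entries and a probe might look like, run from an elevated PowerShell session; the IP addresses and per-server hostnames are placeholders, and the /adfs/probe path is the plain-HTTP (port 80) health-check endpoint AD FS exposes for load balancers:

Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.11  server1.adfs.mydomain.com"
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.12  server2.adfs.mydomain.com"
# Probe one specific node; a healthy AD FS service should answer with HTTP 200.
Invoke-WebRequest -Uri "http://server1.adfs.mydomain.com/adfs/probe" -UseBasicParsing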
I'm planning an API that will be used by a client on their internal office networks in multiple separate locations. Each location will have a separate instance installed.
They want it to be secure and running on HTTPS.
What I can't seem to understand is how an HTTPS certificate can work when there is no externally facing fully qualified name, e.g. MyApiServer.mycompany.com.
Instead they will likely just be running it on a server/computer with just a hostname, i.e. MyApiServer.
The data being transferred is not necessarily sensitive but it places records in a sales system.
If HTTPS is not possible in this scenario, what's an alternative method to secure the communication?
The server name does not have to be fully qualified. For securing the call it is enough that the host name specified in the URL matches the name specified in the certificate.
So your clients would call https://MyApiServer/endpoint on your LAN, which should cause your service to present a server certificate whose subject is MyApiServer.
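As a sketch, assuming a self-signed certificate is acceptable on the internal network (the host name is taken from the question; a certificate from an internal CA works the same way, and clients would then only need to trust the issuer):

New-SelfSignedCertificate -DnsName "MyApiServer" -CertStoreLocation "Cert:\LocalMachine\My"

The resulting certificate can then be assigned to the site's HTTPS binding; as long as clients request https://MyApiServer/... (the exact name in the certificate subject), host name validation succeeds.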
I am very new to networking and I am facing an issue trying to implement OpenSSO behind load balancing. The load balancer uses IP addresses; the OpenAM agent is expected to run on an IIS server hosting an ASP.NET application. OpenAM only works with DNS names, but the load balancer only works with IPs, so they are not able to communicate.
This seems like a common scenario; if anybody has worked on such issues in the past, please provide guidance.
There are some technical details you need to understand, especially https://www.rfc-editor.org/rfc/rfc6265 (AKA the "cookie spec"). This is the reason why OpenAM BY DEFAULT only works with FQDNs; it can work with IPs only once you understand this technical foundation.
Agents by default probe the OpenAM URLs (see https://bugster.forgerock.org/jira/browse/OPENAM-3294).
If an OpenAM site is configured agents also communicate with the primary site URL, configured for that site (and when the OpenAM instance belongs to that site), when validating the SSO token of a "user" and when sending policy decision requests.
If there is no OpenAM site configured, agents communicate with the server URL of the OpenAM instance where the SSO session was created (I call this the 'authoritative OpenAM instance').
OpenAM also needs to be aware of which FQDNs it has to handle. This can be achieved either via an OpenAM site (understand the consequences with respect to agent communication) or via 'fqdn mapping' (the advanced server property com.sun.identity.server.fqdnMap[FQDN]=FQDN).
Now you also need to understand name resolution in the TCP/IP protocol stack.
Clients actually communicate at the IP level first (putting aside the lower layers).
A load balancer typically defines a 'virtual server' which has a VIP assigned (the terms are confusing, as on an HTTP server you can also have something called a 'virtual server', but it may act at a different level of the network stack).
So, after you understand the technical foundation, you could do the following (a sketch of the first step follows the list):
create name resolution for the VIP of the LB to an appropriate FQDN
create OpenAM site leveraging that FQDN
assign OpenAM instances to that site
configure the agents to use the FQDN of the OpenAM site as the login URL (and potentially the naming URL in the agent bootstrap file)
potentially re-configure cookie domains in OpenAM platform service
restart OpenAM
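For the first step, a minimal sketch assuming the zone happens to be hosted on a Windows DNS server (the zone name, record name and VIP address are placeholders):

Add-DnsServerResourceRecordA -ZoneName "example.com" -Name "openam" -IPv4Address "10.0.0.50"

The OpenAM site URL would then be built on that name, e.g. https://openam.example.com:443/openam, and the agents would be pointed at it as described above.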
Assuming a Windows Server 2012 VPS:
It seems that many tutorials include the setting up of DNS Server (setup of forward lookup zones, and A record) as part of the basic steps to deploy and run an ASP.NET web application on IIS.
I'm slightly confused, because within IIS Manager you can set the bindings (IP address, host name, SSL, port) of a web application. Wouldn't that alone suffice to correctly route incoming requests to the correct web application?
What would be the advantage to running DNS Server?
IIS Manager can only manage IIS-related Windows settings, but to make a site work you need more than that.
DNS settings are critical for directing web browsers to your site. Nobody uses IP addresses to access a site, so a typical URL uses a domain name. That requires DNS to translate the domain name to an IP address so that browsers can send HTTP packets to the proper location.
IIS Manager could not manage that for you, as which DNS product to use or how to configure it is usually vendor specific and out of IIS's scope.
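A hedged sketch of how the two pieces fit together (the zone, host name, site name and IP address are placeholders): the DNS record gets the browser to the right machine, and the IIS host-header binding then routes the request to the right site on that machine.

# On a Windows DNS server that hosts the zone:
Add-DnsServerResourceRecordA -ZoneName "example.com" -Name "app" -IPv4Address "203.0.113.10"
# On the web server, bind the site to that host name:
Import-Module WebAdministration
New-WebBinding -Name "MyWebApp" -Protocol http -Port 80 -HostHeader "app.example.com"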
Here's the desired setup:
Service with wsHttpBinding is on IIS 6 on Machine 1 behind the firewall.
Client is front end website on IIS 6 on Machine 2 on a DMZ.
We are currently able to authenticate the client using Windows authentication, but with impersonation
<identity impersonate="true" userName="OurCompany\Me" password="Blahblahblah" />
since the website would otherwise use "ASPNET" as the username, which is not in the domain.
We now want to move away from this method because of security concerns; we don't want to expose this kind of information in the DMZ.
Is there any way to get authenticated properly without using impersonation in the client config?
If we changed over to certificate authentication, would it affect service operations that require impersonation (impersonation is needed for file access on the network, for example)?
thanks.
This has been resolved now, and I think it'd be constructive to share the solutions.
In terms of my original question - whether impersonation can be done without setting it explicitly in the config or in the front-end code: as mentioned above, the App Pool method does work, but only when both the client and the server are on the same domain.
Since the web site client sits in the DMZ and has no access to the local network at all, we are unable to impersonate any network user (this is a flaw in my original question: I said the impersonation works, but it was actually not working).
So the only way to go was using certificates. Since this is internal communication, I generated a test certificate on each of the server / client sides with makecert (roughly as sketched below). Using peer trust certificate authentication, I was able to get the communication working between the client and the server. This ensures that no Windows / network user account information is presented in the DMZ zone.
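A rough sketch of the kind of makecert commands involved; the subject names are placeholders (-r creates a self-signed certificate, -pe makes the private key exportable, -ss/-sr select the certificate store):

makecert -r -pe -n "CN=WcfServiceHost" -ss My -sr LocalMachine -sky exchange
makecert -r -pe -n "CN=WcfClientHost" -ss My -sr LocalMachine -sky exchange

With peer trust, each side's certificate is also copied into the other side's Trusted People store, and the wsHttpBinding is switched to clientCredentialType="Certificate" with certificateValidationMode="PeerTrust".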
I have an ASP.NET application that will host multiple tenants (Software-as-a-Service style). Each tenant will have their own domain name (www.mydomain.com, www.yourdomain.com) and their own SSL certificate.
Is there a way to host the application such that all of the tenants are on the same application instance?
I know you can have multiple IIS web sites pointing to the same shared location, but that won't work - it's not the same instance. That's different instances of the same application.
I also know you can use SSL host header mapping with wildcard certificates, but that won't work because all of the tenants would need to be subdomains of the same primary domain - yourdomain.commondomain.com, mydomain.commondomain.com. For the solution to be valid, everyone needs to have their own domain name, not be subdomains. (Ideally each tenant could opt to use an EV cert, too, and you can't have wildcard EV certs.)
The problem is that classic SSL requires the certificate to be presented before the web browser has indicated which host it wants to use. You can therefore only configure one certificate per IP/port combination.
There is an extension to TLS called Server Name Indication which allows the browser to indicate which logical server it wants to talk to. This feature is supported as of IIS 8.0 (Windows Server 2012).
Wildcards work because the certificate itself says that it is valid for all servers under that domain.
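On IIS 8.0 or later, a hedged sketch of what the SNI bindings could look like (the site name is a placeholder, the host names are the ones from the question, and the thumbprint/app id in the comment are placeholders); each tenant domain gets its own HTTPS binding on the same site and therefore the same application instance:

Import-Module WebAdministration
New-WebBinding -Name "TenantApp" -Protocol https -Port 443 -HostHeader "www.mydomain.com" -SslFlags 1   # SslFlags 1 = require SNI
New-WebBinding -Name "TenantApp" -Protocol https -Port 443 -HostHeader "www.yourdomain.com" -SslFlags 1
# Each tenant's certificate is then associated with its host name, e.g.:
# netsh http add sslcert hostnameport=www.mydomain.com:443 certhash=<thumbprint> appid={<guid>} certstorename=MY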
Are you constrained to IIS only, or could software/hardware proxies or content-switching hardware also be an option?
The thinking is that you could terminate SSL at a proxy or content switch, then transform the request into your own internal URL.
e.g. foo.com/x and bar.com/y get translated into myapp/x and myapp/y respectively under the hood - passing the original hostname in the request headers.