I have a WCF service, hosted in IIS, which I need to impersonate the anonymous account.
In my web.config:
<authentication mode="Windows"/>
<identity impersonate="true"/>
Testing the following with VS 2008:
public void ByRuleId(int ruleId)
{
    try
    {
        string user = WindowsIdentity.GetCurrent().Name;
        string name = Thread.CurrentPrincipal.Identity.Name;
        ........
        // get the data as a string
        using (FileStream fs = File.Open(location, FileMode.Open))
        using (StreamReader reader = new StreamReader(fs))
        {
            rawData = reader.ReadToEnd();
        }
    }
    catch .....
}
This works. However, if I add the impersonation attribute
[OperationBehavior(Impersonation=ImpersonationOption.Required)]
public void ByRuleId(int ruleId)
it does not work, and fails with the error message
"Either a required impersonation level was not provided, or the provided impersonation level is invalid."
A little poking around showed that the first way was authenticated with Kerberos and the second way just failed on the authentication type.
I am using the WCF test client tool to pass my credentials; this seems to be working.
Check the TokenImpersonationLevel of the identity on the current thread; you'll need it to be at least Impersonation to perform operations on the machine that the service is running on.
Typically, if you are using a proxy client, you'll need to set the TokenImpersonationLevel on the client:
http://www.devx.com/codemag/Article/33342/1763/page/4
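For example, with a generated proxy you would typically do something like this before calling the service (a sketch; RuleServiceClient is a hypothetical generated client name):
// Hypothetical generated proxy type for the service above.
var client = new RuleServiceClient();
// Allow the service to impersonate (but not delegate) the caller's Windows identity.
client.ClientCredentials.Windows.AllowedImpersonationLevel =
    System.Security.Principal.TokenImpersonationLevel.Impersonation;
client.ByRuleId(42);
The same thing can be set in the client config with <windows allowedImpersonationLevel="Impersonation" /> under the endpoint behavior's clientCredentials element.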
The main goal of this was to get anonymous access, even though MattK's answer was a great help.
Here is what I did to do so.
On the implementation of the WCF contract I added:
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class TransferFile : ITransferFile
and in the web.config:
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
After this I was able to impersonate the anonymous account.
I'm having trouble getting the Hangfire (1.5.8) dashboard to work inside of an IIS virtual directory. Everything works beautifully in my dev environment, where my application is simply mapped to the root of localhost. Our beta server, on the other hand, uses virtual directories to separate apps and app pools.
It's an ASP.NET MVC site using Hangfire with an OWIN Startup class. It gets deployed to http://beta-server/app-name/. When I attempt to access either http://beta-server/app-name/hangfire or http://beta-server/hangfire I get a 404 from IIS.
For the purposes of troubleshooting this, my IAuthenticationFilter simply returns true.
Here is my Startup.cs, pretty basic:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=316888
        GlobalConfiguration.Configuration
            .UseSqlServerStorage(new DetectsEnvironment().GetEnvironment());

        app.UseHangfireDashboard("/hangfire", new DashboardOptions
        {
            AuthorizationFilters = new[] { new AuthenticationFilter() }
        });

        app.UseHangfireServer();
    }
}
Does anyone have a working implementation that gets deployed to a Virtual Directory? Are there any OWIN middleware admin/management tools I can use to dig into what URL is getting registered within IIS?
I ended up fixing this simply by adding the HTTP handler below to the <handlers> section in web.config.
<system.webServer>
  <handlers>
    <add name="hangfireDashboard" path="hangfire" type="System.Web.DefaultHttpHandler" verb="*" />
  </handlers>
</system.webServer>
I had a similar issue in ASP.NET Core 2.0 and it required proper authorization setup (I use a middleware to protect the route, so I did not rely on authorization in my example):
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
    Authorization = new[] { new HangfireDashboardAuthorizationFilter() }
});

/// <summary>
/// Authorization required when deployed.
/// </summary>
public class HangfireDashboardAuthorizationFilter : IDashboardAuthorizationFilter
{
    /// <inheritdoc/>
    public bool Authorize(DashboardContext context)
    {
        // var httpContext = context.GetHttpContext();
        // Allow all authenticated users to see the Dashboard (potentially dangerous).
        // Handled through middleware.
        return true; // httpContext.User.Identity.IsAuthenticated;
    }
}
There is no need to change anything in web.config.
For more information, check the Hangfire documentation on this topic.
I had the exact same problem. In my case, this was because of bad configuration - the Startup class was not called. So try to add the following to your config file:
<add key="owin:appStartup" value="YourProject.YourNamespace.Startup, YourProject" />
<add key="owin:AutomaticAppStartup" value="true" />
Hope this helps.
Martin
This is a complicated issue, so bear with me.
Scenario: using an ASHX proxy to relay requests to an ArcGIS server.
I'm trying to use ASP.NET impersonation so that the logged-in ASP.NET user's credentials are used by the proxy when sending requests to the ArcGIS server.
Issue: the proxy request to the ArcGIS server is refused with a 401, even though I know the impersonated account (sean.ryan-B / sean.ryan) does have access.
There are 4 machines:
1. machine hosting proxy page. I am logged in as: sean.ryan-B
2. a test machine. I am logged in as sean.ryan-B
3. my laptop. I am logged in as sean.ryan
4. the arcgis server.
All 4 machines are on the same domain.
web.config:
<authentication mode="Windows"/>
<identity impersonate="true" /> <!-- userName="EUROPE\sean.ryan-B" password="xxx" -->
<authorization>
<deny users="?"/>
</authorization>
Test-1. Opening a test page, in the same web app as the proxy, via the proxy:
http://myHost.com/sean/ProxyAsp.Net/ArcGisProxy.ashx?http://myHost.com/sean/ProxyAsp.Net
[ok on all boxes 1-3]
This looks OK - the impersonation seems to be working, since:
with impersonation OFF: WindowsIdentity.GetCurrent().Name = the App Pool account
with impersonation ON: WindowsIdentity.GetCurrent().Name = EUROPE\sean.ryan or EUROPE\sean.ryan-B
Test-2. opening an image that is hosted on the same IIS (but a different site), via the proxy:
http://myHost.com/sean/ProxyAsp.Net/ArcGisProxy.ashx?http://myHost.com:10400/sites/CaSPER/SiteAssets/CaSPER.jpg
[ok on boxes 1-3]
Test-3. opening the ArcGIS map URL, via the proxy:
http://myHost.com/sean/ProxyAsp.Net/ArcGisProxy.ashx?http://mapserver1.com/ArcGIS/rest/services/Global/2D_BaseMap_SurfaceGeology/MapServer?f=json&callback=dojo.io.script.jsonp_dojoIoScript1._jsonpCallback
[fails on boxes 2,3 but succeeds on the proxy host (box 1)!]
Code for the ASHX code-behind:
public partial class ArcGisProxy : IHttpHandler, IReadOnlySessionState // ASHX implements IReadOnlySessionState in order to be able to read from session
{
    public void ProcessRequest(HttpContext context)
    {
        try
        {
            HttpResponse response = context.Response;

            // Get the URL requested by the client (take the entire querystring at once
            // to handle the case of the URL itself containing querystring parameters)
            string uri = context.Request.Url.Query;
            uri = uri.Substring(1); // the Substring(1) is to skip the ?, in order to get the request URL.

            System.Net.HttpWebRequest req = (System.Net.HttpWebRequest)WebRequest.Create(uri);
            {
                req.Credentials = CredentialCache.DefaultCredentials; // this works on local box, with -B account. this is the account the web browser is running under (rather than the account logged into CaSPER with, as ASHX has separate server session).
                req.ImpersonationLevel = TokenImpersonationLevel.Impersonation;
            }

            // to turn off caching: req.CachePolicy = new RequestCachePolicy(RequestCacheLevel.NoCacheNoStore);
            req.Method = context.Request.HttpMethod;
            req.ServicePoint.Expect100Continue = false;
            req.Referer = context.Request.Headers["referer"];

            // Set body of request for POST requests
            req.Method = "GET";

            // Send the request to the server
            System.Net.WebResponse serverResponse = null;
            try
            {
                serverResponse = req.GetResponse();
            }
            catch (System.Net.WebException webExc)
            {
                //logger.Log(GetMyUrl(), webExc, context.Request);
                response.StatusCode = 500;
                response.StatusDescription = webExc.Status.ToString();
                response.Write(webExc.ToString());
                response.Write(webExc.Response);
                response.Write("Username = " + context.User.Identity.Name + " " + context.User.Identity.IsAuthenticated + " " + context.User.Identity.AuthenticationType);
                response.End();
                return;
            }

            // Set up the response to the client
            ....
            ......
            response.End();
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}
Note: the following changes mean the proxy request to the map server DOES succeed:
a) set the identity in the web.config to explicitly set the username and password to the sean.ryan-B account (sketched below)
-OR-
b) set the App Pool account to be sean.ryan-B and turn OFF impersonation in the web.config file.
However, these changes are not acceptable for production.
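For reference, workaround (a) is just the commented-out attributes from the web.config above made active, along these lines (not suitable for production, since it puts credentials in the config):
<identity impersonate="true" userName="EUROPE\sean.ryan-B" password="xxx" />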
The problem seems to be that ASP.NET impersonation works well enough for the test page and the image hosted on the same IIS (tests 1 and 2), but NOT well enough for the map server.
As far as I know, the ArcGIS map server is using Negotiate and then Kerberos authentication.
With Wireshark, I monitored a successful proxy request and found that after the 401, the proxy sends a GET with authentication using SPNEGO (Kerberos).
Has anyone had a similar issue with an ArcGIS proxy?
My theory is that the impersonation on box 1 'works better' because the browser is running on the same box as the proxy.
Could the ArcGIS Server (or the IIS site it is using) be restricted so that it does not accept impersonation?
Any suggestions welcome ...
P.S. I had a hard time getting this post through - I had to format most of it as code, as SO kept detecting it as source code!
I have an ASP.NET application which uses claims-based authentication against ADFS. I also map it to a WindowsClaimsIdentity by using the Claims to Windows Token Service. That works fine.
But now I need to impersonate the current request/thread so I can access a service which is not claims aware. How should I do that?
Should I acquire a WindowsImpersonationContext in the Application_PostAuthenticateRequest event, save it in HttpContext.Items, and then call its Undo method in Application_EndRequest?
Or are there other preferred ways to do this?
Update: As I didn't get any hints on the preferred way to impersonate, I tried my own suggestion. I created this code in global.asax.cs:
private static readonly string WICKey = typeof(System.Security.Principal.WindowsImpersonationContext).AssemblyQualifiedName;

protected void Application_PostAuthenticateRequest()
{
    var wid = User.Identity as System.Security.Principal.WindowsIdentity;
    if (wid != null)
    {
        HttpContext.Current.Trace.Write("PostAuthenticateRequest PreImpersonate: " + System.Security.Principal.WindowsIdentity.GetCurrent().Name);
        HttpContext.Current.Items[WICKey] = wid.Impersonate();
        HttpContext.Current.Trace.Write("PostAuthenticateRequest PostImpersonate: " + System.Security.Principal.WindowsIdentity.GetCurrent().Name);
    }
}

protected void Application_EndRequest()
{
    var wic = HttpContext.Current.Items[WICKey] as System.Security.Principal.WindowsImpersonationContext;
    if (wic != null)
    {
        HttpContext.Current.Trace.Write("EndRequest PreUndoImpersonate: " + System.Security.Principal.WindowsIdentity.GetCurrent().Name);
        wic.Undo();
        HttpContext.Current.Trace.Write("EndRequest PostUndoImpersonate: " + System.Security.Principal.WindowsIdentity.GetCurrent().Name);
    }
}
When I look at the trace log I see this:
PostAuthenticateRequest PreImpersonate: NT AUTHORITY\NETWORK SERVICE
PostAuthenticateRequest PostImpersonate: MyDomain\CorrectUser
Home: NT AUTHORITY\NETWORK SERVICE
EndRequest PreUndoImpersonate: NT AUTHORITY\NETWORK SERVICE
EndRequest PostUndoImpersonate: NT AUTHORITY\NETWORK SERVICE
So in the second line you can see the thread is impersonated correctly, but in the next lines you see that the impersonation is lost (the third line originates from a controller).
When I use the following code to impersonate locally it works fine:
var wid = User.Identity as System.Security.Principal.WindowsIdentity;
if (wid != null)
{
    using (var ctx = wid.Impersonate())
    {
        // Do something
    }
}
But I want to impersonate for the whole request lifetime.
How should I do that?
You said the backend service is not claims aware. Can you elaborate on this? Do you mean that the compiled code is not claims aware but you have the ability modify the web.config file? If so then you can try to configure the backend service to use the WIF pipeline for authN by wedging in the WSFederationAuthenticationModule, SessionAuthenticationModule and a custom ClaimsAuthorizationManager if you need to also do authZ. You can then use WIF's ActAs or OnBehalfOf features when your ASP.NET application calls the backend service.
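For the classic WIF (Microsoft.IdentityModel) pipeline, "wedging in" those modules is a web.config change on the backend service along these lines (a sketch; the type strings are abbreviated here, and the full strong name of the Microsoft.IdentityModel assembly is usually required when it is loaded from the GAC):
<system.webServer>
  <modules>
    <add name="WSFederationAuthenticationModule"
         type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel"
         preCondition="managedHandler" />
    <add name="SessionAuthenticationModule"
         type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel"
         preCondition="managedHandler" />
  </modules>
</system.webServer>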
Sorry for digging up this old thread, but for your code to work make sure the Managed Pipeline Mode of the Application Pool running your application is set to Classic.
I am working on an ASP.NET web application that is part of TFS and is used by the development team. Recently, as part of the project, we set up ADFS and are now attempting to enforce authentication of the project against an ADFS server.
On my development machine I have gone through the steps of adding an STS reference, which generates the federation metadata and updates the web.config file for the project. Authorization within the web.config uses thumbprint certification, which requires me to add the ADFS certificate to my local machine as well as generate a signed certificate for the dev machine and add it to ADFS.
All is set up and working, but looking at the web.config and the FederationMetadata.xml document, these "appear" to be machine specific. I suspect that if I check the project/files into TFS, the next developer or tester that takes a build will end up with a broken build on their machine.
My question is within TFS what is the process for a scenario like this to check in and still allow my team to check out, build, and test the project with the latest code in their development or test environments?
My workaround at this time is to exclude FederationMetadata.xml and web.config from check-in, then on each development machine manually set up ADFS authentication, and do the same for product test. Once done, each person can keep their local copy of FederationMetadata.xml and web.config out of source control (i.e. have their own local copy); when checking in/out, each developer just has to ensure they preserve their own copy (and don't check them into TFS).
This seems extremely inefficient and all but bypasses the essence of source code management, since developers are required to keep local copies of files on their machines. It also introduces the opportunity to accidentally check in local files or overwrite them.
Does anyone have any references, documentation or information on how to check-in code for (ADFS) machine specific configurations and not hose up the entire development environment?
Thanks in advance,
I agree that the way that the WIF toolset does configuration is not great for working in teams with multiple developers and test environments. The approach that I've taken to get past this is to change WIF to be configured at runtime.
One approach you can take is to put a dummy /FederationMetadata/2007-06/FederationMetadata.xml in place and check that in to TFS. It must have valid urls and be otherwise a valid file.
Additionally, you will need a valid federatedAuthentication section in web.config with dummy (but validly formed) audienceUris, issuer and realm entries.
<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="https://yourwebsite.com/" />
    </audienceUris>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true" issuer="https://yourissuer/v2/wsfederation" realm="https://yourwebsite.com/" requireHttps="true" />
      <cookieHandler requireSsl="false" />
    </federatedAuthentication>
    etc...
Then, change your application's ADFS configuration to be completely runtime driven. You can do this by hooking into various events during the ADFS module startup and ASP.NET pipeline.
Take a look at this forums post for more information.
Essentially, you'll want to have something like this in global.asax.cs. This is some code that I've used on a Windows Azure Web Role to read from ServiceConfiguration.cscfg (which is changeable at deploy/runtime in the Azure model). It could easily be adapted to read from web.config or any other configuration system of your choosing (e.g. database).
protected void Application_Start(object sender, EventArgs e)
{
FederatedAuthentication.ServiceConfigurationCreated += OnServiceConfigurationCreated;
}
protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
// Due to the way the ASP.NET pipeline works, the only way to change
// configurations inside federatedAuthentication (which are configurations on the HTTP modules)
// is to catch another event, which is raised every time a request comes in.
ConfigureWSFederation();
}
/// <summary>
/// Dynamically load WIF configuration so that it can live in ServiceConfiguration.cscfg instead of Web.config
/// </summary>
/// <param name="sender"></param>
/// <param name="eventArgs"></param>
void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs eventArgs)
{
try
{
ServiceConfiguration serviceConfiguration = eventArgs.ServiceConfiguration;
if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthAudienceUri")))
{
serviceConfiguration.AudienceRestriction.AllowedAudienceUris.Add(new Uri(RoleEnvironment.GetConfigurationSettingValue("FedAuthAudienceUri")));
Trace.TraceInformation("ServiceConfiguration: AllowedAudienceUris = {0}", serviceConfiguration.AudienceRestriction.AllowedAudienceUris[0]);
}
serviceConfiguration.CertificateValidationMode = X509CertificateValidationMode.None;
Trace.TraceInformation("ServiceConfiguration: CertificateValidationMode = {0}", serviceConfiguration.CertificateValidationMode);
// Now load the trusted issuers
if (serviceConfiguration.IssuerNameRegistry is ConfigurationBasedIssuerNameRegistry)
{
ConfigurationBasedIssuerNameRegistry issuerNameRegistry = serviceConfiguration.IssuerNameRegistry as ConfigurationBasedIssuerNameRegistry;
// Can have more than one. We don't.
issuerNameRegistry.AddTrustedIssuer(RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerThumbprint"), RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerName"));
Trace.TraceInformation("ServiceConfiguration: TrustedIssuer = {0} : {1}", RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerThumbprint"), RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerName"));
}
else
{
Trace.TraceInformation("Custom IssuerNameReistry type configured, ignoring internal settings");
}
// Configures WIF to use the RsaEncryptionCookieTransform if ServiceCertificateThumbprint is specified.
// This is only necessary on Windows Azure because DPAPI is not available.
ConfigureWifToUseRsaEncryption(serviceConfiguration);
}
catch (Exception exception)
{
Trace.TraceError("Unable to initialize the federated authentication configuration. {0}", exception.Message);
}
}
/// <summary>
/// Configures WIF to use the RsaEncryptionCookieTransform because DPAPI is not available on Windows Azure.
/// </summary>
/// <param name="requestContext"></param>
private void ConfigureWifToUseRsaEncryption(ServiceConfiguration serviceConfiguration)
{
String svcCertThumbprint = RoleEnvironment.GetConfigurationSettingValue("FedAuthServiceCertificateThumbprint");
if (!String.IsNullOrEmpty(svcCertThumbprint))
{
X509Store certificateStore = new X509Store(StoreName.My, StoreLocation.LocalMachine);
try
{
certificateStore.Open(OpenFlags.ReadOnly);
// We have to pass false as last parameter to find self-signed certs.
X509Certificate2Collection certs = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, svcCertThumbprint, false /*validOnly*/);
if (certs.Count != 0)
{
serviceConfiguration.ServiceCertificate = certs[0];
// Use the service certificate to protect the cookies that are sent to the client.
List<CookieTransform> sessionTransforms =
new List<CookieTransform>(new CookieTransform[] { new DeflateCookieTransform(),
new RsaEncryptionCookieTransform(serviceConfiguration.ServiceCertificate)});
SessionSecurityTokenHandler sessionHandler = new SessionSecurityTokenHandler(sessionTransforms.AsReadOnly());
serviceConfiguration.SecurityTokenHandlers.AddOrReplace(sessionHandler);
Trace.TraceInformation("ConfigureWifToUseRsaEncryption: Using RsaEncryptionCookieTransform for cookieTransform");
}
else
{
Trace.TraceError("Could not find service certificate in the My store on LocalMachine");
}
}
finally
{
certificateStore.Close();
}
}
}
private static void ConfigureWSFederation()
{
// Load the federatedAuthentication settings
WSFederationAuthenticationModule federatedModule = FederatedAuthentication.WSFederationAuthenticationModule as WSFederationAuthenticationModule;
if (federatedModule != null)
{
federatedModule.PassiveRedirectEnabled = true;
if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRequireHttps")))
{
federatedModule.RequireHttps = bool.Parse(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRequireHttps"));
}
if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationIssuer")))
{
federatedModule.Issuer = RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationIssuer");
}
if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRealm")))
{
federatedModule.Realm = RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRealm");
}
CookieHandler cookieHandler = FederatedAuthentication.SessionAuthenticationModule.CookieHandler;
cookieHandler.RequireSsl = false;
}
else
{
Trace.TraceError("Unable to configure the federated module. The modules weren't loaded.");
}
}
}
This will then allow you to configure the following settings at runtime:
<Setting name="FedAuthAudienceUri" value="-- update with audience url. e.g. https://yourwebsite/ --" />
<Setting name="FedAuthWSFederationIssuer" value="-- update with WSFederation endpoint. e.g. https://yourissuer/v2/wsfederation--" />
<Setting name="FedAuthWSFederationRealm" value="-- update with WSFederation realm. e.g. https://yourwebsite/" />
<Setting name="FedAuthTrustedIssuerThumbprint" value="-- update with certificate thumbprint from ACS configuration. e.g. cb27dd190485afe0f62e470e4e3578de51d52bf4--" />
<Setting name="FedAuthTrustedIssuerName" value="-- update with issuer name. e.g. https://yourissuer/--" />
<Setting name="FedAuthServiceCertificateThumbprint" value="-- update with service certificate thumbprint. e.g. same as HTTPS thumbprint: FE95C43CD4C4F1FC6BC1CA4349C3FF60433648DB --" />
<Setting name="FedAuthWSFederationRequireHttps" value="true" />
UPDATE:
The service cannot be activated because it does not support ASP.NET compatibility. ASP.NET compatibility is enabled for this application. Turn off ASP.NET compatibility mode in the web.config or add the AspNetCompatibilityRequirements attribute to the service type with RequirementsMode setting as 'Allowed' or 'Required'.
When I try to access the WCF service I get this error; the reason is that HttpContext.Current is null. What should I do in this case? Any help?
Object reference not set to an instance of an object.
System.Web.Script.Serialization.JavaScriptSerializer s = new System.Web.Script.Serialization.JavaScriptSerializer();
Person p = new Person() { FirstName = "First name", LastName = "last name" };
string json = s.Serialize(p);
System.Web.HttpContext.Current.Response.Write("jsoncallback" + json); // error
HttpContext is an ASP.Net construct. If you want to be able to access it in your service, then you need to enable ASP.Net Compatibility for your service.
Either through the web.config:
<system.serviceModel>
<serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
Or declaratively:
[AspNetCompatibilityRequirements(RequirementsMode=AspNetCompatibilityRequirementsMode.Required)]
public class MyService : IMyService { ... }
If a simple response write is the thing for you, then consider using a simple HTTP handler (by implementing the IHttpHandler interface). The Response object is not meant to be used in a WCF service...
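For instance, a minimal handler along those lines might look like this (a sketch; Person is the type from the question above):
public class PersonJsonHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
        var person = new Person() { FirstName = "First name", LastName = "last name" };

        // In a handler, writing to the Response directly is the intended pattern.
        context.Response.ContentType = "application/json";
        context.Response.Write(serializer.Serialize(person));
    }

    public bool IsReusable
    {
        get { return false; }
    }
}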
If WCF, however, is the thing for you (maybe the stack of tech it offers is something you need), then consider using the plumbing already there to output JSON:
[ServiceContract]
public interface IService
{
    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json)]
    String DoStuff();
}