SBT Publish to Visual Studio Team Services (Web) Packages Plugin Repo - sbt

I'm trying to use SBT powered projects with Visual Studio Team Services, specifically the Packages plugin.
The Packages plugin has explicit instructions for getting it to work with Maven, but I haven't been able to work out how to adapt those instructions for SBT, since they rely on a configuration-driven hack of Maven's HTTP interface.
The specific instructions I have are:
Add credentials to your user settings.xml inside the <servers> tag
<server>
<id>projectspace-visualstudio.com-java</id>
<configuration>
<httpHeaders>
<property>
<name>Authorization</name>
<!--Treat this auth token like a password. Do not share it with anyone, including Microsoft support. The generated token expires on or before 12/24/2017-->
<value>Basic dXNlci5uYW1lOjQ5ZmphMm1leUowZVhBZ09pSktWMVFpTENKaGJHY2lPaUpTVXpJMU5pSXNJbmcxZENJNkltOVBkbU42TlUxZk4zQXRTR3BKUzJ4R1dIbzVNM1ZmVmpCYWJ5SjkuZXlKdVlXMWxhV1FpT2lKak5qZGhORFZoWmkwME5UZ3lMVFpsTlRFdFltUXhNeTB6WTJRMk1HVTJPRGhpTmpjaUxDSnpZM0FpT2lKMmMyOHVaSEp2Y0Y5M2NtbDBaU0IyYzI4dWNHRmphMkZuYVc1blgzZHlhWFJsSWl3aVlYVnBJam9pWTJZM1l6ZGxaRGt0TXpVeE55MDBZalU1TFRrMk4yRXRaalZoWW1RNE16UTNaV1UySWl3aWMybGtJam9pWVdZek1XRXpOVEF0TXpBNVl5MDBNalF3TFdKbU1XRXRZelV4TURJek5HWXhPV0ppSWl3aWFYTnpJam9pWVhCd0xuWnpjM0J6TG5acGMzVmhiSE4wZFdScGJ5NWpiMjBpTENKaGRXUWlPaUpoY0hBdWRuTnpjSE11ZG1semRXRnNjMzFaR2x2TG1OdmJYeDJjMjg2WWpFME5tUTBZalF0TVRSaU55MDBOVE5qTFdJNU5qa3RZVEpoTXpsaFpEZGtNVGc0SWl3aWJtSm1Jam94TlRBMk16M016UTVMQ0psZUhBaU9qRTFNVFF4TkRNek5UQjkuQkJLY25Wa1dZbHYwTFJrZkVIQnpEY3loaFJodTFwTmhFNk51WTB5UEFDTDY4MktiRGVTRXNTUWFZSkJOcG82Y3Bnal9lZThBbkhqc1otUG1PYWY0aGtsVE1Dd3hwbDhuTXdSRzVYeGJWMTFFS1lTOFFhMTdvWFFGY1JIMl9JbG84MlJMMS1PWlAxXzExcEZ0TU1ST0tTVW85X0ttTGM3RzF2YWlJcXc5YkFrejEyemRGeUNobVJEWmFDdWFBV1NQaUU1VVRPaV9aMi1oS291UVBWd0E4N29oelpZMjU0X25fN0o3UFdnczUweXVOaXZRc3Q5Y1U5MGJPMWNZWHUyMmtLMEVyeC05ZlptMUlwWGRoQ1hkZm1aTDlxUWFSbnp5dW9QaGVFelJoZWd6bExNTjFSaVk1U0FwOENqR1FnR3NmWEZsNlNMTnNYYnhUOUd0YjVGRUJ3</value>
</property>
</httpHeaders>
</configuration>
</server>
Note: the credentials shown here are deliberately scrambled from what was actually assigned, for obvious reasons. The value of the forced Authorization header is a standard HTTP Basic Base64-encoded username:password combination.
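For example, that header value can be reproduced with nothing more than the JDK's Base64 encoder. This is a minimal sketch; the class name, "user.name" and "personal-access-token" are placeholders, not real values:
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicHeaderDemo {
    public static void main(String[] args) {
        // HTTP Basic auth: the header value is just Base64("username:password").
        // Both halves below are placeholders.
        String credentials = "user.name" + ":" + "personal-access-token";
        String headerValue = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        System.out.println("Authorization: " + headerValue);
    }
}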
They further instruct
Add this to your project pom.xml inside both the <repositories> tag and the <distributionManagement> tag
<repository>
<id>projectspace-visualstudio.com-java</id>
<url>https://projectspace.pkgs.visualstudio.com/_packaging/java/maven/v1</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
I've tried breaking that username and password out, assigning them to a Credentials entry and attempting to publish to "https://projectspace.pkgs.visualstudio.com/_packaging/java/maven/v1" but it inevitably fails.
As near as I can tell, the VSTS package system doesn't give the standard HTTP Auth challenge with a Realm, and without the Realm SBT (or is it Ivy?) never attempts to send the credentials, giving up. Meanwhile Maven just sends the credentials on the first attempt.
Is there a way to make SBT send the credentials regardless, or similarly attach a mandatory header? (Or did I completely misdiagnose the issue?)
Thanks.

I think I stumbled upon the solution while attempting to add more information to my question.
Upon attempting to deliberately fail the upload with CURL, I discovered in the verbose output:
< WWW-Authenticate: Bearer authorization_uri=https://login.windows.net/67dd666e-d00e-4f5f-9f71-76760f050c78
< WWW-Authenticate: Basic realm="https://pkgsprodscussu2.app.pkgs.visualstudio.com/"
< WWW-Authenticate: TFS-Federated
Upon changing my realm to https://pkgsprodscussu2.app.pkgs.visualstudio.com/ SBT was suddenly able to publish.
Hooray. Unfortunately there seems to be no guarantee that realm value is stable, but it works for now at least.
For the reference of others, this is the solution I ended up with:
publishTo in ThisBuild := Some("vsts" at "https://projectspace.pkgs.visualstudio.com/_packaging/java/maven/v1/")
credentials in ThisBuild += {
import java.nio.charset.StandardCharsets
import java.util.Base64
// The token is just Base64("username:password"), so decode it to recover both halves.
val decodedArray: Array[Byte] = Base64.getDecoder.decode(
"""dXNlci5uYW1lOjQ5ZmphMm1leUowZVhBZ09pSktWMVFpTENKaGJHY2lPaUpTVXpJMU5pSXNJbmcxZENJNkltOVBkbU42TlUxZk4zQXRTR3BKUzJ4R1dIbzVNM1ZmVmpCYWJ5SjkuZXlKdVlXMWxhV1FpT2lKak5qZGhORFZoWmkwME5UZ3lMVFpsTlRFdFltUXhNeTB6WTJRMk1HVTJPRGhpTmpjaUxDSnpZM0FpT2lKMmMyOHVaSEp2Y0Y5M2NtbDBaU0IyYzI4dWNHRmphMkZuYVc1blgzZHlhWFJsSWl3aVlYVnBJam9pWTJZM1l6ZGxaRGt0TXpVeE55MDBZalU1TFRrMk4yRXRaalZoWW1RNE16UTNaV1UySWl3aWMybGtJam9pWVdZek1XRXpOVEF0TXpBNVl5MDBNalF3TFdKbU1XRXRZelV4TURJek5HWXhPV0ppSWl3aWFYTnpJam9pWVhCd0xuWnpjM0J6TG5acGMzVmhiSE4wZFdScGJ5NWpiMjBpTENKaGRXUWlPaUpoY0hBdWRuTnpjSE11ZG1semRXRnNjMzFaR2x2TG1OdmJYeDJjMjg2WWpFME5tUTBZalF0TVRSaU55MDBOVE5qTFdJNU5qa3RZVEpoTXpsaFpEZGtNVGc0SWl3aWJtSm1Jam94TlRBMk16M016UTVMQ0psZUhBaU9qRTFNVFF4TkRNek5UQjkuQkJLY25Wa1dZbHYwTFJrZkVIQnpEY3loaFJodTFwTmhFNk51WTB5UEFDTDY4MktiRGVTRXNTUWFZSkJOcG82Y3Bnal9lZThBbkhqc1otUG1PYWY0aGtsVE1Dd3hwbDhuTXdSRzVYeGJWMTFFS1lTOFFhMTdvWFFGY1JIMl9JbG84MlJMMS1PWlAxXzExcEZ0TU1ST0tTVW85X0ttTGM3RzF2YWlJcXc5YkFrejEyemRGeUNobVJEWmFDdWFBV1NQaUU1VVRPaV9aMi1oS291UVBWd0E4N29oelpZMjU0X25fN0o3UFdnczUweXVOaXZRc3Q5Y1U5MGJPMWNZWHUyMmtLMEVyeC05ZlptMUlwWGRoQ1hkZm1aTDlxUWFSbnp5dW9QaGVFelJoZWd6bExNTjFSaVk1U0FwOENqR1FnR3NmWEZsNlNMTnNYYnhUOUd0YjVGRUJ3"""
)
val decodedString = new String(decodedArray, StandardCharsets.UTF_8)
print("decoded: ")
println(decodedString)
val Array(userName, passwd) = decodedString.split(":", 2)
Credentials(
// Must match the Basic realm from the WWW-Authenticate header discovered above.
realm = "https://pkgsprodscussu2.app.pkgs.visualstudio.com/",
host = "projectspace.pkgs.visualstudio.com",
userName = userName,
passwd = passwd
)
}

For me it works this way:
You need to go to Artifacts, choose your feed and open "Connect to feed". There, open the Gradle section and generate a password; the user name is shown in the Gradle settings on that page. Then use the following settings in your build.sbt:
val azureArtifactory = "Azure artifactory" at "https://projectspace.pkgs.visualstudio.com/_packaging/java/maven/v1/"
val azureArtifactoryCreds = Credentials(
"https://projectspace.pkgs.visualstudio.com",
"projectspace.pkgs.visualstudio.com",
USER_NAME,
PASSWORD)
and then, on your project definition:
.settings(publishTo in ThisBuild := Some(azureArtifactory),
credentials += azureArtifactoryCreds)

Related

Basic Auth for SOAP Client Proxy within WebSphere Liberty

We are trying to deploy an EAR on WebSphere Liberty which had previously been running on WebSphere Application Server 7. The application calls an external SOAP service. The WSDL of the service defines a wsp:Policy with <http:BasicAuthentication xmlns:http="http://schemas.microsoft.com/ws/06/2004/policy/http"/>.
After deployment, when we send a request to our application that triggers the SOAP call, we get an error:
None of the policy alternatives can be satisfied.
In addition, we get this Warning:
[WARNING ] No assertion builder for type {http://schemas.microsoft.com/ws/06/2004/policy/http}BasicAuthentication registered.
The server.xml file has this feature added:
<feature>wsSecurity-1.1</feature>
The service is fetched as follows:
public IServiceFacade getBasicHttpBindingIServiceFacade() {
return super.getPort(new QName("http://tempuri.org/", "BasicHttpBinding_IService"), IServiceFacade.class);
}
We had previously been setting the Basic Auth on WAS 7 as follows:
IServiceFacade proxy = service.getBasicHttpBindingIServiceFacade();
Map<String, Object> requestContext = ((BindingProvider) proxy).getRequestContext();
((BindingProvider)proxy).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpoint);
/* Basic authentication */
requestContext.put(BindingProvider.USERNAME_PROPERTY, user);
requestContext.put(BindingProvider.PASSWORD_PROPERTY, password);
The preceding code has been functional on WAS 7 but is failing on Liberty.
UPDATE 1
The issue here seems to be that we are not able to access the CXF ClientProxy from the internal Liberty-provided CXF client dependency. After some digging I found that Liberty does not expose these libraries. The only solution seems to be to exclude the jaxws-2.2 feature and provide all needed dependencies myself, but as a result I lose all the built-in JAX-WS functionality provided by Liberty.
https://developer.ibm.com/answers/questions/236182/how-can-i-access-the-libertys-jaxrs-20-apache-cxf/
UPDATE 2
After providing my own CXF jars and excluding the jaxws-2.2 feature from Liberty, I can now access the HTTPConduit through ClientProxy.getClient(proxy).getConduit(). However, the issue now seems to be that CXF does not support the policy assertion from http://schemas.microsoft.com/ws/06/2004/policy/http.
It throws the following error:
DEBUG org.apache.cxf.ws.policy.PolicyEngineImpl - Alternative {http://schemas.microsoft.com/ws/06/2004/policy/http}BasicAuthentication is not supported
I have added the following deps to my pom:
<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-rs-client</artifactId>
<version>3.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-frontend-jaxws</artifactId>
<version>3.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-transports-http</artifactId>
<version>3.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-ws-security</artifactId>
<version>3.2.0</version>
</dependency>
I also tried adding the following, with no luck:
Endpoint endpoint = client.getEndpoint();
endpoint.getOutInterceptors().add(new WSS4JOutInterceptor());
UPDATE 3
After some help from IBM support, I was instructed to follow this link:
https://www.ibm.com/support/knowledgecenter/SSAW57_liberty/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/twlp_sec_ws_basicauth.html
We added an ibm-ws-bnd.xml file to our META-INF folder (as per section 4 and below). In addition, we used @WebServiceRef to access the web service defined in the tags in that XML file. The file looks as follows:
<?xml version="1.0" encoding="UTF-8"?>
<webservices-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-ws-bnd_1_0.xsd"
version="1.0">
<service-ref name="service/servicename">
<port name="BasicHttpBinding"
namespace="http://ibm.com/ws/jaxws/transport/security/"
username="username"
password="suchwowsecretpassword">
</port>
</service-ref>
</webservices-bnd>
Using @WebServiceRef, I am getting back the service which is instantiated from the ibm-ws-bnd.xml file. However, the Basic Auth WS-Policy is still not satisfied. Upon removing that policy assertion, we can see that the external service fails with a 401 Unauthorized error.
In addition, when we inspect the message in our handler chain, we can see that both the username and password values are null on the conduit properties, which (as far as I know) indicates that ibm-ws-bnd is not setting the actual Basic Auth header on our service.
We basically ran into the same problem a while back [1], but unfortunately were not able to solve this.
My suggestion would be to set up the entire SOAP client in plain Java code and not rely on anything from your application server, because then you are able to set the authentication like in the following snippet:
HTTPConduit http = (HTTPConduit) client.getConduit();
http.getAuthorization().setUserName("user");
http.getAuthorization().setPassword("pass");
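For reference, here is what that approach looks like as a self-contained helper, assuming a CXF 3.x runtime on the classpath and a JAX-WS proxy such as the IServiceFacade above. The class and method names are illustrative only, not part of any CXF or Liberty API:
import javax.xml.ws.BindingProvider;

import org.apache.cxf.configuration.security.AuthorizationPolicy;
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;

public final class BasicAuthConfigurer {

    // Configures HTTP Basic Authentication on a CXF-backed JAX-WS proxy.
    public static void configure(Object proxy, String endpoint, String user, String password) {
        // Point the proxy at the right endpoint, as was done on WAS 7.
        ((BindingProvider) proxy).getRequestContext()
                .put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpoint);

        // Unwrap the CXF client behind the JAX-WS proxy and get its HTTP conduit.
        Client client = ClientProxy.getClient(proxy);
        HTTPConduit http = (HTTPConduit) client.getConduit();

        // Attach a Basic Authorization header to every outgoing request.
        AuthorizationPolicy auth = new AuthorizationPolicy();
        auth.setAuthorizationType("Basic");
        auth.setUserName(user);
        auth.setPassword(password);
        http.setAuthorization(auth);
    }
}
This only takes care of sending the Basic header; whether the Microsoft-specific policy assertion in the WSDL is satisfied is a separate question, as the note below explains.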
Note: We actually did not solve our problem like that; we went for a workaround. Our usage of WebSphere Liberty was limited to the developer environment. On our integration test, acceptance and production servers, we use a 'real' WebSphere Application Server.
Our workaround was to remove the policy line from the WSDL and not use Basic Authentication in our developer tests.
The real WebSphere still applies the HTTP Basic Authentication if it is configured to do so, even if the WSDL does not specify the policy anymore.
I hope you will manage to find an appropriate solution.
Cheers,
Marco
[1]: How to setup HTTP Basic Authentication for SOAP Client within WebSphere Liberty

Web.config - allow authentication mode to be altered for each deployment

Assuming web software is deployed to multiple customers, there may be a requirement to have a different authentication mode set for each customer.
Let's say 1 customer wants to use forms authentication, and the other wants to use Windows authentication - this can be managed by setting the authentication mode accordingly in the Web.config file.
However, when a software update is deployed to them, how can I get a new Web.config file to them without overwriting their authentication mode?
Would an include file do the job (so that the settings are held outside of Web.config), or is there a better way to handle this?
Two options
Set the authentication mode programmatically
You could customize your code so that the authentication mode is set programmatically, perhaps based on a business rule setting in the client's database instance.
For example, this sets windows authentication mode:
String applicationPath = String.Format("{0}/{1}", _server.Sites["Default Web Site"].Name, "AppName");
Configuration config = _server.GetApplicationHostConfiguration();
ConfigurationSection anonymousAuthenticationSection = config.GetSection("system.webServer/security/authentication/anonymousAuthentication", applicationPath);
anonymousAuthenticationSection["enabled"] = false;
ConfigurationSection windowsAuthenticationSection = config.GetSection("system.webServer/security/authentication/windowsAuthentication", applicationPath);
windowsAuthenticationSection["enabled"] = true;
_server.CommitChanges();
Use a separate config file
Or, you could "outsource" the configuration to a separate config file that is distributed per client, , e.g. see this answer.
<authentication configSource="ConfigFiles\authentication.config" />
Do both
You could code your site so that it picks the config source for the authentication settings programmatically, perhaps based on the host header in the incoming http request.
authenticationSection.ConfigSource = String.Format(@"d:\Configurations\{0}.config", HttpContext.Current.Request.Url.Host);
Then you distribute the secondary config file per client.
FirstCustomer.com.config
SomeOtherCustomer.com.config
YetAnother.net.config
See this article.

WNS Notification : Channel URL incompatible with caller app

I'm currently developing a mobile application based on Cordova (version 4.0.0) for Windows Phone 8.1.
I implemented the Java code from the Java-WNS API (from fernandospr's GitHub) to send notifications to my device.
When I push the notification message to WNS, I get this error:
Client in-bound response
403
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-WNS-DEBUG-TRACE: DB5SCH101111133
Date: Fri, 22 Jan 2016 10:44:55 GMT
Content-Length: 0
X-WNS-STATUS: dropped
X-WNS-ERROR-DESCRIPTION: Channel URL incompatible with caller app
X-WNS-MSG-ID: 6D850FC61AE7FDB5
X-WNS-NOTIFICATIONSTATUS: dropped
Here are the different steps I took to configure my app to receive notifications:
I registered my app from the Windows developer dashboard
I have a package SID: ms-app://s-1-15-2-[...]-[...]-[...]-[...]-[...]-[...]-[...]-2403721117
I also have my client secret, which looks like this (just an example): Nk2C+pmadqcHNQR51lN6F7LGaJYUTRPb
This is my channel URI obtained from WNS:
https://db5.notify.windows.com/?token=AwYAAAD8sfbDrL9h7mN%2bmwlkSkQZCIfv4QKeu1hYRipj2zNvXaMi9ZAax%2f6CDfysyHp61STCO1pCFPt%2b9L4Jod72JhIcjDr8b2GxuUOBMTP%2b6%2bqxEfSB9iZfSATdZbdF7cJHSRA%3d
Most importantly, I associated my app with the Windows Store from Visual Studio. The package name, publisher display name and publisher ID were then added to my appxmanifest file.
Here's the appxmanifest file (actually named "package.phone.appxmanifest", from the platforms/windows folder of the Cordova Windows Phone project):
<?xml version='1.0' encoding='utf-8'?>
<Package xmlns="http://schemas.microsoft.com/appx/2010/manifest" xmlns:m2="http://schemas.microsoft.com/appx/2013/manifest" xmlns:m3="http://schemas.microsoft.com/appx/2014/manifest" xmlns:mp="http://schemas.microsoft.com/appx/2014/phone/manifest">
<Identity Name="company-name.70**********2" Publisher="CN=02******-****-****-****-***********9" Version="1.1.0.0" />
<mp:PhoneIdentity PhoneProductId="06******-****-****-****-**********k" PhonePublisherId="s*******-****-****-****-***********5" />
<Properties>
<DisplayName>Demo Windows App Phone</DisplayName>
<PublisherDisplayName>My Company Name</PublisherDisplayName>
<Logo>images\StoreLogo.png</Logo>
</Properties>
<Prerequisites>
<OSMinVersion>6.3.1</OSMinVersion>
<OSMaxVersionTested>6.3.1</OSMaxVersionTested>
</Prerequisites>
<Resources>
<Resource Language="x-generate" />
</Resources>
<Applications>
<Application Id="com.company-name.demo" StartPage="www/index.html">
<m3:VisualElements BackgroundColor="transparent" Description="CordovaApp" DisplayName="Demo Windows App Phone" ForegroundText="light" Square150x150Logo="images\Square150x150Logo.png" Square44x44Logo="images\Square44x44Logo.png">
<m3:DefaultTile Square71x71Logo="images\Square71x71Logo.png" Wide310x150Logo="images\Wide310x150Logo.png">
<m3:ShowNameOnTiles>
<m3:ShowOn Tile="square150x150Logo" />
<m3:ShowOn Tile="wide310x150Logo" />
</m3:ShowNameOnTiles>
</m3:DefaultTile>
<m3:SplashScreen Image="images\SplashScreenPhone.png" />
</m3:VisualElements>
<ApplicationContentUriRules>
<Rule Match="https://dev.company-name.fr/demo-windows-app/*" Type="include" />
</ApplicationContentUriRules>
</Application>
</Applications>
<Capabilities>
<Capability Name="internetClientServer" />
<DeviceCapability Name="webcam" />
<DeviceCapability Name="microphone" />
</Capabilities>
</Package>
From the server side, I authenticate to WNS with two parameters (a sketch of this token exchange follows the list below):
Package SID
Client secret
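For context, this is roughly what that authentication step does under the hood: the documented WNS OAuth exchange of package SID and client secret for an access token. This is only a minimal sketch, not the actual code of the Java-WNS library, and the class and method names are illustrative:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public final class WnsTokenRequest {

    // Exchanges the package SID and client secret for a WNS access token.
    public static String requestToken(String packageSid, String clientSecret) throws Exception {
        String body = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(packageSid, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
                + "&scope=notify.windows.com";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://login.live.com/accesstoken.srf").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            // Raw JSON response, e.g. {"access_token":"...","token_type":"bearer",...}
            return in.useDelimiter("\\A").next();
        }
    }
}
The access_token from that response is what gets sent as "Authorization: Bearer ..." with each push to the channel URI; the 403 above is WNS deciding that the app identified by this token is not the app that created the channel.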
I checked different topics about this error, and the most common useful answer is to associate the app with the Windows Store. But I had already done that, and it doesn't work for me.
If I understand correctly, the WNS platform just needs to know the app identity (package SID) to find my app and send notifications to it. I shouldn't need to deploy my app through the Windows Store.
Do you have an idea how to fix this? Did I forget something or make a mistake?
EDIT:
I think I've found the problem! I'm working on it, and when I've resolved it I'll come back here to post my solution.
I found the solution to my problem. I feel stupid, because I didn't see the relationship between associating my app with the Windows Store and the appx archive that has to be generated. I deployed the wrong appx, and that's why I never received the Windows notifications. In fact, the wrong appx was never associated with the Windows Store...
So, here are the steps (from Visual Studio 2015 RC) to deploy the appx archive linked to the Windows Store:
Go to the Project tab and select Windows Store
Click on Create App Packages
Follow the different steps to generate an appx archive with the associated Windows Store information (one of the steps will be to associate your app with the Windows Store)
The generated appx archive is stored in the following folder (in my case): C:\Users\pcharpin\Documents\Visual Studio 2015\Projects\demo-app-windows\demo-app-windows\AppPackages\CordovaApp.Phone_1.1.0.0_arm_Test
To deploy this archive on your remote device, use the Windows Phone Application Deployment 8.1 tool. Select the target as remote device and also select the app package, which is CordovaApp.Phone_1.1.0.0_arm_Test. To finish, click Deploy and your app will be deployed on your remote device.
You're now ready to send Windows notifications and receive them on your Windows Phone device.
So, don't forget to create an app package and associate it with the Windows Store so you can receive Windows notifications.
You can find the guidelines for this in the 'Create a Windows 8.1 app package' documentation (except for the deployment step).
If the notification is still not working after associating the application, double check the below configuration as well.
Go to the Windows Dev Center for your account -> Dashboard
Select your app -> Services -> Push notifications
Click on the "Live Services site" link
Below "Package SID" there is a config entry for "Application Identity", like:
<Identity Name="09FSERVSD.YourAppName" Publisher="CN=xxxxxx"/>
Open your application's appxmanifest in a text editor and make sure the same name is used in the Identity tag.
Associating the app through the VS tool should ideally update this entry with both "Name" and "Publisher". But in my case it updated only "Publisher", and I had to set the name manually to make it work. This way WNS knows that the target application is the same as the one that was associated, and the notification should go through.
Hope this helps those who are struggling with the error "Channel URL incompatible with caller app" while testing WNS.
Thanks to all of you guys,
In my case it was the Publisher field (in package.windows10.appxmanifest, package.windows.appxmanifest and package.phone.appxmanifest) that defaulted to CN=$username$.
<Identity Name="com.CordovaApp" Publisher="CN=$username$" Version="2.2.11.0" />
Once it was set like this, everything went smoothly:
<Identity Name="COM.CordovaApp" Publisher="CN=11111111-2222-3333-444444444444" Version="2.2.11.0" />
If you continue receiving the error message "X-WNS-ERROR-DESCRIPTION: Channel URL incompatible with caller app" after you've set everything up correctly, try deleting the *_TemporaryKey.pfx file from the project directory.

Azure Access Control Service (ACS) - ACS50001: Relying party with identifier 'https://[namespace].accesscontrol.windows.net/' was not found

I have an ACS namespace with a WS-Federation identity provider set up. Since I'm using Visual Studio 2012, I used the Identity and Access Tool to create the relying party. The tool uses the realm and return url values that I give it when it creates the relying party (I use the Azure cloud service url where I'm deploying my project - i.e. http://myapp.cloudapp.net). There is only one rule in the rule group for my relying party after I run the tool - Pass through all claims for [Relying Party]. I tested the ACS for my app with just that one rule, and also after generating all the rules for the WS-Federation identity provider.
Regardless of the rules in the rule group, I get the error in the title of my question. My browser is redirected to ACS, however for some reason it can't find the correct relying party. I have created an ACS namespace, identity provider, and relying party in two different Azure accounts, with exactly the same result.
I've also tried publishing my project to the Azure cloud service with both http and https endpoints, and both endpoints yield the same result.
The WS-Federation identity provider's federation metadata is coming from Windows Azure Active Directory.
UPDATE
FederationConfiguration section from web.config:
<federationConfiguration>
<cookieHandler requireSsl="false" />
<wsFederation passiveRedirectEnabled="true" issuer="https://[MyNamespace].accesscontrol.windows.net/v2/wsfederation" realm="http://[MyApp].cloudapp.net/" requireHttps="false" />
</federationConfiguration>
UPDATE 2:
Still no solution. It looks like the issue stems from the fact that I set up my own ACS identity provider, and downloaded the federation metadata from Windows Azure Active Directory (WAAD) for that identity provider. That essentially chains 2 ACS instances together. When my app redirects to my ACS, it passes my app's url as the realm. Then, my ACS redirects to the identity provider, WAAD, and passes its own url as the realm. That's why the error I get back has the strange characteristic of a relying party identifier = the url of my own ACS admin portal. I'm not sure why it's not passing the realm all the way through from my app to WAAD.
Well, the answer to this was much more obscure than I had expected - I had to run the following powershell script against my CRM Online WAAD:
Connect-MsolService
Import-Module MSOnlineExtended -Force
$replyUrl = New-MsolServicePrincipalAddresses -Address "https://lefederateur.accesscontrol.windows.net/"
New-MsolServicePrincipal -ServicePrincipalNames @("https://lefederateur.accesscontrol.windows.net/") -DisplayName "LeFederateur ACS Namespace" -Addresses $replyUrl
This told WAAD to recognize my ACS namespace, so it wouldn't throw the error saying the ACS namespace was not a valid relying party identifier. Read the whole process here:
http://www.cloudidentity.com/blog/2012/11/07/provisioning-a-directory-tenant-as-an-identity-provider-in-an-acs-namespace/
Thanks to Azure support, I'm now past the error.
Go into the Azure ACS Management Portal. Open Relying Party Applications, and select the relying party you have configured for this app. Make sure that the field "Realm" matches exactly what you have for Realm in the web.config under <federationConfiguration><wsFederation realm=""/>.
All you need to do is set up access to ACS in Active Directory.
After installing the Azure PowerShell cmdlets, run the commands below, as mentioned by Andrew:
Connect-MsolService
Import-Module MSOnlineExtended -Force
$replyUrl = New-MsolServicePrincipalAddresses -Address "https://xxx.accesscontrol.windows.net/"
New-MsolServicePrincipal -ServicePrincipalNames @("https://xxx.accesscontrol.windows.net/") -DisplayName "xxx ACS Namespace" -Addresses $replyUrl
In case anyone else stumbles on this, double check your realm code here:
<wsFederation passiveRedirectEnabled="true" issuer="must match endpoint" realm="must match audience URI" requireHttps="true" />
AND
<add key="ida:Realm" value="must match audience uri" />
<add key="ida:AudienceUri" value="must match audience uri" />
My issue was a / at the end of my URI that I added instinctively - i.e. https://someuri.com/ - whereas the portal setting was https://someuri.com
Removal of the / worked.

IIS 7 Error "A specified logon session does not exist. It may already have been terminated." when using https

I am trying to create Client Certificates Authentication for my asp.net Website.
In order to create client certificates, I need to create a Certificate Authority first:
makecert.exe -r -n "CN=My Personal CA" -pe -sv MyPersonalCA.pvk -a sha1 -len 2048 -b 01/01/2013 -e 01/01/2023 -cy authority MyPersonalCA.cer
Then I have to import it into IIS 7, but since it accepts the .pfx format, I convert it first:
pvk2pfx.exe -pvk MyPersonalCA.pvk -spc MyPersonalCA.cer -pfx MyPersonalCA.pfx
After importing MyPersonalCA.pfx, I try to add the https site binding to my web site and choose the certificate above as the SSL certificate, but I get the error mentioned in the question title.
Any suggestions?
I ran across this same issue, but fixed it a different way. I believe the account I was using changed between the time I initially attempted to set up the certificate and the time I returned to finish the work, thus creating the issue. What the underlying issue is, I don't know, but I suspect it has to do with some sort of hash tied to the current user that becomes inconsistent in some scenarios as the user is modified or recreated, etc.
To fix it, I ripped out of both IIS and the Certificates snap-in (for Current User and Local Computer) all references of the certificate in question:
Next, I imported the *.pfx file into the certs snap-in in MMC, placing it in the Local Computer\Personal node:
Right-click the Certificates node under Personal (under Local Computer as the root)
All Tasks -> Import
Go through the Wizard to import your *.pfx
From that point, I was able to return to IIS and find it in the Server Certificates. Finally, I went to my site, edited the bindings and selected the correct certificate. It worked because the user was consistent throughout the process.
To the point mentioned in another answer, you shouldn't have to resort to marking it as exportable as that's a major security issue. You're effectively allowing anyone who can get to the box with a similar set of permissions to take your cert with them and import it anywhere else. Obviously that's not optimal.
Security warning: what the checkbox really means is that the certificate can be read by users that shouldn't be able to read it. Such as the user running the IIS worker process. In production use the other answer instead.
Happened to me too, and was fixed by ensuring that "Allow this certificate to be exported" is checked when you import it:
(thanks to this post!)
This must be some kind of IIS bug, but I found the solution.
1- Export MyPersonalCA.pfx from IIS.
2- Convert it to .pem:
openssl pkcs12 -in MyPersonalCA.pfx -out MyPersonalCA.pem -nodes
3- Convert it back to .pfx:
openssl pkcs12 -export -in MyPersonalCA.pem -inkey MyPersonalCA.pem -out MyPersonalCA.pfx
4- Import it back to IIS.
We had the same issue due to incorrectly importing the certificate into the Current User Personal certificate store. Removing it from the Current User Personal store and importing it into the Local Machine Personal certificate store solved the problem.
Nobody probably cares about this anymore, but I just faced this issue with my IIS 7 website binding. The way I fixed it was going to the Certificate Authority and finding the certificate issued to the server with the issue. I verified the user account that requested the certificate. I then logged into the IIS server using RDP with that account. I was able to rebind the https protocol using that account only. No exports, reissuing, or extension-changing hacks were needed.
Instead of importing the cert from IIS, do it from MMC.
Then go to IIS for the binding.
In our case this problem occurred because we had installed the certificate in a virtual machine and made an image of it for further use.
When creating another VM from that image, the certificate produces this message.
To avoid this, be sure to install the certificate on every new VM.
According to the MSDN blog post, this can happen when the current user account doesn't have permission to access the private key file which is under the folder "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys". Apparently this can be resolved by granting the user account / user group Full Access permission to the above folder.
I've come across the same issue, and was able to resolve it by simply re-importing the .pfx file with the Allow this certificate to be exported checkbox selected.
However, this method imposes a security risk - as any user who has access to your IIS server will be able to export your certificate with the private key.
In my case, only I have access to my IIS server - therefore it was not a huge risk.
I got this error due to a wrong openssl command line when exporting the PKCS #12 certificate; the -certfile argument was wrong. I exported the certificate again and it was imported successfully.
We found another cause for this: scripting the certificate install using PowerShell with the Import-PfxCertificate command. This does import the certificate; however, the imported certificate cannot be bound to a website in IIS - it fails with the same error this question mentions. You can list certificates using this command and see why:
certutil -store My
This lists the certificates in your Personal store and you will see this property:
Provider = Microsoft Software Key Storage Provider
This storage provider is a newer CNG provider and is not supported by IIS or .NET. You cannot access the key. Therefore you should use certutil.exe to install certificates in your scripts. Importing using the Certificate Manager MMC snap-in or IIS also works but for scripting, use certutil as follows:
certutil -f -p password -importpfx My .\cert.pfx NoExport
See this article for more information: https://windowsserver.uservoice.com/forums/295065-security-and-assurance/suggestions/18436141-import-pfxcertificate-needs-to-support-legacy-priv
After trying almost every single solution to no avail, I ended up finding my solution to '"A specified logon session does not exist. It may already have been terminated." when using https' below:
1. Verify your pfx cert is healthy, with the correct private key.
2. Run certutil and locate the cert's 'unique container name' - I used certutil -v -store my
3. Navigate to C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys and locate the system file that corresponds to the container name found above.
4. Check permissions and ensure 'system' has full control of the file.
5. Once that was applied, I checked IIS and was able to apply the https binding without error.
I had the same issue. I solved it by removing the certificate from the personal store (somebody had put it there) and from the web hosting store, all done through IIS Manager. Then I added it again to the web hosting store (with everything checked) and I can use HTTPS again...
In my case it was because the World Wide Publishing Service user didn't have permissions to the certificate. After installing the certificate, access the certificates module in MMC and right-click the certificate with the issue. Select "Manage Private Keys..." from the "All Tasks" menu and add the above user. This was SYSTEM user in my case.
I was getting this error when trying to bind a localhost pfx cert on my development machine.
Before I tried any of the above, I tried something simpler first:
Closed any localhost dev site I had opened.
Stopped my IIS server and closed the manager.
Ran the manager as Admin.
Added all my https bindings - no errors or issues this time.
Restarted IIS.
Everything seems to work after that.
I was getting the same error whilst binding the certificate, but it was fixed after deleting the certificate and importing it again through the MMC console.
In my case, it was fixed by using the certutil -repairstore command. I was getting the following error when trying to add a certificate to a web binding in IIS using PowerShell:
A specified logon session does not exist. It may already have been terminated.
I fixed it by running:
certutil.exe -repairstore $CertificateStoreName $CertThumbPrint
where $CertificateStoreName is the store name and $CertThumbPrint is the thumbprint of the imported certificate.
I received this error message when trying to use the following PowerShell command:
(Get-WebBinding -Port 443 -Name "WebsiteName").AddSslCertificate("<CertificateThumbprint>", "My")
The solution for me was to go into certificate manager and give IIS_IUSRS user permission to see the certificate.
These are the steps I followed:
Move the certificate into [Personal > Certificates]
Right click [All Tasks > Manage Private Keys]
Add the IIS_IUSRS user (which is located on the local computer not in your domain if you're attached to one)
Give read permission
I managed to fix this problem by importing the SSL certificate PFX file using Windows Certificate Manager.
http://windows.microsoft.com/en-us/windows-vista/view-or-manage-your-certificates
I just had this issue today and feel compelled to post my solution in the hope that you will lose less hair than I've just done.
After trying the solutions above, we had to re-issue the SSL certificate from the SSL provider (RapidSSL issuing as a reseller for GeoTrust).
There was no cost with this process, just the five-minute wait while the confirmation emails (admin@) arrived, and we gained access again.
Once we had the response, we used IIS > Server Certificates to install it. We did not need the MMC snap-in.
https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&id=SO5757
We kept a remote desktop window to the server open throughout, to avoid any issues with differing login accounts/sessions, etc. I do believe it is an IIS bug as another expert believes, as we only have one RDC account. What is most infuriating is that the very same certificate has been working perfectly for two months before suddenly "breaking".
In my case I imported a newer version of a certificate (PFX for IIS) from StartSSL just recently and forgot to remove the old one, which somehow caused this error (now two certs sort of the same). I removed both of them, imported the proper one, and now it works.
I was able to fix this problem by removing the certificate and then importing it again by double-clicking it.
For me, the fix was to delete the cert from IIS and re-import it, but into the "personal" certificate store instead of "web hosting"
According to the below, this is fine, at least for my own circumstances.
What's the difference between the Personal and Web Hosting certificate store?
Also, should it make any difference, I imported the certificate via the wizard after double clicking on it on the local machine, instead of via the IIS import method. After this the certificate was available in IIS automatically.
Here's what worked for me:
Step 1: Open up a Run window and type "mmc"
Step 2: Click File > Add/Remove Snap In
Step 3: Add > Certificates, Click OK
Step 4: Choose "Computer Account", then "Local Computer" and proceed.
Step 5: Hit OK
Step 6: Right click the Certificates folder on: Console Root > Certificates (Local Computer) > Personal > Certificates
Step 7: Select All Tasks > Import (Please note that the "Local Machine" is selected on the next window)
Step 8: Browse your .pfx file
Step 9: Then go to the IIS and create https binding
Try:
Go into IIS and delete the "VSTS Dev Router" web site and the "VSTS Dev Router Pool" application pool.
Run "certlm.msc" and open Personal/Certificates.
Delete any cert named "*.vsts.me" or "vsts.me".
Re-deploy.
