I have been implementing an online store for the last few months and have it successfully connected to the PayPal Sandbox for the PayPal Payments Pro gateway. It worked flawlessly from the beginning.
As of this past weekend it no longer works. The store gives me the following error:
ERROR CALLING PAYMENT GATEWAY
The trace gives me this error:
Could not create SSL/TLS secure channel
Page URL:/checkoutreview.aspx Source:System.Web.Services Message:The request was aborted: Could not create SSL/TLS secure channel.
Stack Trace:
at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at AspDotNetStorefrontGateways.Processors.PayPalAPIAASoapBinding.DoDirectPayment(DoDirectPaymentReq DoDirectPaymentReq) in C:\Development\Natrol\AspDotNetStorefront\ASPDNSFGateways\PayPalSvcAPIv30.cs:line 956
at AspDotNetStorefrontGateways.Processors.PayPal.ProcessCard(Int32 OrderNumber, Int32 CustomerID, Decimal OrderTotal, Boolean useLiveTransactions, TransactionModeEnum TransactionMode, Address UseBillingAddress, String CardExtraCode, Address UseShippingAddress, String CAVV, String ECI, String XID, String& AVSResult, String& AuthorizationResult, String& AuthorizationCode, String& AuthorizationTransID, String& TransactionCommandOut, String& TransactionResponse) in C:\Development\Natrol\AspDotNetStorefront\ASPDNSFGatewayProcessors\GatewayPayPal\PayPal.cs:line 415
at AspDotNetStorefrontGateways.GatewayTransaction.CallGateway(String gateway) in C:\Development\Natrol\AspDotNetStorefront\ASPDNSFGateways\GatewayTransaction.cs:line 205
at AspDotNetStorefrontGateways.GatewayTransaction.Process() in C:\Development\Natrol\AspDotNetStorefront\ASPDNSFGateways\GatewayTransaction.cs:line 176
What is going on here? Any idea what happened and how to solve it? Why would it break all of a sudden?
thanks,
Michael
If you are using paypal_base.dll, the URL you need to change is embedded inside it, and PayPal has not released a new build yet. To override the setting you need to add the following to your web.config file.
Add the following to <configSections>:
<section name="paypal" type="com.paypal.sdk.core.ConfigSectionHandler, paypal_base"/>
Then add the following <paypal> section:
<paypal>
<endpoints>
<wsdl>
<environment name="live">
<port name="PayPalAPI">https://api.paypal.com/2.0/</port>
<port name="PayPalAPIAA">https://api-aa.paypal.com/2.0/</port>
<port name="PayPalAPI" threetoken="true">https://api-3t.paypal.com/2.0/</port>
<port name="PayPalAPIAA" threetoken="true">https://api-aa-3t.paypal.com/2.0/</port>
</environment>
<environment name="sandbox">
<port name="PayPalAPI">https://api.sandbox.paypal.com/2.0/</port>
<port name="PayPalAPIAA">https://api-aa.sandbox.paypal.com/2.0/</port>
<port name="PayPalAPI" threetoken="true">https://api-3t.sandbox.paypal.com/2.0/</port>
<port name="PayPalAPIAA" threetoken="true">https://api-3t.sandbox.paypal.com/2.0/</port>
</environment>
</wsdl>
</endpoints>
</paypal>
(see https://www.x.com/developers/paypal/forums/paypal-sandbox/c-sdk-sandbox-three-token-endpoint)
The following applies if you are using "Signature" authentication:
The point is that a few weeks ago the endpoint https://api.sandbox.paypal.com/2.0/ stopped working. You should now use this one instead: https://api-3t.sandbox.paypal.com/2.0/
To do that, I changed the sandbox endpoints in "paypal-endpoint.xml", found in PayPal's SDK. Download the SDK, find "paypal-endpoint.xml", locate the Sandbox section, and change the addresses to the one mentioned above. Then recompile paypal_base.dll and use it.
A very similar solution is posted here, with the XML placed in web.config instead: www.x.com/developers/paypal/forums/paypal-sandbox/c-sdk-sandbox-three-token-endpoint
Google for "PayPal endpoints" to get more information about PayPal's current endpoints.
I'm guessing you are using "Signature" as opposed to "Certificate" auth? And possibly running from a local IP when testing?
The PayPal sandbox environment gets confused :) (official response from PayPal) and wants to see "Certificate" auth on some calls.
We have an instance where we are using the .NET PayPal API SDK with API Signature auth, meaning we have no way to change the endpoint (.NET SDK version 51), and we have no certificate installed (not needed with Signature auth). Creating a profile works FINE on the sandbox (CreateRecurringPaymentsProfile), BUT doing a transaction lookup (TransactionSearch) results in "Could not create SSL/TLS secure channel". When we move to the live environment, both work fine.
The only fix we have found is to switch to "Certificate" auth and install the cert, and then it seems to work fine. Which is a royal PITA.
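One more general .NET check, not something confirmed anywhere in this thread: "Could not create SSL/TLS secure channel" can also appear when the remote endpoint stops accepting the SSL/TLS versions your framework offers by default. If you can touch the code, forcing a newer protocol before any gateway call is a quick way to rule that out (SecurityProtocolType.Tls12 assumes .NET 4.5 or later):

// Speculative: offer TLS 1.2 for all outgoing connections in this AppDomain,
// e.g. from Application_Start, in case the gateway has disabled older protocols.
System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;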
Related
I've encountered a challenge regarding an internet-facing deployment installation for CRM using an AD FS server. After the setup is complete, users are able to access the CRM server, but when they try to run custom pages the following error message is shown:
"The authentication endpoint Kerberos was not found on the configured Secure Token Service!"
I've found several solutions on the internet for this issue:
First, I found a Microsoft KB article providing a possible solution; it involves updating the MEX endpoints by running a provided PowerShell script (https://support.microsoft.com/en-us/help/2828015/configuring-ad-fs-2.1-with-microsoft-dynamics-crm-2011). But this doesn't seem to be the issue.
Another solution could be to update the CRM rollup version (we currently have rollup 14 installed; the latest is 18). This is something I want to avoid, as it might lead to further issues.
Has anybody else encountered a similar issue, and if so, how did you solve it?
I have just spent the last few days figuring out this exact same error message, and it turned out to be the "Domain" attribute in the CRM connection string. Here is my answer to my own question on the Microsoft Dynamics CRM community forum:
"Well, I found the culprit - it was the Domain attribute in the connection string:
For connecting from outside the domain, it does not like to have a Domain in the connection string:
Connection string format 1 (without Domain attribute): "Authentication Type=Passport;Server=https://devcrm.myco.com;Username=devuser#myco.com;Password=pwd" - this works both inside and outside the domain "myco.com"
Connection string format 2 (with Domain attribute): "Authentication Type=Passport;Server=https://devcrm.myco.com;Domain=myco;Username=devuser#myco.com;Password=pwd" - this only works inside the domain myco.com but NOT outside (exception: The authentication endpoint Kerberos was not found on the configured Secure Token Service!)
The key is in Xrm.Client.CrmConnection.ClientCredentials:
If Domain is NOT specified in the connection string, then when connecting from outside the domain, Xrm.Client.CrmConnection.ClientCredentials.UserName is populated, whereas ClientCredentials.Windows.ClientCredential.UserName is empty.
But if Domain IS specified, Xrm.Client.CrmConnection.ClientCredentials.UserName becomes null and ClientCredentials.Windows.ClientCredential.UserName is populated, which leads the service to try to authenticate the user as a Windows AD user - so of course it fails when the app runs outside the Windows domain. It also explains why the same app works inside the domain even with Domain specified in the connection string.
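To illustrate, here is a small C# sketch of the difference, assuming the Microsoft.Xrm.Client CrmConnection class referenced above and reusing the placeholder connection strings from this answer:

// Without the Domain attribute: ClientCredentials.UserName is populated, so the
// connection authenticates with the supplied user/password and works from
// outside the myco.com domain.
var external = Microsoft.Xrm.Client.CrmConnection.Parse(
    "Authentication Type=Passport;Server=https://devcrm.myco.com;Username=devuser@myco.com;Password=pwd");
Console.WriteLine(external.ClientCredentials.UserName.UserName);

// With Domain=myco: the credentials end up under ClientCredentials.Windows
// instead, so the service attempts Windows (Kerberos) authentication and fails
// outside the domain with the "authentication endpoint Kerberos was not found" error.
var internalOnly = Microsoft.Xrm.Client.CrmConnection.Parse(
    "Authentication Type=Passport;Server=https://devcrm.myco.com;Domain=myco;Username=devuser@myco.com;Password=pwd");
Console.WriteLine(internalOnly.ClientCredentials.Windows.ClientCredential.UserName);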
For more detail, refer to my original post asking for help on the Dynamics CRM forum.
I'm configuring a Service Provider to connect to ADFS, and looking up the error we get shows:
The Federation Service encountered an error while processing the SAML authentication request.
Microsoft.IdentityModel.Protocols.XmlSignature.SignatureVerificationFailedException: MSIS0037: No signature verification certificate found for issuer 'myapp.domain.com'.
at Microsoft.IdentityServer.Protocols.Saml.Contract.SamlContractUtility.CreateSamlMessage(MSISSamlBindingMessage message)
at Microsoft.IdentityServer.Service.SamlProtocol.SamlProtocolService.Issue(IssueRequest issueRequest)
at Microsoft.IdentityServer.Service.SamlProtocol.SamlProtocolService.ProcessRequest(Message requestMessage)
I'm just the client / SP; I don't have access to the ADFS server - it's managed by a different company, in a different country. So, like Jon Snow, I know nothing.
The internet seems to suggest that these two Microsoft KBs might be relevant:
KB2843638 (a security update that causes an issue)
KB2896713 (a follow-up patch)
Is the metadata not trusted by the IdP, or would that be a different issue?
I have seen this error when the request and the Relying Party identifier registered in ADFS (2.1) did not match in casing. For instance, the error would occur if the request said urn:MyRPId and the ADFS registration was urn:myrpid.
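To make the casing point concrete, an exact (ordinal) comparison of the two example identifiers above fails even though a case-insensitive one succeeds; this snippet is only an illustration of the mismatch, not ADFS's actual comparison code:

// Illustration only: the identifiers differ solely in casing.
string fromRequest = "urn:MyRPId";
string registered = "urn:myrpid";
Console.WriteLine(string.Equals(fromRequest, registered, StringComparison.Ordinal));            // False
Console.WriteLine(string.Equals(fromRequest, registered, StringComparison.OrdinalIgnoreCase));  // True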
I have a WebForms app that uses the WindowsAzure.Storage API v3. It works fine in development and in one production environment, but I'm rolling out a new instance and any code that calls out to Azure Blob Storage gives me a 403 error.
I've been fiddling with this for a while, and it fails on any call out to Blob Storage, so rather than show my code I'll show my stack trace:
[WebException: The remote server returned an error: (403) Forbidden.]
System.Net.HttpWebRequest.GetResponse() +8525404
Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync(RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext) +1541
[StorageException: The remote server returned an error: (403) Forbidden.]
Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync(RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext) +2996
Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer.CreateIfNotExists(BlobContainerPublicAccessType accessType, BlobRequestOptions requestOptions, OperationContext operationContext) +177
ObsidianData.Azure.Storage.GetContainer(CloudBlobClient client, Containers targetContainer) in D:\Dev\nSource\Obsidian\Source\ObsidianData\Azure\Storage.vb:84
ObsidianWeb.Leads.HandleListenLink(String fileName, HyperLink link) in D:\Dev\nSource\Obsidian\Source\ObsidianWeb\Bdc\Leads.aspx.vb:188
ObsidianWeb.Leads.LoadEntity_ContactDetails(BoLead lead) in D:\Dev\nSource\Obsidian\Source\ObsidianWeb\Bdc\Leads.aspx.vb:147
ObsidianWeb.Leads.LoadEntity(BoLead Lead) in D:\Dev\nSource\Obsidian\Source\ObsidianWeb\Bdc\Leads.aspx.vb:62
EntityPages.EntityPage`1.LoadEntity() +91
EntityPages.EntityPage`1.Page_LoadComplete(Object sender, EventArgs e) +151
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4018
Here's what I've tried...
The AzureStorageConnectionString that fails in this environment definitely works in production
Other connection strings (from the other production environment, which works) also get a 403 here
There seemed to be an issue with timestamps in some old versions of the REST API (which I am not directly using...), so I made certain the times are correct and even tried switching the server to UTC time.
Tried toggling the connection string between http and https.
Upgraded to the latest version of the API (v3.1).
Tried fiddling with the code to ensure that every call out to Azure Storage gets a 403. It does.
In desperation, installed Azure PowerShell on the server just to verify that some type of communication with Azure works. That worked fine.
Browsed to the Azure management portal as well, and that works fine.
Any ideas? This should just be using port 80 or 443, right? So there should be no way this is some kind of network issue. Let me know if that's wrong.
The working production machine is an Azure VM (Server 2008 R2 with IIS 7.5).
There are also some differences with the new server:
The new machine is physical hardware (Server 2012 and IIS 8).
It IS using a different storage account inside my Azure subscription; however, I've tried a total of 3 connection strings and none of them work here.
UPDATE: someone asked to see the code. Okay, I wrote a class called Azure.Storage, which just abstracts my cloud storage code. We are failing on a call to Storage.Exists, so here's the part of that class that feels relevant:
Public Shared Function Exists(container As Containers, blobName As String) As Boolean
    ' Blob names are stored lower-case, so normalise before the lookup.
    Dim Dir As CloudBlobContainer = GetContainer(GetCloudBlobClient(), container)
    Dim Blob As CloudBlockBlob = Dir.GetBlockBlobReference(blobName.ToLower())
    Return Blob.Exists()
End Function

Private Shared Function GetContainer(client As CloudBlobClient, targetContainer As Containers) As CloudBlobContainer
    ' Container names come from the Containers enum and are forced to lower-case.
    Dim Container As CloudBlobContainer = client.GetContainerReference(targetContainer.ToString.ToLower())
    Container.CreateIfNotExists()
    Container.SetPermissions(New BlobContainerPermissions() With {.PublicAccess = BlobContainerPublicAccessType.Blob})
    Return Container
End Function

Private Shared Function GetCloudBlobClient() As CloudBlobClient
    Dim Account As CloudStorageAccount = CloudStorageAccount.Parse(Settings.Cloud.AzureStorageConnectionString())
    Return Account.CreateCloudBlobClient()
End Function
...Containers is just an enum of container names (there are several):
Public Enum Containers
CallerWavs
CampaignImports
Delve
Exports
CampaignImages
Logos
ReportLogos
WebLinkImages
End Enum
...Yes, they have upper-case characters, which causes problems. Everything is forced to lowercase before it goes out.
Also, I did verify that the correct AzureStorageConnectionString is coming out of my settings class. Again, I tried a few that work elsewhere. And this one works elsewhere also!
Please check the clock on the servers in question. Apart from an incorrect account key, you can also get a 403 error if the time on the server is not in sync with the time on the storage servers (a deviation of roughly +/- 15 minutes is allowed).
I also ran into this error. My problem was that I had turned ON dynamic IP security restrictions in my web.config, and the number of files being downloaded in some cases (e.g. pages with lots of images) was exceeding the maximum thresholds I had defined in web.config.
In my case the access key was not the same as the one in the connection string used by the source code.
So recheck it under Azure -> [Storage Account Name] -> Access Keys -> key1 -> Key & Connection string.
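If it helps, a quick way to see which account and key the code is really loading is to parse the connection string the app uses and dump the account details. This is just a diagnostic sketch reusing the CloudStorageAccount class and the settings call already shown in the question:

// Diagnostic sketch: confirm which account the connection string actually points at.
string connectionString = Settings.Cloud.AzureStorageConnectionString();   // same call as in the question's VB code
var account = Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(connectionString);
Console.WriteLine(account.Credentials.AccountName);   // must be the storage account whose key1 you copied
Console.WriteLine(account.BlobEndpoint);              // the endpoint the 403 is coming from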
I have an ACS namespace with a WS-Federation identity provider set up. Since I'm using Visual Studio 2012, I used the Identity and Access Tool to create the relying party. The tool uses the realm and return URL values that I give it when it creates the relying party (I use the Azure cloud service URL where I'm deploying my project, i.e. http://myapp.cloudapp.net). There is only one rule in the rule group for my relying party after I run the tool - Pass through all claims for [Relying Party]. I tested the ACS for my app with just that one rule, and also after generating all the rules for the WS-Federation identity provider.
Regardless of the rules in the rule group, I get the error in the title of my question. My browser is redirected to ACS, however for some reason it can't find the correct relying party. I have created an ACS namespace, identity provider, and relying party in two different Azure accounts, with exactly the same result.
I've also tried publishing my project to the Azure cloud service with both http and https endpoints, and both endpoints yield the same result.
The WS-Federation identity provider's federation metadata is coming from Windows Azure Active Directory.
UPDATE
FederationConfiguration section from web.config:
<federationConfiguration>
<cookieHandler requireSsl="false" />
<wsFederation passiveRedirectEnabled="true" issuer="https://[MyNamespace].accesscontrol.windows.net/v2/wsfederation" realm="http://[MyApp].cloudapp.net/" requireHttps="false" />
</federationConfiguration>
UPDATE 2:
Still no solution. It looks like the issue stems from the fact that I set up my own ACS identity provider and downloaded the federation metadata from Windows Azure Active Directory (WAAD) for that identity provider. That essentially chains two ACS instances together. When my app redirects to my ACS, it passes my app's URL as the realm. Then my ACS redirects to the identity provider, WAAD, and passes its own URL as the realm. That's why the error I get back has the strange characteristic of a relying party identifier equal to the URL of my own ACS admin portal. I'm not sure why the realm isn't passed all the way through from my app to WAAD.
Well, the answer to this was much more obscure than I had expected - I had to run the following PowerShell script against my CRM Online WAAD:
Connect-MsolService
Import-Module MSOnlineExtended -Force
$replyUrl = New-MsolServicePrincipalAddresses -Address "https://lefederateur.accesscontrol.windows.net/"
New-MsolServicePrincipal -ServicePrincipalNames @("https://lefederateur.accesscontrol.windows.net/") -DisplayName "LeFederateur ACS Namespace" -Addresses $replyUrl
This told WAAD to recognize my ACS namespace, so it would no longer throw the error saying the ACS namespace was not a valid relying party identifier. Read about the whole process here:
http://www.cloudidentity.com/blog/2012/11/07/provisioning-a-directory-tenant-as-an-identity-provider-in-an-acs-namespace/
Thanks to Azure support, I'm now past the error.
Go into the Azure ACS Management Portal. Open Relying Party Applications, and select the relying party you have configured for this app. Make sure that the field "Realm" matches exactly what you have for Realm in the web.config under <federationConfiguration><wsFederation realm=""/>.
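If you want to compare the values character for character (trailing slashes included), you can also log what the WS-Federation module actually loaded at runtime. A sketch assuming the .NET 4.5 WIF types (System.IdentityModel.Services) that the Identity and Access Tool configures:

// Sketch: dump the realm and issuer read from <federationConfiguration> so they
// can be compared exactly with the relying party settings in the ACS portal.
var fam = System.IdentityModel.Services.FederatedAuthentication.WSFederationAuthenticationModule;
System.Diagnostics.Trace.WriteLine("Realm:  " + fam.Realm);
System.Diagnostics.Trace.WriteLine("Issuer: " + fam.Issuer);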
All you need to do is set up access to ACS in Active Directory.
After installing the Azure PowerShell cmdlets, run the commands below, as mentioned by Andrew:
Connect-MsolService
Import-Module MSOnlineExtended -Force
$replyUrl = New-MsolServicePrincipalAddresses -Address "https://xxx.accesscontrol.windows.net/"
New-MsolServicePrincipal -ServicePrincipalNames @("https://xxx.accesscontrol.windows.net/") -DisplayName "xxx ACS Namespace" -Addresses $replyUrl
In case anyone else stumbles on this, double-check your realm settings here:
<wsFederation passiveRedirectEnabled="true" issuer="must match endpoint" realm="must match audience URI" requireHttps="true" />
AND
<add key="ida:Realm" value="must match audience uri" />
<add key="ida:AudienceUri" value="must match audience uri" />
My issue was a trailing / at the end of my URI that I added instinctively - i.e. https://someuri.com/ - whereas the portal setting was https://someuri.com
Removing the / worked.
Usually the Google OpenID sign-in works fine, thousands of times a day; then it will start intermittently going wrong and timing out for an hour or so (some requests will validate, but not all). Repeated validation attempts eventually succeed.
Error messages are:
Event code: 200000
Event message: No OpenID endpoint found. : https://www.google.com/accounts/o8/id
Sequence contains no elements
Adding in log4net yields:
DotNetOpenAuth.Yadis:
Error while performing discovery on: "https://www.google.com/accounts/o8/id":
DotNetOpenAuth.Messaging.ProtocolException:
Error occurred while sending a direct message or getting the response.
---> System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at DotNetOpenAuth.Messaging.StandardWebRequestHandler.GetResponse
(HttpWebRequest request, DirectWebRequestOptions options)
in c:\...\Dot...Core\Messaging\StandardWebRequestHandler.cs:line 127
--- End of inner exception stack trace ---
at DotNetOpenAuth.Messaging.StandardWebRequestHandler.GetResponse
(HttpWebRequest request, DirectWebRequestOptions options)
in c:\...\Dot...Core\Messaging\StandardWebRequestHandler.cs:line 175
at DotNetOpenAuth.Messaging.UntrustedWebRequestHandler.GetResponse
(HttpWebRequest request, DirectWebRequestOptions options)
in c:\...\Dot...Core\Messaging\UntrustedWebRequestHandler.cs:line 250
at DotNetOpenAuth.Yadis.Yadis.Request
(IDirectWebRequestHandler requestHandler,
Uri uri, Boolean requireSsl, String[] acceptTypes)
in c:\...\Dot...OpenId\Yadis\Yadis.cs:line 172
at DotNetOpenAuth.Yadis.Yadis.Discover
(IDirectWebRequestHandler requestHandler, UriIdentifier uri, Boolean requireSsl)
in c:\...\DotNetOpenAuth.OpenId\Yadis\Yadis.cs:line 63
at DotNetOpenAuth.OpenId.UriDiscoveryService.Discover
(Identifier identifier, IDirectWebRequestHandler requestHandler,
Boolean& abortDiscoveryChain)
in c:\...\DotNet...OpenId\OpenId\UriDiscoveryService.cs:line 51
at DotNetOpenAuth.OpenId.IdentifierDiscoveryServices.Discover
(Identifier identifier)
in c:\...\Dot...OpenId\OpenId\IdentifierDiscoveryServices.cs:line 58
at DotNetOpenAuth.OpenId.RelyingParty.AuthenticationRequest.Create
(Identifier userSuppliedIdentifier, OpenIdRelyingParty relyingParty,
Realm realm, Uri returnToUrl, Boolean createNewAssociationsAsNeeded)
in ...OpenId.RelyingParty\OpenId\RelyingParty\AuthenticationRequest.cs:line 364
And
DotNetOpenAuth.Http WebException:
Timeout from https://www.google.com/accounts/o8/id, no response available.
Any ideas?
It sounds like you need to fix your network latency. It seems highly unlikely that Google would be the bottleneck here.
You may also want to increase the HTTP timeouts on your end to reduce the failure rate. The full set of options is available here. Specifically, you're probably looking for:
<untrustedWebRequest
timeout="00:00:10"
readWriteTimeout="00:00:01.500" />
Check out the configurations link to see the context of where this goes.
We recently ran into this same issue. Having read through several different scenarios and gone through the trace steps, I finally concluded, as I have seen elsewhere, that this problem can be caused by a DNS server issue. In our case we had a production server that had been in use for over 18 months and only recently started getting the issue described above, but it was very consistent on that one server. Another server on another network, and our development machines, did not have any issues.
Long story short, I changed the primary DNS for the production server to Google's public DNS, 8.8.8.8, and it instantly started working. I had manually flushed the DNS cache on the production server prior to this (without a positive outcome), so it leads me to believe that the DNS server (provided by our hosting center) had a bad cache entry that was ultimately causing the problem.
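If you want to confirm from the application's side that name resolution (rather than Google or DotNetOpenAuth) is the problem before changing DNS servers, a small diagnostic like the following can be dropped into a test page or console app; it uses plain System.Net and nothing library-specific:

// Diagnostic sketch: resolve the OpenID endpoint host with whatever DNS the
// server is configured to use. Slow or failing resolution here points at DNS.
var sw = System.Diagnostics.Stopwatch.StartNew();
var addresses = System.Net.Dns.GetHostAddresses("www.google.com");
sw.Stop();
Console.WriteLine("Resolved {0} address(es) in {1} ms", addresses.Length, sw.ElapsedMilliseconds);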
Hope this helps someone else who runs across this scenario.