Web service endpoint is incorrect when client connects to production server (ASP.NET)

I've been scouring the net for almost two days and must be missing something (possibly basic).
On the test (local) web server I have set up a simple service, and using a client I discover the service and run it without problems.
Using the same client, I discover the same service on the production server via https://MyNewStuff.com/WebServices/MyService.asmx (the real internet address of the service) without problems, but when I try to run it, it fails with an EndPointNotFound exception. On investigating, I find that the client's app.config is incorrect, as follows:
<endpoint address="https://ProductionWeb.Ourdomain.com/WebServices/MyService.asmx"
          binding="basicHttpBinding" bindingConfiguration="MyServiceSoap"
          contract="MOX24.MyServiceSoap" name="MyServiceSoap" />
That is, the address reflects https://ProductionWeb.Ourdomain.com ... rather than https://MyNewStuff.com/WebServices, indicating that the service (discovery) is sending the wrong information to clients: it is sending the server's machine and domain name rather than the public 'web' name.
Any help on this would be greatly appreciated!!

If your client is a web application, put https://MyNewStuff.com/WebServices/MyService.asmx in the Web.Release.config.
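For example, a Web.Release.config transform along these lines would rewrite the endpoint address on release builds (a sketch; the endpoint name is taken from the generated config above, and the xdt locator matches on it):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.serviceModel>
    <client>
      <!-- Rewrite the address attribute of the endpoint named "MyServiceSoap" -->
      <endpoint address="https://MyNewStuff.com/WebServices/MyService.asmx"
                name="MyServiceSoap"
                xdt:Transform="SetAttributes(address)"
                xdt:Locator="Match(name)" />
    </client>
  </system.serviceModel>
</configuration>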

Related

How can I switch an existing Azure web-role from http over to https

I have a working Azure web role which I've been using over an http endpoint. I'm now trying to switch it over to https but struggling mightily with what I thought would be a simple operation. (I'll include a few tips here for future readers to address issues I've already come across).
I have created (for now) a self-signed certificate using the PowerShell commands documented by Microsoft here and uploaded it to the Azure portal. I'm aware that third parties won't be able to consume the API while it has a self-signed certificate, but my plan is to use the following for local client testing before purchasing a 'proper' certificate.
// Dev/testing only: accept any server certificate, since a self-signed one won't validate
ServicePointManager.ServerCertificateValidationCallback += (o, c, ch, er) => true;
Tip: you need to upload the .pfx file and then supply the password you used in the PowerShell script. Don't be confused by suggestions to create a .cer file, which is for completely different purposes.
I then followed the flow documented for configuring Azure cloud services here, although many of these operations are now done directly through Visual Studio rather than by hand-editing files.
In the main 'cloud service' project under the role I wanted to modify:
I imported the newly created certificate. Tip: the design of the dialog used to add the thumbprint makes it very easy to select, by mistake, the developer certificate that is already installed on your machine (by Visual Studio?). Click 'more options' to get to _your_ certificate, then check that the displayed thumbprint matches the one shown in the certificates section of the Azure portal.
Under 'endpoints' I added a new https endpoint. Tip: use the standard https port 443, NOT the 'default' port of 8080, otherwise you will get no response from your service at all.
In the web.config of the service itself, I changed the endpoint binding for the service so that the name element matched the new endpoint (see the sketch after these steps).
I then published the cloud project to Azure (using Visual Studio).
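For reference, a webHttpBinding endpoint served over https typically needs Transport security on the binding; a sketch with hypothetical names follows (the actual file isn't shown here). Without <security mode="Transport" /> a webHttpBinding registers only the http scheme, which would also fit the base-address-scheme error quoted further down:

<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <!-- Transport security is what makes this binding match an https base address -->
      <binding name="HttpsBinding">
        <security mode="Transport" />
      </binding>
    </webHttpBinding>
  </bindings>
  <services>
    <service name="MyNamespace.MyService">
      <endpoint address="" binding="webHttpBinding"
                bindingConfiguration="HttpsBinding"
                contract="MyNamespace.IMyService" />
    </service>
  </services>
</system.serviceModel>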
At this point, I'm not seeing the results I expected. The service is still available on http but is not available on https. When I try to browse for it on https (includeExceptionDetailInFaults is set to true) I get:
HTTP error 404 "The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable"
I interpret this as meaning that the https endpoint is available but the service itself is bound to http rather than https despite my changes to web.config.
I have verified that the publish step really is uploading the new configuration by modifying some of the returned content. (Remember this is still available on http.)
I have tried removing the 'obsolete' http endpoint but this just results in a different error:
"Could not find a base address that matches scheme http for the endpoint with binding WebHttpBinding. Registered base address schemes are [https]"
I'm sure I must be missing something simple here. Can anyone suggest what it is, or tips for further troubleshooting? There are a number of Stack Overflow answers that relate to websites and suggest that IIS settings need tweaking, but I don't see how that applies to a web role, where I don't have direct control of the server.
Edit: Following Gaurav's suggestion, I repeated the process using a (self-signed) certificate for our own domain rather than cloudapp.net, then tried to access the service via this domain. I still see the same results; i.e. the service is available via http but not https.
Edit 2: Information from the .csdef file... is the double reference to "Endpoint1" suspicious?
<Sites>
  <Site name="Web">
    <Bindings>
      <Binding name="Endpoint1" endpointName="HttpsEndpoint" />
      <Binding name="Endpoint1" endpointName="HttpEndpoint" />
    </Bindings>
  </Site>
</Sites>
<Endpoints>
  <InputEndpoint name="HttpsEndpoint" protocol="https" port="443" certificate="backend" />
  <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
</Endpoints>
<Certificates>
  <Certificate name="backend" storeLocation="LocalMachine" storeName="My" />
</Certificates>
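Incidentally, binding names are expected to be unique within the <Bindings> element, so a version with distinct names would look like the sketch below (whether the duplicate name is the actual cause isn't confirmed here):

<Bindings>
  <Binding name="HttpsBinding" endpointName="HttpsEndpoint" />
  <Binding name="HttpBinding" endpointName="HttpEndpoint" />
</Bindings>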

The remote name could not be resolved: 'maps.googleapis.com'

I have an ASP.NET web application that uses Google Maps. Everything had been running fine for quite some time; then, all of a sudden yesterday, no calls to the Google API would process. I assumed we had reached a query limit, but upon checking, all quota totals were 0. I then realized we were not including our API key in the requests. I added the appropriate API key and the maps came back online. However, I still cannot geocode an address using the following:
maps.googleapis.com/maps/api/geocode/xml?address=(PROPERTY-ADDRESS)&key=(OUR-KEY)
returns:
The remote name could not be resolved: 'maps.googleapis.com'
We get the same error when trying to use:
maps.googleapis.com/maps/api/distancematrix/xml?origins=(LOCATION1)&destinations=(LOCATION2)&units=imperial&key=(OUR-KEY)
However all calls to:
maps.googleapis.com/maps/api/js?key=(OUR-KEY)
work fine.
This code has worked fine for a long time, with no modifications.
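For context, a server-side geocode call of this kind is usually just an HTTP GET issued from .NET code, along these lines (a sketch, since the question doesn't show the actual code; address and apiKey are placeholders):

using System;
using System.Net;

class GeocodeSketch
{
    static void Main()
    {
        string address = "1600 Amphitheatre Parkway"; // placeholder
        string apiKey = "YOUR-KEY";                   // placeholder
        string url = "https://maps.googleapis.com/maps/api/geocode/xml"
                   + "?address=" + Uri.EscapeDataString(address)
                   + "&key=" + apiKey;
        using (var client = new WebClient())
        {
            // A DNS failure surfaces here as a WebException:
            // "The remote name could not be resolved: 'maps.googleapis.com'"
            Console.WriteLine(client.DownloadString(url));
        }
    }
}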
We are on a dedicated server using the same IP; however, the site does run through Incapsula (not sure if that makes a difference).
I already added:
<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="true">
  </defaultProxy>
</system.net>
but this did not help.
The error "The remote name could not be resolved" makes it seem like a DNS issue, but the server is online and can resolve maps.googleapis.com/maps/api/js without any problem.
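One way to isolate DNS from everything else is to resolve the host directly on the server, for example with a quick console check like this (a diagnostic sketch, not code from the question):

using System;
using System.Net;

class DnsCheck
{
    static void Main()
    {
        try
        {
            // Bypasses HTTP entirely: if this throws, the server-side DNS lookup itself is failing
            var entry = Dns.GetHostEntry("maps.googleapis.com");
            foreach (var ip in entry.AddressList)
                Console.WriteLine(ip);
        }
        catch (Exception ex)
        {
            Console.WriteLine("DNS lookup failed: " + ex.Message);
        }
    }
}

It may also be worth noting that the /maps/api/js URL is normally fetched by the visitor's browser rather than by the web server, so its working doesn't necessarily prove the server itself can resolve the name.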

Azure Dedicated Cache Role not working in the cloud

I'm sorry there's not a lot to go on with this, but some pointers in the right direction would be greatly appreciated.
I have an Azure Cloud Service with a web role and a dedicated cache worker role. In the web role, I'm using the cache like so in a Web API controller:
// DataCacheFactory comes from the Azure Caching client (Microsoft.ApplicationServer.Caching);
// it reads its settings from the <dataCacheClients> section in web.config
var cacheFactory = new DataCacheFactory();
_cache = cacheFactory.GetDefaultCache();
And in the web.config:
<dataCacheClients>
  <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="MyProject.Workers.MyCache" />
  </dataCacheClient>
</dataCacheClients>
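As an aside, the <dataCacheClients> section only takes effect if it is declared in <configSections>; the Azure Caching NuGet package normally adds something like the following (reproduced from memory, so verify the exact type string in your own config):

<configSections>
  <section name="dataCacheClients"
           type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core"
           allowLocation="true" allowDefinition="Everywhere" />
</configSections>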
It works fine locally using the Azure emulators, but on deploying to Azure, the controller method times out (after about 15 minutes!). The only error message I have is:
ErrorCode:SubStatus:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.). Additional Information : The client was trying to communicate with the server: net.tcp://MyProject.Workers.MyCache:24233.
EDIT:
Similar lack of success trying to use the web role itself for caching:
<dataCacheClient name="default">
  <autoDiscover isEnabled="true" identifier="MyProject.WebRole" />
  <localCache isEnabled="true" sync="NotificationBased" objectCount="100000" ttlValue="300" />
  <clientNotification pollInterval="60" />
</dataCacheClient>
Simply nothing coming back from the server. It doesn't even time out!

Access Session in WCF service from WebHttpBinding

I'm using a WCF service (via the WebGet attribute).
I'm trying to access the Session from the WCF service, but HttpContext.Current is null.
I added AspNetCompatibilityRequirements and edited web.config, but I still cannot access the session.
Is it possible to use WebGet and Session together?
Thank you!
Yes, it is possible. If you edit the web.config:
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
and add the AspNetCompatibilityRequirements attribute, HttpContext.Current should be available.
Check everything once again; maybe you've put the attribute in the wrong place (on the interface instead of the class?).
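To spell that out, the attribute belongs on the service implementation class, not on the contract interface, roughly like this (a sketch with placeholder names):

using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;
using System.Web;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    [WebGet(UriTemplate = "value")]
    string GetValue();
}

// The compatibility attribute goes here, on the class implementing the contract
[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class MyService : IMyService
{
    public string GetValue()
    {
        // Non-null once ASP.NET compatibility mode is enabled in web.config
        return (string)HttpContext.Current.Session["value"];
    }
}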
A RESTful service with a session?
See the excellent discussion here: Can you help me understand this? "Common REST Mistakes: Sessions are irrelevant"
http://javadialog.blogspot.co.uk/2009/06/common-rest-mistakes.html (point 6)
and
http://www.peej.co.uk/articles/no-sessions.html
Quote from Paul Prescod:
Sessions are irrelevant. There should be no need for a client to "login" or "start a connection." HTTP authentication is done automatically on every message. Client applications are consumers of resources, not services. Therefore there is nothing to log in to!

Let's say that you are booking a flight on a REST web service. You don't create a new "session" connection to the service. Rather, you ask the "itinerary creator object" to create you a new itinerary. You can start filling in the blanks but then get some totally different component elsewhere on the web to fill in some other blanks. There is no session, so there is no problem of migrating session state between clients. There is also no issue of "session affinity" in the server (though there are still load balancing issues to contend with).

How to expose a WCF service to different clients

I am creating a WCF service. When I add the service as a "Web Reference" to my web site (I do this by using the URL http://localhost/myservice.svc?wsdl) and then call the web methods exposed by the service, I get an "Operation has timed out" exception. However, when I add the service as a "Service Reference" to the site, the calls work fine.
The reason I am adding it as a web reference is that I want to expose the WCF service to all clients: Java, PHP, .....
I have looked at the article at http://blogs.msdn.com/juveriak/archive/2008/03/18/wcf-proxy-that-works-with-different-clients.aspx, but I have not tried converting the WSDL to a typed proxy as it suggests.
Any ideas on why I get a timeout error when using it as a web reference?
Likely you're using WsHttpBinding rather than BasicHttpBinding. .NET 2.0 web services cannot consume a WsHttpBinding service.
The problem is one of protocol. Web service protocols are constantly changing, adding security, federated identity, and so forth. As they change, older technologies can't communicate using the newer protocols.
Thankfully, WCF will allow you to use multiple protocols in a single service -- just set up separate endpoints for each protocol you want to use. Be wary, however, as some are more secure than others.
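Concretely, a single service can expose both a basicHttpBinding endpoint (SOAP 1.1, consumable as an old-style web reference by Java, PHP, and .NET 2.0 clients) and a wsHttpBinding endpoint for WCF clients. A sketch with placeholder service and contract names:

<system.serviceModel>
  <services>
    <service name="MyNamespace.MyService">
      <!-- SOAP 1.1: works with "Add Web Reference" and most non-.NET stacks -->
      <endpoint address="basic" binding="basicHttpBinding"
                contract="MyNamespace.IMyService" />
      <!-- WS-*: for WCF "Add Service Reference" clients -->
      <endpoint address="ws" binding="wsHttpBinding"
                contract="MyNamespace.IMyService" />
      <!-- Metadata, so clients can generate proxies -->
      <endpoint address="mex" binding="mexHttpBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
</system.serviceModel>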
Regarding versioning, the MessageVersion class is a good starting point.
Edit: I should have mentioned that you set MessageVersion through a textMessageEncoding element in a custom binding, like so:
<bindings>
  <customBinding>
    <binding name="MyBinding">
      <textMessageEncoding messageVersion="Soap11WSAddressing10" />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
