SOAP call works on HTTP but not on HTTPS

I have become the custodian of a very old set of clients that access a web service hosted by one of our suppliers.
The dept. that used it stopped using it during COVID, and when they fired the clients up again afterwards it had stopped working - for this dept., anyway.
After investigation we found that we could make the call for the WSDL over HTTP, but when HTTPS was invoked it would raise errors and end.
The main error it reports is a "Security Problem".
Other organisations report they can use it fine, and no one is saying (or admitting) to any changes.
The clients were written in VB6 Pro with the SOAP Toolkit 3.0.
There isn't much online about this, apart from advice to load the WSDL manually, but we no longer have VB6 Pro to make any changes, so I am hoping someone recognises the issue, i.e. networking, security, SSL, TLS (insert any other grasp-at-straws term here).
This is the last-ditch attempt to get it working before we let them know they have to re-write it in something else.
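One theory worth testing: since the SOAP Toolkit rides on the old Windows HTTPS stack, the supplier may have disabled the older SSL/TLS versions that stack speaks, which would produce exactly a "works on HTTP, security error on HTTPS" symptom. A minimal C# sketch, runnable from any machine with .NET Framework 4.5+, that probes which protocol versions the endpoint still accepts; the host name is a placeholder:

    using System;
    using System.Net.Security;
    using System.Net.Sockets;
    using System.Security.Authentication;

    class TlsProbe
    {
        static void Main()
        {
            const string host = "supplier.example.com"; // placeholder: the WSDL host
            foreach (var proto in new[] { SslProtocols.Ssl3, SslProtocols.Tls,
                                          SslProtocols.Tls11, SslProtocols.Tls12 })
            {
                try
                {
                    using (var tcp = new TcpClient(host, 443))
                    using (var ssl = new SslStream(tcp.GetStream(), false,
                                                   (s, cert, chain, errors) => true))
                    {
                        // Attempt a handshake restricted to exactly one protocol version.
                        ssl.AuthenticateAsClient(host, null, proto, false);
                        Console.WriteLine("{0}: accepted", proto);
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("{0}: rejected ({1})", proto, ex.Message);
                }
            }
        }
    }

If the endpoint turns out to accept only TLS 1.2, that would line up with a client stack that predates it, and the place to look is the OS/SChannel configuration on the client machines rather than the VB6 code.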

Debugging requests which are 'stuck' in an IIS worker process

In case of TL;DR: I basically need guidance on what tools are available to debug requests which are issued to IIS and stall inside a module.
I have a problem with an old ASP.NET 2.0 app at the moment whereby it periodically becomes unavailable, and recycling the app pool (horrible as that may be) doesn't bring it back up 100% of the time.
So first of all it presents itself as requests entering the app pool and being trapped in state 'BeginRequest' in RewriteModule.
It is not a specific request which is always the first to experience this issue. The issue cannot be easily recreated either.
Eventually requests join this backlog and when it becomes 70+ deep the app pool fails to respond to pings from WAS and is forcibly recycled. Predictably it doesn't stop on time and the old app pool is forced to stop. When the new app pool comes up it either works just fine or it instantly experiences the same issue as the outgoing one, and requests begin to queue.
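(For anyone chasing the same symptom: assuming IIS 7 or later, the stuck requests, along with the module and pipeline state they are sitting in, can be enumerated from an elevated prompt; the 30000 ms threshold is just an illustrative value.)

    %windir%\system32\inetsrv\appcmd.exe list requests /elapsed:30000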
In issues like this all the official guidance is understandably focussed around looking at why the RewriteModule may choke.
I have validated my redirections and though complex there are no obvious issues with syntax (XML validates).
Likewise in inetmgr loading up the URL Rewrite Module seems to parse the configs fine and show them visually.
Basic stuff like permissions is all fine.
When the app is working normally I also used Failed Request Tracing/Logging to look at the request pipeline for a sample URL which had stalled, and I can confirm that there is no circular logic or weird errors presenting - the request seems to be handled just fine. This also showed me how early in the pipeline the RewriteModule is invoked, and from this I really don't see how the issue could be app-related, as .NET isn't invoked at that point.
Annoyingly, when an app pool is experiencing this issue and I can throw in requests which just stall, Failed Request Tracing is no good, because you actually need a request to get to the end of its journey and fail, otherwise it refuses to log anything out.
I resorted to taking process dumps of affected w3wp.exe processes and running them through DebugDiag. Unfortunately the only thing I see is that threads are open accessing the RewriteModule, but precious little about what they are stuck on.
As anyone else would do, I've tried to track the start of the issue back to any recently installed patches or code changes, but nothing matches. Likewise, this is happening on three servers, otherwise I would try reinstalling the RewriteModule. Other sites on the same servers which invoke the RewriteModule are unaffected.
Has anyone else experienced issues like this - the net seems to have relatively little info in this case. Perhaps you can recommend further debugging tools or approaches for IIS which I can adapt to this scenario? This is sort of a cry for help from someone more used to Apache/Nginx - sorry for the long post.

Secured WCF service timing out on 2nd invocation of client channel

We have a secured & authenticated WCF service which cannot use service references. Thus, we provide the interface for the contracts and open the client channel manually.
We have found out that as long as we open it once, everything works fine. We can call several methods several times. However, if the channel is closed or just set to a new instance, the Login() call (which happens to be the required first step prior to using the service) times out.
To make matters even more mysterious, this only happens on our production server. If I run the same project locally, I am able to log in as many times as I want. Consuming the methods inside a web browser (even on a code-behind ASPX page) does not have this problem, even against the production server. ONLY when it's a .NET client trying to open a client channel against the production server do we have this problem.
We are not even sure where to start looking. Any advice would be greatly appreciated.
UPDATE:
As per @Rene's suggestion, we turned on logging on both sides. In the client's log, there is a record of an error which is basically the same timeout error we already got via the exception. Nothing meaningful. In the server's logs, there are records of service methods being invoked successfully even after the 2nd Login(), and from the server's POV the request is served.
Additionally, I discovered that I could not reproduce this issue on my machine using the same test project. It does reproduce on my developer's machine. I verified that we were on the same versions of the .NET Framework and Visual Studio. Surely it has to be a client-side problem. What could it be?
In case anyone else is looking for the answer, we finally found it -- the issue is due to the need to set System.Net.ServicePointManager.DefaultConnectionLimit on the client side to some higher value. The default value is 2, but in reality this allows only one proxy to be created and be usable. Setting it to 3 would allow 2 proxies to be created and used.
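For reference, a minimal sketch of that fix; run it once at client start-up, before any channels are opened (the limit of 10 is illustrative - size it to the number of concurrent proxies you need). The same value can also be set declaratively in app.config under system.net/connectionManagement.

    using System.Net;

    static class ClientBootstrap
    {
        public static void Init()
        {
            // The default of 2 effectively allowed only one usable proxy;
            // raise this before opening any client channels.
            ServicePointManager.DefaultConnectionLimit = 10; // illustrative value
        }
    }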

OpenLaszlo 4.9 DHTML login servlet forwards but never loads page

I am having some problems at the moment using a LoginServlet with OpenLaszlo 4.9 on Tomcat 7.
I have Tomcat configured to allow crossContext to be true, and that allows me to work with other app contexts on the same server - specifically a Login Servlet. My only other app is the OpenLaszlo presentation server LPS (lps-4.9.0).
I am using a Tomcat Request Filter that snoops the incoming addresses and looks for a particular authentication cookie, which then makes its way to the LoginServlet that does a forward to the OpenLaszlo page. This was done to KEEP the cookie alive when the Request Filter was awakened at the loading of the OpenLaszlo page.
All of that is working now.
There are no errors or warnings in the lps.log file or in localhost.<date>.log either; however, the page load goes on forever and never completes.
Could it be something that I am passing along in the forwarded URL? I am using at least 2 parameters, to cause lzr to be set to "dhtml" and lzt to be set to "html".
I can't even get a simple <canvas> page with a simple button to load. Has anyone seen this, and been able to fix the problem?
Since I first wrote my description I have written another plea for help to some friends and ex-coworkers, and this will help update the details of what I have discovered thus far.
Here’s the scenario: I am using Tomcat 7, and have installed the WAR file for OpenLaszlo 4.9.
Alongside this I created a LoginServlet hierarchy, code, and web.xml file just under
“webapps”, at the same level that lps-4.9.0 is installed.
The sequence of events is the following:
1. A login page comes up that takes the username and password, and sends them
off to /LoginServlet to process. Note: I have also written and registered a Request Filter
for Tomcat that halts traversal beyond /lps-4.9.0 and checks for proper authentication
as I retrieve the cookies from requests trying to access those levels.
2. In the LoginServlet, I am creating a MACH COOKIE that I’ll send along with the response,
so that the Filter will allow me past the /lps-4.9.0 level. To do this I had to do a FORWARD
operation to preserve the cookie; a REDIRECT would just drop it. Since you can’t
give a relative path higher than the Servlet’s root, I had to turn on Tomcat’s “crossContext”
feature, which allows me to do that within the same domain. And I have both contexts registered
in Tomcat’s conf directory in server.xml, I believe. Anyhow, it works: I can grab the
/lps-4.9.0 context, get a RequestDispatcher, and then use that dispatcher to FORWARD
the request/response pair to my OpenLaszlo file (the LZX file).
So it seems to get as far as LOADING the OpenLaszlo page, but when I perused the console
messages in Chrome’s Developer Tools debugger, it showed that it was actually trying
to use the context of the original request (i.e. /LoginServlet), and of course that doesn’t
exist. I guess when I passed along the original request/response pair, the request had
the FIRST context in it, and then tried to derive the relative path to the file off of that.
QUESTION: Can I just copy the stuff from the original request, but change the context,
and forward THAT? Or architecturally should I try something else?
Thanks,
C
And the answer is..... You CAN'T DO IT... Period.
BTW. The Openlaszlo website server is DOWN, DEAD, KAPUT, NIX, GONE, NO MORE...
This will be the final project that I personally implement with the tool, now that it has
no support.
It's very sad to see something that had the right idea about development cycle times,
and about keeping client-side GUI construction simple, fast, and easy, die because
of lack of interest. Say wha? It can't be because FLASH was in jeopardy.
I'm pretty sure that we, as programmers, aren't so paranoid about losing our jobs
that we think we must spend lots of hours CODING an interface to keep it secret.
I'm certainly not paranoid about it. I know there is NetBeans for Swing-type
GUIs, and I've heard that GWT has adopted something similar now, so I'll
keep looking for that perfect invention and deal with what is left over.
Critical Path must have been purchased by someone else too, and so the
site sponsor has no motivation to keep it alive, while it dies a slow death.

Scheduled Mail in ASP.NET

Hi guys,
My application deals with a scheduled mail concept, i.e. every morning at 6:00 AM my users get a reminder mail about their activities for the day... I don't know how to do this. Many told me to use a Windows service, but I will host my website on a shared server and may not get the rights to run a Windows service... Is there any DLL for sending mails at a scheduled time through an ASP.NET application? Please help me out, guys.
You can't do much in shared hosting. Try upgrading your hosting, or else write a Windows service, to run on your own machine, which will call an ASP.NET page that can send out the emails. Of course your machine has to be switched on all the time, or at least around 6:00 AM :). You will have to take proper steps to avoid unauthorized requests for that ASPX page.
You can check this article too: http://www.codeproject.com/KB/aspnet/ASPNETService.aspx
You can't really do this with ASP.Net. ASP.Net is for web pages - which are reactive to HTTP requests.
You need a scheduled task or a service. All a website can do is respond to requests. I guess you could program the functionality into a web page and have a remote process request the page every morning - but what happens if someone else requests the page?
You can either have a program that runs constantly, with a timer or a loop that checks the time of day and then sleeps for a long while; when the timer goes off, or it's the right time of day, it sends an email. Or you can launch a program as a scheduled task. The first method can also be implemented as a service if you would like. Keep in mind you don't need ASP.NET to send emails; all you need is a console application that uses System.Net.Mail. Check out the mailer sample on MSDN for a very simple idea.
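A minimal sketch of such a console mailer, which could then be run by Task Scheduler at 6:00 AM; the SMTP host and addresses are placeholders:

    using System.Net.Mail;

    class ReminderMailer
    {
        static void Main()
        {
            // Placeholder host and addresses - substitute your own.
            var client = new SmtpClient("smtp.example.com");
            using (var message = new MailMessage(
                "noreply@example.com",         // from
                "user@example.com",            // to
                "Your activities for today",   // subject
                "Don't forget: ..."))          // body
            {
                client.Send(message);
            }
        }
    }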
One other thing you can consider: IIS has an SMTP service that you can install, and it uses a pickup directory to send mail. You write an email to the pickup directory as an .eml file and IIS grabs it and sends it almost immediately. If you do that, you'll still have to write the emails (System.Net.Mail will write the .eml files from a MailMessage; just set SmtpClient.DeliveryMethod to SpecifiedPickupDirectory or PickupDirectoryFromIIS and call SmtpClient.Send), but it will then send them for you. You'll still need to schedule something somehow, so this might not be all that much more useful, but I thought I'd at least let you know that it exists.
One thing to be aware of: when the IIS SMTP service reads the send envelope of the .eml file, the order of the Sender and From headers is significant; if the From header appears before the Sender header then the MAIL FROM command will use the From header, which is incorrect (and MS won't be fixing this one). This appears to be an issue ONLY with the IIS SMTP service as it hasn't been reported anywhere else that I'm aware of. Reversing the order of the headers is the work-around. By default SmtpClient always writes the From header first. I'm aware of the issue and IIS isn't fixing it but I may be able to get a fix into SmtpClient for the .NET 4.0 RC build that re-orders the headers for you but no promises.
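A sketch of that pickup-directory variant; the directory path is a placeholder, and PickupDirectoryFromIis would let IIS supply it instead:

    using System.Net.Mail;

    class PickupMailer
    {
        static void Main()
        {
            var client = new SmtpClient
            {
                DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory,
                PickupDirectoryLocation = @"C:\inetpub\mailroot\Pickup" // placeholder path
            };
            using (var message = new MailMessage("noreply@example.com", "user@example.com",
                                                 "Your activities for today", "Don't forget: ..."))
            {
                // Writes an .eml file to the directory; the IIS SMTP service
                // picks it up and performs the actual delivery.
                client.Send(message);
            }
        }
    }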
If you happen to have it handy (and I assume you do), you can use a SQL Server Agent job to make a request to an ASP.NET page that sends the email.
Here's some example code:
http://nicholasclarke.co.uk/blog/2008/01/16/web-request-from-sql-server-via-c/
Of course, since you're using SQL Server to call CLR code anyway, you could just have that code send out the emails (via System.Net.Mail) rather than requesting a page on IIS to do so. To do this, SQL Server would need:
Access to all of the data needed to send the emails
Outbound firewall access to send an email
CLR code that encapsulates all of the logic needed to know where/what to send.
Okay, this is interesting, and what I did fits silky's definition of 'cheating', but it was pretty cool for me.
What I did was spawn a new thread from ASP.Net code (it was possible on that host), and that thread did the scheduled job.
I checked whether the thread was alive (which is pretty easy) on every visit to the website (not so reliable, I know, but it worked because that website has plenty of visitors).
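(A rough sketch of that approach, assuming you can touch Global.asax; the schedule check and the SendReminderMails() call are illustrative, and the caveats below still apply.)

    using System;
    using System.Threading;
    using System.Web;

    public class Global : HttpApplication
    {
        static Thread mailerThread;

        protected void Application_Start(object sender, EventArgs e)
        {
            EnsureMailerThread();
        }

        // Also call this from a page visit to revive the thread if it died.
        public static void EnsureMailerThread()
        {
            if (mailerThread != null && mailerThread.IsAlive)
                return;

            mailerThread = new Thread(() =>
            {
                while (true)
                {
                    // Illustrative schedule: fire once in the 6:00 AM minute.
                    DateTime now = DateTime.Now;
                    if (now.Hour == 6 && now.Minute == 0)
                    {
                        // SendReminderMails(); // hypothetical - your mail code here
                        Thread.Sleep(TimeSpan.FromMinutes(1)); // avoid a double send
                    }
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            });
            mailerThread.IsBackground = true;
            mailerThread.Start();
        }
    }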
If you do this at all:
Treat this as a stop-gap while you arrange to get a dedicated host or VPS.
Rest assured that the hosting company will kill your thread and withdraw permissions when they discover you're doing this.

Using a remote, external web service instead of a database

I am building an ASP.NET web application that will be deployed to a 4-node web farm.
My web application's farm is located in California.
Instead of a database for back-end data, I plan to use a set of web services served from a data center in New York.
I have a page /show-web-service-result.aspx that works like this:
1) User requests page /show-web-service-result.aspx?s=foo
2) Page's codebehind queries a web service that is hosted by the third party in New York.
3) When web service returns, the returned data is formatted and displayed to user in page response.
Does this architecture have potential scalability problems? Suppose I am getting hundreds of unique hits per second, e.g.
/show-web-service-result.aspx?s=foo1
/show-web-service-result.aspx?s=foo2
/show-web-service-result.aspx?s=foo3
etc...
Is it typical for web servers in a farm to use web services for data instead of a database? Any personal experience?
What change should I make to the architecture to improve scalability?
You most definitely have a scalability problem: the third-party web service. Unless you have a service-level agreement with that service (agreeing on the number of requests that you can submit per second), chances are that you will overload it with your anticipated load. Having four nodes yourself doesn't help you then.
So you should a) come up with an agreement with the third party, and b) test what the actual load is that they can take.
In addition, you need to make sure that your framework can use parallel connections for accessing the remote service. Suppose you have a round-trip time of 20 ms from California to New York (which would be fairly good); then you cannot make more than 50 requests per second over a single TCP connection. Likewise, starting a new TCP connection for every request will also kill performance, so you want pooling on these parallel connections.
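In .NET, that per-host parallel connection limit is the one governed by ServicePointManager.DefaultConnectionLimit; it can also be raised declaratively, e.g. in web.config (the limit of 12 is illustrative):

    <configuration>
      <system.net>
        <connectionManagement>
          <!-- illustrative limit; tune to the parallelism you actually measure -->
          <add address="*" maxconnection="12" />
        </connectionManagement>
      </system.net>
    </configuration>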
I don't see a problem with this approach, we use it quite a bit where I work. However, here are some things to consider:
Is your page rendering going to be blocked while waiting for the web service to respond?
What if the response never comes, i.e. the service is down?
For the first problem I would look into using AJAX to update the page after you get a response back from the web service. You'll also want to consider how to handle the no response or timeout condition.
Finally, you should really think about how you could cache the web service data locally. For example, if you are calling a stock-quoting service, then unless you have a real-time feed there is no reason to call the web service on every request you get. Store the data locally for a period of time and return that until it becomes stale.
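A minimal sketch of that local-caching idea using the ASP.NET cache; the key scheme, the five-minute lifetime, and the callService delegate are all illustrative:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class ResultCache
    {
        // Returns the cached result for a symbol, refreshing it from the
        // web service (via callService) when it is older than five minutes.
        public static string GetResult(string symbol, Func<string, string> callService)
        {
            string key = "ws-result:" + symbol;
            string cached = (string)HttpRuntime.Cache[key];
            if (cached != null)
                return cached;

            string fresh = callService(symbol); // the remote call
            HttpRuntime.Cache.Insert(key, fresh, null,
                DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            return fresh;
        }
    }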
You may have scalability problems but most of these can be carefully engineered around.
I recommend you use ASP.NET's asynchronous tasks so that the web service is queued up, the thread is released while the request waits for the web service to respond, and then another thread picks up when the web service is done to finish off the request.
MSDN Magazine - Wicked Code - Asynchronous Pages in ASP.NET 2.0
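A sketch of the shape this takes with ASP.NET 2.0 async pages; the @ Page directive needs Async="true", and the service proxy (MyRemoteService), its Begin/End methods, and the resultLabel control are hypothetical stand-ins for whatever your generated proxy exposes:

    using System;
    using System.Web.UI;

    public partial class ShowWebServiceResult : Page
    {
        // Hypothetical proxy for the remote service.
        private MyRemoteService proxy = new MyRemoteService();

        protected void Page_Load(object sender, EventArgs e)
        {
            RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, TimeoutCall, null));
        }

        IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
        {
            // The worker thread is released while the call is in flight.
            return proxy.BeginGetResult(Request.QueryString["s"], cb, state);
        }

        void EndCall(IAsyncResult ar)
        {
            // Runs on a pool thread once the service responds.
            resultLabel.Text = proxy.EndGetResult(ar);
        }

        void TimeoutCall(IAsyncResult ar)
        {
            resultLabel.Text = "The remote service timed out.";
        }
    }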
Local caching is an absolute must. The fewer times you have to go from California to New York, the better. You might want to look into Microsoft's Velocity (although that's still in CTP) or NCache, or another distributed cache, so that your 4 web servers don't each have to fetch and cache the same data from the web service - once one server gets it, it should be available to all.
Microsoft Project Code Named "Velocity"
NCache
Other things that can go wrong that you should engineer around:
The web service is down (obviously) and data falls out of cache, and you can't get it back. Try to make it so that the data is not actually dropped from cache until you're sure you have an update available. Then the only risk is if the service is down and your application pool is reset, so don't reset it as a first-line troubleshooting maneuver!
There are two different timeouts on web requests, a connect timeout and an overall timeout. Make sure both are set extremely low and you handle both of them timing out (see the sketch after this list). If the service's DNS goes down, this can look like quite a different failure.
Watch perfmon for ASP.NET Queued Requests. This number will rise rapidly if the service goes down and you're not covering it properly.
Research and adjust ASP.NET performance registry settings so you have a highly optimized ASP.NET thread pool. I don't remember the specifics, but I seem to remember that there's a limit on IO Completion Ports and something else of that nature that are absurdly low for the powerful hardware I'm assuming you have on hand.
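On HttpWebRequest, the closest knobs to those two timeouts are Timeout (connecting and waiting for the response headers) and ReadWriteTimeout (reading the body); a sketch with illustrative values and a placeholder URL:

    using System;
    using System.Net;

    class TimeoutDemo
    {
        static void Main()
        {
            // Placeholder URL for the remote service.
            var request = (HttpWebRequest)WebRequest.Create("http://ws.example.com/data?s=foo");
            request.Timeout = 3000;          // ms: connect + wait for response headers
            request.ReadWriteTimeout = 5000; // ms: read the response body
            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                // A timeout surfaces here with Status == WebExceptionStatus.Timeout.
                Console.WriteLine(ex.Status);
            }
        }
    }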
The trendy answer is REST. Any GET request can be HTTP response cached (with lots of options on how that is configured), and it will be cached by the internet itself (your ISP, essentially).
Your project has an architecture that reflects the direction that Microsoft and many others in the SOA world want to take us. That said, many people try to avoid this type of real-time risk introduced by the web service.
Your system will have a huge dependency on the web service working in an efficient manner. If it doesn't work, or is slow, people will just see that your page isn't working properly.
At the very least, I would get a web stress tool and performance test your web service to at least the traffic levels you expect to get at peaks, and likely beyond this. When does it break (if ever?), when does it start to slow down? These are good metrics to know.
Other options to look at: perhaps you can get daily batches of data from the web service to a local database and hit the database for your web site. Then, if for some reason the web service is down or slow, you could use the most recently obtained data (if this is feasible for your data).
Overall, it should be doable, but you want to understand and measure the risks, and explore any potential options to minimize those risks.
It's fine. There are some scalability issues, primarily with the number of calls you are allowed to make to the external web service per second. Some web services (Yahoo Shopping, for example) limit how often you can call their service and will lock out your account if you call too often. If you have a large farm and lots of traffic, you might have to throttle your requests.
Also, it's typical in these situations to use an interstitial page that forks off a worker thread to go and do the web service call and redirects to the results page when the call returns. (Think a travel site when you do search, you get an interstitial page while they call out to an external source for the flight data and then you get redirected to a results page when the call completes). This may be unnecessary if your web service call returns quickly.
I recommend you be certain to use WCF, and not the legacy ASMX web services technology as the client. Use "Add Service Reference" instead of "Add Web Reference".
One other issue you need to consider, depending on the type of application and/or data you're pulling down: security.
Specifically, I'm referring to authentication and authorization, both of your end users and of the web application itself. Where are these things handled? All in the web app? By the WS? Or maybe the front-end app is authenticating the users and flowing the user's identity to the back-end WS, allowing that to verify that the user is allowed? How do you verify this? Since many other responders here mention a local data cache on the front-end app (an EXCELLENT idea, BTW), this gets even MORE complicated: do you cache data that is allowed to userA, but not to userB? If so, how do you verify that userB cannot access data from the cache? And if the authorization is checked by the WS, how do you cache the permissions then?
On the other hand, how are you verifying that only your web app is allowed to access the WS (and an attacker doesn't directly access your WS data over the Internet, for instance)? For that matter, how do you ensure that your web app contacts the CORRECT WS server, and not a bogus one? And of course I assume that all the connection to the WS is only over TLS/SSL... (but of course also programmatically verify the cert applies to the accessed server...)
In short, it's complicated, and there are many elements to consider here... but it is NOT insurmountable.
(as far as input validation goes, that's actually NOT an issue, since this should be done by BOTH the front end app AND the back end WS...)
Another aspect here, as mentioned by @Martin, is the need for an SLA with whatever provider/hosting service you have for the NY WS, not just for performance but also to cover availability: i.e., if the server is inaccessible, how quickly do they commit to getting it back up? What happens if it's down for extended periods of time? Etc. That's the only way to legitimately transfer the risk of your availability being controlled by an externality.
