OpenLaszlo 4.9 DHTML login servlet forwards but never loads page

I am having problems using a LoginServlet with OpenLaszlo 4.9 on Tomcat 7.
I have Tomcat configured with crossContext set to true, which allows me to work with other app contexts on the same server, specifically a login servlet. My only other app is the OpenLaszlo presentation server, LPS (lps-4.9.0).
I am using a Tomcat request filter that inspects incoming requests and looks for a particular authentication cookie, which then makes its way to the LoginServlet; the servlet forwards to the OpenLaszlo page. This was done to KEEP the cookie alive when the request filter fires again as the OpenLaszlo page loads.
All of that is working now.
There are no errors or warnings in the lps.log file or in localhost.<date>.log, but the page load goes on forever and never completes.
Could it be something I am passing along in the forwarded URL? I use at least two parameters there: lzr set to "dhtml" and lzt set to "html".
I can't even get a simple <canvas> page with a single button to load. Has anyone seen this and been able to fix it?
Since I first wrote this description, I sent another plea for help to some friends and ex-coworkers, and it updates the details of what I have discovered so far.
Here is the scenario: I am using Tomcat 7 and have installed the WAR file for OpenLaszlo 4.9.
Alongside this I created the LoginServlet hierarchy, code, and web.xml file just under
"webapps", the same level at which lps-4.9.0 is installed.
The sequence of events is the following:
1. A login page comes up that takes the username and password and sends them
off to /LoginServlet to process. Note: I have also written and registered a request filter
for Tomcat that halts traversal beyond /lps-4.9.0 and checks for proper authentication
by retrieving the cookies from requests trying to access those levels.
2. In the LoginServlet, I create a MACH COOKIE that I send along with the response,
so that the filter will allow me past the /lps-4.9.0 level. To do this I had to use a FORWARD
operation to preserve the cookie; a REDIRECT would just drop it. Since you can't
give a relative path higher than the servlet's root, I had to turn on Tomcat's "crossContext"
feature, which allows this within the same domain. I have both contexts registered
in Tomcat's conf/server.xml, I believe. Anyhow, it works: I can grab the
/lps-4.9.0 context, get a RequestDispatcher, and then use that dispatcher to FORWARD
the request/response pair to my OpenLaszlo file (the LZX file).
So it seems to get as far as LOADING the OpenLaszlo page, but when I perused the console
messages in Chrome's Developer Tools debugger, they showed that it was actually trying
to use the context of the original request (i.e. /LoginServlet), and of course that doesn't
exist. I guess when I passed along the original request/response pair, the request still carried
the FIRST context, and the relative paths to the files were derived from that.
QUESTION: Can I just copy the stuff from the original request, but change the context,
and forward THAT?  Or architecturally should I try something else?
Thanks,
C

And the answer is..... You CAN'T DO IT... Period.
BTW, the OpenLaszlo website server is DOWN, DEAD, KAPUT, NIX, GONE, NO MORE...
This will be the final project that I personally implement with the tool,
since it now has no support.
It's very sad to see something that had the right idea about development cycle times,
and about keeping client-side GUI construction simple, fast, and easy, die
from lack of interest. Say wha? It can't be because Flash was in jeopardy.
I'm pretty sure that we, as programmers, aren't so paranoid about losing our jobs
that we think we must spend lots of hours CODING an interface to keep it secret.
I'm certainly not paranoid about it. I know there is NetBeans for Swing-type
GUIs, and I've heard that GWT has adopted something similar now, so I'll
keep looking for that perfect invention and deal with what is left over.
Critical Path must have been purchased by someone else too, so the
site sponsor has no motivation to keep it alive while it dies a slow death.

Related

How to prevent vulnerability scanning

I have a web site that emails me a report about every unexpected server-side error.
Quite often (once every 1-2 weeks) somebody launches automated tools that bombard the web site with a ton of different URLs:
sometimes they (hackers?) think my site hosts phpMyAdmin and try to access vulnerable (I believe) PHP pages...
sometimes they try to access pages that are really absent but belong to popular CMSs
last time they tried to inject an invalid ViewState...
It is clearly not search engine spiders, as 100% of the requests that generated errors were requests to invalid pages.
So far they haven't done much harm; the only cost is that I need to delete a ton of server-error emails (200-300)... But at some point they could probably find something.
I'm really tired of this and am looking for a solution that will block such 'spiders'.
Is there anything ready to use? Any tool, DLLs, etc.? Or should I implement something myself?
In the second case: could you please recommend an approach? Should I limit the number of requests per IP (say, no more than 5 requests per second and no more than 20 per minute)?
P.S. My web site is written in ASP.NET 4.0.
Such bots are not likely to find any vulnerabilities in your system if you just keep the server and software updated. They are generally looking for low-hanging fruit, i.e. systems that have not been patched against known vulnerabilities.
You could build a bot trap to minimise such traffic. As soon as someone tries to access one of those non-existent pages that you know about, you could block all requests from that IP address with the same user-agent string for a while.
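A minimal sketch of that bot-trap idea for ASP.NET, assuming a Global.asax code-behind; the trap paths, cache key prefix, and one-hour block window are illustrative choices, not a vetted implementation:

    // Global.asax.cs -- block clients that touch known-bad URLs for a while.
    // Requires: using System; using System.Linq; using System.Web; using System.Web.Caching;
    private static readonly string[] TrapPaths = { "/phpmyadmin", "/wp-admin" };  // illustrative

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;
        // Key on IP + user-agent so one shared NAT address is not blocked wholesale.
        string key = "trap:" + request.UserHostAddress + "|" + request.UserAgent;

        if (HttpRuntime.Cache[key] != null)
        {
            // Already trapped: refuse cheaply without generating an error email.
            HttpContext.Current.Response.StatusCode = 403;
            HttpContext.Current.ApplicationInstance.CompleteRequest();
            return;
        }

        string path = request.Url.AbsolutePath.ToLowerInvariant();
        if (TrapPaths.Any(p => path.StartsWith(p)))
        {
            // First hit on a trap URL: remember this client for an hour.
            HttpRuntime.Cache.Insert(key, true, null,
                DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
            HttpContext.Current.Response.StatusCode = 404;
            HttpContext.Current.ApplicationInstance.CompleteRequest();
        }
    }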
There are a couple of things you can consider...
You can use one of the available Web Application Firewalls. A WAF usually has a set of rules and an analytics engine that detects suspicious activity and reacts accordingly. For example, in your case it could automatically block attempts to scan your site, recognizing them as an attack pattern.
A simpler (but not 100%) approach is to check the Referer URL (see the Referer description on Wikipedia) and reject any request that did not originate from one of your own pages; you would probably create an HttpModule for that purpose, as sketched below.
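A minimal sketch of such a module, under the assumption that every gated page is only reached from within the site; the class name and the entry-page check are illustrative, and note that the Referer header is trivially spoofed, so this is a speed bump rather than real security:

    // RefererCheckModule.cs -- reject requests whose Referer is missing or foreign.
    using System;
    using System.Web;

    public class RefererCheckModule : IHttpModule  // register under <modules> in web.config
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                HttpRequest request = app.Context.Request;
                Uri referrer = request.UrlReferrer;

                // Allow direct hits on the site root; gate everything else.
                bool isEntryPage = request.Url.AbsolutePath == "/";

                if (!isEntryPage &&
                    (referrer == null || referrer.Host != request.Url.Host))
                {
                    app.Context.Response.StatusCode = 403;
                    app.CompleteRequest();
                }
            };
        }

        public void Dispose() { }
    }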
And of course you want to be sure that your site addresses all known security issues from the OWASP Top 10 list (OWASP TOP 10). You can find a very comprehensive description of how to do this for ASP.NET here (owasp top 10 for .net book in pdf); I also recommend the blog of the author of the aforementioned book: http://www.troyhunt.com/
There's nothing you can do (reliably) to prevent vulnerability scanning; the only real option is to stay on top of any vulnerabilities and prevent their exploitation.
If your site is only used by a select few from fixed locations, you could maybe use an IP restriction.

ASP.NET MVC3 creating new sessions on ajax requests

Well, I finally had to create an account here. I've been using this site for years and have often found my answer here, but not this time.
Well, I have actually found a lot of people with similar problems, but none of their solutions have helped me.
I have started on a new MVC3 project, so it's quite simple so far. I've made a handful before, so I kinda know what I'm doing (but not quite, obviously, or why else would I be here ;-)
My problem is apparently a fairly common one: A request starts a new Session, even though the user already has one.
The most frustrating part is that it works perfectly on my hosted service but is broken on localhost.
I have done a number of things to solve this:
There is no underscore in my computer's name.
The Session contains custom data (the error only occurs after the user has logged in).
I have added the following to web.config:
<httpProtocol>
  <customHeaders>
    <clear />
    <add name="Access-Control-Allow-Origin" value="*" />
  </customHeaders>
</httpProtocol>
and this too:
<modules runAllManagedModulesForAllRequests="false" />
With InProc session state, I have tried 'cookieless' set to both true and false.
My hosts file contains nothing about localhost.
Hm. Looking at this list, I'm sure I've left some attempts out. Some on purpose too, as they were even more hopeless than the above, and born from desperation.
As mentioned, this is particularly unnerving because it works on my host. Could there be some configuration I need to tweak on the dev server (VS2010)?
I've been working from the premise that the issue is due to cross-domain security (the server thinks I'm coming from another domain).
The fail happens on this request:
url: 'http://localhost:50396/moody/changeBuilding/' + elem.selectedIndex,
It's part of the options object I use with the jQuery.ajax function.
I change the domain when uploading to the host, but only the part localhost:port, everything else in the application is identical.
I've been banging my head against this for 2 days now, and will miss my exam :-(
I'm determined to bury this 6 feet under, though.
I would be very grateful for any and all suggestions!
I change the domain when uploading to the host, but only the part localhost:port, everything else in the application is identical.
Reading the above, I imagine the session cookie isn't being sent because you're changing domains.
Let's sit back and think about how sessions work. Basically, ASP.NET holds a collection of sessions and their data. When each request comes in, ASP.NET must map that request to an existing session OR create a new session for it.
So how does ASP.NET know which session belongs to each incoming request? Or know that it needs to create a new session? The only way is for the request to contain some information, a 'key', that tells ASP.NET which session to give the request... or, in the absence of this 'key', to create a new session.
How does the request send this 'key'? Through cookies.
So if you change the domain, the cookie isn't going to be sent... and therefore ASP.NET will create a new session for the request.
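To watch this happening server-side, here is a small diagnostic sketch for Global.asax (the cookie name below is ASP.NET's default; the Debug output target is just an illustrative choice):

    // Global.asax.cs -- log every new session and whether the triggering request
    // carried a session cookie at all.
    protected void Session_Start(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;
        bool hadCookie = request.Cookies["ASP.NET_SessionId"] != null;

        System.Diagnostics.Debug.WriteLine(string.Format(
            "New session {0} for {1} {2} (session cookie sent: {3})",
            Session.SessionID, request.HttpMethod, request.RawUrl, hadCookie));
    }

If the AJAX request shows up here with no cookie, the cross-host theory is confirmed.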
Have you tried using something like Fiddler to make sure the session cookie is being sent in the AJAX request? It should be sent if the domain is the same, but it's worth checking.
Edit: This SO post on changing ports is worth reading too.
Edit: Given the new information in Charlino's comments (and the sterling detective work carried out therein), if the problem is only on your local dev machine, the easiest workaround for the localhost/127.0.0.1 issue is to manually change the browser URL from 127.0.0.1:50396 to localhost:50396 and log in again to get a new cookie; cookies are scoped to the host name, so a cookie set for localhost is not sent with requests to 127.0.0.1. Then you are good to go.

Surrender control from Pipeline to an HttpHandler

I have a problem with a website. The server is IIS 7, running ASP.NET on the .NET 4.0 CLR. We are hosting a Sitecore application; I hesitated to add that as a tag, because I feel it's more the 'why' of the question and not necessarily the underlying tech that's causing the problem.
One of the things Sitecore does is add a boatload of custom pipelines. One of these pipelines is called the LayoutResolverPipeline, which is responsible for determining the path to the layout file the requested page will use. We've come up with a terribly useful and complicated way of hosting global content items across multiple domains. Which domain will serve which items is completely configurable through the Administrator web GUI (aka the Sitecore shell). The end goal is to make it possible for our marketing/consumer experience team to run multivariate testing to find the best user experience.
To that end, we have a 'launch' page that is responsible for considering everything about the current user, everything about the current system and domain settings, and determines which experience to give the customer. For most domains, this comes down to a weighted roll of the dice - for the test results to be statistically sound, they have to be sufficiently random. It is written as an IHttpHandler and it stores its decisions in the HttpContext.Current.Session (which is accomplished by also having it implement the IRequiresSessionState interface). The decision is stored so that if the customer decides to backtrack, we don't roll the dice again and instead give them a consistent experience for the duration of their visit. The decision is carried out by the handler issuing a 302 redirect for the next page in the customer's visit.
The launch handler is defined in the web.config file in the usual way:
<system.webServer>
  <handlers>
    <add verb="*" path="launch.ashx"
         type="CMS.HttpHandlers.LaunchRequestHandler, CMS"
         name="LaunchHandler"/>
  </handlers>
</system.webServer>
We occasionally do business with partners who, for whatever reason, don't want the resulting 302 between their page and ours. They instead link directly to a certain customer experience. Over time, however, we deprecate, move, or obsolete whole user experiences, which for certain demanding and lazy partners results in lingering links to unsupported or non-existent items. We also have to handle people mis-typing, mis-remembering, mis-linking, revisiting from their browser history, or just trying random URLs.
These latter cases have resulted in some nasty exceptions in the LayoutResolverPipeline. I am trying to resolve them by falling back to the LaunchHandler when the pipeline can't figure out what to do. I have this implemented as a redirect, but I would like to invoke the LaunchHandler directly; it is going to do a 301 to a different item anyway, and multiple redirects on a single request are a costly waste of resources that I would like to avoid.
Enough background. The problem is that LayoutResolverPipeline is bound to the HttpBeginRequest portion of the IIS processing stack, which is well before the Session information is ready. This is a constraint of Sitecore's and it can't be moved without solving a whole load of other problems.
Questions:
Is there a way to pass control to a specific IHttpHandler other than redirecting to the URL it is bound to?
Is there a way to rejoin the code at a later point in the event pipeline? I suppose this would mean binding to the Application.PostAcquireRequestState event for a single request only, which sounds ludicrous.
Is there a way to acquire session state information early?
I'm of course open to suggestions for how I might be doing it completely wrong. Oh, and if you know of a more useful tag for the ASP.NET/IIS pipeline specifically, please add it; I wasn't able to find one that wasn't a red herring. Thanks!
I don't think you want to go manually invoking handlers... that sounds pretty hacky. What about using Server.Transfer() here instead of a 301 redirect? Then it's transparent on the user's end. Of course, the disadvantage is that it doesn't update the apparent URL, but you can't have that without some sort of redirect going on.
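A minimal sketch of that suggestion, assuming it runs at the point where the layout failure is detected; the handler type comes from the web.config above, everything else is illustrative. One caveat from the question still applies: at BeginRequest, session state has not yet been acquired, so the handler's IRequiresSessionState needs are not met at that point in the pipeline:

    // On layout-resolution failure, hand the request to the launch handler
    // in-process instead of bouncing the client through another redirect.
    var launchHandler = new CMS.HttpHandlers.LaunchRequestHandler();
    // preserveForm: true keeps the original query string and form values.
    HttpContext.Current.Server.Transfer(launchHandler, true);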

Who is calling my WebService?

I have a web service that is on an internal server. It can be called from any website on our network.
More and more developers are starting to use it. Currently 20+ pages use this service, and the number is growing fast. I can see someone a year from now asking which pages use this service and which methods.
I would like to log the URL of each page that uses my web service as the requests come in.
It would also be nice to know the method being called. I need to do this in a way that does not affect the client web sites. My first thought was that I could write some code in global.asax.
I have added some code to Application_BeginRequest to log the request object details, but there does not appear to be anything about the requesting URL.
What am I missing? Should I be looking at a different object?
Thanks.
Without disrupting existing users, this is going to be difficult. HttpContext.Current.Request.Url will just return the URL used to call your web service, not the web page that called it.
The closest you can get without disrupting existing apps and forcing developers to change them is to grab HttpContext.Current.Request.UserHostAddress, so you can at least log the IP of the machine calling your service.
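A minimal logging sketch along those lines for the service's Global.asax; the Trace target is illustrative. For classic .asmx calls the SOAPAction header names the method being invoked, and UrlReferrer is usually null for server-to-server proxy calls, which is why the IP is the most reliable clue without an interface change:

    // Global.asax.cs -- record who is hitting the service and which method.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;

        string caller = request.UserHostAddress;            // calling machine's IP
        string soapAction = request.Headers["SOAPAction"];  // .asmx method, if any
        string referrer = request.UrlReferrer != null
            ? request.UrlReferrer.AbsoluteUri               // usually null for proxies
            : "(none)";

        System.Diagnostics.Trace.TraceInformation(
            "{0} {1} from {2}, SOAPAction={3}, referer={4}",
            request.HttpMethod, request.RawUrl, caller, soapAction, referrer);
    }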
Beyond this, what you might want to consider is adding a parameter to your functions for "CallingApp" and then logging that in your code. That's pretty much what we did once we realized we needed to know which apps were calling our service. We actually have an application-monitoring service that assigns a GUID to every new app we develop, and we pass that GUID to every web service. It's extra work, but to us it was critical, because it lets us know which apps will be affected when we need to perform updates or take the app server down for maintenance.
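A minimal sketch of that extra-parameter idea (the service name, method, and body are all illustrative):

    // DataService.asmx.cs -- each consuming app passes its assigned GUID.
    using System.Web.Services;

    public class DataService : WebService
    {
        [WebMethod]
        public string GetData(string query, string callingApp)
        {
            // callingApp carries the GUID assigned to the consuming application.
            System.Diagnostics.Trace.TraceInformation(
                "GetData called by app {0} for '{1}'", callingApp, query);
            return query.ToUpperInvariant();  // stand-in for the real work
        }
    }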
Edit - added
As a side note, by the time we realized we needed to track this, we had already been using web services for about a year. Faced with the same problem, we created a new set of web services that included the extra field for the calling app, and then slowly went back and changed the older programs to point to the new services.
In retrospect, we wish we had known up front that we would need this, because it created a lot of extra work. I'm guessing you'll face something similar if you really want to know exactly who is calling your services.
The only thing you can probably retrieve from the consumer without changing your interface is the IP address.
If you can change the interface, you could do this e.g. by adding authentication and logging who is calling what, or by having some simple "token" principle.
However, both methods require you to change the interface and therefore break backwards compatibility, which you should never do.
By always ensuring both backward and forward compatibility, you should not need to know exactly who is calling your service, only that it is actually used.
@David Stratton
Thanks for your help. I think your suggestions were great. I actually did something very different, after your answer gave me some new ideas.
I should have mentioned that I was generating the web proxy that most of my users were using to make calls against my web service. My clients generally do NOT use the proxy that Visual Studio creates.
Here is what I did:
I generated my web proxy client again, and added calls to log the HttpContext of the client before every call. Because the proxy runs on the client, it had access to everything I needed. That allowed me to record everything about the client and the specific call they were making. I realize this would not work in most situations, but all of my clients are internal web sites.
It also had the advantage that the clients did not have to modify their code at all. I just gave them all a new DLL. Problem solved: I get all the tracking data I want, and they did not have to change anything.
I had been stuck trying to solve the problem from the web service's point of view.
I realize there is still a hole in this implementation, because nobody is forced to use my client proxy to call my service. I guess I could enforce that at some point in the future. For now, they could let Visual Studio generate a web proxy for my service, but if they do that, I guess I don't care; it is not the recommended way to call my service. I think the only one doing that is an ASP.NET 1.1 web site, and when they upgrade, they will probably switch to my generated proxy.
Without implementing some sort of authentication, there isn't a guaranteed way of knowing exactly who is calling your service; web metrics are the only way to gauge the volume of traffic hitting it.
I'm sure you already know this, but the whole point of a web service is that it shouldn't need to know or care who is calling it.
I have successfully used ...
Dim strReferrer As String = HttpContext.Current.Request.UrlReferrer.AbsoluteUri
to get the page that called my Web API 2 web service.

Prioritise ASP.NET requests

I would be surprised if this is possible, but you never know.
Is there a way in which I could prioritise ASP.NET requests? For example, if a request is NEW (coming from location X), I would like it to take priority over a request coming from a known location.
This will be running under IIS 7, so can I make use of the integrated pipeline to pre-process requests before they take threads out of the ThreadPool?
Hmmm. Any feedback welcomed, even if it's to say no!
Thanks
Duncan
I don't think what you're after is possible in the truest sense of what you're asking for, but it might be possible to 'simulate' it at the application level. John's right: requests are processed first come, first served. But you might be able to give some kind of priority to new visitors by setting a cookie for all visitors and checking whether that cookie is present before you render your homepage. If it is not present, you can assume the request is new and continue rendering your homepage (or whatever). If it is present, you might choose to redirect the visitor to another page (or perhaps a cached copy of your page).
Like I said, this isn't the 'truest' sense of what you are after, but if your homepage is particularly process-intensive right now and you want some way to separate returning visitors from new ones, this might do the trick; see the sketch below.
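A minimal sketch of that cookie test in a Web Forms code-behind (the cookie name and the cached-page path are illustrative):

    // Default.aspx.cs -- first-time visitors get the full page; returning
    // visitors are sent to a cheaper, cached copy.
    protected void Page_Load(object sender, EventArgs e)
    {
        HttpCookie seen = Request.Cookies["returning-visitor"];

        if (seen == null)
        {
            // New visitor: mark them and render the full homepage.
            var marker = new HttpCookie("returning-visitor", "1");
            marker.Expires = DateTime.UtcNow.AddDays(30);
            Response.Cookies.Add(marker);
        }
        else
        {
            // Known visitor: hand off to a cached copy of the page.
            Response.Redirect("~/CachedDefault.aspx");  // illustrative path
        }
    }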
Since you've asked, though, I'd have to ask why it is necessary in your implementation to prioritise requests as you have described. Is load on your web server a problem, making you want to appear more responsive to new customers?
Just hazarding a guess - interesting question, though! :)
Best,
Richard.
