Can I use MiniProfiler to instrument an ASP.NET MVC WebApi website?

The ASP.NET MVC website I'm working on has some (Controller-derived) "user" pages and some (ApiController-derived) "api" pages.
The site uses MiniProfiler to instrument the "user" pages, and I really like what it does. I'd like to have the same or similar functionality in the "api" pages - specifically, a record of the SQL statements that were executed, and how long everything took.
I saw this link which looked promising, where the URL of the entry point is simply entered into the browser address bar, but that's using the default view that comes out of the box with ASP.NET MVC WebApi. My own URLs return an XML document (or JSON response).
Also, I'd prefer something that will allow me to get away from the browser, since my real-life API calls are initiated by another program, and I'd like to be able to record information about a whole session rather than just a single request.
Any advice?

You can have MiniProfiler log its results to a database instead of discarding them when the request ends. At that point you'll be able to look back at performance over time (against a session or an endpoint).
Add:
MiniProfiler.Settings.Storage = new SqlServerStorage("connection string here");
to your settings and it should start logging to the database.
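A fuller sketch, assuming the older MiniProfiler v2/v3-style API (StackExchange.Profiling): wiring Start/Stop in Global.asax profiles ApiController requests too, not just MVC pages, and SqlServerStorage persists each result so you can review a whole session later.

using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Persist results instead of keeping them only in memory.
        MiniProfiler.Settings.Storage =
            new SqlServerStorage("connection string here");
    }

    protected void Application_BeginRequest()
    {
        MiniProfiler.Start(); // profiles API requests as well as pages
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop(); // the finished profile is saved via Settings.Storage
    }
}

With storage in place, you can query the MiniProfiler tables directly to aggregate SQL timings across many API calls, which covers the "whole session rather than a single request" requirement without touching a browser.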

Related

Fresh response vs Cached response in Asp.net

With my recent development work, I need a way to determine whether the current response received came from the cache or is a fresh response the server has just sent. This is because there is some JavaScript code that needs to be executed for every fresh response, NOT for every fresh user.
You may all agree that showing the JavaScript code that will be executed on every fresh response won't add anything meaningful to my question, since it's totally irrelevant and not connected with the way a server response is sent.
So, is there any way to differentiate whether the response came from the cache or is a fresh copy sent by the server?
You should consider creating a custom OutputCacheProvider that takes the place of the built-in cache provider used in MVC.
Some links that might help:
MSDN Article: Building and Using Custom OutputCache Providers in ASP.NET
Creating a Custom Output Cache Provider in ASP.NET 4
Custom Output Caching with MVC3 and .NET 4.0 – Done Properly!
Within your provider, you can reuse the same functionality as the regular output cache provider. In the Get() method, you can add content to the item returned from the cache to indicate that it was in fact retrieved from cache (you will want to experiment with this, making sure that you only add it to the items you want, and in a way that doesn't corrupt the output).
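A minimal sketch of such a provider, assuming you back it with System.Runtime.Caching.MemoryCache and register it in web.config. The "from cache" marker shown in Get() is illustrative only; output cache entries are internal ASP.NET types, so mutating the entry itself is risky, and stashing a flag in HttpContext.Items is one safer option.

using System;
using System.Runtime.Caching;
using System.Web;
using System.Web.Caching;

public class FlaggingOutputCacheProvider : OutputCacheProvider
{
    // Backing store for cached output (needs a reference to System.Runtime.Caching).
    private static readonly MemoryCache Store = MemoryCache.Default;

    public override object Get(string key)
    {
        object entry = Store.Get(key);
        if (entry != null && HttpContext.Current != null)
        {
            // The entry came from cache - flag it where downstream code can see it.
            HttpContext.Current.Items["FromOutputCache"] = true;
        }
        return entry;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Per the OutputCacheProvider contract, Add must not overwrite an
        // existing entry; it returns the one already cached instead.
        object existing = Store.Get(key);
        if (existing != null)
            return existing;
        Store.Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        Store.Set(key, entry, utcExpiry); // overwrite unconditionally
    }

    public override void Remove(string key)
    {
        Store.Remove(key);
    }
}

It would then be registered along these lines (the type and provider names are hypothetical):

<caching>
  <outputCache defaultProvider="FlaggingProvider">
    <providers>
      <add name="FlaggingProvider" type="MyApp.FlaggingOutputCacheProvider" />
    </providers>
  </outputCache>
</caching>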

Scraping ASP.NET with Python and urllib2

I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it working at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (including the ViewState, etc.). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: you enter your username/password and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there are a few onClick events on said Login button (most of which are just for aesthetics), but the one in question handles validation. It does some rudimentary checks before sending things off to the server side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain with a POST carrying form data with your username and password, among other things. Then there is some sort of URL rewrite or redirect that takes you to a content page at url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out?
For anyone else that might be in a similar predicament in the future:
I'd just like to note that I've had a lot of success with a Greasemonkey user script in Chrome to do all of my scraping and automation. I found it to be a lot easier than Python + urllib2 (at least for this particular case). The user scripts are written in 100% Javascript.
When scraping a web application, I use either:
1) Wireshark ... or...
2) A logging proxy server (that logs headers as well as payload)
I then compare what the real application does (in this case, how your browser interacts with the site) with the scraper's logs. Working through the differences will bring you to a working solution.
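For comparison purposes, a minimal C# sketch of what a WebForms login replay usually has to carry. The __VIEWSTATE, __EVENTVALIDATION, and __EVENTTARGET fields are standard WebForms hidden fields; the control names (txtUser, btnLogin, ...) and URL are hypothetical, and the real ones must be read from a fresh GET of the login page before posting.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class LoginReplay
{
    static async Task Main()
    {
        // Keep cookies across requests, like cookielib does in Python.
        var handler = new HttpClientHandler { UseCookies = true };
        using (var client = new HttpClient(handler))
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["__VIEWSTATE"] = "<scraped from the login page>",
                ["__EVENTVALIDATION"] = "<scraped from the login page>",
                ["__EVENTTARGET"] = "btnLogin",   // the control that "clicked"
                ["txtUser"] = "username",
                ["txtPass"] = "password",
            });
            HttpResponseMessage response =
                await client.PostAsync("http://example.com/default.aspx", form);
            // Diff this exchange (via the proxy logs) against the browser's.
            Console.WriteLine((int)response.StatusCode);
        }
    }
}

Stale or missing __VIEWSTATE / __EVENTVALIDATION values are a common reason an ASP.NET site bounces a scripted login to an error page, which is consistent with the aspxerrorpath redirect shown above.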

Strategy for developing a multi function asp.net web application

I'm about to start a new project and want some advice on how to implement.
I need a web application which contains a booking module for reserving timeslots, and a time management module which will enable employees to clock in / clock out.
If I am writing an update to the time management module, I don't want to disrupt the availability of the booking engine by releasing a new solution containing both modules.
To make things more difficult, there is some shared functionality, like common users, roles, and security.
Here's a suggestion I've gotten, which sounds a bit cruddy, but may be functional.
Write a 'container' web application which consists of basically a frame plus authentication / security features. This then has links which will load the two independently built and released web applications into the frame.
I can see that if, say, I wanted to update the time management module, I would only need to build and release it separately, and the rest of the solution would be untouched.
Any better alternatives?
Unless I am missing something, if you run ASP.NET (v2, 3, whatever) you can replace the ASPX files (including any class files) on the fly and the web server will automagically "do the right thing."
So if you wrap your "modules" in classes, you can replace those files on a whim without harming the functionality of the other (unmodified) classes.
As I re-read this, I am becoming convinced that I am misunderstanding your goal...
Sounds like what you want is something along the lines of the Composite Application Block, but for a web application (the CAB is for smart client applications).
One of the main things you would want to do is reduce and abstract the coupling between the modules as much as possible.
Keeping the session in the database would go a long way toward your ability to dynamically load modules into the application.
This would allow you to have the time management module on one server and the booking engine on another. When you update the functionality of one, you simply update that server while the other keeps serving users.
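A minimal sketch of the web.config change that implements this, assuming SQL Server session state (the connection string is a placeholder, and the session database must first be prepared with aspnet_regsql.exe -ssadd):

<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI"
                timeout="20" />
</system.web>

With session state out of process, any server in the farm can pick up a request mid-session, which is what lets the two modules live on different servers.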
Add two class libraries to your web application: one for the "booking" module and one for the "time management" module.
After compiling you will have one DLL for each module; put them in the bin folder of the web app (Visual Studio will do this for you), and then you can replace them separately when you need to.
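A minimal sketch of how the two DLLs might stay decoupled, assuming hypothetical names: a small shared contracts assembly that both module libraries reference, so Booking.dll and TimeManagement.dll can each be rebuilt on its own.

// Common.Contracts.dll - referenced by both modules
public interface IModule
{
    string Name { get; }
    void RegisterRoutes(System.Web.Routing.RouteCollection routes);
}

// Booking.dll - can be rebuilt and dropped into bin without touching the other module
public class BookingModule : IModule
{
    public string Name { get { return "Booking"; } }

    public void RegisterRoutes(System.Web.Routing.RouteCollection routes)
    {
        routes.MapPageRoute("booking", "booking/{action}", "~/Booking/Default.aspx");
    }
}

One caveat: ASP.NET recycles the application domain whenever a DLL in bin changes, so replacing one module's DLL avoids rebuilding and redeploying the other module, but it does briefly restart the whole application.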
Maybe you know this already:
Sessions are at the heart of many web problems when misunderstood.
HTTP is a connectionless protocol, which means neither side of the connection keeps track of the flow of the communication: a request simply gets a single response. Without tracking a client, how can web applications work? Assume we log in to Yahoo Mail. A single request (the filled-in login page) is sent to the server and a single response (the inbox page) returns; then what if we want to see the "Draft" folder?
To bring state to HTTP, a simple solution was added, which we know as "cookies".
Cookies are simple pieces of text sent with each request to a specified server. So on the login page, the Yahoo server sends the response with some extra text (the cookie), which the client (browser) is expected to remember and send with every new request. This way the Yahoo server (web application) can keep track of the sequence of requests. This is why we should not simply close the browser window when we are done with Yahoo, and should log out instead: on logout, the Yahoo server forgets about that cookie, and any subsequent requests carrying it are not accepted. And because Yahoo can't find out that we closed the browser, "connectionless" is a good name.
How does ASP.NET handle this?
ASP.NET simply uses a "session cookie" for any new request (a request arriving without the cookie) and lets you put your variables in the Session object on the server side. As long as we are in the same application, we can use the same session variables. What ASP.NET does behind the scenes is keep a table mapping "session id" cookies to your session variables. This is transparent to the ASP.NET programmer: we simply put a value in a session variable, like Session["Age"] = 19;, and read it back when we need it. ASP.NET takes care of the rest with session cookies, this way: you create a session variable (here "Age"), so ASP.NET needs to track this client; whatever the response is, ASP.NET adds a "session cookie" to it. The session cookie is a unique piece of text which that client should send with every subsequent request until it expires (20 minutes by default in ASP.NET). Use Firefox with the Web Developer add-on to see and manipulate cookies.
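A minimal sketch of that round trip in C# (the variable name is just the one from the example above):

// First request: no session cookie yet, so storing a value makes ASP.NET
// allocate a session and respond with: Set-Cookie: ASP.NET_SessionId=<unique id>
Session["Age"] = 19;

// A later request from the same browser carries the cookie back,
// so the same session is found and the value is still there.
int age = (int)Session["Age"];   // 19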
Related concepts: session cookies vs permanent cookies, cookie properties (domain, expiration date, ...)
how the server deals with cookies (keeping them in memory, storing them in a database, ...)

ASP Classic GET request without multithreading

We are talking about Classic ASP and NOT ASP.NET!
Let's start from the top. We are using ISAPI_Rewrite, and we would like to offer our customers dynamic control over the rewriting of URLs (giving them httpd.ini is not an option). Our thinking was that all unknown URL requests (we define this in httpd.ini) are handled by one ASP file, which makes a GET request to the selected URL (the customer maintains a key -> value table). Now, we can make a request to another page and just print the output, but we cannot make a request to our own server. As far as I am aware, ASP doesn't offer this.
We could write a .NET extension to handle this, but we are looking for other options. I know that declining .NET is a stupid thing, but it's a long story...
Is there a solution to this problem in ASP?
Have a look at Server.Execute; it allows dynamic (run-time) inclusion of other ASP files. An added bonus is that it's treated as part of the original request, so SESSION and COOKIE are all available in the included file. HOWEVER, variables defined in the caller are not available to the included page. You can circumvent this using temporary Session variables, though.
Session("variable") = "value";
Server.Execute(url);
Session.Abandon;
Response.end;
Session.Abandon will clear ALL session variables; you might want to clear them individually instead (e.g. Session.Contents.Remove("variable")).
You can make a request to your own server, but the page making the request needs to NOT have session state enabled, via the page declaration right at the top of the page:
<%@ Language=VBScript EnableSessionState=False %>
Each page locks the session object, and it's that lock which stops you making a request to your own server. If you declare that you are not going to use session state in the calling script, then it won't take the lock, and you can make the request using an XMLHTTP request, passing what you like on the query string, as POST data, and as session cookies too, so session state etc. will all still exist.
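A minimal sketch of that self-request in classic ASP, assuming MSXML 6 is installed on the server (the URL and "key" parameter are placeholders):

<%@ Language=VBScript EnableSessionState=False %>
<%
  Dim http
  Set http = Server.CreateObject("MSXML2.ServerXMLHTTP.6.0")
  http.Open "GET", "http://localhost/lookup.asp?key=" & Server.URLEncode(Request("url")), False

  ' Forward the client's cookies so the target page still sees the session
  http.SetRequestHeader "Cookie", Request.ServerVariables("HTTP_COOKIE")
  http.Send

  Response.Write http.ResponseText
%>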

How do I "single-instance" an ASP.Net AJAX web portal?

I’ve been asked if we can optionally “single-instance” our web portal. See this post on Hanselman's blog for the same idea in a WinForms app.
Suppose we have 2 shortcuts on the same client machine:
http://MyServer/MyWebPortal/Default.aspx?user=username&document=Foo
http://MyServer/MyWebPortal/Default.aspx?user=username&document=Bar
Clicking on the first shortcut would launch our web portal, log in, and display the document “Foo”. Clicking on the second shortcut should display the document “Bar” in the running instance of the web portal.
My current approach is this: in the Page_Load, the first instance creates a per-client Application variable. The second instance looks for that Application variable to see if the portal is already running on the client. If it is, the second URL is recorded in another Application variable and the second instance is forcibly exited. I’ve tried creating an ASP.NET AJAX Timer to poll the Application variable for a document to display. This sort of works, but in order to respond quickly to the second request I’ve had to set the Timer interval to 2 seconds, which makes the portal irritating to use because of the frequent postbacks.
Using my approach, is there a way for the second instance to notify the first instance to check the application variable without polling? Is there a better overall approach to this problem?
Thanks in advance
There is no way on the server side to control which browser instance your page opens up on the client. You can't force all requests to open in the same browser window.
Also, an Application scope variable is shared by all users of your application. At least make this a Session-scope variable - otherwise you would only be allowing one user to access your portal at a time!
Honestly this sounds like a silly request from someone who a) probably doesn't understand how these types of things work and b) is trying to do an end-around for users who aren't that bright and actually see a problem with having more than one instance of your portal open.
