What are the conceptual differences between live reloading, hot reloading, and hot module replacement?

I've seen a lot of posts and publications about live reloading, hot reloading, and hot module replacement, referring to different practices to reflect changes in code immediately in the browser when working in the web client/FE layer.
I have a fair understanding of what these terms refer to; my only question is whether these concepts are properly defined somewhere, and what the specific differences between them are.

So I just came across the same question today and thought it would be good to share my findings:
Live Reload: watches your files and triggers an app-wide reload whenever a change is saved.
Hot Module Replacement: the same as Live Reload, with the difference that it only replaces the modules that have been modified, hence the word Replacement. The advantage of this is that it doesn't lose your app state, e.g. your inputs on your form fields, your currently selected tab, etc. Here's a full-blown explanation from another SO answer.
Lastly, Hot Reloading is just short for Hot Module Replacement.
Here's an explanatory video which you can watch to differentiate LR from HMR.

What is the standard or fast way of delivering dynamic stylesheets in ASP.NET / MVC5?

For a big single-page web application using ASP.NET, MVC5 and AngularJS, I'm looking to serve dynamically generated stylesheets. The application will have a large number of users (thousands) who can each set about 10 variables, including a few colors and logos. These colors and logos are used in the LESS stylesheet to generate lots of classes/styles. The stylesheet is compiled from about 50 .less files.
I'm planning to use the following approach (roughly sketched in code after the list):
On loading index.html, after user login, I call an ASP action to load the stylesheet <link href="Give/Me/My/Dynamic/Stylesheet/For/User/12122312">
The controller action will do its magic and gets the 10 user variables from the database
The server will then use dotless to compile the LESS to CSS
The server returns the CSS as a string
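Roughly sketched, the controller could look like this (the controller and method names are just placeholders, and the dotless @import resolution for the ~50 files would still need configuring):

using System;
using System.Collections.Generic;
using System.Text;
using System.Web.Mvc;

public class DynamicStylesheetController : Controller
{
    // GET /Give/Me/My/Dynamic/Stylesheet/For/User/12122312
    public ActionResult ForUser(int userId)
    {
        // Get the ~10 user variables from the database
        // (GetThemeVariables is a placeholder for the real data access).
        IDictionary<string, string> variables = GetThemeVariables(userId);

        // Prepend them as LESS variable declarations so the imported
        // .less files can reference @primary-color, @logo-url, etc.
        var source = new StringBuilder();
        foreach (var v in variables)
            source.AppendLine("@" + v.Key + ": " + v.Value + ";");
        source.Append(System.IO.File.ReadAllText(
            Server.MapPath("~/Content/main.less")));

        // Compile the LESS to CSS with dotless and return the string.
        string css = dotless.Core.Less.Parse(source.ToString());
        return Content(css, "text/css");
    }

    private IDictionary<string, string> GetThemeVariables(int userId)
    {
        throw new NotImplementedException(); // database call goes here
    }
}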
This approach will work, but I'm afraid the performance won't be good. My questions:
Is dotless the fastest way to compile LESS?
How long will the database call + compiling take?
Is this the way other applications do this?
I'm also looking for a way to cache the CSS request - maybe by checking whether there has been a change since the last time the CSS was compiled.
What will not work:
Compile the LESS to CSS once and then substitute the variables in the CSS (which would save a compile every time). Some of the colors in the stylesheet are derived from one of the variables (e.g. lighten(#333, 0.3)), so simple substitution isn't possible.
Use a static stylesheet and then override these in the head of the HTML doc. There will be a whole lotta custom styles involved!
Some other thoughts:
This solution seems to do the same, but compiles on build.
This solution saves the variables to a static file and uses that for compiling. This saves a database call, but the directory gets crowded with lots of users.
Your planned approach sounds fine. As far as your specific questions go:
Is dotless the fastest way to compile less?
"Fastest" is relative and debatable. The only way to know would be to run this and any available alternatives on your production machine using some sort of benchmarking. Even then, outside factors such as how much load the server is getting, how many requests are being handled simultaneously, etc. can affect those benchmarks. For example, maybe one solution is "faster" handling an ideal scenario, but has a large overhead that causes it to run much slower than another solution when the server is actually being taxed. Overall, it's impossible for anyone to give you any sort of definite answer to this, and really, it's probably too early to even be that concerned about the question. If it becomes a problem in production, then you can start investigating alternatives.
How long will the databasecall + compiling take?
Also completely impossible for anyone to give a definitive answer to. There are way too many variables involved in that relatively simple question. What database are you using? What version? What are the specs of the server it's running on? What else is running on that server? How have you configured the database server? What kind of query are you running? How many tables are involved? What's the size of the resultset being returned? What type of network infrastructure is in place? What's your latency? How capable is your network infrastructure at handling load? There are probably more questions I could ask if I thought long enough, and that's just about the database call portion. I don't expect answers to those questions; I'm merely trying to point out 1) there's no way anyone can answer that for you and 2) you're going to have to do a lot of research to come up with those answers yourself.
Is this the way other applications do this?
This is highly speculative. First, it assumes this is somewhat common, when it's probably anything but. In my 20-some-odd years of doing web development, I've yet to encounter a scenario where I needed a dynamic stylesheet. Granted, for a large part of those years stylesheets didn't even exist or at least weren't heavily used yet and just because I haven't had a need doesn't mean there's not still a perfectly valid business-case for this. I understand the desire to want to find an accepted pattern or best practice to follow, but the sample set here is probably so small that no such thing exists. Trust your gut. Build things in a way that makes sense. Then test, refine and refactor. That's really the best advice I can give you.
I'm looking if there's a way to cache the css request - maybe check if there has been a change since the last time the css was compiled.
This one is pretty easy. Just make sure to set the appropriate response headers before returning your response. Expires is really your go-to here. If the stylesheet virtually never changes for the user, then you can set a far-future Expires header, and the client's browser will cache it, sparing all this infrastructure from having to do its thing again for a while. If changes can happen at a whim (any time the user updates a setting, they need a new version), then you can still use a far-future Expires header and employ a cache-busting querystring param that forces the browser to get a fresh copy. A good choice might be adding the last-modified date of the settings when rendering the link for the stylesheet. If the user hasn't modified anything, then the date won't change and the original cached version will be used. But if the date has changed, it will look like a new URL to the browser, and it will be fetched fresh.
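A minimal sketch of both halves, assuming MVC (BuildCssForUser stands in for the compile pipeline from the question, and the v token is the settings' last-modified ticks):

using System;
using System.Web;
using System.Web.Mvc;

public class DynamicStylesheetController : Controller
{
    public ActionResult ForUser(int userId)
    {
        // Far-future caching is safe here because the <link> URL carries
        // a cache-busting token that changes whenever the settings do:
        //   <link rel="stylesheet"
        //         href="/DynamicStylesheet/ForUser/12122312?v=635590000000000000" />
        Response.Cache.SetCacheability(HttpCacheability.Private); // per-user CSS
        Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1));
        Response.Cache.SetMaxAge(TimeSpan.FromDays(365));

        string css = BuildCssForUser(userId); // dotless compile from the question
        return Content(css, "text/css");
    }

    private string BuildCssForUser(int userId)
    {
        throw new NotImplementedException(); // placeholder
    }
}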

How to test localized UI in ASP.NET WebForms

I am trying to come up with a way to automate testing of localized UI in ASP.NET WebForms. Basically I have a button that toggles the current locale, and code that populates the right text from a resource file. The problem is how to test it.
One approach is to use BDD, in the form of:
As a Spanish speaking user
I want to switch to Spanish
So that I can use the site more comfortably
Scenario Outline:
... a bunch of steps to get each possible string (labels, buttons, messages, etc)
Another approach is to use TDD in the form of row-based tests and check each property (which in WebForms is not trivial).
The first approach forces repeating existing scenarios; the second is very difficult and unclear.
How do people test localization?
Well I am in the same boat at this moment...
What I am trying to do is to extract the user-visible strings used by the automated tests into a swappable block (in my case this is a .NET resource file). The idea is to have different machines (or VMs, or swap at runtime) and run the same suite across different localized versions of the app.
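As a rough sketch of what a test against that swappable block might look like (the resource class and UI-driver helper here are made-up names):

using System;
using System.Globalization;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class LoginPageLocalizationTests
{
    [Test]
    public void LoginButton_ShowsLocalizedCaption()
    {
        // Each machine/VM/run points the suite at one culture.
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("es-ES");

        // The expected value comes from the swappable resource file,
        // not from a hard-coded "Iniciar sesión" literal, so the same
        // test runs unchanged against every localized version.
        string expected = MyApp.Resources.Labels.LoginButton; // assumed resource class

        string actual = RenderLoginPageAndReadButtonText(); // assumed UI driver
        Assert.AreEqual(expected, actual);
    }

    private string RenderLoginPageAndReadButtonText()
    {
        // placeholder for the WebForms UI automation (e.g. Selenium/WatiN)
        throw new NotImplementedException();
    }
}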
That leaves the switch-language feature (which we don't support at the moment): you could test that by exercising the switch behavior and doing a cursory check in a test.
Finally, you really need a set of human eyes to ensure everything has been localized and is accessible (e.g. not clipped and such). There are other aspects too that can't be automated, e.g. the use of colors to signal alarms.
To ensure there are no hard-coded user-visible strings, create a junk resource file with junk characters, wire it to the app, and manually breeze through all the screens periodically (every 2 weeks). If you still see English strings, something still resides outside the resource file. Once everything is in the resource file, you still need someone who speaks the language to ensure that the localized strings appear correctly and match the context in which they are shown.
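The junk file itself can be generated mechanically rather than maintained by hand; a minimal sketch (file names are placeholders; ResXResourceReader/Writer live in the System.Windows.Forms assembly):

using System.Collections;
using System.Resources;

class PseudoLocalizer
{
    static void Main()
    {
        using (var reader = new ResXResourceReader("Strings.resx"))
        using (var writer = new ResXResourceWriter("Strings.junk.resx"))
        {
            foreach (DictionaryEntry entry in reader)
            {
                string value = entry.Value as string;
                // Wrap every string in loud markers: anything that shows
                // up on screen without them is hard-coded in the page.
                writer.AddResource((string)entry.Key,
                    value == null ? entry.Value : "[!! " + value + " !!]");
            }
        }
    }
}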

Best Practices for Passing Data Between Pages

The Problem
In the stack that we re-use between projects, we are putting a little bit too much data in the session for passing data between pages. This was good in theory because it prevents tampering, replay attacks, and so on, but it creates as many problems as it solves.
Session loss itself is an issue, although it's mostly handled by implementing Session State Server (or by using SQL Server). More importantly, it's tricky to make the back button work correctly, and it's also extra work to create a situation where a user can, say, open the same screen in three tabs to work on different records.
And that's just the tip of the iceberg.
There are workarounds for most of these issues, but as I grind away, all this friction gives me the feeling that passing data between pages using session is the wrong direction.
What I really want to do here is come up with a best practice that my shop can use all the time for passing data between pages, and then, for new apps, replace key parts of our stack that currently rely on Session.
It would also be nice if the final solution did not result in mountains of boilerplate plumbing code.
Proposed Solutions
Session
As mentioned above, leaning heavily on Session seems like a good idea, but it breaks the back button and causes some other problems.
There may be ways to get around all the problems, but it seems like a lot of extra work.
One thing that's very nice about using session is the fact that tampering is just not an issue. Compared to passing everything via the unencrypted QueryString, you end up writing much less guard code.
Cross-Page Posting
In truth I've barely considered this option. I have a problem with how tightly coupled it makes the pages -- if I start doing PreviousPage.FindControl("SomeTextBox"), that seems like a maintenance problem if I ever want to get to this page from another page that maybe does not have a control called SomeTextBox.
It seems limited in other ways as well. Maybe I want to get to the page via a link, for instance.
QueryString
I'm currently leaning towards this strategy, like in the olden days. But I probably want my QueryString to be encrypted to make it harder to tamper with, and I would like to handle the problem of replay attacks as well.
There's an article about this on 4 Guys from Rolla.
However, it should be possible to create an HttpModule that takes care of all this and removes all the encryption sausage-making from the page. Sure enough, Mads Kristensen has an article where he released one. However, the comments make it sound like it has problems with extremely common scenarios.
Other Options
Of course this is not an exhaustive look at the options, but rather the main options I'm considering. This link contains a more complete list. The ones I didn't mention, such as Cookies and the Cache, are not appropriate for the purpose of passing data between pages.
In Closing...
So, how are you handling the problem of passing data between pages? What hidden gotchas did you have to work around, and are there any pre-existing tools around this that solve them all flawlessly? Do you feel like you've got a solution that you're completely happy with?
Thanks in advance!
Update: Just in case I'm not being clear enough, by 'passing data between pages' I'm talking about, for instance, passing a CustomerID key from a CustomerSearch.aspx page to Customers.aspx, where the Customer will be opened and editing can occur.
First, the problems you are dealing with relate to handling state in a stateless environment. The struggles you are having are not new, and it is probably one of the things that makes web development harder than Windows development or the development of an executable.
With respect to web development, you have five choices, as far as I'm aware, for handling user-specific state which can all be used in combination with each other. You will find that no one solution works for everything. Instead, you need to determine when to use each solution:
Query string - Query strings are good for passing pointers to data (e.g. primary key values) or state values. Query strings by themselves should not be assumed to be secure even if encrypted because of replay. In addition, some browsers have a limit on the length of the url. However, query strings have some advantages such as that they can be bookmarked and emailed to people and are inherently stateless if not used with anything else.
Cookies - Cookies are good for storing very tiny amounts of information for a particular user. The problem is that cookies also have a size limitation after which it will simply truncate the data so you have to be careful with putting custom data in a cookie. In addition, users can kill cookies or stop their use (although that would prevent use of standard Session as well). Similar to query strings, cookies are better, IMO, for pointers to data than for the data itself unless the data is tiny.
Form data - Form data can carry quite a bit of information, though at the cost of post times and in some cases reload times. ASP.NET's ViewState uses hidden form variables to maintain information. Passing data between pages using something like ViewState has the advantage of working more nicely with the back button, but can easily create ginormous pages which slow down the experience for the user. In general, the ASP.NET model is not built around cross-page posting (although it is possible) but around posts back to the same page, navigating to the next page from there.
Session - Session is good for information that relates to a process with which the user is progressing or for general settings. You can store quite a bit of information into session at the cost of server memory or load times from the databases. Conceptually, Session works by loading the entire wad of data for the user all at once either from memory or from a state server. That means that if you have a very large set of data you probably do not want to put it into session. Session can create some back button problems which must be weighed against what the user is actually trying to accomplish. In general you will find that the back button can be the bane of the web developer.
Database - The last solution (which again can be used in combination with others) is that you store the information in the database in its appropriate schema with a column that indicates the state of the item. For example, if you were handling the creation of an order, you could store the order in the Order table with a "state" column that determines whether it was a real order or not. You would store the order identifier in the query string or session. The web site would continue to write data into the table to update the various parts and child items until eventually the user is able to declare that they are done and the order's state is marked as being a real order. This can complicate reports and queries in that they all need to differentiate "real" items from ones that are in process.
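To make the last option concrete, the mechanics are just a status flag; a sketch with invented table/column names:

using System.Data.SqlClient;

static class OrderWorkflow
{
    // The Orders row has existed since the first page of the process,
    // with State = 'Draft'; only its id traveled via query string or
    // session. Finishing the process just flips the state.
    public static void SubmitOrder(SqlConnection connection, int orderId)
    {
        using (var cmd = new SqlCommand(
            "UPDATE Orders SET State = 'Submitted' WHERE OrderId = @id",
            connection))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.ExecuteNonQuery();
        }
        // Reports and queries must now filter on State = 'Submitted'
        // to exclude in-process rows.
    }
}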
One of the items mentioned in your later link was Application Cache. I wouldn't consider this to be user-specific since it is application wide. (It can obviously be shoe-horned into being user-specific but I wouldn't recommend that either). I've never played with storing data in the HttpContext outside of passing it to a handler or module but I'd be skeptical that it was any different than the above mentioned solutions.
In general, there is no one solution to rule them all. The best approach is to assume on each page that the user could have navigated to that page from anywhere (as opposed to assuming they got there by using a link on another page). If you do that, back button issues become easier to handle (although still a pain). In my development, I use the first four extensively and on occasion resort to the last solution when the need calls for it.
Alright, so I want to preface my answer with this: Thomas clearly has the most accurate and comprehensive answer so far for people starting fresh. This answer isn't in the same vein at all. My answer is coming from a "business developer's" standpoint. As we all know too well, sometimes it's just not feasible to spend money re-writing something that already exists and "works"... at least not all in one shot. Sometimes it's best to implement a solution which will let you migrate to a better alternative over time.
The only thing I'd say Thomas is missing is client-side JavaScript state. Where I work, we've found customers are coming to expect "Web 2.0"-type applications more and more. We've also found these sorts of applications typically result in much higher user satisfaction. With a little practice, and the help of some really great JavaScript libraries like jQuery (we've even started using GWT and found it to be AWESOME), communicating with JSON-based REST services implemented in WCF can be trivial. This approach also provides a very nice way to start moving towards a SOA-based architecture, and clean separation of UI and business logic.
But I digress.
It sounds to me as though you already have an application, and you've already stretched the limits of ASP.NET's built-in session state management. So... here's my suggestion (assuming you've already tried ASP.NET's out-of-process session management, which scales significantly better than the in-process/on-box session management, and it sounds like you have because you mentioned it): NCache.
NCache provides you with a "drop-in" replacement for ASP.NET's session management options. It's super easy to implement, and could "band-aid" your application more than well enough to get you through - without any significant investment in refactoring your existing codebase immediately.
You can use the extra time and money to start reducing your technical debt by focusing new development on things with immediate business-value - using a new approach (such as any of the alternatives offered in the other answers, or mine).
Just my thoughts.
Several months later, I thought I would update this question with the technique I ended up going with, since it has worked out so well.
After playing with more involved session state handling (which resulted in a lot of broken back buttons and so on) I ended up rolling my own code to handle encrypted QueryStrings. It's been a huge win -- all of my problem scenarios (back button, multiple tabs open at the same time, lost session state, etc) are solved and the complexity is minimal since the usage is very familiar.
This is still not a magic bullet for everything but I think it's good for about 90% of the scenarios you run into.
Details
I built a class called CorePage that inherits from Page. It has methods called SecureRequest and SecureRedirect.
So you might call:
SecureRedirect(String.Format("Orders.aspx?ClientID={0}&OrderID={1}", ClientID, OrderID))
CorePage parses out the QueryString and encrypts it into a QueryString variable called CoreSecure. So the actual request looks like this:
Orders.aspx?CoreSecure=1IHXaPzUCYrdmWPkkkuThEes%2fIs4l6grKaznFGAeDDI%3d
If available, the currently logged in UserID is added to the encryption key, so replay attacks are not as much of a problem.
From there, you can call:
X = SecureRequest("ClientID")
Conclusion
Everything works seamlessly, using familiar syntax.
Over the last several months I've also adapted this code to work with edge cases, such as hyperlinks that trigger a download - sometimes you need to generate a hyperlink on the client that has a secure QueryString. That works really well.
Let me know if you would like to see this code and I will put it up somewhere.
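In the meantime, here's a stripped-down sketch of the general shape (illustrative only - simplified key handling, no error checking, and not the exact code):

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Web;
using System.Web.UI;

public class CorePage : Page
{
    // Simplified: hash an app secret plus (when available) the logged-in
    // user's identity into an AES key, so another user's URL won't decrypt.
    private byte[] Key
    {
        get
        {
            string secret = "app-secret" +
                (User.Identity.IsAuthenticated ? User.Identity.Name : "");
            using (var sha = SHA256.Create())
                return sha.ComputeHash(Encoding.UTF8.GetBytes(secret));
        }
    }

    protected void SecureRedirect(string url)
    {
        int q = url.IndexOf('?');
        string path = q < 0 ? url : url.Substring(0, q);
        string query = q < 0 ? "" : url.Substring(q + 1);
        Response.Redirect(path + "?CoreSecure=" +
            HttpUtility.UrlEncode(Encrypt(query)));
    }

    protected string SecureRequest(string key)
    {
        string plain = Decrypt(Request.QueryString["CoreSecure"]);
        return HttpUtility.ParseQueryString(plain)[key];
    }

    private string Encrypt(string plain)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = Key;
            aes.GenerateIV();
            using (var enc = aes.CreateEncryptor())
            {
                byte[] data = Encoding.UTF8.GetBytes(plain);
                byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
                // Prepend the IV so Decrypt can recover it.
                return Convert.ToBase64String(aes.IV.Concat(cipher).ToArray());
            }
        }
    }

    private string Decrypt(string cipherText)
    {
        byte[] all = Convert.FromBase64String(cipherText);
        using (var aes = Aes.Create())
        {
            aes.Key = Key;
            aes.IV = all.Take(16).ToArray();
            using (var dec = aes.CreateDecryptor())
                return Encoding.UTF8.GetString(
                    dec.TransformFinalBlock(all, 16, all.Length - 16));
        }
    }
}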
One last thought: it's weird to accept my own answer over some of the very thoughtful posts other people put on here, but this really does seem to be the ultimate answer to my problem. Thanks to everyone who helped get me there.
After going through all the above scenarios and answers, and this link on data passing methods, my final advice would be:
COOKIES for:
ENCRYPT[userId's]
ENCRYPT[productId]
ENCRYPT[xyzIds..]
ENCRYPT[etc..]
DATABASE for:
datasets BY COOKIE ID
datatables BY COOKIE ID
all other large chunks BY COOKIE ID
My advice also depends on the statistics below and the details in this link on data passing methods:
I would never do this. I have never had any issues storing all session data in the database, loading it based on the user's cookie. It's a session as far as anything is concerned, but I maintain control over it. Don't give up control of your session data to your web server...
With a little work, you can support sub sessions, and allow multi-tasking in different tabs/windows.
As a starting point, I find that critical data elements, such as a Customer ID, are best put into the query string for processing. You can easily track/filter bad data coming off of these elements, and it also allows for some integration with e-mail or other related sites/applications.
In a previous application, the only way to view an employee or a request record involving them was to log into the application, do a search for the employee or do a search for recent records to find the record in question. This became problematic and a big time sink when somebody from a related department needed to do a simple view on records for auditing purposes.
In the rewrite, I made both the employee Ids and request Ids available through basic URLs of "ViewEmployee.aspx?Id=XXX" and "ViewRequest.aspx?Id=XXX". The application was set up to A) filter out bad Ids and B) authenticate and authorize the user before allowing them onto these pages. What this allowed the primary application users to do was to send simple e-mails to the auditors with a URL in the e-mail. When they were in a big hurry, during their bulk processing time, they were able to simply click down a list of URLs and do the appropriate processing.
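The guard in each page's code-behind is straightforward; a sketch of A) and B) (CanViewEmployee and BindEmployee are placeholders for your own security and data access):

using System;
using System.Web.UI;

public partial class ViewEmployee : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        int id;
        // A) filter out bad Ids before touching the database
        if (!int.TryParse(Request.QueryString["Id"], out id) || id <= 0)
        {
            Response.Redirect("~/NotFound.aspx");
            return;
        }

        // B) authenticate and authorize before showing the record
        if (!User.Identity.IsAuthenticated ||
            !CanViewEmployee(User.Identity.Name, id))
        {
            Response.Redirect("~/AccessDenied.aspx");
            return;
        }

        if (!IsPostBack)
            BindEmployee(id);
    }

    private bool CanViewEmployee(string userName, int employeeId)
    {
        throw new NotImplementedException(); // your authorization rules
    }

    private void BindEmployee(int employeeId)
    {
        throw new NotImplementedException(); // your data access / display
    }
}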
Other session-related data, such as modification dates and maintaining the "state" of the user's interaction with the application, gets a little more complex, but hopefully this provides a starting point for you.

Does anyone use Iron Speed Designer for rapid ASP.NET development?

Visual Studio is pretty good but doesn't create stored procedures automatically. Iron Speed Designer supposedly does. But is it any good?
I have used Ironspeed extensively for the past two years for most of our ASP.NET forms-over-data projects.
It works. Does several things well: stored procs, fast layout of table browse and CRUD screens, fast layout of single record CRUD screens. It manages the round-trip (or half-round trip) process decently, detecting changes in your back end db schema and updating its data access layer, then making the changed columns available for you to alter your UI (in record or table control panels). ISD (as they call it) does an excellent job in making security management for your app pretty painless, even down to the control level (if you use ISD's subclassed versions of asp.net controls). Final plus, not a small one, is the CSS-based theme control (easy to change to a variety of themes, easy to customize a particular theme, and not even too bad to build your own theme variant by forking an existing one you like). Depending upon whether you let ISD create your stored procs in the code base or the database, changing DB's at run time can be a piece of cake.
Fairly active forum with a core group of helpful contributors. You can probably avoid the paid tech support through the forum.
Okay, the down sides. Creates fairly large code conglomerations, being a three tiered architecture. As Galwegian says, like any framework, you've got the velvet handcuffs (get your mind out of the gutter if you are thinking about anything other than code limitations and conventions!). The velvet handcuffs are the page and control model, the data layer, lack of a business object/class capability per se, the postback model, and the temptation to make your user GUI look like THEIR user GUI that comes out of the box because it is so darned easy and convenient.
ISD builds a basic page by combining an HTML template (into which you place ISD-specific code generation tags and any other tags, etc., using the ISD GUI or by hand). The page model relies upon a code-behind page created from a piece of code template. The base classes are almost completely overridable, so that you can override all of the default functions, regenerate the application and not lose your overrides. The database controls live in the page container, but have their own class definitions (i.e., their code-behind) in specific /app_code files. Again, each control type has its own base class with pretty completely overridable methods. A single record control (showing a single db record) is pretty simple. A table, showing several records, has a table class and a table row class. The ISD website (www.ironspeed.com/support) has good documentation of the ISD model as a whole.
So, where are the problems in this model?
1. Easy and tempting to live with their out-of-the-box GUI. Point ISD at your database, pick the tables you want to have it turn into pages, tell it the kinds of pages, give it a thematic style, and five minutes later you're viewing the application. Cool. But, it is very easy to forget that their user GUI is probably not what your user wants to see. So, be prepared to think for yourself and tinker with the GUI thus created. Not hard to do, and you can use VS 2005 to help you.
2. Business objects. You could put together your own business objects, but it would be difficult and you would get no help from ISD. ISD does a LOT of building of simple validation and checking (appropriate look-up values, ranges, lengths, etc.). ISD lets you build custom queries, but these are read-only. It is smart enough (and you can override the write from a page in any case) to let you take a one-to-many view and write it back to the database (you'd probably override the default base method, but it isn't that hard to do). However, when you get into serious dependency checking, ISD is still really about tables and not business objects. So, you're going to write some code.
If you are smart, you'll write it once, store it in app_code somewhere, and use it by calling it from an overridden method in your table or record controls. If you are like most of us, you'll first spaghetti it into one of the code-behind classes above, and then forget you did so, or have a copy in each of the 10 pages that manipulates customer data. In my world, that has usually meant 5 identical functions and 5 that are all different (even though they are all supposed to be the same). ISD makes it tempting to order marinara, because the model lends itself to spaghetti code. Of course, you can completely prevent this, but you gotta learn the ISD model to determine the best way to do it on your project.
3. Page state and postbacks. Although ISD is quite open about this problem and tells users not to just take the default of returning the whole asp.net page state in the postback stream (cache on the server instead), the default is to return the whole page. Can make for some BIG pages. Which makes users think S L O W. As I said, you can manipulate this. But, what newbie is going to get this when it is SO tempting to just point, click, and boom - instant application. Your manager is now off your back because her product inventory table is "on the web" with a cool search and edit GUI (of 400kb state pages if you've gone a bit nuts and have just taken the default behaviors of ISD). Great in-house, but the customers in the real world....
Again, knowledge is the key. You can fix this, but you need to know you SHOULD.
4. Database read/write postbacks. No big problem here, but you also need to know that the model is to fetch only the data used at the moment. If your table shows 1000 records in 50-record increments, when you go from records 1-50 to 51-100, you will post back and hit the database again. This keeps data current, but increases server traffic.
Overall: Try the demo version. Point it at something simple that you really want to turn into an ASP.NET application. Build maybe three tables. Then dissect it using the above as a guide. See what YOU think and post back to this question.
I have used it for convenience for a very small project. It did what I wanted and saved me a couple of days work.
The main problem I found was when it came to customising or extending the generated project. You have to spend quite a bit of time trying to understand Ironspeed's way of doing things which, I'll admit, is not my way.
I'd use it again for a small project if I knew in advance I wouldn't have to customise it much after.
If stored procedure generation is all you are after, CodeSmith is a decent option at a fraction of the cost of IronSpeed. There are several sproc templates available, and you can create your own or tweak an existing one if that is what you need. You can also gen .NET code to your heart's content with CodeSmith. Tons of business class templates already exist for this.
IronSpeed's value is not in the sproc generation, but in the RAD features. I agree with #Galwegian... IronSpeed is OK for mock ups or very simple apps, not so good at all if you need to do any customization.
You may want to check out Evolutility CRUD framework. It provides some of the same features (limited to CRUD) and is open source.
IronSpeed has been great (out-of-the-box) at helping me develop data-driven corporate Intranet applications. While the code model takes a little getting used to, it is effective at maintaining a nice three-tier app. While the page templates can appear garish compared to 2010's web-design, it gets the job done, when you need function over form.
Iron Speed Designer is great for simple CRUD type web applications. You can find some useful information on our web site http://www.dotnetarchitect.co.uk/

Is Wiki Content Portable?

I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on IIS/ASP.NET down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an XML based format on one site and import it into another. Is there something similar with wikis?
The correct answer is ... "it depends".
It depends on which wiki you're using or planning to use. I've used various ones over the years: MoinMoin was OK; it used files rather than a database, and Ubuntu seem to like it. MediaWiki everyone knows about, and JAMWiki is a Java clone(ish) of MediaWiki that aims to be markup-compatible with it. Both use databases, and you can generally connect whichever database you want; JAMWiki is pre-configured to use an internal HSQLDB instance.
I recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages, and this was probably 90% handled by a tiny Perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance, being recipes for the missus) ;-)
I also recently setup a Mediawiki instance for work and that took all of about 8 minutes to do. So that'd be my choice.
To answer your question, I don't believe that there's such a standard as WikiML, as Till called it.
As strange as it sounds, I've investigated screen scraping a wiki for a co-worker to help him port it to another wiki engine. It turned out that screen scraping would have been the easier, quicker, and more efficient approach for moving this particular file-based wiki to another one or a CMS.
Given the context in which you wrote the question, I would bite the bullet now, pay the little extra for a Windows-hosted account, and put ScrewTurn Wiki on it. You've got the option of using a file-based or SQL Server-based back end for it, but because one of your requirements is low cost, I'm guessing you would use the file-based back end now on a cheaper hosted account, and then you can always upscale the back end to SQL Server later.
I haven't heard of WikiML.
I think your biggest obstacle is gonna be converting one wiki markup to another. For example, some wikis use Markdown (which is what Stack Overflow uses), others use another markup syntax (e.g. BBCode), etc. The bottom line is - assuming the contents are stored in a database - it's not impossible to export and parse them to make them "fit" in another system. It might just be a pain in the ass.
And if the contents are not in a database, it's gonna be a royal pain in the ass. :D
Another solution would be to stay with the same system. I am not sure what the reason is for changing the technology later on. It's not like a growing project requires IIS/ASP.NET all of a sudden. (It might just be the other way around.) But if you could stick with PHP for a while, for example, you could also run that on IIS.
