How to execute page methods when using custom route handlers? - asp.net

When the path refers to the actual folder structure and points to the page, it's not a problem, e.g. "/Default.aspx/MyMethod". However, if "/" brings up "Default.aspx", then "/MyMethod" means something different. Is it even possible at all?
A possible solution, and probably a better one, is to use a web service, which is what I'm using at the moment.

You can add the following:
PageMethods.set_path('/yourpage.aspx');
I found this solution here.
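For completeness, PageMethods can only call public static methods in the page's code-behind that are marked as web methods, and the ScriptManager on the page needs EnablePageMethods="true". A minimal sketch of the server side, assuming a page class named _Default (the method name is just an example):

// Default.aspx.cs - minimal sketch of a callable page method.
// Requires <asp:ScriptManager EnablePageMethods="true" ... /> on the page.
using System.Web.Services;

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static string MyMethod(string input)
    {
        // Invoked from client script as PageMethods.MyMethod(input, onSuccess).
        return "Echo: " + input;
    }
}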


Convert query parameters to "pretty urls"

I have an EPiServer site with a JobDetailsPageController whose Index method takes a jobId parameter and creates a view with some details about that job. The URLs look something like this: https://hostname/<root-depending-on-site-tree>/jobs/?jobid=44.
What I would like is to have URLs of the form .../jobs/manager-position-telco-44, essentially creating a slug of the job title and appending the id. I have done this in the past using standard ASP.NET MVC attribute routing on a non-EPiServer site, but EPiServer has routing of its own that I don't know too well and can't figure out.
Also, adding non-query strings after the slash consistently sends me (no surprise) to a 404 page, so I would need to somehow customise this behaviour. I need to use EPiServer's standard routing to end up at the right "parent", but ignore the latter part (the pretty bit).
Is it possible to create such URLs on a normal page in the page tree in EPiServer? I do understand it is possible to create static routes, but this node can be moved around like any other page, so I cannot avoid EPiServer.
Please see this blog post. What you're looking for is partial routing.
#johan is right, partial routing is one way of doing this. Just wanted to add other possible solutions that might or might not match your needs.
Import data as content
Instead of serving content dynamically, you could consider importing your job ads from whatever source you have directly into the content tree as separate pages below a particular root page. That would give you a lot of benefits: pages would be cached, it would support multiple languages, editors would see content directly in EPiServer CMS, data could be adjusted manually, etc.
This would be a good solution if your data does not change often and you need to provide a way for editors to create new job ads manually as well.
Implement your own content provider
Another way to serve your dynamic data to EPiServer is to write your own custom content provider. You can find documentation here: http://world.episerver.com/documentation/Items/Developers-Guide/Episerver-CMS/7/Content-Providers/Content-Providers/
This solution requires more coding and is more complex, but it has some benefits as well. If one wanted, it would be possible not just to serve content from an external data source, but also to update that data by changing values directly in the EPiServer UI.
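Whichever approach you take (partial routing, imported content, or a content provider), you will still need to turn a job title into a slug and recover the id from the incoming URL segment. A rough sketch in plain C#, with made-up helper names, could look like this:

// Hypothetical helper: ("Manager Position, Telco", 44) -> "manager-position-telco-44"
using System.Linq;
using System.Text.RegularExpressions;

public static class JobSlugHelper
{
    public static string ToSlug(string title, int jobId)
    {
        // Lower-case the title and replace anything that isn't a letter or digit with a dash.
        var slug = new string(title.ToLowerInvariant()
            .Select(c => char.IsLetterOrDigit(c) ? c : '-')
            .ToArray());
        // Collapse runs of dashes and trim them from the ends.
        slug = Regex.Replace(slug, "-+", "-").Trim('-');
        return slug + "-" + jobId;
    }

    public static int? ParseJobId(string segment)
    {
        // The id is assumed to be whatever follows the last dash.
        var lastDash = segment.LastIndexOf('-');
        int id;
        if (lastDash >= 0 && int.TryParse(segment.Substring(lastDash + 1), out id))
            return id;
        return null;
    }
}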

How to change eZPlatform Backoffice SiteAccess?

I'm new to eZ Platform and have just been thrown into a project involving migration from eZ legacy to eZ Platform.
One problem I've just bumped into is about siteaccess and images.
Example: users have custom images defined in a var directory, let's say var/news.
Unfortunately, the back office uses the default siteaccess, and I can't find a way to modify it without triggering a redirect from /ez#login to the /login page.
I noticed that I can't change the back office siteaccess in the demo platform either.
So I was wondering whether there's a way to achieve this, or whether there's documentation that could help address this subject?
Just to be clear: you have one var directory per siteaccess, and that is the issue, right? Since /ez will use the default one, you have a problem.
I think there is no simple way to manage this use case right now; it should be possible with 2.x (ask in Slack).
One complex solution would be to put everything in one var directory and replace the different paths in the DB... :(
(or wait for 2.x)

Referencing other ASP.NET pages symbolically, with compile-time checks?

I'm noticing code accumulating in my project that looks like this:
Response.Redirect("/Foo/Bar.aspx");
This seems brittle -- if I move or rename the Bar.aspx file, I need to find the places where I've referenced it and correct those string constants, both in markup and in code-behind. It seems like there should be a better way. Something like:
Response.Redirect( MyNamespace.BarPage.GetUrl() );
In other words, let the 'stack' figure out the URL I need. Note: I know that I can consolidate references to a particular page with a hand-coded BarPage.GetUrl() method, but even that seems failure-prone.
Isn't there a better way?
The best way would be to resource them. Add a meaningful key and the URL value to the resource file, and redirect that way.
Response.Redirect(Properties.ASPXUrls.FooBar);
The problem you'll face is that there's no real inherent link between a code-behind and its code-in-front except the <%@ Page %> directive. There's no real reason a code-behind even has to have the same class name as the code-in-front's file name; it only happens because it's convention and it's how the auto-generator lays it out.
This means you're not going to find anything you can reference at compile time that even knows which .aspx the .cs links to. The closest thing you'll find is typeof(MyNamespace.BarPage).FullName, which will give you the code-behind's name, and by assuming things follow convention you could (though I don't recommend it) construct the URL for the code-in-front page it's associated with.
Personally, I think you're better off just doing a find-all for "BarPage.aspx" when you rename it and doing a little refactoring. You'll have to deal with hyperlinks in the code-in-front anyway. If BarPage.aspx represents some abstract concept (like "the login page") it may help to add a property for it, but if it's just another page with no real globally inherent meaning, I'd leave it as-is.
I'd recommend creating a static class with different properties for each of the links. That way, you only have one place to update.
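For example, that "one place to update" could be as simple as a static class of URL constants (the names here are purely illustrative):

// Hypothetical central registry of page URLs; rename a page, fix one constant.
public static class PageUrls
{
    public const string Bar = "/Foo/Bar.aspx";
    public const string Login = "/Account/Login.aspx";
}

// Usage in code-behind:
// Response.Redirect(PageUrls.Bar);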
Redirects in general are fragile, no matter how you get the name of the next page. They are also a performance problem.
If you find them collecting in your system, the first question you should really ask is why: excessive redirects are almost always a sign of an architectural problem.

RegisterClientScriptBlock vs RegisterClientScriptInclude

Using RegisterClientScriptBlock I reduce server requests.
But with RegisterClientScriptInclude I can separate HTML and javascript.
Which method should I prefer?
EDIT: An additional question: where do you store your JS blocks? I'm used to placing them in resource files.
The RegisterClientScriptBlock method is handy if you want to modify the script somehow.
If you can have the script as a static file to include, I would recommend that, as the browser would cache the file so that it is only requested the first time it's used. Provided that the script is more than just a few lines, of course.
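As a rough illustration of the difference, using the standard ClientScriptManager calls inside a page's code-behind (the key names and script path are made up):

protected void Page_Load(object sender, System.EventArgs e)
{
    // Emits the script text inline in the rendered page -- useful when the
    // script is built dynamically on the server.
    var userName = "guest";  // stand-in for a value computed on the server
    ClientScript.RegisterClientScriptBlock(
        GetType(), "inlineGreeting",
        "alert('Hello, " + userName + "');", true);

    // Emits a <script src="..."> reference instead, so the browser can cache
    // the static file across requests.
    ClientScript.RegisterClientScriptInclude(
        GetType(), "sharedHelpers", ResolveUrl("~/Scripts/helpers.js"));
}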
Use whichever method best serves your needs at the time. I don't think there's a rule that says you must choose one or the other for every single possibility.
Both functions should be used as a last resort for JS. The best way to store and attach your JavaScript to the site is to use static linking.

Anything wrong with adding the PUT,DELETE verbs to the .aspx extension in IIS?

I'm using PUT and DELETE more and more with my AJAX work and wanted to see if it would be a "bad idea" to add these verbs to the .aspx application extension in IIS.
If your AJAX operations are taking advantage of a RESTful API, then I would think there is no problem. Otherwise you may want to look at RFC 2616, section 9, to analyze your case further. I personally don't think it's bad, but one might want to stick to standards and consider server overhead, etc.
PUT and DELETE should, if you're following the RFC, either replace or delete the .aspx file that was specified. If that's what you're doing, go for it. If you're doing anything else, file your solution under "ugly hack" and use GET and/or POST like you should.
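If you do go the "ugly hack" route and let an .aspx page service these verbs, the page itself has to branch on the HTTP method; a sketch of what that dispatch might look like (the handler methods are hypothetical):

protected void Page_Load(object sender, System.EventArgs e)
{
    // IIS must be configured to pass PUT and DELETE for .aspx through to ASP.NET.
    switch (Request.HttpMethod)
    {
        case "PUT":
            SaveResource(Request.InputStream);            // hypothetical handler
            break;
        case "DELETE":
            DeleteResource(Request.QueryString["id"]);    // hypothetical handler
            break;
        default:
            // Normal GET/POST page processing continues as usual.
            break;
    }
}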
