Is there any way (like an undocumented magic word, perhaps) to get the current query string (or full URL including query string) from within a MediaWiki template or Scribunto (Lua) module?
If this is an option, consider obtaining the HTML content via the API. This should be simpler than writing an extension. Of course, this won't be a regular page; rather, it would be something composed client-side on a blank article, or server-side on a non-wiki site. With the Labeled Section Transclusion extension you mentioned, this should work.
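For illustration, here is a minimal server-side sketch of that API route in C# (the wiki URL and page title are placeholders; action=parse with prop=text is the documented way to get a page's rendered HTML):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class WikiHtmlFetcher
    {
        // Fetches the parsed HTML of a wiki page via api.php?action=parse.
        // The endpoint below is a placeholder for your wiki's base URL.
        static async Task<string> FetchPageHtmlAsync(string pageTitle)
        {
            using (var client = new HttpClient())
            {
                var url = "https://example.org/w/api.php?action=parse&format=json"
                        + "&prop=text&page=" + Uri.EscapeDataString(pageTitle);
                // The JSON response holds the HTML under parse.text["*"];
                // a real caller would deserialize it instead of returning raw JSON.
                return await client.GetStringAsync(url);
            }
        }
    }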
Alternatively, consider some server-side post-processing of the generated HTML. It should perform quite well, as MediaWiki caches a lot.
AFAIK there is no magic word for checking the query string and, IMO, adding one would be a very bad thing. Article source is like the model in the MVC pattern: you shouldn't put presentation logic there.
I have an Episerver site with a JobDetailsPageController whose Index method takes a jobId parameter and creates a view with some details about that job. The URLs look something like this: https://hostname/<root-depending-on-site-tree>/jobs/?jobid=44.
What I would like is URLs of the form .../jobs/manager-position-telco-44, essentially creating a slug of the job title and appending the id. I have done this in the past using standard ASP.NET MVC attribute routing on a non-Episerver site, but EpiServer has routing of its own that I don't know too well and can't figure out.
Also, adding non-query-string segments after the slash consistently sends me (no surprise) to a 404 page, so I would need to customise this behaviour somehow. I need to use EpiServer's standard routing to end up at the right "parent", but ignore the latter part (the pretty bit).
Is it possible to create such URLs for a normal page in the page tree in EpiServer? I do understand it is possible to create static routes, but this node can be moved around like any other page, so I cannot avoid EpiServer's routing.
Please see this blog post. What you're looking for is partial routing.
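Very roughly, a partial router for this has the shape below. A caveat: the interface and helper names are from EPiServer 7.x partial routing as I remember them, and everything Job-related is invented for the example, so verify against the blog post and your EPiServer version before trusting any of it.

    using System.Web.Routing;
    using EPiServer.Core;
    using EPiServer.Web.Routing;
    using EPiServer.Web.Routing.Segments;

    public class JobListPage : PageData { }      // hypothetical; normally your typed page
    public class JobRouteData { public int JobId { get; set; } } // hypothetical carrier for the id

    public class JobPartialRouter : IPartialRouter<JobListPage, JobRouteData>
    {
        public object RoutePartial(JobListPage content, SegmentContext segmentContext)
        {
            // Next path segment after the routed page, e.g. "manager-position-telco-44".
            var segment = segmentContext.GetNextValue(segmentContext.RemainingPath);
            if (string.IsNullOrEmpty(segment.Next))
                return null;

            // The id is whatever follows the last dash of the slug.
            int jobId;
            var idPart = segment.Next.Substring(segment.Next.LastIndexOf('-') + 1);
            if (!int.TryParse(idPart, out jobId))
                return null;

            segmentContext.RemainingPath = segment.Remaining; // mark the segment as consumed
            // The controller can read this back, e.g. via
            // Request.RequestContext.GetRoutedData<JobRouteData>().
            return new JobRouteData { JobId = jobId };
        }

        public PartialRouteData GetPartialVirtualPath(JobRouteData content, string language,
            RouteValueDictionary routeValues, RequestContext requestContext)
        {
            // Outgoing URL generation is omitted from this sketch.
            return null;
        }
    }

    // Registered once at startup, e.g.:
    // RouteTable.Routes.RegisterPartialRouter(new JobPartialRouter());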
@johan is right: partial routing is one way of doing this. I just wanted to add other possible solutions that might or might not match your needs.
Import data as content
Instead of serving content dynamically, you could consider importing your job ads from whatever source you have directly into the content tree, as separate pages below a particular root page. That would give you a lot of benefits: pages would be cached, it would support multiple languages, editors would see content directly in EPiServer CMS, data could be adjusted manually, etc.
This would be a good solution if your data does not change often and you also need to provide a way for editors to create new job ads manually.
Implement your own content provider
Another way to serve your dynamic data to EPiServer is to write your own custom content provider. You can find documentation here: http://world.episerver.com/documentation/Items/Developers-Guide/Episerver-CMS/7/Content-Providers/Content-Providers/
This solution requires more coding and is more complex, but it has some benefits as well. If you wanted, it would be possible not just to serve content from an external data source, but also to update that data by changing values directly in the EPiServer UI.
I have a website I’m converting from Classic ASP to ASP.NET. The old site allowed users to edit the website template to create their own design for their section of the website (think MySpace, only LESS professional.)
I’ve been racking my brain trying to figure out how to do this with .NET. My sites generally use master pages, but obviously that won’t work for end-users.
I’ve tried loading the HTML templates as regular text files and parsing them to ‘fit around’ the content placeholders. It is as ugly as it sounds.
There’s got to be something generally regarded as best practice here, but I sure can’t find it.
Any suggestions?
How much control do you want your users to have?
There are a few ways to implement this. I'll give you a quick summary of all the ideas I can think of:
Static HTML with predefined fields.
With this approach you get full control and you minimize the risk of any security vulnerabilities. You would store per-user HTML in a database table somewhere. This HTML would have predefined fields marked up somehow, like {fieldName}. You would then use a simple parser to identify the curly brackets and replace fieldName with a string value pulled from a Dictionary<String,String> somewhere.
This approach can be fast if you're smart with string processing (i.e. using a state-machine parser to find the curly brackets, sending the rebuilt string either directly to the output or into a StringBuilder, etc., and importantly not using String.Replace); a sketch follows below. The downside is it isn't a real templating system: there's no provision for looping over result sets (assuming you want to allow that) or for expression evaluation, but it works for simple "insert this content into this space"-type designs.
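To make that concrete, here is a rough sketch of such a single-pass parser (the field names and the fall-through behaviour for unknown fields are my own choices):

    using System.Collections.Generic;
    using System.Text;

    static class TemplateRenderer
    {
        // Single pass over the template: copy literal text, and when a
        // {fieldName} token is found, append the field's value instead.
        public static string Render(string template, IDictionary<string, string> fields)
        {
            var output = new StringBuilder(template.Length);
            int i = 0;
            while (i < template.Length)
            {
                char c = template[i];
                if (c == '{')
                {
                    int close = template.IndexOf('}', i + 1);
                    if (close > i)
                    {
                        string name = template.Substring(i + 1, close - i - 1);
                        string value;
                        // Unknown fields are emitted verbatim rather than dropped.
                        output.Append(fields.TryGetValue(name, out value)
                            ? value
                            : template.Substring(i, close - i + 1));
                        i = close + 1;
                        continue;
                    }
                }
                output.Append(c);
                i++;
            }
            return output.ToString();
        }
    }

So Render("Hello, {user}!", fields) with fields["user"] set to "Ann" produces "Hello, Ann!".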
Allow users to edit their own ASPX or ASCX files.
Why build your own templating system when you can use ASP.NET's? Well, this approach is the simplest if you want to build a quick 'n' dirty reporting system, but it fails terribly on security. Unfortunately you cannot sandbox any <% %> / <script runat="server"> code in ASPX files or secure it with CAS, owing to how ASP.NET works (I looked into this myself earlier: Code Access Security on a per-view ASP.NET MVC basis).
You don't need to store the actual ASPX and ASCX files in the website's filesystem, you can store the files in your database using a VirtualPathProvider implementation, but getting it to work right can be a bit of a pain (especially as the ASP.NET runtime compiles ASPX files, so you'd need to inform it if an ASPX file was modified by the user). You also need to be aware that ASPX loading is tied into the user's request path (unless you're using Routing or MVC) so you're better off using ASCX, not that it matters really.
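Here is a bare-bones sketch of that VirtualPathProvider route, assuming a hypothetical TemplateStore for the database lookup; the recompilation/cache-invalidation concern just mentioned is deliberately left out:

    using System.IO;
    using System.Text;
    using System.Web.Hosting;

    // Hypothetical data access: returns the stored ASCX markup for a virtual path.
    static class TemplateStore
    {
        public static string LoadTemplateFromDb(string virtualPath)
        {
            return "<p>template markup...</p>"; // placeholder
        }
    }

    // Serves per-user ASCX markup from the database instead of the filesystem.
    public class DbPathProvider : VirtualPathProvider
    {
        private static bool IsUserTemplate(string virtualPath)
        {
            // Convention: anything under /UserTemplates/ lives in the database.
            return virtualPath.IndexOf("/UserTemplates/",
                System.StringComparison.OrdinalIgnoreCase) >= 0;
        }

        public override bool FileExists(string virtualPath)
        {
            return IsUserTemplate(virtualPath) || base.FileExists(virtualPath);
        }

        public override VirtualFile GetFile(string virtualPath)
        {
            return IsUserTemplate(virtualPath)
                ? new DbVirtualFile(virtualPath)
                : base.GetFile(virtualPath);
        }

        private class DbVirtualFile : VirtualFile
        {
            public DbVirtualFile(string virtualPath) : base(virtualPath) { }

            public override Stream Open()
            {
                var markup = TemplateStore.LoadTemplateFromDb(VirtualPath);
                return new MemoryStream(Encoding.UTF8.GetBytes(markup));
            }
        }
    }

    // Registered once at startup, e.g. in Application_Start:
    // HostingEnvironment.RegisterVirtualPathProvider(new DbPathProvider());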
A custom ASP.NET handler that runs in its own CAS sandbox and implements a fully-fledged templating engine.
This is the most painful option, and it sits between the other two: you get the flexibility of a first-class templating engine (loops, fields, evaluation if necessary) without needing to open your application up to gaping security flaws. The downside is you need to build pretty much everything yourself. I won't go into detail here.
Which option you go for depends on what your requirements are. MySpace's system was more "skinning" than "templating", in that the users were free to set a stylesheet and define some arbitrary common HTML rather than modify their page's general template directly.
You can easily implement a MySpace-like system in ASP.NET: assuming each skinnable feature is implemented as a Control subclass, just extend your Render method to allow for the insertion of said arbitrary HTML. Adding custom stylesheets is also easy: just add them inside a <style type="text/css"> element in your page's <head>.
When/if you do allow the user to enter HTML directly, I strongly recommend you parse it and filter out any dangerous elements (such as <script>, <style>, <object>, etc.). If you want to allow the embedding of YouTube videos and the like, then you should analyse <object> elements to ensure they really are YouTube videos, extract the video ID, and recreate the element from a known, trusted template. It is also important that any custom HTML is "tag-balanced" (you can verify this by passing it through a strict XML parser instead of a more forgiving HTML parser, as XHTML is (technically) a subset of HTML anyway); that way any custom markup won't break the page.
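As a loose sketch of that filtering step, using a strict XML parse both as the tag-balance check and as the tree to strip dangerous elements from (the blocked list is illustrative, and this is far from a complete sanitizer; it doesn't inspect href/src values, for instance):

    using System;
    using System.Linq;
    using System.Xml.Linq;

    static class HtmlFilter
    {
        static readonly string[] Blocked = { "script", "style", "object", "embed", "iframe" };

        // Returns sanitized markup, or null if the fragment is not tag-balanced.
        // Real-world HTML often needs a forgiving parser plus a whitelist instead.
        public static string TrySanitize(string fragment)
        {
            XElement root;
            try
            {
                root = XElement.Parse("<root>" + fragment + "</root>");
            }
            catch (System.Xml.XmlException)
            {
                return null; // unbalanced or malformed markup
            }

            // Drop blocked elements entirely.
            foreach (var bad in root.Descendants()
                .Where(e => Blocked.Contains(e.Name.LocalName.ToLowerInvariant()))
                .ToList())
            {
                bad.Remove();
            }

            // Also drop event-handler attributes like onclick.
            foreach (var attr in root.Descendants().Attributes()
                .Where(a => a.Name.LocalName.StartsWith("on", StringComparison.OrdinalIgnoreCase))
                .ToList())
            {
                attr.Remove();
            }

            return string.Concat(root.Nodes().Select(n => n.ToString()));
        }
    }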
Have fun.
I'm building a page in ASP.NET that will use TinyMCE to provide a rich text editor. TinyMCE outputs the rich text as HTML, which I would like to save to a database. Then, at a later date, I want to pull the HTML from the database and display it in a page.
I'm concerned about allowing malicious HTML or JS into my database that would later be output.
Can someone walk me through at what point in my process I should HTML-encode/decode etc. to prevent a persistent XSS attack and/or SQL injection attack?
We use the Microsoft Web Protection Library to scrub out any potentially dangerous HTML on the way in. What I mean by "on the way in": when the page is posted to the server, we scrub the HTML using MS WPL and throw the result of that into the database. Don't even let any bad data get to your database, and you'll be safer for it. As far as encoding goes, you won't want to mess with HTML encoding/decoding; just take whatever is in your TinyMCE control, scrub it, and save it. Then on your display page, just write it out as it exists in your database into a literal control or something like that, and you should be good.
I believe Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(input) will do exactly what you want here.
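A small usage sketch, assuming the Web Protection Library assembly is referenced; the control names and the save method are illustrative:

    using Microsoft.Security.Application; // Web Protection Library (AntiXSS)

    public partial class EditNews : System.Web.UI.Page
    {
        // Scrub on the way in, store the scrubbed HTML, emit it verbatim later.
        protected void btnSave_Click(object sender, System.EventArgs e)
        {
            // txtBody / litPreview are illustrative control names.
            string safe = Sanitizer.GetSafeHtmlFragment(txtBody.Text);
            SaveNewsBody(safe);     // hypothetical parameterized DB insert
            litPreview.Text = safe; // a Literal control renders it unencoded
        }

        private void SaveNewsBody(string html) { /* hypothetical data access */ }
    }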
Are these admins that are using the RTE? If so, I wouldn't worry about it.
If not, then I don't recommend using a WYSIWYG editor such as TinyMCE. You'll have to actually look for malicious input, and chances are, you will miss some. Since the RTE outputs plain HTML, which I assume you want, you can't just convert HTML entities. That would kind of eliminate the whole point of using TinyMCE.
Stopping SQL injection is done in the backend when inserting the data into the database. You will want to use a parameterized query or escape the input (not sure how in ASP.NET; I'm a PHP guy).
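In ASP.NET that looks roughly like the sketch below; the table, column, and connection string names are illustrative:

    using System.Configuration;
    using System.Data.SqlClient;

    static class NewsRepository
    {
        // A parameterized update: the HTML is passed as a value, never
        // concatenated into the SQL text, so it cannot alter the query.
        public static void SaveBody(int newsId, string safeHtml)
        {
            var cs = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;
            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand(
                "UPDATE News SET Body = @body WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@body", safeHtml);
                cmd.Parameters.AddWithValue("@id", newsId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }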
Couldn't you use a rich text editor that uses BBCode and on the server, escape everything that needs to be escaped and convert BBCode to HTML markup afterwards?
You could also, instead of producing BBCode on the client, convert the HTML markup to BBCode on the server, escape the remaining HTML and convert the result from BBCode back to HTML.
There are two approaches; you will probably use the first one:
1) Make a list of permitted tags and escape/strip the rest. TinyMCE probably has some feature to stop the user from using certain tags (but this is only client-side; you should validate it on the server).
2) Encode permitted tags differently ([b]bold[/b]); then you can save everything to the DB and, while rendering, escape everything and then interpret your special tags (see the sketch after this list).
Third approach: if the user is an admin (someone who should know what he is doing), then you can leave everything unescaped... he is the one responsible for his own mistakes.
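As a minimal sketch of the escape-then-interpret order from option 2 (only [b] and [i] are handled; a real tag set would be larger):

    using System.Text.RegularExpressions;
    using System.Web;

    static class BbCode
    {
        // Escape everything first, then re-enable only the permitted tags.
        public static string ToHtml(string input)
        {
            string s = HttpUtility.HtmlEncode(input);
            s = Regex.Replace(s, @"\[b\](.*?)\[/b\]", "<b>$1</b>",
                RegexOptions.IgnoreCase | RegexOptions.Singleline);
            s = Regex.Replace(s, @"\[i\](.*?)\[/i\]", "<i>$1</i>",
                RegexOptions.IgnoreCase | RegexOptions.Singleline);
            return s;
        }
    }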
I recently created a website for a friend (asp.net/sql server); the website includes news about his company, which he and his team update frequently.
The question has been asked whether I could now create a widget / API of some sort so that visitors of the website could include the news on their own websites should they wish to. I feel this needs to be a one-line-of-code integration, or something else that is extremely easy to integrate.
Any recommendations or articles are welcome.
EDIT: how is something like this created?
http://img830.imageshack.us/img830/7769/codingabandwebsitecreat.png
Thanks
jQuery AJAX plus an ASP.NET handler that returns an injectable HTML chunk is my offhand guess at a simple way. Others will probably know of frameworks if you don't want to roll your own. Is RSS too primitive?
You could also write a REST service in WCF, since you're using asp.net, and have it return XML or JSON, depending on how you write your widget.
How something like that is created... Well, I've created a number of these, and the process is fairly simple. Sorry I don't have an article to reference. Create a 'widget creation/builder' page with input parameters; when it is submitted, either store those in the database and return an ID to associate with those settings, or generate a list of params for those settings, or both. Then simply output the <script> tag into a textbox, like so (inside an AJAX callback):
$("#results").html("<script type='text/javascript' src='" + widget_path + params + "'></script>");
Where widget_path is the absolute path to the widget ASP script, and params is something like key=454 or theme=sunny&source=34&count=50, built either manually or using something like jQuery.param to serialize the form (the ? separates the path from the params, and the closing tag is written as <\/script> so it doesn't terminate the enclosing script block). An alternative would be two <script> tags, with the settings in the second one calling a widget initialize function.
They can copy and paste that into their site, and that ASP script should output only JavaScript, which you can either document.write() or use a JS library (such as jQuery) to operate on the DOM with (.click() etc.). If you're using a database, the ASP script would check the key param and grab the widget settings; if not, it would simply process the params. It's important to mention that if you want to communicate back and forth with an API, you need cross-domain access enabled, or you can simply use JSON-P: in the ASP script, check for a 'callback' parameter and wrap the JSON in that function name, and on the client use jQuery.getJSON with &callback=? at the end. There are other methods, of course.
If you use a database, make sure to take security into account (SQL injection, etc.)
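To make the server end concrete, here is a sketch of such a handler as an ASP.NET .ashx; the key parameter, the lookup, and the markup are illustrative, and a real version should also validate the callback name:

    using System.Web;

    // Widget endpoint: emits JavaScript that writes the widget's HTML
    // into the host page, or a JSON-P response if a callback is given.
    public class WidgetHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "application/javascript";

            string key = context.Request.QueryString["key"];
            string html = GetWidgetHtml(key); // hypothetical: settings looked up by key
            string encoded = HttpUtility.JavaScriptStringEncode(html);

            string callback = context.Request.QueryString["callback"];
            if (!string.IsNullOrEmpty(callback))
            {
                // JSON-P style response for cross-domain callers.
                context.Response.Write(callback + "({\"html\":\"" + encoded + "\"});");
            }
            else
            {
                // Plain embed: the <script> tag on the remote page runs this.
                context.Response.Write("document.write(\"" + encoded + "\");");
            }
        }

        public bool IsReusable { get { return true; } }

        private static string GetWidgetHtml(string key)
        {
            return "<div class='news-widget'>...</div>"; // hypothetical lookup
        }
    }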
WidgetBox looks like a good, mainstream method as well.
This question may seem a little bit stackoverflow-implementation specific, but I have seen a similar pattern on other websites that are using REST-friendly URL rewriting as well.
For example, a link to a particular question looks like this:
https://stackoverflow.com/questions/1388703/asp-net-mvc-passing-redundant-arguments-to-actions
1388703 apparently being some kind of unique ID and the rest being the title of the question.
The ID itself should be enough, so what may be the advantage of putting the question title (in this particular case; one can see that stackoverflow uses this almost everywhere, e.g. for badges, user profiles etc.) in as the second parameter?
When you remove the last part of the URL, the same page is displayed, which is expected. But when you change the last part to any other string, the same result is still displayed.
Is this only a cosmetic issue, allowing easier management of links (e.g. when storing bookmarks), or does it have any other advantages?
It's for search engine optimization.
A URL like stackoverflow.com/questions/1388703 doesn't mean much to a search engine, but stackoverflow.com/questions/1388703/asp-net-mvc-passing-redundant-arguments-to-actions allows a search engine to match the words "asp net mvc" etc., which it will normally give more weight to because they appear in the URL.
It's also more user friendly, as the URL itself gives more meaning to the content of the page.
It is cosmetic, and it's also better search engine optimization to keep easily readable names in the URL, as that makes it easier for search engines to find.
The biggest advantage of the extra text is SEO. The extra text in the link gives Google a lot more help in understanding what's on the page.
I'd also mention that if a title is changed (e.g. for spelling), the ID number still works.
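For what it's worth, the usual ASP.NET MVC shape of this pattern looks roughly like the sketch below: only the id drives the lookup, and a wrong or stale slug can be permanently redirected to the canonical URL (all names here are illustrative, not Stack Overflow's actual code):

    using System.Web.Mvc;

    public class QuestionsController : Controller
    {
        // Route registration (Global.asax), with the slug segment optional:
        // routes.MapRoute("Question", "questions/{id}/{slug}",
        //     new { controller = "Questions", action = "Show", slug = UrlParameter.Optional });
        public ActionResult Show(int id, string slug)
        {
            var question = LoadQuestion(id); // hypothetical lookup, by id alone
            if (question == null)
                return HttpNotFound();

            // Wrong or missing title part? Redirect to the canonical URL.
            if (slug != question.Slug)
                return RedirectToActionPermanent("Show", new { id, slug = question.Slug });

            return View(question);
        }

        private Question LoadQuestion(int id) { return null; /* hypothetical */ }
    }

    public class Question { public string Slug { get; set; } }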