Warning: Non-static method Zend_Controller_Request_Http::getCookie() should not be called statically in..
I am trying the following to get cookie values:
$cookieData = Zend_Controller_Request_Http::getCookie($key, $default);
Is there a better way to do this?
The getCookie() method is not static; it should be called on an object.
I believe this code is from your controller, so it should basically look like this:
$request = $this->getRequest();
$cookieData = $request->getCookie('someCookie', 'default');
This is a slight side note, yet it may help you avoid long, fruitless hours. In my experience, the problems that occur when one cannot retrieve a value from $_COOKIE in ZF1 and other frameworks are mostly because setcookie() is so easy to use that one forgets to add the path and the domain, like so:
setcookie('cookieName', 'cookieValue', $finalExpirationTime, '/', '.yourdomain.com');
and instead do this:
setcookie('cookieName', 'cookieValue', $finalExpirationTime);
This gets really annoying, especially when working on Windows with IPs instead of actual domains. Another thing to look out for is the dot (.) in front of the domain. As stated in the manual: "Older browsers still implementing the deprecated RFC 2109 may require a leading . to match all subdomains."
Hope this helps
I'm making an ad manager plugin for WordPress, so the advertisement code can be almost anything: from good code to dirty, even evil.
I'm using simple sanitization like:
$get_content = '<script>/*code to destroy the site*/</script>';
//insert into db
$sanitized_code = addslashes( $get_content );
When viewing:
$fetched_data = /*slashed code*/;
//show as it's inserted
echo stripslashes( $fetched_data );
I'm avoiding base64_encode() and base64_decode() as I learned their performance is a bit slow.
Is that enough?
If not, what else should I ensure to protect the site and/or DB from attacks using bad ad code?
I'd love to get an explanation of why you're suggesting something; it'll help me decide on the right thing in the future too. Any help would be greatly appreciated.
addslashes() followed by stripslashes() is a round trip. You are echoing the original string exactly as it was submitted to you, so you are not protected from anything. '<script>/*code to destroy the site*/</script>' will be output exactly as-is to your web page, allowing your advertisers to do whatever they like in your web page's security context.
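To see the round trip concretely, here is a quick sketch using the question's own example string:
$get_content = '<script>/*code to destroy the site*/</script>';
$stored = addslashes($get_content); // what goes into the DB
echo (stripslashes($stored) === $get_content) ? 'identical' : 'different'; // prints "identical"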
Normally, when including submitted content in a web page, you should be using htmlspecialchars so that everything comes out as plain text and < just means a less-than sign.
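For example, a minimal sketch (the variable name follows the question's code):
// Escape the stored ad code so the browser shows it as text instead of executing it
echo htmlspecialchars($fetched_data, ENT_QUOTES, 'UTF-8');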
If you want an advertiser to be able to include markup, but not dangerous constructs like <script> then you need to parse the HTML, only allowing tags and attributes you know to be safe. This is complicated and difficult. Use an existing library such as HTMLPurifier to do it.
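A minimal HTMLPurifier sketch, assuming the library is installed and its bundled autoloader is available:
require_once 'HTMLPurifier.auto.php';
$config = HTMLPurifier_Config::createDefault();
$purifier = new HTMLPurifier($config);
$safe_ad_markup = $purifier->purify($fetched_data); // strips <script>, event handlers, etc.
echo $safe_ad_markup;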
If you want an advertiser to be able to include markup with scripts, then you should put them in an iframe served from a different domain name, so they can't touch what's in your own page. Ads are usually done this way.
I don't know what you're hoping to do with addslashes. It is not the correct form of escaping for any particular injection context and it doesn't even remove difficult characters. There is almost never any reason to use it.
If you are using it on string content to build a SQL query containing that content, then STOP: this isn't the proper way to do that, and you will also be mangling your strings. Use parameterised queries to put data in the database. (And if you really can't, the correct string literal escape function would be mysql_real_escape_string or other similarly-named functions for different databases.)
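A minimal parameterised-query sketch with PDO (the table and column names here are made up for illustration):
$pdo = new PDO('mysql:host=localhost;dbname=ads', 'user', 'pass');
// The ad code travels as a bound parameter, so it can never break out of the SQL statement
$stmt = $pdo->prepare('INSERT INTO ad_units (content) VALUES (:content)');
$stmt->execute([':content' => $get_content]);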
As the title suggests, I'm using Symfony in conjunction with the JMSTranslationBundle and JMSI18nBundle in order to serve translated routes.
Here's my currently configured route:
/{location}/{profession}/{specialty}
So the route
/berlin/arzt/allgemein
is successfully pushed to the correct controller and action.
The JMSI18nBundle is automagically prefixing my English routes with /en/. This works for every other route with a non-dynamic component (such as /profile/{slug}/). It DOES NOT work, however, for the English version of the above example, i.e.
/en/berlin/doctor/general
I'm guessing the router is not reading this properly as the English version of the normal route, and instead tries to assign location = en, profession = berlin, etc, which is obviously incorrect.
I've tried defining optional parameters, more complicated regexes, and trailing slashes for the translation (all with cache flushes in between). None of this works. What DOES work, is inserting a pointless non-dynamic component, i.e. /en/s/berlin/doctor/general etc
As a part of the business requirements, we don't want this additional pointless non-dynamic URL component.
So, my question is: how can I use (prefixed) translatable URLs in Symfony that contain nothing but dynamic fields?
Your help is greatly appreciated!
Solved:
As is the norm with Friday-afternoon problems, I found I had a $ inside my translated route rule, like so:
/{location}/{$profession}/{specialty}
Removing it and flushing the cache resulted in the route working.
tl;dr - PEBKAC
I'm just curious whether it's possible (or advisable) to use _linkByPost() instead of just _link() on cross-domain links with Google Analytics, to avoid the problems I'm having with the long query strings that _link() produces.
_link() uses GET to pass data by attaching a huge gibberish query string to the destination URL, which causes me a few headaches: it breaks my caching scheme (which keys off exact-matching URLs), drives many of my social media widgets crazy (and those have proven super important to my business), and just looks scary and ugly, which I've found really does affect how much users trust your site.
So I'm hoping I can keep the same ability to track without losing my clean, orderly, cacheable URLs by passing that data via POST instead of GET. But since I don't really understand how POST works, I don't know whether this is feasible or just a really bad idea for some other reason.
I know _linkByPost() needs a form object to function, so my plan was to add an onSubmit function to each cross-domain link like so:
var crossLink = $(this).attr("href");
var formHTML = '<form id="crossForm" action="'+crossLink+'" method="post"></form>';
$('body').append(formHTML);
var crossForm = $('#crossForm');
_gaq.push(['_linkByPost', crossForm]);
return false;
Assuming it's not a bad idea to begin with, does that implementation seem reasonable?
I'm pretty sure _linkByPost will still send the data through your URL, so I don't think that's a solution to your problem.
You can use _link to pass the data in the anchor (fragment) part of the URL instead of the query string, by setting its second argument to true.
_gaq.push(['_link', 'http://www.myothersite.com', true]);
This will generate a URL like
http://www.myothersite.com#__utma=1.2.123123...
You will also need _gaq.push(['_setAllowAnchor', true]); to tell GA to read the data from the anchor.
That should be enough to stop breaking your cache and reduce the issues with your social plugins.
I was reading some questions trying to find a good solution for preventing XSS in user-provided URLs (which get turned into links). I've found one for PHP, but I can't seem to find anything for .NET.
To be clear, all I want is a library which will make user-provided text safe (including Unicode gotchas?) and make user-provided URLs safe (for use in a or img tags).
I noticed that StackOverflow has very good XSS protection, but sadly that part of their Markdown implementation seems to be missing from MarkdownSharp. (and I use MarkdownSharp for a lot of my content)
Microsoft has the Anti-Cross Site Scripting Library; you could start by taking a look at it and determining if it fits your needs. They also have some guidance on how to avoid XSS attacks that you could follow if you determine the tool they offer is not really what you need.
There are a few things to consider here. Firstly, you've got ASP.NET Request Validation, which will catch many of the common XSS patterns. Don't rely exclusively on this, but it's a nice little value-add.
Next up, you want to validate the input against a white-list, and in this case your white-list is all about conforming to the expected structure of a URL. Try using Uri.IsWellFormedUriString for compliance against RFC 2396 and RFC 2732:
var sourceUri = UriTextBox.Text;
if (!Uri.IsWellFormedUriString(sourceUri, UriKind.Absolute))
{
// Not a valid URI - bail out here
}
AntiXSS has Encoder.UrlEncode, which is great for encoding strings to be appended to a URL, i.e. in a query string. The problem is that you want to take the original string and not escape characters such as the forward slashes, otherwise http://troyhunt.com ends up as http%3a%2f%2ftroyhunt.com and you've got a problem.
As the context you're encoding for is an HTML attribute (it's the "href" attribute you're setting), you want to use Encoder.HtmlAttributeEncode:
MyHyperlink.NavigateUrl = Encoder.HtmlAttributeEncode(sourceUri);
What this means is that a string like http://troyhunt.com/<script> will get escaped to http://troyhunt.com/&lt;script&gt; - but of course Request Validation would catch that one first anyway.
Also take a look at the OWASP Top 10 Unvalidated Redirects and Forwards.
I think you can do it yourself by creating an array of the characters and another array with the codes; if you find characters from the first array, replace them with the corresponding code. This will help you, but definitely not 100%.
Character array:
<
>
...
Code array:
&lt;
&gt;
...
I rely on HtmlSanitizer. It is a .NET library for cleaning HTML fragments and documents from constructs that can lead to XSS attacks.
It uses AngleSharp to parse, manipulate, and render HTML and CSS.
Because HtmlSanitizer is based on a robust HTML parser, it can also shield you from deliberate or accidental "tag poisoning", where invalid HTML in one fragment can corrupt the whole document, leading to broken layout or style.
Usage:
var sanitizer = new HtmlSanitizer();
var html = @"<script>alert('xss')</script><div onload=""alert('xss')"" "
    + @"style=""background-color: test"">Test<img src=""test.gif"" "
    + @"style=""background-image: url(javascript:alert('xss')); margin: 10px""></div>";
var sanitized = sanitizer.Sanitize(html, "http://www.example.com");
Assert.That(sanitized, Is.EqualTo(@"<div style=""background-color: test"">"
    + @"Test<img style=""margin: 10px"" src=""http://www.example.com/test.gif""></div>"));
There's an online demo, plus there's also a .NET Fiddle you can play with.
(copy/paste from their readme)
I'm working in a webforms app that uses routing in .net 4. I've defined a very basic route in global.asax as follows:
RouteTable.Routes.MapPageRoute("myRouteName", "MyRoutePath", "~/RouteHandlers/MyHandler.aspx");
In the codebehind of one of my pages, I'm using GetRouteUrl to generate the URL for this named route as follows:
Response.RedirectPermanent(GetRouteUrl("myRouteName"));
This doesn't produce the expected result of http://sitename/MyRoutePath. Instead, it produces http://sitename/MyRoutePath?length=15.
The length parameter doesn't seem to hurt but I've spent a lot of time making the URLs look nice so I don't want to see an extra parameter there. Any idea how to disable it?
I encountered this very same issue with just one of my routes using Web Forms this morning, and I got around it by providing a second argument to the GetRouteUrl method, passing in null (as this particular route didn't require any route parameters). This is presumably because, with only one argument, the call binds to the GetRouteUrl(object routeParameters) overload rather than GetRouteUrl(string routeName, object routeParameters), so the route name string is treated as a parameter object and its Length property ends up in the query string.
For example:
GetRouteUrl("name-of-my-route", null)
My URL is now clean and is no longer appended with ?length=15.
Hopefully this might help your situation too.