I think there are a lot of people out there unaware of RFCs (Requests for Comments). I know what they are at a logical level, but can anybody give a good description for a new developer? It would also be nice to share some resources on how to use and read them.
The term comes from the days of ARPANET, the predecessor to the internet, where the researchers would basically just throw ideas out there to, well, make a request for comments from the other researchers on the project. They could be about pretty much anything and were not very formal at the time. If you go read them, it’s pretty comical how informal they were.
Now there are more standards about what goes into RFCs, and you can't get an RFC published until you have met strict guidelines and done extensive research. They are pretty much reserved for well-researched network standards that have been approved by the IETF.
From http://linux.about.com/cs/linux101/g/rfclparrequestf.htm
The name of the result and the process for creating a standard on the Internet. New standards are proposed and published on the Internet, as a Request For Comments. The proposal is reviewed by the Internet Engineering Task Force (http://www.ietf.org/), a consensus-building body that facilitates discussion, and eventually a new standard is established, but the reference number/name for the standard retains the acronym RFC, e.g. the official standard for e-mail message formats is RFC 822.
See also: RFC Wikipedia Article
This could also mean "Request for Change" in an Agile environment. Just throwing that out there, as everyone is so certain it just means "Request for Comments".
Wikipedia gives a good description of what an RFC is about, but in a nutshell it is a set of recommendations from the Internet Engineering Task Force applicable to the working of the Internet and Internet-connected systems. They are used as the standards.
So if you're looking for a definitive source of information about the implementation of FTP, LDAP, IMAP, POP, etc., you don't have to look further than the appropriate RFC documents.
It's a Request For Comments. That title is a little misleading though, as it's often used as a name for standards, mostly those by the IETF. See Wikipedia
Related
I just found out that there are some countries (the UK, Canada and some others) that actually have a LAW about website accessibility. I was shocked, because it's one thing when there are RECOMMENDATIONS and another thing when there's a LAW, which means anyone can sue you for not being 'standard'.
I'm interested in your professional opinion about why it is bad to use a LAW based on the WCAG 2.0 recommendations to make websites accessible to disabled people. If you can, please provide good examples with proper comments. There aren't many people who are fluent in the WCAG 2.0 standards.
I found criticism of WCAG on Wikipedia; here is what it says:
Criticism of WAI guidelines
There has been criticism of the W3C process, claiming that it does not sufficiently put the user at the heart of the process.[2] There was a formal objection to WCAG's original claim that WCAG 2.0 will address requirements for people with learning disabilities and cognitive limitations, headed by Lisa Seeman and signed by 40 organisations and people.[3] In articles such as "WCAG 2.0: The new W3C guidelines evaluated",[4] "To Hell with WCAG 2.0"[5] and "Testability Costs Too Much",[6] the WAI has been criticised for allowing WCAG 1.0 to get increasingly out of step with today's technologies and techniques for creating and consuming web content, for the slow pace of development of WCAG 2.0, for making the new guidelines difficult to navigate and understand, and other argued failings.
I may be wrong, but I think CODE should not be restricted by any law at all. It's goddamn CODE, FFS.
I think governments should encourage website owners (businesses!) to make their sites accessible, but not restrict them to something like WCAG, for example.
Thanks!
I think there is a basic misunderstanding about how the law aspect works: it isn't based on WCAG.
In the UK, most of the EU, Canada and Australia there is no mention of WCAG2 or any particular standard for website accessibility in the law itself.
The law in the UK and in other countries like Australia says (and consider this an extreme paraphrase) that any product or service you provide should not discriminate against people with disabilities.
Whether you rely on a website to be accessible is up to you; you just have to provide your product/service in an accessible way somehow. You could do it over the phone or in a physical place.
NB: Most countries have "advisory notes" that do talk about WCAG, but see those as a means of making things accessible, not the core legal requirement.
Given that a website is generally the easiest way to provide something accessibly, WCAG2 is the most recognised set of guidelines, and if you use it and make a "reasonable effort", any legal complaints will be easier to deal with.
Taking the book example (from the comments elsewhere), a paper book may not be accessible to someone who is blind, but the publisher is obliged to either make the digital copy available as an ebook (which can be read out by a computer or other device) or make the content available to services that create audio versions. They don't lose out on sales, and it is not a hardship to provide an accessible version.
There are lots of ways to make products and services available and thanks to the web being created as accessible-by-default, it is a very good channel for that.
Also, WCAG does not say "you have to do it this way or you are not standard"; it says things like "All non-text content that is presented to the user has a text alternative that serves the equivalent purpose". It doesn't define the code you use (for example, an alt attribute on an image is one obvious way to meet that guideline, but not the only one); the guidelines are written so that there are multiple ways of achieving the aim.
Some people complain about that and think it should be clearer and easier to implement!
Bottom line: If you are paid to make a website, making it accessible is part of a professional job.
Accessibility is not just "code", accessibility is about discrimination.
And fortunately, there are laws that let people sue, not for failing to be 'standard', but for denying access to people with disabilities.
Are there any open-source implementations of NTRU-KE (Preferably in Java or C#) out there that I can use as a reference for implementing it in a different language?
The implementations listed on the Wikipedia page for NTRUEncrypt don't have it included, and there's a paper covering the algorithm here but the language is a bit too technical for me to be able to understand it fully.
Future readers, please prove me wrong (and post your own answer).
Given that it is pretty new (November 2013), there probably aren't any implementations at all. Even the authors of the paper might not have implemented it themselves (you could ask them, though). But as far as I can tell, the protocol only uses operations that would have to be included in NTRUEncrypt implementations anyway, so it shouldn't be too difficult to write one yourself on top of an existing NTRU library. You can ask specific questions about the protocol here or on https://crypto.stackexchange.com. You should probably try to understand the basics of NTRUEncrypt first, though.
I am working on a healthcare project and have come across the NCPDP D.0 standard. Although I have googled and found some basic information on Wikipedia and other sites, I was looking for a simple reference or open-source example of software that handles pharmacy transactions in the NCPDP D.0 format.
Has anyone in the community worked (or is working) on this? If you can share some information, it would be a great help.
Thanks,
HSR.
I think that you will be hard pressed to find any public documentation or open-source software following this standard. I haven't been able to find it myself, and to gain access to the standard it's my understanding that you will need to be a member of NCPDP. This is a $675 expense, and you are not allowed to share the standard outside of your organization. See Standards Purchase.
In addition you'll also need the X12 5010 standard. On the X12 store this could set you back a few thousand dollars.
In any event, the documentation on the NCPDP.ORG site, though incomplete, is decent; you should check it out.
Although I might pretend very well that I know a thing about networks or security and it might help me pass an interview or fix a bug, I don't really feel I'm fooling anyone.
I'm looking for a layman's explanation of current network security concepts and solutions. The information is scattered around, and I didn't find a resource for "dummies" like me (e.g., experienced Java developers who can speak the jargon but have no real clue what it means).
Topics I have a weak notion about and want to understand better as a Java developer:
PGP
Public / Private keys
RSA / DES
SSL and 2-way SSL (keystore / truststore)
Protecting against man-in-the-middle attacks
Digital Signature and Certificates
Is there a resource out there that really explains these in a way that doesn't require a Cisco certification, Linux lingo, knowing what subnet masking is, or other plumbing skills?
The book Cryptography Engineering by Ferguson, Schneier, and Kohno might be something that would get you a decent way down the road to understanding the topics you listed. I read the first version of this book (Practical Cryptography) and found it to be quite good. For example, I thought the descriptions of public key/private key cryptography to be reasonably straightforward to understand.
It might not explicitly describe the specific terms in all cases that you are asking about. For example, I just looked in the index of my copy of Practical Cryptography and do not see the terms "keystore" and "truststore", but the first google hit I clicked on for those provided a definition in language I understood (largely because I read the book).
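To give one of those topics a concrete shape, here is a minimal, illustrative Java sketch of the public/private key and digital signature ideas the book covers. It only uses the standard java.security API; the 2048-bit RSA key, the SHA256withRSA algorithm choice and the throwaway message are just placeholder assumptions for the example, not a recommended configuration.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureSketch {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair: the private key signs, the public key verifies.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();

        byte[] message = "example payload".getBytes(StandardCharsets.UTF_8);

        // Sign with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Verify with the public key; anyone holding the public key can check
        // the signature but cannot forge one, which is the asymmetry the book explains.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```

In those terms, a keystore (and a truststore) is essentially just a password-protected file for holding key pairs and certificates like these, which is roughly the definition I found.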
I also own Applied Cryptography, mentioned by Aidan Cully, and I think it is also a very good book and certainly worth owning. However, I tend to think of it more as a reference book (and a somewhat dated one - the copyright is 1996). In terms of real-world advice, though, I think the original title of the newer book, Practical Cryptography, was right on. The book seems, well, practical.
Schneier's Applied Cryptography is how I learned most of what I know. I haven't read it, but expect Ross Anderson's Security Engineering would also be a good resource.
Practical UNIX and Internet Security will cover a lot of that stuff and give you a basic UNIX background. Also, if you have extra time, Academic Earth has free video lectures from top universities.
I just ran across a question with an answer suggesting the AntiXss library to avoid cross-site scripting. It sounded interesting, but reading the MSDN blog, it appears to just provide an HtmlEncode() method. But I already use HttpUtility.HtmlEncode().
Why would I want to use AntiXss.HtmlEncode over HttpUtility.HtmlEncode?
Indeed, I am not the first to ask this question. And, indeed, Google turns up some answers, mainly
A white-list instead of black-list approach
A 0.1ms performance improvement
Well, that's nice, but what does it mean for me? I don't care much about a 0.1 ms performance difference, and I don't really feel like downloading and adding another library dependency for functionality that I already have.
Are there examples of cases where the AntiXss implementation would prevent an attack that the HttpUtility implementation would not?
If I continue to use the HttpUtility implementation, am I at risk? What about this 'bug'?
I don't have an answer specifically to your question, but I would like to point out that the white-list vs. black-list approach is not just "nice". It's important. Very important. When it comes to security, every little thing is important. Remember that with cross-site scripting and cross-site request forgery, even if your site is not showing sensitive data, a hacker could infect your site by injecting javascript and use it to get sensitive data from another site. So doing it right is critical.
OWASP guidelines specify using a white-list approach. PCI Compliance guidelines also specify this in coding standards (since they refer to the OWASP guidelines).
Also, the newer version of the AntiXss library has a handy new function, .GetSafeHtmlFragment(), which is useful for those cases where you want to store HTML in the database and have it displayed to the user as HTML.
Also, as for the "bug": if you're coding properly and following all the security guidelines, you're using parameterized stored procedures, so the single quotes will be handled correctly. If you're not coding properly, no off-the-shelf library is going to protect you fully. The AntiXss library is meant to be a tool to be used, not a substitute for knowledge. Relying on the library to do it right for you would be like expecting a really good paintbrush to turn out good paintings without a good artist.
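The parameterization point is stack-agnostic (this thread is about .NET, but the idea is identical elsewhere). Here is a minimal sketch in Java using JDBC prepared statements; the in-memory H2 database, the users table and its columns are made up purely for illustration, and the example assumes the H2 driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class ParameterizedQuerySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical in-memory database and schema, just to make the example self-contained.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement setup = conn.createStatement()) {
                setup.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(100))");
                setup.execute("INSERT INTO users VALUES (1, 'O''Brien')");
            }

            String userInput = "O'Brien"; // the single quote is handled by the driver, not by us
            try (PreparedStatement stmt =
                    conn.prepareStatement("SELECT id, name FROM users WHERE name = ?")) {
                stmt.setString(1, userInput); // bound as data, never spliced into the SQL text
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }
}
```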
Edit - Added
As asked in the question, an example of where AntiXss will protect you and HttpUtility will not:
HttpUtility.HtmlEncode and Server.HtmlEncode do not prevent Cross Site Scripting
That's according to the author, though. I haven't tested it personally.
It sounds like you're up on your security guidelines, so this may not be something I need to tell you, but just in case a less experienced developer is out there reading this, the reason I say that the white-list approach is critical is this.
Right now, today, HttpUtility.HtmlEncode may successfully block every attack out there, simply by removing/encoding < and >, plus a few other "known potentially unsafe" characters, but someone is always trying to think of new ways of breaking in. Allowing only known-safe (white-list) content is a lot easier than trying to think of every possible unsafe bit of input an attacker could possibly throw at you (the black-list approach).
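As a toy illustration of that difference, here is a sketch in Java (not the AntiXSS library itself, which is .NET; the character sets are deliberately simplistic and only meant to show the two mindsets):

```java
public class EncodingSketch {
    // Black-list style: encode only the characters we already know are dangerous.
    static String blackListEncode(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<': out.append("&lt;"); break;
                case '>': out.append("&gt;"); break;
                case '&': out.append("&amp;"); break;
                case '"': out.append("&quot;"); break;
                default:  out.append(c); // anything we didn't anticipate passes through untouched
            }
        }
        return out.toString();
    }

    // White-list style: pass through only characters known to be safe, encode everything else.
    static String whiteListEncode(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                    || (c >= '0' && c <= '9') || c == ' ') {
                out.append(c);
            } else {
                out.append("&#").append((int) c).append(';'); // numeric character reference
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String payload = "<img src=x onerror='alert(1)'>";
        System.out.println(blackListEncode(payload));
        System.out.println(whiteListEncode(payload));
    }
}
```

The point of the white-list version is that it never has to anticipate a newly discovered dangerous character: anything it doesn't recognise as safe is encoded by default.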
In terms of why you'd use one over the other, consider that the AntiXSS library gets released more often than the ASP.NET framework - since, as David Stratton says 'someone is always trying to think of new ways of breaking in', when someone does come up with one the AntiXSS library is much more likely to get an updated release to defend against it.
The following are the differences between Microsoft.Security.Application.AntiXss.HtmlEncode and System.Web.HttpUtility.HtmlEncode methods:
Anti-XSS uses the white-listing technique, sometimes referred to as the principle of inclusions, to provide protection against Cross-Site Scripting (XSS) attacks. This approach works by first defining a valid or allowable set of characters, and encoding anything outside this set (invalid characters or potential attacks). System.Web.HttpUtility.HtmlEncode and other encoding methods in that namespace use the principle of exclusions and encode only certain characters designated as potentially dangerous, such as the <, >, & and ' characters.
The Anti-XSS Library's list of white (or safe) characters supports more than a dozen languages (Greek and Coptic, Cyrillic, Cyrillic Supplement, Armenian, Hebrew, Arabic, Syriac, Arabic Supplement, Thaana, NKo and more).
The Anti-XSS library has been designed specifically to mitigate XSS attacks, whereas the HttpUtility encoding methods were created to ensure that ASP.NET output does not break HTML.
Performance - the average delta between AntiXss.HtmlEncode() and HttpUtility.HtmlEncode() is +0.1 milliseconds per transaction.
Anti-XSS Version 3.0 provides a test harness which allows developers to run both XSS validation and performance tests.
Most XSS vulnerabilities (any type of vulnerability, actually) are based purely on the fact that existing security did not "expect" certain things to happen. Whitelist-only approaches are more apt to handle these scenarios by default.
We use the white-list approach for Microsoft's Windows Live sites. I'm sure that there are any number of security attacks that we haven't thought of yet, so I'm more comfortable with the paranoid approach. I suspect there have been cases where the black-list exposed vulnerabilities that the white-list did not, but I couldn't tell you the details.