I would like your opinion about using UpperCamelCase or lowerCamelCase in a case-insensitive, procedural language (Oracle PL/SQL).
Some people at my job want to use this pattern, but the programmers don't like the idea...
Oracle Forms and Reports do not support autocomplete.
My opinion: I don't see any reason to use CamelCase in a case-insensitive language...
What's your opinion?
CamelCase really does increase readability for multi-word variable names, and it's easier to type than using underscores for spaces, especially for programmers who use it all the time in most common languages.
In the long run, being case insensitive should free people to use whatever casing they prefer, so I'm not sure why this is an issue, but I vote yes for CamelCase. SQL keywords should probably be all caps or all lowercase, depending on your shop's standard.
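For what it's worth, here is a minimal sketch (an anonymous block, variable name invented) of why the case argument cuts both ways: PL/SQL folds unquoted identifiers to upper case, so every casing below names the same variable, and the compiler will never enforce whichever convention you pick.

DECLARE
    employeeCount NUMBER := 0;            -- declared in lowerCamelCase
BEGIN
    employeecount := employeeCount + 1;   -- all three spellings are the same identifier
    DBMS_OUTPUT.PUT_LINE(EMPLOYEECOUNT);  -- prints 1
END;
/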
I'm trying to use dictionary bracket syntax in match statements, and it doesn't let me. Is this intentional by design, or am I missing something?
The code below doesn't work:
match curr:
    myDict["one"]:
        # do something
    myDict["two"]:
        # do something
I am forced to use the dot syntax instead, and it works... Is this by design?
I can't conclude that it is intentional, but I can tell you it does not work and will not work.
In Godot 3, the parser does not support this syntax, which might have been an oversight. Or it might be that the parser was already becoming a mess and hard to maintain, so features that weren't critical weren't considered? Perhaps… After all, GDScript was reworked from the ground up for Godot 4. So…
In Godot 4, the compiler does not support it, and there is a reason: it wants constants, which also allow some optimizations. Does Godot 3 care about that? Nope, you can use variables there, and there is no problem. And no, match is not optimized in Godot 3; nothing is, it is all interpreted.
Do you really care if it was intentional?
You are likely OK doing this with a bunch of if statements. After all, if you are willing to write a case for each item in the dictionary, there is probably a manageable number of them.
You could also throw design patterns at it. The strategy pattern comes to mind.
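A minimal sketch of that idea (Godot 4 syntax, names invented): a dictionary of Callables gives you match-like dispatch on non-constant keys, which is one simple form of the strategy pattern.

var handlers := {
    "one": func(): print("handling one"),
    "two": func(): print("handling two"),
}

func handle(curr) -> void:
    if handlers.has(curr):
        handlers[curr].call()
    else:
        print("no handler for ", curr)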
I read the following on the Julia documentation page https://docs.julialang.org/en/v1/manual/variables/:
Word separation can be indicated by underscores ('_'), but use of
underscores is discouraged unless the name would be hard to read
otherwise
My question is: are there any reasons to discourage the use of underscores? Thanks.
I don't think underscores are really discouraged in user code and for internal variables. It is mostly about being consistent with the style of Base Julia, which mostly follows this rule. And consistency is good, right?
But if you create a package or module, then the interface normally consists of types and functions. Type names have a strong convention that they should be CapitalCase. User-facing functions are normally lowercase without underscores, because they are supposed to be simple and brief and should express a single well-defined concept. A bit like the Unix philosophy: every function should do one thing, and do it well.
A convention discouraging composite and long identifier names encourages you to create simple functions. If your function needs a name with underscores, it's possibly a sign that you should break it into multiple functions.
But in your own code, use whatever convention suits you.
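As a hypothetical illustration (all names invented), the convention reads an underscore as a hint that a function bundles two concepts, and nudges you to split it:

# one composite name...
parse_and_sum(s) = sum(parse.(Int, split(s, ",")))

# ...versus two brief, single-purpose functions in the Base style
parsenums(s) = parse.(Int, split(s, ","))
total(xs) = sum(xs)

total(parsenums("1,2,3"))   # 6, same as parse_and_sum("1,2,3")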
I’m no expert on Julia, but the line you quote is located under the header “Stylistic Conventions” and I would presume that’s basically it.
There is an additional section about naming conventions in the docs, under the Style Guide.
There is a line in there that says:
“Underscores are also used to indicate a combination of concepts”.
So if you decided to use a lot of underscores in your function names, the next programmer to work on your code might think you are “combining concepts”.
The Julia manual says: 'Word separation can be indicated by underscores ('_'), but use of underscores is discouraged unless the name would be hard to read otherwise'.
I wonder why.
We're often bound by the personal preferences of those who write the tools we use.
Standardisation is a good thing, even if we may not agree with specific standards.
I just ran across a question with an answer suggesting the AntiXss library to avoid cross-site scripting. It sounded interesting, but reading the MSDN blog, it appears to just provide an HtmlEncode() method, and I already use HttpUtility.HtmlEncode().
Why would I want to use AntiXss.HtmlEncode over HttpUtility.HtmlEncode?
Indeed, I am not the first to ask this question. And, indeed, Google turns up some answers, mainly:
A white-list instead of black-list approach
A 0.1ms performance improvement
Well, that's nice, but what does it mean for me? I don't care much about a 0.1 ms performance difference, and I don't really feel like downloading and adding another library dependency for functionality that I already have.
Are there examples of cases where the AntiXss implementation would prevent an attack that the HttpUtility implementation would not?
If I continue to use the HttpUtility implementation, am I at risk? What about this 'bug'?
I don't have an answer specifically to your question, but I would like to point out that the white-list vs. black-list approach is not just "nice". It's important. Very important. When it comes to security, every little thing matters. Remember that with cross-site scripting and cross-site request forgery, even if your site is not showing sensitive data, a hacker could infect your site by injecting JavaScript and use it to get sensitive data from another site. So doing it right is critical.
OWASP guidelines specify using a white-list approach. PCI compliance guidelines also specify this in their coding standards (since they refer to the OWASP guidelines).
Also, the newer version of the AntiXss library has a nice new function, .GetSafeHtmlFragment(), which is handy for those cases where you want to store HTML in the database and have it displayed to the user as HTML.
Also, as for the "bug": if you're coding properly and following all the security guidelines, you're using parameterized stored procedures, so the single quotes will be handled correctly. If you're not coding properly, no off-the-shelf library is going to protect you fully. The AntiXss library is meant to be a tool, not a substitute for knowledge. Relying on the library to do it right for you would be like expecting a really good paintbrush to turn out good paintings without a good artist.
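To make that last point concrete, here is a minimal sketch of a parameterized command (table and parameter names invented; the same idea applies to a parameterized stored procedure). The value travels as data, so a single quote inside it can never terminate the SQL string:

using System.Data.SqlClient;

static void SaveAuthor(string connectionString, string author)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO Comments (Author) VALUES (@author)", conn))
    {
        // The quote in a value like "O'Brien" stays data, not SQL syntax.
        cmd.Parameters.AddWithValue("@author", author);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}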
Edit - Added
As asked in the question, here is an example of a case where the AntiXss library will protect you and HttpUtility will not:
HttpUtility.HtmlEncode and Server.HtmlEncode do not prevent Cross Site Scripting
That's according to the author, though. I haven't tested it personally.
It sounds like you're up on your security guidelines, so this may not be something I need to tell you, but just in case a less experienced developer is reading this, the reason I say the white-list approach is critical is this.
Right now, today, HttpUtility.HtmlEncode may successfully block every attack out there, simply by removing/encoding < and >, plus a few other "known potentially unsafe" characters, but someone is always trying to think of new ways of breaking in. Allowing only known-safe (white-list) content is a lot easier than trying to think of every possible unsafe bit of input an attacker could throw at you (the black-list approach).
In terms of why you'd use one over the other, consider that the AntiXSS library gets released more often than the ASP.NET framework. Since, as David Stratton says, 'someone is always trying to think of new ways of breaking in', when someone does come up with one, the AntiXSS library is much more likely to get an updated release to defend against it.
The following are the differences between Microsoft.Security.Application.AntiXss.HtmlEncode and System.Web.HttpUtility.HtmlEncode methods:
Anti-XSS uses the white-listing technique, sometimes referred to as the principle of inclusions, to provide protection against cross-site scripting (XSS) attacks. This approach works by first defining a valid or allowable set of characters, and encoding anything outside this set (invalid characters or potential attacks). System.Web.HttpUtility.HtmlEncode and the other encoding methods in that namespace use the principle of exclusions and encode only certain characters designated as potentially dangerous, such as the <, >, & and ' characters. (See the sketch after this list.)
The Anti-XSS library's list of white (safe) characters supports more than a dozen languages (Greek and Coptic, Cyrillic, Cyrillic Supplement, Armenian, Hebrew, Arabic, Syriac, Arabic Supplement, Thaana, NKo and more).
The Anti-XSS library has been designed specifically to mitigate XSS attacks, whereas the HttpUtility encoding methods are created to ensure that ASP.NET output does not break HTML.
Performance - the average delta between AntiXss.HtmlEncode() and HttpUtility.HtmlEncode() is +0.1 milliseconds per transaction.
Anti-XSS Version 3.0 provides a test harness which allows developers to run both XSS validation and performance tests.
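A small side-by-side sketch of the first point above, using the two methods named in this answer (the input string is invented, and the exact output varies by framework version):

using System;
using System.Web;
using Microsoft.Security.Application;

class EncodeDemo
{
    static void Main()
    {
        string input = "<script>alert('x')</script>";

        // Exclusion (black-list) style: encodes only the designated dangerous characters.
        Console.WriteLine(HttpUtility.HtmlEncode(input));

        // Inclusion (white-list) style: encodes everything outside the allowed set.
        Console.WriteLine(AntiXss.HtmlEncode(input));
    }
}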
Most XSS vulnerabilities (any type of vulnerability, actually) are based purely on the fact that existing security did not "expect" certain things to happen. Whitelist-only approaches are more apt to handle these scenarios by default.
We use the white-list approach for Microsoft's Windows Live sites. I'm sure that there are any number of security attacks that we haven't thought of yet, so I'm more comfortable with the paranoid approach. I suspect there have been cases where the black-list exposed vulnerabilities that the white-list did not, but I couldn't tell you the details.
Are computer languages copyrighted, or do they have other restrictions imposed on how they can be used? What does that mean in practice? If so, what can be done and what cannot? Could I make a compiler, an IDE, or anything else for any of them?
For example for Pl/Sql?
Unfortunately, programming languages may be encumbered by patents. This appears to be the case e.g. with the Aikido language.
Just recently this seems to have become a non-issue for the C# programming language (and the .NET Common Language Infrastructure).
To answer your question regarding what can and what cannot be done: if your implementation of the language uses an invention that somebody patented, you definitely don't want to try to make a profit with your implementation in any country where the patent applies (unless you license the tech, of course). However, if you can circumvent the patent, i.e., implement for example a compiler for the same language without using that specific trick but something else, then you should not have a problem. Patents need (well, should need) to be very specific, so this might often be possible. (IANAL, though.)
You really need to familiarize yourself with copyright. Copyright applies to works of art: writings, paintings, etc. So the programming language itself cannot be copyrighted. The text describing it usually is, but that only prevents you from copying that text - it doesn't prevent you from reading it, understanding it, and using it.
So for PL/SQL, it's probably the case that its description is copyrighted by Oracle, but that can't stop you from making compilers and IDEs. As Pukku points out: there are other kinds of intellectual property, such as patents and trade marks, which may prevent you from doing these things (or calling them PL/SQL when done), but not copyright.