In Meteor, the insert callback for collection.allow() has the signature insert(userId, doc) {}. Isn't the userId argument redundant here? We can always check it with Meteor.userId(). Why is it being passed as an argument?
Yes, it is currently unnecessary. I answered a related question about this here.
I'd have to ask a core dev why it's there, but if I had to guess, it's probably some mix of:
1. It may make unit testing easier.
2. You probably want it anyway, and it's fewer characters to type than Meteor.userId().
3. API baggage: maybe it made a lot of sense when it was added, but removing it would break a lot of code, so why bother in light of (1) and (2)?
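For illustration, a minimal allow rule that actually uses the passed-in userId (the callback shape is Meteor's; Posts and ownerId are made-up names):

// Permit an insert only when the document's ownerId matches the
// user performing the insert; deny anonymous inserts outright.
Posts.allow({
  insert(userId, doc) {
    return !!userId && doc.ownerId === userId;
  }
});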
I was presented with this argument while fixing a bug related to implicit casting. The developer I spoke to told me:
If you use Closure, then you do not need absolute equality. Any value would be typed; therefore you don't need to worry about implicit casts.
My response was: that's dicey. You're making a lot of assumptions. Closure does not alter JavaScript; it's primarily a labyrinthine super layer (aside: probably a moot point now that we have TypeScript).
Anyway, if one of these implicit casts does slip by because the annotations don't resolve perfectly for some reason, you can end up with a tough bug (which is what happened; it got assigned to me because, I guess, the other dev didn't think it could be the problem).
I got responses of, "well if that dev had properly typed that object and this wasn't just an object and..."
Or...you could just protect against this sort of thing easily by using three equal signs instead of two. Use an assertion or console log to check the condition if necessary. Don't just leave it hanging out there.
Anyway, what do you think: if you're using Closure, should you still observe the general best practice of using absolute equality in your JS code?
I know this leads to a wider conversation as well (e.g. Java 8's Optional being "totally useless"), but I'm curious in the context of Closure specifically.
The question is a bit vague; a code example would have helped. But yes: Closure does not necessarily type every object and variable. I always use === unless there is a specific reason not to.
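To illustrate the kind of implicit cast that can slip through (plain JavaScript; no Closure annotations involved):

// Loose equality coerces types before comparing; strict equality does not.
console.log('1' == 1);          // true: the string is coerced to a number
console.log(0 == '');           // true: both sides are coerced to 0
console.log(null == undefined); // true: a special case of loose equality
console.log('1' === 1);         // false: different types, no coercion
console.log(0 === '');          // false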
HttpContext.Current.Items["ctx_" + HttpContext.Current.GetHashCode().ToString("x")]
I see this exact code all ... over ... the ... place but I must be overlooking something. It's common in responses to these posts to question the appropriateness of using HttpContext, but no one points out that GetHashCode is redundant and a fixed string will do.
What am I overlooking here?
EDIT: The question is, GetHashCode() will be the same for every HttpContext.Current, so why use GetHashCode() in the four links I provided? Some of them are posts with quite a bit of work put into them, so I feel like perhaps they are addressing some threading or context issue I'm overlooking. I don't see why just HttpContext.Current.Items["ctx_"] wouldn't do exactly the same.
This is horrible. For one, HttpContext.Current.Items is local to the current HttpContext anyway, so there is no need to try to make the keys "more unique". Second, depending on how this technique is used, the keys will collide once in a while, causing spurious, undebuggable failures.
If you need a unique key (maybe because you are developing a library), use a saner solution like Guid.NewGuid().ToString(). This is guaranteed to work and even simpler.
So to answer your question :)
It doesn't make much sense to use GetHashCode to create a key.
The authors of the posts you linked to probably wanted to create a key that would be unique. But doing that doesn't stop another team member from using the same key somewhere else in the code base.
I think it's better to just use handwritten long keys. Instead of
["ctx_" + HttpContext.Current.GetHashCode().ToString("x")]
just use
["object_context_key"]
or something like that. That way you know exactly what it is (which may be useful, for example, in post-mortem debugging), and I also think that if you have to come up with a long key yourself, it may well end up "more unique" than the one built with GetHashCode.
We are currently facing the following problem in our application:
Around 40% of the code in the application is never used. The code is still there, and may even be functional, but the frontend feature has been switched off so users can no longer reach the functionality, or other methods have replaced the old, now-deprecated ones.
What I am currently doing is manually removing all the old code while trying not to break anything.
The question is:
Would you remove the old code, or keep it in the hope that it may awaken some time... zombie-like?
Do you think it is worth the effort to remove the code (less clutter to dig through, better test coverage, easier for other people to find their way around)?
Should we keep the code somewhere as a reference? (We are using version control, but I find it pretty hard to find old code in the revision jungle... any tips for that?)
Do you have arguments for convincing the team / management / the developers who wrote said code?
Are there reasons not to deprecate and then delete code?
TL;DR: Delete unused code or leave it as it is? Discuss!
If you are certain that the code is unused, definitely delete it. I assume you have a version control system, so if you ever need it again, you can always get it back.
Deleting the unused code will make the project easier to maintain, and your team will probably end up saving time in the long run (nobody will re-read the code trying to understand what it was used for, nobody will change it thinking it may still be in use...).
However, if your code contains a public API that is distributed, you will probably want to mark the classes/methods deprecated for some time before effectively deleting the code, so the callers have some time to adapt (or inform you of the issue).
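For example, a deprecation notice of the kind most tooling will surface to callers (JavaScript/JSDoc shown here; the function names are hypothetical):

/**
 * @deprecated Use createReport() instead; buildReport() will be removed in v3.
 */
function buildReport(data) {
  return createReport(data); // delegate while callers migrate
}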
Would you remove the old code, or keep it in the hope that it may awaken some time... zombie-like?
I'd definitely remove it. I hate having to work out if functions ever get called.
Do you think it is worth the effort to remove the code (less clutter to dig through, better test coverage, easier for other people to find their way around)?
Yes, definitely worth the effort.
Should we keep the Code somewhere, as a reference?
Um, you are using version control software, aren't you?
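If the revision jungle happens to be Git, a couple of commands help dig up deleted code (the search string and path here are placeholders):

git log -S "calculateLegacyTax" --oneline           # commits that added or removed that string
git log --all --full-history -- src/old-module.js   # history of a file that no longer exists
git show <commit>^:src/old-module.js                # the file's contents just before deletion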
I've consolidated many of the useful answers and come up with my own answer below.
For example, I am writing an API Foo which needs explicit initialization and termination. (This should be language agnostic, but I'm using C++ here.)
class Foo
{
public:
static void InitLibrary(int someMagicInputRequiredAtRuntime);
static void TermLibrary(int someOtherInput);
};
Apparently, our library doesn't care about multi-threading, reentrancy, or whatnot. Let's suppose our Init function should only be called once; calling it again with any other input would wreak havoc.
What's the best way to communicate this to my caller? I can think of two ways:
Inside InitLibrary, I assert on some static variable, which will blame my caller for init'ing twice.
Inside InitLibrary, I check some static variable and silently abort if my lib has already been initialized.
Method #1 is obviously explicit, while method #2 is more user friendly. I am thinking that method #2 probably has the disadvantage that my caller wouldn't be aware that InitLibrary shouldn't be called twice.
What would be the pros and cons of each approach? Is there a cleverer way to sidestep all of this?
Edit
I know the example here is very contrived. As #daemon pointed out, I should initialize myself and not bother the caller. Practically, however, there are places where I need more information to properly initialize myself (note the variable name someMagicInputRequiredAtRuntime). This is not restricted to initialization/termination; the same dilemma exists elsewhere, whenever I have to choose between being quote-unquote "fault tolerant" and failing loudly.
I would definitely go for approach 1, along with an easy-to-understand exception and good documentation that explains why this fails. This will force the caller to be aware that this can happen, and the calling class can easily wrap the call in a try-catch statement if needed.
Failing silently, on the other hand, will lead your users to believe that the second call was successful (no error message, no exception) and thus they will expect that the new values are set. So when they try to do something else with Foo, they don't get the expected results. And it's darn near impossible to figure out why if they don't have access to your source code.
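A minimal sketch of approach #1 (the question says it is language agnostic, so plain JavaScript here; the flag name is made up):

class Foo {
  static #initialized = false;

  static initLibrary(someMagicInputRequiredAtRuntime) {
    if (Foo.#initialized) {
      // Fail loudly: the caller has violated the single-call contract.
      throw new Error('Foo.initLibrary() must be called exactly once');
    }
    Foo.#initialized = true;
    // ... use the runtime input to set up internal state ...
  }
}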
Serenity Prayer (modified for interfaces)
SA, grant me the assertions
to accept the things devs cannot change
the code to except the things they can,
and the conditionals to detect the difference
If the fault is in the environment, then you should try and make your code deal with it. If it is something that the developer can prevent by fixing their code, it should generate an exception.
A good approach would be to have a factory that creates an initialized library object (this would require you to wrap your library in a class). Multiple calls to the factory's create method would create different objects. This way, the initialize method would not be part of the public interface of the library; the factory would manage initialization.
If there can be only one instance of the library active, make the factory check for existing instances. This would effectively make your library-object a singleton.
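A sketch of the factory idea in its single-instance variant (names are hypothetical; JavaScript cannot truly hide a constructor, so treat create() as the only sanctioned entry point):

class FooLib {
  static #instance = null;

  constructor(someMagicInputRequiredAtRuntime) {
    // Fully initialized on construction; no separate Init call exists.
    this.magic = someMagicInputRequiredAtRuntime;
  }

  // The factory guards against double initialization.
  static create(someMagicInputRequiredAtRuntime) {
    if (FooLib.#instance !== null) {
      throw new Error('FooLib has already been created');
    }
    FooLib.#instance = new FooLib(someMagicInputRequiredAtRuntime);
    return FooLib.#instance;
  }
}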
I would suggest that you throw an exception if your routine cannot achieve the expected post-condition. If someone calls your init routine twice, and the system state after the second call would be the same as if it had been called only once, then it is probably not necessary to throw an exception. If the system state after the second call would not match the caller's expectation, then an exception should be thrown.
In general, I think it's more helpful to think in terms of state than in terms of action. To use an analogy, an attempt to open as "write new" a file that is already open should either fail or result in a close-erase-reopen. It should not simply perform a no-op, since the program will be expecting to be writing into an empty file whose creation time matches the current time. On the other hand, trying to close a file that's already closed should generally not be considered an error, because the desire is that the file be closed.
BTW, it's often helpful to have available a "Try" version of a method that might throw an exception. It would be nice, for example, to have a Control.TryBeginInvoke available for things like update routines (if a thread-safe control property changes, the property handler would like the control to be updated if it still exists, but won't really mind if the control gets disposed; it's a little irksome not being able to avoid a first-chance exception if a control gets closed when its property is being updated).
Have a private static counter variable in your class. If it is 0, do the logic in Init and increment the counter; if it is more than 0, simply increment the counter. In Term, do the opposite: decrement, and when the counter reaches 0, do the teardown logic.
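That reference-counting scheme, sketched against the Foo example from the question:

class Foo {
  static #refCount = 0;

  static initLibrary(someMagicInputRequiredAtRuntime) {
    if (Foo.#refCount === 0) {
      // ... real initialization happens only on the first call ...
    }
    Foo.#refCount++;
  }

  static termLibrary() {
    Foo.#refCount--;
    if (Foo.#refCount === 0) {
      // ... real teardown happens only when the count returns to zero ...
    }
  }
}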
Another way is to use the Singleton pattern; here is a sample in C++.
I guess one way out of this dilemma is to satisfy both camps. Ruby has the -w warning switch, it is customary for gcc users to pass -Wall or even -Weffc++, and Perl has taint mode. By default, these "just work," but the more careful programmer can turn on the strict settings.
One example against the "always complain the slightest error" approach is HTML. Imagine how frustrated the world would be if all browsers would bark at any CSS hacks (such as drawing elements at negative coordinates).
After considering many excellent answers, I've come to this conclusion for myself: when someone sits down with it, my API should ideally "just work." Of course, for anyone to be involved in any domain, he needs to work at one or two levels of abstraction lower than the problem he is trying to solve, which means my user must learn about my internals sooner or later. If he uses my API long enough, he will begin to stretch its limits, and too much effort spent on "hiding" or "encapsulating" the inner workings will only become a nuisance.
I guess fault tolerance is a good thing most of the time; it's just difficult to get right when the API user is stretching corner cases. The best of both worlds might be to provide some kind of "strict mode" so that when things don't "just work," the user can easily dissect the problem.
Of course, doing this is a lot of extra work, so I may just be talking ideals here. In practice, it all comes down to the specific case and the programmer's decision.
If your language doesn't allow this error to surface statically, chances are good the error will surface only at runtime. Depending on how your library is used, this means the error won't surface until much later in development, possibly only after shipping (again, it depends on a lot).
If there's no danger in silently eating an error (which isn't a real error anyway, since you catch it before anything dangerous happens), then I'd say you should silently eat it. This makes it more user friendly.
If, however, someMagicInputRequiredAtRuntime varies from call to call, I'd raise the error whenever possible, or presumably the library will not function as expected ("I init'ed the lib with value 42, but it's behaving as if I initted it with 11!?").
If this library is a static class (a library type with no state), why not put the call to Init in the type initializer? If it is an instantiatable type, put the call in the constructor, or in the factory method that handles instantiation.
Don't allow public access to the Init function at all.
I think your interface is a bit too technical. No programmer wants to learn what concepts you used while designing the API. Programmers want solutions for their actual problems; they don't want to learn how to use an API. Nobody wants to init your API; that is something the API should handle in the background as far as possible. Find a good abstraction that shields the developer from as much low-level technical detail as possible. That implies that the API should be fault tolerant.
When refactoring my functions for new needs, I stumble from time to time over this crucial question:
Shall I add another parameter with a default value? Or shall I use a single array, to which I can add an additional value without breaking the API?
Unless you need to support a flexible number of variables, I think it's best to explicitly identify each parameter. In most cases you can add an overloaded method that has a different signature to support the extra parameter while still supporting the original method signature. If you use an array for passing variables it just makes it too confusing for users of your API. Obviously there are some inputs that lend themselves to an array (a list of points in a polygon, a list of account IDs you wish to perform an action on, etc.) but if it's not a variable that you would reasonably expect to be an array or list, you should pass it into the method as a separate parameter.
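In a language without true overloading, such as JavaScript, the same effect comes from an optional trailing parameter (a hypothetical example):

// Existing callers keep calling drawPolygon(points); new callers can
// pass the extra parameter without the old signature breaking.
function drawPolygon(points, fillColor = 'black') {
  // ...
}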
Just like many questions in programming, the right answer is "it depends".
To take Javascript/jQuery as an example, one good rule of thumb is whether the parameter will be required each time the function is called or whether it is optional. For example, the main jQuery function itself requires an expression to determine what element(s) the operation will affect:
jQuery(expression)
It makes no sense to try to pass this parameter as part of an array as it will be required every time this function is called.
On the other hand, many jQuery plugins require several miscellaneous parameters that may be optional. By convention, these are passed as parameters via an 'options' array. As you said, this provides a nice interface as new parameters can be added without affecting the existing API. This makes the API clean as well since the user can ignore those options that are not applicable.
In general, when several parameters are involved, passing them as an array is a nice convention, as many of them are likely to be optional. This would have helped clean up many Win32 APIs, although it is more difficult to deal with arrays in C/C++ than in JavaScript.
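A sketch of that options convention (names are hypothetical):

function tooltip(element, options = {}) {
  // Callers pass only the options they care about; new options can be
  // added later without touching any existing call site.
  const delay = options.delay ?? 0;
  const position = options.position ?? 'bottom';
  // ...
}

tooltip(document.querySelector('#menu'), { delay: 200 }); // position falls back to 'bottom'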
It depends on the programming language used.
If you have a run-of-the-mill OO language, you should use an object that you can easily extend, if you are really concerned about API consistency.
If that doesn't matter that much, there is the option of changing the method signature and overloading the method with more / different parameters.
If your language doesn't support either and you want the API to be binary stable, use an array.
There are several considerations that must be made.
Where is the function used? - Only in code you created? One place or hundreds of places? The amount of work that will need to be done to maintain existing code is important. Remember to include the amount of time it will take to communicate to other programmers that may currently be using your function.
How critical is the new parameter? - Do you want to require it to be used? If it has a default value, will that default value break existing use of the function in any subtle ways?
Ease of comprehension - How many parameters are already passed into the function? The larger the number, the more confusing and error prone it will be. Code Complete recommends that you restrict the number of parameters to 7 or less. If you need more than that, you should try to abstract some or all of the related parameters into one object.
Other special considerations - Do you want to optimize your efforts for any special conditions such as code speed or size? Are there any special considerations that must be taken into account for your execution environment? Keep in mind your goals for the project and make sure you aren't working against them with whatever design choice you make.
In his book Code Complete, Steve McConnell decrees that a function should never have more than 7 arguments, and rarely even that many. He presents compelling arguments that, alas, I can't cite from memory.
Clean Code, more recently, advocates even fewer arguments.
So unless the number of things to pass is really small, they should be passed in an enveloping structure. If they're homogeneous, an array; if not, a reasonably lightweight object built for the purpose.
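For example, such a lightweight parameter object might look like this (a hypothetical mailer):

// One argument instead of five-plus positional parameters; the optional
// fields (cc, replyTo) can grow over time without breaking callers.
function sendMail({ to, subject, body, cc = [], replyTo = null }) {
  // ...
}

sendMail({ to: 'ops@example.com', subject: 'Disk alert', body: 'Volume is full' });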
You should do neither. Just add the parameter and change all callers to supply the proper value. The reason is that parameters with default values can only go at the end, so you will not be able to add any more required parameters anywhere in the parameter list without risking misinterpretation.
These are the critical steps to disaster:
1. Add one or two parameters with defaults.
2. Some callers supply them, and some rely on the defaults.
[half a year passes]
3. Add a required parameter (before them).
4. Change all callers to accept the required parameter.
5. Get a phone call, or some other event that makes you forget to change one of the callers from step 2.
6. Now your program compiles perfectly, but is invalid.
Unfortunately, in function call semantics we usually don't have a chance to say, by name, which value goes where.
An array is not a proper solution either. An array should be used as a collection of similar objects on which some uniform operation is performed. As they say here, if it's worth refactoring, it's worth refactoring now.
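To make the misinterpretation risk from the steps above concrete (a hypothetical resize function): once a required parameter is inserted in front of one that now carries a default, a caller missed during the sweep still runs, but its arguments silently change meaning.

// Original signature: function resize(width, height)
// After inserting a required 'scale' and giving 'height' a default:
function resize(width, scale, height = 100) {
  // ...
}

// This un-updated call still runs without complaint, but 600 is now
// read as a scale factor rather than a height:
resize(800, 600);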