Is it possible to have the software ignore unused arguments when a module is run?
For example, I have a module multiply(a,b), which returns the product of a and b. I will receive an error if I call the module like so:
multiply(a=20,b=30,c=10)
Returning an error on this just seems a bit unnecessary, since the required inputs a and b have been specified. Is it possible to avoid this bad behaviour?
An easy solution would be just to stop specifying c, but that doesn't answer why R behaves like this. Is there another way to solve this?
Change the definition of multiply to take additional unknown arguments:
multiply <- function(a, b, ...) {
  # Original code
}
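For instance, a complete version of the skeleton above (assuming the body is simply a * b) behaves like this:

```r
# Complete version of the skeleton above; extra named arguments
# are swallowed by ... and simply ignored.
multiply <- function(a, b, ...) {
  a * b
}

multiply(a = 20, b = 30, c = 10)
# [1] 600
```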
The R.utils package has a function called doCall which is like do.call, but it does not return an error if unused arguments are passed.
multiply <- function(a, b) a * b
# these will fail
multiply(a = 20, b = 30, c = 10)
# Error in multiply(a = 20, b = 30, c = 10) : unused argument (c = 10)
do.call(multiply, list(a = 20, b = 30, c = 10))
# Error in (function (a, b) : unused argument (c = 10)
# R.utils::doCall will work
R.utils::doCall(multiply, args = list(a = 20, b = 30, c = 10))
# [1] 600
# it also does not require the arguments to be passed as a list
R.utils::doCall(multiply, a = 20, b = 30, c = 10)
# [1] 600
One approach (which I can't imagine is good programming practice) is to add the ..., which is traditionally used to pass arguments specified in one function on to another.
> multiply <- function(a,b) a*b
> multiply(a = 2,b = 4,c = 8)
Error in multiply(a = 2, b = 4, c = 8) : unused argument(s) (c = 8)
> multiply2 <- function(a,b,...) a*b
> multiply2(a = 2,b = 4,c = 8)
[1] 8
You can read more about how ... is intended to be used here
You could use dots: ... in your function definition.
myfun <- function(a, b, ...){
cat(a,b)
}
myfun(a=4,b=7,hello=3)
# 4 7
I had the same problem as you. I had a long list of arguments, most of which were irrelevant. I didn't want to hard-code them in. This is what I came up with:
library(magrittr)
do_func_ignore_things <- function(data, what){
  acceptable_args <- data[names(data) %in% (formals(what) %>% names)]
  do.call(what, acceptable_args %>% as.list)
}
do_func_ignore_things(c(n = 3, hello = 12, mean = -10), "rnorm")
# -9.230675 -10.503509 -10.927077
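If you'd rather avoid the magrittr dependency, the same argument-filtering idea can be written in base R (a sketch; the function name here is my own):

```r
# Base-R sketch of the same idea: keep only the arguments that
# appear in the target function's formals, then do.call() it.
do_call_drop_extras <- function(data, what) {
  acceptable_args <- data[names(data) %in% names(formals(what))]
  do.call(what, as.list(acceptable_args))
}

multiply <- function(a, b) a * b
do_call_drop_extras(c(a = 20, b = 30, c = 10), "multiply")
# [1] 600
```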
Since there are already a number of answers directly addressing the question, and R is often used by technically skilled non-programmers, let me quickly outline why the error exists, and advise against suppression workarounds.
The number of parameters is an important aspect defining a function. If the number of arguments doesn't match, that's a good indication that there is a mismatch between the caller's intent and what the function is about to do. For this reason, this is an error in many programming languages, including Java, Python, Haskell, and many others (at compile time, where applicable). Indeed, stricter type checking in many of these languages will also cause errors if types mismatch.
As a program grows in size, and code ages, it becomes harder to spot whether mismatches of this kind are intended or are genuine bugs. This is why the idea of "clean code" - simple-to-read code with no errors or warnings - is a standard professional programmers often work towards.
Accordingly, I recommend reworking the code to remove the unnecessary parameter. It will be simpler to understand and debug for yourself and others in the future.
Of course I understand that R users are often working on small scripts with limited lifespans, and the usual tradeoffs of large software engineering projects don't always apply. Maybe for your quick script, that will only be used for a week, it made sense to just suppress the error. However, it is widely observed (and I have seen in my own experience) that what code endures is rarely obvious at the time of writing. If you are pursuing open science, and publishing your code and data, it is especially helpful for that code to be useful in the future, to others, so they can reproduce your results.
A similar error is also thrown when using the select() function from the dplyr package and having loaded the MASS package too.
Minimal sample to reproduce:
library("dplyr")
library("MASS")
iris %>% select(Species)
will throw:
Error in select(., Species) : unused argument (Species)
To circumvent use:
library("dplyr")
library("MASS")
iris %>% dplyr::select(Species)
EXPLANATION:
When loading dplyr, a select function is defined and when loading MASS afterwards, the select function is overwritten. When the select function is called, MASS::select() is executed which needs a different number of arguments.
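To see which attached packages define a function with a given name (and hence which one masks the other), base R's find() helps; the output below assumes both packages are attached:

```r
# With both dplyr and MASS attached, find() lists every environment
# on the search path that defines `select`; the first one listed is
# the one a bare call to select() will use.
find("select")
# e.g. "package:MASS" "package:dplyr"  (MASS attached last, so it masks dplyr)
```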
R has a function prod() which multiplies all of its arguments, ignoring their names. The example the asker gave works fine with prod() without returning an error.
prod(a=20,b=30,c=10)
# 6000
In any case, an error highlighted is an opportunity to fix it, so it's not bad behaviour.
Related
I have the following equation:
and I'm trying to generate the analytic derivative.
I know you can use deriv() and D() for an expression, but I cannot seem to figure out how to actually implement a sum or a product notation into an expression.
partial/incomplete answer
The Deriv package offers a more robust (and extensible) alternative to the base R D and deriv functions, and appears to know about sum() already. prod() will be difficult, though (see below).
A very simple example:
library(Deriv)
Deriv(~ sum(b*x), "b")
## sum(x)
A slightly more complex answer that sort-of works:
Deriv(~ sum(rep(a, length(x)) + b*x), c("a","b"))
## c(a = sum(rep(x = 1, length(x))), b = sum(x))
Note here that sum(a+b*x) doesn't work (returns 1) for the derivative with respect to a, for reasons described in ?Deriv (search for "rep(" in the page): the rep() is needed to help Deriv sort out scalar/vector definitions. It's too bad that it can't simplify sum(rep(x=1, length(x))) to length(x) but ...
Trying
Deriv( ~ exp(sum(a+b*x))/prod(1+exp(a+b*x)))
gives an error
Could not retrieve body of 'prod()'
You might be able to add a rule for products to the derivatives table, but it will be tricky since prod() takes a ... argument. Let's try defining our own function Prod() which takes a single argument x (I think this is the right generalization of the product rule but didn't think about it too carefully.)
Prod <- function(x) prod(x)
drule[["Prod"]] <- alist(for(i in 1:length(x)) { .dx[i]*Prod(x[-i]) })
Deriv(~ Prod(beta*x), "x")
Unsurprisingly (to me), this doesn't work: the result is 0 ... (the basic problem is that using .dx[i] to denote the derivative of x[i] doesn't work in the machinery).
I don't know of a way to solve this in R; if I had this problem (depending on more context, which I don't know), I might see if I could find a framework for automatic differentiation (rather than symbolic differentiation). Unfortunately most of the existing tools for autodiff in R use backends in C++ or Julia (e.g. see here (C++ + Rcpp + CppAD), here (Julia), or the TMB package (C++/CppAD/user-friendly extensions)). There's an ancient pure-R github project radx, but it looks too incomplete to use. (FWIW autodiffr requires a Julia installation but doesn't actually require you to write any Julia code, AFAICS.)
What's the correct way to work out the lifecycle stage of tidyverse functions? (that is, whether the function/argument is considered stable, experimental, deprecated, or superseded)
What I know so far
I guess that the documentation will tell us, for example: library(tidyverse); ?summarise shows the .groups argument as 'experimental'.
Is assuming a function/argument is 'stable' unless it's expressly designated 'experimental', 'deprecated', or 'superseded' correct, or is there a more accurate/easier way to find out whether or not it's 'stable'?
Background
In case it's relevant, the problem I'm trying to solve is that a friend complains that tidyverse functions they rely on are occasionally deprecated, necessitating shiny app maintenance. So I want to suggest they check for 'stable' functions/arguments in their future work to minimise this maintenance. I'm just after the easiest / most sensible way for them to do that.
A potential solution is to lint your package/script using the lint_tidyverse_lifecycle() function from the lifecycle package to identify 'unstable' functions, e.g.
(In Rstudio):
library(lifecycle)
lint_tidyverse_lifecycle()
Which, in my case, gives me a large number of unstable functions that I need to correct/update/fix:
Most of these are very simple changes (i.e. dplyr::sample_n() -> dplyr::slice_sample()) and once all of the issues are addressed, all of the tidyverse functions used in the script will be 'stable'.
For debugging code, see lint_lifecycle, which searches for matches in files. To find the lifecycle of functions (here specifically for the tidyverse), we can make a custom function. Note that neither of these approaches identifies experimental arguments like .groups in summarise, which can lead to frustrating debugging later. For functions with such arguments, I'd consider finding a stable alternative, for example base R aggregate.
f <- function(type = NULL){
  check_pkgs <- Vectorize(lifecycle:::pkg_lifecycle_statuses, "package")
  pkgs <- tidyverse::tidyverse_packages()
  l <- check_pkgs(pkgs)
  l <- l[lapply(l, length) > 0]
  df <- data.frame(pkg = unlist(l[seq.int(1, length(l), 3)], use.names = FALSE),
                   fun = unlist(l[seq.int(2, length(l), 3)], use.names = FALSE),
                   cycle = unlist(l[seq.int(3, length(l), 3)], use.names = FALSE))
  if(!is.null(type)) subset(df, cycle == type) else df
}
# many functions are "experimental".
f("experimental")
For non-tidyverse functions, we can replace pkgs <- tidyverse_packages() with
att <- .packages()
pkgs <- att[!att %in% rownames(installed.packages(priority="base"))]
And if the situation requires it, we can do some manual searching by finding
needles <- f()$fun
haystack <- rstudioapi::getActiveDocumentContext()[["contents"]]
As an aside, there are a few more stages (no longer recommended by lifecycle) that are still used in tidyverse documentation. Choices are superseded, deprecated, questioning, defunct, experimental, soft-deprecated. Further caveats are maturing and retired.
I have two lists of lists, humanSplit and ratSplit. humanSplit has elements of the form:
> humanSplit[1]
$Fetal_Brain_408_AGTCAA_L001_R1_report.txt
humanGene humanReplicate alignment RNAtype
66 DGKI Fetal_Brain_408_AGTCAA_L001_R1_report.txt 6 reg
68 ARFGEF2 Fetal_Brain_408_AGTCAA_L001_R1_report.txt 5 reg
If you type humanSplit[[1]], it gives the data without the name $Fetal_Brain_408_AGTCAA_L001_R1_report.txt
ratSplit is essentially similar to humanSplit, differing only in column order. I want to apply Fisher's test to every possible pairing of replicates from humanSplit and ratSplit. I defined the following empty vectors, which I will use to store the information from my Fisher's tests:
humanReplicate <- vector(mode = 'character', length = 0)
ratReplicate <- vector(mode = 'character', length = 0)
pvalue <- vector(mode = 'numeric', length = 0)
For Fisher's test between two replicates of humanSplit and ratSplit, I define the following function. In the function I use geneList, which is a data.frame made by reading a file and has the form:
> head(geneList)
human rat
1 5S_rRNA 5S_rRNA
2 5S_rRNA 5S_rRNA
Now here is the main function, where I use a function getGenetype which I already defined in another part of the code. Also, x and y are integers:
fishertest <- function(x, y) {
  ratReplicateName <- names(ratSplit[x])
  humanReplicateName <- names(humanSplit[y])
  ## merging above two based on the one-to-one gene mapping as in geneList
  ## defined above.
  mergedHumanData <- merge(geneList, humanSplit[[y]], by.x = "human", by.y = "humanGene")
  mergedRatData <- merge(geneList, ratSplit[[x]], by.x = "rat", by.y = "ratGene")
  ## [here i do other manipulation with using already defined function
  ## getGenetype that is defined outside of this function and make things
  ## necessary to define following contingency table]
  contingencyTable <- matrix(c(HnRn, HnRy, HyRn, HyRy), nrow = 2)
  fisherTest <- fisher.test(contingencyTable)
  humanReplicate <- c(humanReplicate, humanReplicateName)
  ratReplicate <- c(ratReplicate, ratReplicateName)
  pvalue <- c(pvalue, fisherTest$p)
}
After doing all this, I make the grid eg to use in apply. Here I am basically trying to do something similar to a double for loop and then using Fisher's test:
eg <- expand.grid(i = 1:length(ratSplit),j = 1:length(humanSplit))
junk = apply(eg, 1, fishertest(eg$i,eg$j))
Now the problem is, when I try to run it, I get the following error when apply tries to use the function fishertest:
Error in humanSplit[[y]] : recursive indexing failed at level 3
RStudio points out a problem in the following line:
mergedHumanData <-merge(geneList,humanSplit[[y]], by.x = "human", by.y = "humanGene")
Ultimately, I want to do the following:
result <- data.frame(humanReplicate,ratReplicate, pvalue ,alternative, Conf.int1, Conf.int2, oddratio)
I am struggling with these questions:
In defining the fishertest function, how should I pass ratSplit, humanSplit, and the already defined function getGenetype?
And how should I use apply here?
Any help would be much appreciated.
Up front: read ?apply. Additionally, the first three hits on google when searching for "R apply tutorial" are helpful snippets: one, two, and three.
Errors in fishertest()
The error message itself has nothing to do with apply. The reason it got as far as it did is because the arguments you provided actually resolved. Try to do eg$i by itself, and you'll see that it is returning a vector: the corresponding column in the eg data.frame. You are passing this vector as an index in the i argument. The primary reason your function erred out is because double-bracket indexing ([[) only works with singles, not vectors of length greater than 1. This is a great example of where production/deployed functions would need type-checking to ensure that each argument is a numeric of length 1; often not required for quick code but would have caught this mistake. Had it not been for the [[ limit, your function may have returned incorrect results. (I've been bitten by that many times!)
BTW: your code is also incorrect in its scoped access to pvalue, et al. If you make your function return just the numbers you need and then aggregate them outside of the function, your life will simplify. (pvalue <- c(pvalue, ...) will find the pvalue assigned outside the function but will not update it as you want.) You are defeating one purpose of writing this into a function. When thinking about writing this function, try to answer only this question: "how do I compare a single rat record with a single human record?" Only after that works correctly and simply, without having to overwrite variables in the parent environment, should you try to answer the question "how do I apply this function to all pairs and aggregate it?" Try very hard to have your function not change anything outside of its own environment.
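To illustrate the "return, don't mutate" advice with a self-contained toy (pair_result here is a hypothetical stand-in for the real fishertest):

```r
# Toy worker: returns one row's worth of results instead of
# appending to vectors outside its own environment.
pair_result <- function(x, y) {
  data.frame(x = x, y = y, product = x * y)
}

eg <- expand.grid(i = 1:2, j = 1:2)
# call the worker once per pair, then aggregate OUTSIDE the worker
rows <- lapply(seq_len(nrow(eg)), function(k) pair_result(eg$i[k], eg$j[k]))
result <- do.call(rbind, rows)
result
#   x y product
# 1 1 1       1
# 2 2 1       2
# 3 1 2       2
# 4 2 2       4
```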
Errors in apply()
Had your function worked properly despite these errors, you would have received the following error from apply:
apply(eg, 1, fishertest(eg$i, eg$j))
## Error in match.fun(FUN) :
## 'fishertest(eg$i, eg$j)' is not a function, character or symbol
When you call apply in this sense, it parses the third argument and, in this example, evaluates it. Since it is simply a call to fishertest(eg$i, eg$j), which is intended to return a data.frame row (inferred from your previous question), it resolves to such, and apply then sees something akin to:
apply(eg, 1, data.frame(...))
So apply is being handed a data.frame, not a function.
The third argument (FUN) needs to be a function itself that takes as its first argument a vector containing the elements of the row (1) or column (2) of the matrix/data.frame. As an example, consider the following contrived example:
eg <- data.frame(aa = 1:5, bb = 11:15)
apply(eg, 1, mean)
## [1] 6 7 8 9 10
# similar to your use, this will not work; the error comes from mean
# getting no arguments (your call above got further because
# fishertest(eg$i, eg$j) evaluated before apply saw it)
apply(eg, 1, mean())
## Error in mean.default() : argument "x" is missing, with no default
Realize that mean is a function itself, not the return value from a function (there is more to it, but this definition works). Because we're iterating over the rows of eg (because of the 1), the first iteration takes the first row and calls mean(c(1, 11)), which returns 6. The equivalent of your code here, mean()(c(1, 11)), will fail for a couple of reasons: (1) mean requires an argument and is not getting one, and (2) regardless, it does not return a function itself (possible in a "functional programming" paradigm, easy in R but uncommon for most programmers).
In the example here, mean will accept a single argument which is typically a vector of numerics. In your case, your function fishertest requires two arguments (templated by my previous answer to your question), which does not work. You have two options here:
Change your fishertest function to accept a single vector as an argument and parse the index numbers from it. Both of the following options do this:
fishertest <- function(v) {
  x <- v[1]
  y <- v[2]
  ratReplicateName <- names(ratSplit[x])
  ## ...
}
or
fishertest <- function(x, y) {
  if (missing(y)) {
    y <- x[2]
    x <- x[1]
  }
  ratReplicateName <- names(ratSplit[x])
  ## ...
}
The second version allows you to continue using the manual form of fishertest(1, 57) while also allowing you to do apply(eg, 1, fishertest) verbatim. Very readable, IMHO. (Better error checking and reporting can be used here, I'm just providing a MWE.)
Write an anonymous function to take the vector and split it up appropriately. This anonymous function could look something like function(ii) fishertest(ii[1], ii[2]). This is typically how it is done for functions that either do not transform as easily as in #1 above, or for functions you cannot or do not want to modify. You can either assign this intermediary function to a variable (which makes it no longer anonymous, figure that) and pass that intermediary to apply, or just pass it directly to apply, ala:
.func <- function(ii) fishertest(ii[1], ii[2])
apply(eg, 1, .func)
## equivalently
apply(eg, 1, function(ii) fishertest(ii[1], ii[2]))
There are two reasons why many people opt to name the function: (1) if the function is used multiple times, better to define once and reuse; (2) it makes the apply line easier to read than if it contained a complex multi-line function definition.
As a side note, there are some gotchas with using apply and family that, if you don't understand them, will be confusing. Not the least of which is that when your function returns vectors, the matrix returned from apply will need to be transposed (with t()), after which you'll still need to rbind or otherwise aggregate.
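A minimal sketch of that transpose gotcha, reusing the contrived eg from above:

```r
# When FUN returns a length-2 vector for each of 5 rows, apply()
# returns a 2 x 5 matrix -- one *column* per input row -- so it
# usually needs t() before rbind-style aggregation.
eg <- data.frame(aa = 1:5, bb = 11:15)
m <- apply(eg, 1, function(row) c(sum = sum(row), diff = row[[2]] - row[[1]]))
dim(m)
# [1] 2 5
t(m)  # now one row per input row, with columns "sum" and "diff"
```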
This is one area where using ddply may provide a more readable solution. There are several tutorials showing it off. For a quick intro, read this; for a more in depth discussion on the bigger picture in which ddply plays a part, read Hadley's Split, Apply, Combine Strategy for Data Analysis paper from JSS.