fastai uses a rather unconventional style of `from fastai import *`, etc.
I, for one, don't like it, so I was painstakingly identifying each import in chapter 2 of the fastai book, but I ran into this error:
AttributeError: 'Learner' object has no attribute 'fine_tune'
However, when I then go and do
from fastbook import *
it works. This is very odd behavior: something is done to the `Learner` class, or the module that contains it, that gives it the `fine_tune` method once the above import is run.
I would like to avoid this style of coding, so what should I do to load the correct version of `Learner`?
I just faced the exact same issue. After looking at one of their tutorials, I saw that `cnn_learner` is not imported from the package you would expect:
from fastai.vision.all import cnn_learner
# rather than
from fastai.vision.learner import cnn_learner
Calling the `fine_tune` method then works as expected!
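For reference, here is a minimal sketch of the full call chain; `dls` is assumed to be a fastai `DataLoaders` object you have already built:

```python
# Minimal sketch; `dls` is an existing fastai DataLoaders object (not shown).
from fastai.vision.all import cnn_learner, resnet34, error_rate

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)  # resolves now: importing fastai.vision.all patches Learner
```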
Fastai does a lot of monkey patching, not only of its own classes but also of other libraries such as pathlib or torch. I personally don't like this style of coding either, but it is what it is.
I would highly recommend creating a separate environment (e.g. through conda), installing fastai there, and using their `from ... import *`. I have tried to work around these imports in the past, but since you don't know where or what has been monkey patched (unless you dig into the source), you will run into missing-attribute and similar errors all over the place.
Also, it doesn't play nicely with some other libraries. I remember having a hard time making it work with opencv due to package dependencies: installing opencv broke some of fastai's functionality (which I only discovered later) because it overrode something that fastai had patched in an external library.
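If you haven't seen the pattern before, here is a toy illustration of what monkey patching means in Python. This is not fastai's actual code, though fastai does attach a similar `.ls()` helper to `pathlib.Path`:

```python
import pathlib

# Monkey patching: attach a new method to an existing class at runtime.
def ls(self):
    """List directory contents -- bolted onto Path from outside the class."""
    return list(self.iterdir())

pathlib.Path.ls = ls

# Every Path instance now has .ls(), even in code that never saw this patch.
print(pathlib.Path(".").ls())
```

This is exactly why the missing-attribute errors appear: the method only exists if the patching module has been imported first.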
Related
Importing other packages' functions within one's own package is easy enough (example), but that requires a hard dependency (`Imports` in the DESCRIPTION file) on that package. In my case, it is a suggested package, so I am getting a warning on `R CMD check`: `'::' or ':::' import not declared`.
So right now I am recreating the functions locally, but this is problematic: when both packages are loaded, there is another warning, `The following objects are masked from [the other package]`. They're the same functions, so it's not a big issue, but it's annoying enough. I can't imagine there isn't a better practice than this?
In case actual code is needed for a demo, here are the problematic imports for my package: https://github.com/rempsyc/lavaanExtra/blob/main/R/save_as_x.R
Say I have a package with 5 packages in the Depends field of its DESCRIPTION file, and I have just realised that having this many packages in Depends is bad practice, given the inevitable import clashes that are starting to pop up as the number of imported functions grows. I'd like to move just one package, say pkg, to Imports, but I have no clue which functions of pkg are used in my package. Ideally, I would have unit tests with full coverage of the package source, and by simply removing pkg from the dependencies I could identify the pkg-specific imports from test errors of the form `could not find function "foo"`. Unfortunately, I do not have that breadth of test coverage. Is there a more efficient way to identify these imports than going through all the package code?
That is very straightforward. Change
Depends: pkgA, pkgB, pkgC
to
Imports: pkgA, pkgB, pkgC
and also add this to the NAMESPACE file:
import("pkgA")
import("pkgB")
import("pkgC")
which will globally import all exported symbols so you can continue as before.
You can also selectively import via
importFrom("pkgA", "func1", "func2", "func3")
and if you run `R CMD check`, it will (very helpfully) tell you which functions need this. The second method is somewhat more precise, but a little more work to set up.
And I don't think we have a tool to remove 'spurious imports'. Finding which imports may be unused is something you have to check manually, by trying to remove one and seeing if the package still builds and checks fine.
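As for locating the pkg-specific calls, here is a rough sketch of my own approach, assuming your package sources live in R/ and pkg is installed. It overcounts (any symbol that happens to match an export is flagged), but it narrows the search considerably:

```r
# Rough sketch: which exported names of 'pkg' (a placeholder) appear in R/?
loadNamespace("pkg")
used <- unique(unlist(lapply(
  list.files("R", pattern = "\\.R$", full.names = TRUE),
  function(f) unlist(lapply(parse(f), all.names))
)))
intersect(used, getNamespaceExports("pkg"))
```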
I have a piece of code where I'm using JuMP and Mamba, and both of them export `Model`.
When I run the code, I first get a warning, `both Mamba and JuMP export "Model"; uses of it in module QuantumRelay must be qualified`, and then an error is raised:
ERROR: UndefVarError: Model not defined
I need both of the packages: Mamba for the MCMC simulation (drawing from a probability distribution) and JuMP for linear programming.
You can find the package and the code at this link:
https://github.com/marouanehanhasse/Quantum_Relay
Check the QuantumRelay module.
Apologies in advance: I couldn't post the code here since I'm still new to this community.
In Julia, the using and import keywords are used to bring bindings from another module into the current scope.
`using M` brings all exported bindings from `M` directly into scope. If `M` defines and exports a function `my_function`, you can use `my_function` directly in your code after the `using` statement.
`import M` imports only the binding `M`, so you will use `M.my_function`.
If you want to avoid name clashes like the one you have with Mamba and JuMP, import at least one of them and then use the qualified name: `Mamba.Model` or `JuMP.Model`. Subjectively, this also makes your code clearer to read for someone not familiar with both packages and what they export.
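A minimal sketch of that pattern (constructor arguments omitted; check each package's documentation for the real signatures):

```julia
import JuMP
import Mamba

opt_model = JuMP.Model()          # the optimization model from JuMP
# mcmc_model = Mamba.Model(...)   # the MCMC model from Mamba, also qualified
```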
Details and other ways to use `using` and `import` can be found in the Julia documentation.
I am quite new to R, but it seems this question is closely related to the following posts 1, 2, 3 and a somewhat different topic 4. Unfortunately, I do not have enough reputation to comment right there. My problem is that after going through all the suggestions there, the code still does not work:
I included `Depends` in the DESCRIPTION file
I tried the second method, including a change to NAMESPACE (not reproducible)
I created an example package here containing a very small part of the code, which showed a slightly different error (`"J" not found` in `routes[J(lat1, lng1, lat2, lng2), .I, roll = "nearest", by = .EACHI]` instead of `'lat1' not found` in `routes[order(lat1, lng1, lat2, lng2, time)]`)
I tested all scripts in the console and as R scripts; there, the code ran without problems.
Thank you very much for your support!
Edit: @Roland
You are right, roxygen overwrites the NAMESPACE. You have to include `#' @import data.table` above the function. Do you understand why only inserting `Depends: data.table` in the DESCRIPTION file does not work? This might be a useful hint for the documentation, or did I miss it?
It was misleading that changing the line to `routes <- routes[order("lat1", "lng1", "lat2", "lng2", "time")]` helped at least a bit, as this line suddenly was no problem any more. Is it correct that in this case data.frame ordering is used? I will see how far I get now and will let you know the final result...
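For anyone else hitting this, a sketch of where the roxygen tag went in my case (the function name is made up; the data.table call is the one from above):

```r
#' @import data.table
order_routes <- function(routes) {
  routes[order(lat1, lng1, lat2, lng2, time)]
}
```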
Answering your questions (after edit).
Quoting the Writing R Extensions manual:
Almost always packages mentioned in ‘Depends’ should also be imported from in the NAMESPACE file: this ensures that any needed parts of those packages are available when some other package imports the current package.
So you should still have the import in NAMESPACE, regardless of whether you Depend on or Import data.table.
The `order` call doesn't seem to do what you expect; try the following:
order("lat1", "lng1", "lat2", "lng2", "time")
library(data.table)
data.table(a = 2:1, b = 1:2)[order("a", "b")]
If you run into issues, I recommend starting to debug by writing unit tests for your expected results. The most basic way to put unit tests in a package is a plain R script in the tests directory containing `stopifnot(...)` calls. Be aware that you need to `library()`/`require()` your package at the start of the script.
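A minimal sketch of such a test script (the package and function names are placeholders):

```r
# tests/test-routes.R
library(yourpackage)

res <- your_function(your_test_input)
stopifnot(
  inherits(res, "data.table"),  # expected class
  nrow(res) > 0                 # expected non-empty result
)
```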
This is more an addition to the answers above; I found it really useful...
From the docs [Hadley-description](http://r-pkgs.had.co.nz/description.html):
Imports packages listed here must be present for your package to
work. In fact, any time your package is installed, those packages
will, if not already present, be installed on your computer
(devtools::load_all() also checks that the packages are installed).
Adding a package dependency here ensures that it’ll be installed.
However, it does not mean that it will be attached along with your
package (i.e., library(x)). The best practice is to explicitly refer
to external functions using the syntax package::function(). This
makes it very easy to identify which functions live outside of your
package. This is especially useful when you read your code in the
future.
If you use a lot of functions from other packages this is rather
verbose. There’s also a minor performance penalty associated with
`::` (on the order of 5µs, so it will only matter if you call the
function millions of times).
From the docs Hadley-namespace
NAMESPACE also controls which external functions can be used by your
package without having to use ::. It’s confusing that both
DESCRIPTION (through the Imports field) and NAMESPACE (through import
directives) seem to be involved in imports. This is just an
unfortunate choice of names. The Imports field really has nothing to
do with functions imported into the namespace: it just makes sure the
package is installed when your package is. It doesn’t make functions
available. You need to import functions in exactly the same way
regardless of whether or not the package is attached.
... this is what I recommend: list the package in DESCRIPTION so that it’s
installed, then always refer to it explicitly with pkg::fun().
Unless there is a strong reason not to, it’s better to be explicit.
It’s a little more work to write, but a lot easier to read when you
come back to the code in the future. The converse is not true. Every
package mentioned in NAMESPACE must also be present in the Imports or
Depends fields.
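A tiny sketch of what that recommendation looks like in practice (the package and function here are made-up examples):

```r
# DESCRIPTION contains:  Imports: jsonlite
# R/read_config.R
read_config <- function(path) {
  # Explicit pkg::fun() call; no importFrom() entry needed in NAMESPACE.
  jsonlite::fromJSON(path)
}
```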
I am trying to load a Matlab file with the R.matlab package. The problem is that it keeps loading indefinitely, e.g. `table <- readMat("~/desktop/hg18_with_miR_20080407.mat")`. I have a genome file from the Broad Institute (hg18_with_miR_20080407.mat).
You can find it at:
http://genepattern.broadinstitute.org/ftp/distribution/genepattern/dev_archive/GISTIC/broad.mit.edu:cancer.software.genepattern.module.analysis/00125/1.1/
I was wondering: has anyone tried this package and had similar issues?
(Hopefully helpful, but not really an answer; there was too much formatting for a comment.)
I think you may need to get a friend with Matlab access to save the file into a more reasonable format, or use Python for data processing. It "hangs" for me as well (OS X 10.9, R 3.1.1). The following works in Python:
import scipy.io
mat = scipy.io.loadmat("hg18_with_miR_20080407.mat")
You can see and work with the crufty `rg` and `cyto` numpy arrays, but they can't be converted to JSON with `json.dumps`, and even `jsonpickle.encode` coughs up a lungful of errors (i.e. you won't be able to use `rPython` to get access to the object, which was the original workaround I was looking for), so no serialization to a file either. And I have to believe the resultant JSON would have been ugly to deal with.
Your options are to:
get a friend to convert it (as suggested previously)
make CSV files out of the numpy arrays in Python (see the sketch below)
use Matlab
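For the CSV route, something along these lines may work, assuming the entries you care about are plain 2-D arrays (the crufty structured ones will likely need per-field handling):

```python
import numpy as np
import scipy.io

mat = scipy.io.loadmat("hg18_with_miR_20080407.mat")
for key, val in mat.items():
    # skip loadmat's metadata entries (__header__, __version__, ...)
    if isinstance(val, np.ndarray) and val.ndim == 2:
        np.savetxt("%s.csv" % key, val, delimiter=",", fmt="%s")
```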