How to get the parameters of the figure function (http://docs.bokeh.org/en/latest/docs/reference/plotting.html#bokeh.plotting.Figure)
I have checked the documentation of
bokeh.models.plots.Plot (http://docs.bokeh.org/en/latest/docs/reference/models/plots.html#bokeh.models.plots.Plot)
bokeh.models.widget.Widgets
and source code https://github.com/bokeh/bokeh/tree/master/bokeh
but found no complete list of the parameters of figure along the inheritance chain.
For example, how can I find the parameter
x_axis_label='datetime'
in the source code or documentation?
UPDATE:
All of these parameters are now fully documented, see keyword args for:
http://docs.bokeh.org/en/latest/docs/reference/plotting.html#bokeh.plotting.figure.figure
As of 0.10, there are a handful of "kwarg" parameters that we have not yet been able to automate the documentation of. There is no good way to find them programmatically, but you can see all of them here:
https://github.com/bokeh/bokeh/blob/master/bokeh/plotting.py#L41
Anything else is a standard property that will show up in the automated reference docs. But the list of "extra" parameters boils down to:
x_range
y_range
x_axis_type
y_axis_type
x_minor_ticks
y_minor_ticks
x_axis_location
y_axis_location
x_axis_label
y_axis_label
If you could make a GitHub issue to request better docs automation around these parameters it would be appreciated.
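The reason there is "no good way to find them programmatically" is that these extra names are consumed from **kwargs inside figure(), and Python's introspection only lists parameters that are declared explicitly. A minimal stdlib sketch (make_plot below is a hypothetical stand-in, not Bokeh's actual API):

```python
import inspect

def make_plot(data, x_axis_label=None, y_axis_label=None, **kwargs):
    """Hypothetical stand-in for a plotting function; not Bokeh's real API."""
    pass

# signature() reveals only the declared parameters...
declared = list(inspect.signature(make_plot).parameters)
print(declared)  # ['data', 'x_axis_label', 'y_axis_label', 'kwargs']

# ...so anything pulled out of **kwargs at runtime (the way figure()
# handles x_axis_type, x_range, etc.) never shows up here and has to be
# documented by hand.
```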
Related
I'm wondering how the author of the following page defines the "QuickTbl" function. I assume that it's a user-defined function rather than a function included in a library, but I could not find any definition for QuickTbl.
https://alphahive.wordpress.com/2014/09/25/asset-pricing-9b-regime-switching/
If you look at the blog above, in MarkovEstPlot, QuickTbl is used as follows:
gg.m.tbl <- QuickTbl(mean.tbl,title='Estimates of Mean')
This should be defined somewhere in the blog (or in a library), but I could not find any.
(I am now asking the author for his/her help as well).
install.packages("basictabler")
library("basictabler")
and replacing QuickTbl with qtbl seems to work now.
For the basictabler package, which includes the qtbl function, please refer to: https://cran.r-project.org/web/packages/basictabler/vignettes/v01-introduction.html
There's a bit of a preamble before I get to my question, so hang with me!
For an R package I'm working on, I'd like to make it as easy as possible for users to partially apply functions inline. I had the idea of using the [] operator to call my partial application function, which I've named "partialApplication." What I aim to achieve is this:
dnorm[mean = 3](1:10)
# Which would be exactly equivalent to:
dnorm(1:10, mean = 3)
To achieve this I tried defining a new [] method for objects of class function, i.e.
`[.function` <- function(...) partialApplication(...)
However, R gives a warning that the [] method for function objects is "locked." (Is there any way to override this?)
My idea seemed to be thwarted, but I thought of one simple solution: I can invent a new S3 class "partialAppliable" and create a [] method for it, i.e.
`[.partialAppliable` = function(...) partialApplication(...)
Then, I can take any function I want and append 'partialAppliable' to its class, and now my method will work.
class(dnorm) = append(class(dnorm), 'partialAppliable')
dnorm[mean = 3](1:10)
# It works!
Now here's my question/problem: I'd like users to be able to use any function they want, so I thought, what if I loop through all the objects in the active environment (using ls) and append 'partialAppliable' to the class of all functions? For instance:
allobjs = unlist(lapply(search(), ls))
# This lists all objects defined in all attached packages
for (i in allobjs) {
  if (is.function(get(i))) {
    curfunc = get(i)
    class(curfunc) = append(class(curfunc), 'partialAppliable')
    assign(i, curfunc)
  }
}
Voilà! It works. (I know, I should probably assign the modified functions back into their original package environments, but you get the picture).
Now, I'm not a professional programmer, but I've picked up that doing this sort of thing (globally modifying all variables in all packages) is generally considered unwise/risky. However, I can't think of any specific problems that will arise. So here's my question: what problems might arise from doing this? Can anyone think of specific functions/packages that will be broken by doing this?
Thanks!
This is similar to what the Defaults package did. The package is archived because the author decided that modifying other packages' code is a "very bad thing". I think most people would agree: just because you can do it does not mean it's a good idea.
And, no, you most certainly should not assign the modified functions back into their original package environments. CRAN does not like it when packages modify the user's search path unnecessarily, so I would be surprised if they allowed a package to modify other packages' function arguments.
You could work around that by putting all the modified functions in an environment on the search path. But then you have to ensure that environment is always searched first, which means modifying the search path every time another package is loaded.
Changing arguments for functions in other packages also has the potential to make it very difficult for others to reproduce your results because they must have all your argument settings. Unless you always call functions with all their arguments specified, which defeats the purpose of what you're trying to do.
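For completeness, the class-modification trick isn't needed for partial application itself. A closure-based sketch of a partialApplication helper (the name comes from the question; this is one possible implementation, not the author's actual code) could look like:

```r
partialApplication <- function(f, ...) {
  fixed <- list(...)                        # arguments to pre-apply
  function(...) do.call(f, c(list(...), fixed))
}

dnorm3 <- partialApplication(dnorm, mean = 3)
all.equal(dnorm3(1:10), dnorm(1:10, mean = 3))  # TRUE
```

This returns a new function instead of mutating anything on the search path, so no other package's behavior changes.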
I had some difficulties training the SyntaxNet POS tagger and parser, and I found a good solution, which I address in the answers section. If you have gotten stuck on one of the following problems, this documentation really helps:
1. The training, testing, and tuning data sets introduced by Universal Dependencies are in .conllu format, and I did not know how to convert them to .conll files; even after I found conllu-formconvert.py and conllu_to_conllx.pl, I still didn't have a clue how to use them. If you have a problem like this, the documentation has a Python file named convert.py, which is called in the main body of train.sh and train_p.sh to convert the downloaded datasets into files readable by SyntaxNet.
2. Whenever I ran bazel test on parser_trainer_test.sh (I was told to run bazel test in a Stack Overflow question and answer), it failed and gave me this error in test.log: path to save model cannot be found: --model_path=$TMP_DIR/brain_parser/greedy/$PARAMS/model. The documentation splits training the POS tagger and the parser and shows how to use different directories in parser_trainer and parser_eval. Even if you don't want to use the document itself, you can update your files based on it.
3. For me, training the parser took one day, so don't panic; it takes time "if you do not use a GPU server", said dsindex.
I got an answer on GitHub from dsindex and found it very useful.
The documentation at https://github.com/dsindex/syntaxnet includes:
convert_corpus
train_pos_tagger
preprocess_with_tagger
As dsindex said, and I quote: "I thought you want to train pos-tagger. if then, run ./train.sh"
In Julia, a lot of Base and closely related functions are also written in pure Julia, and the code is easily available. One can skim through the repository or the locally downloaded files and see how a function is written/implemented. But I think there is already some built-in method that does that for you, so that in the REPL or a Jupyter notebook you can write something like:
@code functioninquestion()
and get something like:
functioninquestion(input::Type)
some calculations
return
end
without paging through the code.
I just don't remember the method or call. I have read the Reflection/Introspection section of the manual, but I cannot seem to be able to use anything there. I've tried methods, methodswith, code_lowered, and expand, and cannot seem to make them give what I want.
This is not currently supported but probably will be in the future.
Though this may not be what the OP is looking for, @less is very convenient for reading the underlying code (so I use it very often). For example,
julia> @less 1 + 2
gives
+(x::Int, y::Int) = box(Int,add_int(unbox(Int,x),unbox(Int,y)))
which corresponds to the line given by
julia> @which 1 + 2
+(x::Int64, y::Int64) at int.jl:8
@edit functioninquestion() will open your editor at the location of the given method.
It probably wouldn't be too hard to take the same information used by @edit and use it to open the file, skip to the method definition, and then display it directly in the REPL (or Jupyter).
EDIT: While I was answering, somebody else mentioned @less, which seems to do exactly what you want already.
There is now another tool for this: https://github.com/timholy/CodeTracking.jl. It is part of Revise.jl (and works better when also using Revise). It should work inside Jupyter and with functions defined in the REPL, unlike @edit/@less.
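A minimal sketch of using it (assuming the Revise and CodeTracking packages are installed; @code_string is CodeTracking's macro for retrieving a method's source text):

```julia
using Revise, CodeTracking   # load Revise so CodeTracking can track methods

f(x) = 2x + 1                # a function defined right here in the REPL

# @code_string returns the source text of the method matching the call,
# printed in the REPL without opening an editor or pager.
print(@code_string f(3))
```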
I am trying to edit the R 'nnet' package. I have done some poking around, but am unfamiliar enough with R itself so as to not be able to make any headway.
I have tried trace("nnet", edit=TRUE), as outlined in a previous post here. This results in the editor opening and displaying:
function (x, ...)
UseMethod("nnet")
I'm not entirely sure what to do with this...
I've also found that it is part of the "VR bundle"; it has been suggested here that the source bundle be opened to view the code, but I don't know a) how to go about doing this or b) whether that would achieve anything, as I would then need to modify and run the code.
My goal is to add/modify a parameter minIt that would ensure a minimum number of epochs is completed before termination of training.
Thanks!
You can use:
methods(nnet)
[1] nnet.default nnet.formula
edit(nnet.formula)
As well as
trace("nnet.default", edit=TRUE)
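A sketch of how these fit together (assuming the nnet package is installed; getAnywhere and fixInNamespace are standard functions from the utils package):

```r
library(nnet)

methods(nnet)    # lists the S3 methods: nnet.default, nnet.formula

# nnet.default contains the actual training code; getAnywhere() prints
# its source even though the method is not exported from the namespace.
getAnywhere(nnet.default)

# To change the code for the current session (e.g. to enforce a minimum
# number of epochs), edit it directly inside the package namespace:
fixInNamespace("nnet.default", ns = "nnet")
```

Note that edits made this way last only for the current session; a permanent change to the C-level training loop would require rebuilding the package from source.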