Use of requests.Response(...) with parameters - python-requests

There is a component in the Runestone book Foundations of Python Programming called response_with_catching. The code is available here. The component prevents repeated calls to an API by saving responses and returning cached data when possible.
If cached data is available for an API call, this code is executed:
return requests.Response(permanent_cache[cache_key], full_url)
What is the purpose of this line?
There is no clear description in the documentation of how requests.Response() is used with parameters.
For example, in one particular run the parameters permanent_cache and full_url are:
permanent_cache[cache_key] =
[{"word":"nappy","score":707,"numSyllables":2},
{"word":"scrappy","score":702,"numSyllables":2}]
full_url = https://api.datamuse.com/words?rel_rhy=happy&max=2
There is a problem when this line is executed. For example, running
full_url = "https://api.datamuse.com/words?rel_rhy=happy&max=2"
x = requests.Response([{"word":"nappy","score":707,"numSyllables":2},
{"word":"scrappy","score":702,"numSyllables":2}], full_url)
throws the error
TypeError: __init__() takes 1 positional argument but 3 were given

After additional research into the error thrown by the line requests.Response(...) used in the Runestone interactive book Foundations of Python Programming, it turns out that the book does not use the official Python requests module. Rather, it uses a restricted version, and the code listed for response_with_catching is not meant to run in a full Python environment. This is not directly evident from reading the comments in the code.
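In the official requests library, Response() takes no constructor arguments; a real Response object has to be built by setting its attributes after construction. A minimal sketch of how the book's cached-response line could be reproduced in standard Python (make_cached_response is a hypothetical helper, not part of requests):

```python
import json

import requests


def make_cached_response(cached_body, url, status=200):
    """Build a real requests.Response from cached data.

    Hypothetical helper: since Response() accepts no arguments,
    we populate the object's attributes after constructing it.
    """
    resp = requests.Response()
    resp._content = json.dumps(cached_body).encode("utf-8")
    resp.status_code = status
    resp.url = url
    resp.headers["Content-Type"] = "application/json"
    return resp


cached = [{"word": "nappy", "score": 707, "numSyllables": 2},
          {"word": "scrappy", "score": 702, "numSyllables": 2}]
resp = make_cached_response(
    cached, "https://api.datamuse.com/words?rel_rhy=happy&max=2")
print(resp.json()[0]["word"])  # nappy
```

This relies on the private `_content` attribute, which is an implementation detail rather than documented API, so treat it as a sketch for testing/caching rather than production code.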


'ConcatenatedDoc2Vec' object has no attribute 'docvecs'

I am a beginner in machine learning, trying document embedding for a university project. I work with Google Colab and Jupyter Notebook (via Anaconda). The problem is that my code runs perfectly in Google Colab, but if I execute the same code in Jupyter Notebook (via Anaconda), I run into an error with the ConcatenatedDoc2Vec object.
With this function I build the vector features for a Classifier (e.g. Logistic Regression).
def build_vectors(model, length, vector_size):
    vector = np.zeros((length, vector_size))
    for i in range(0, length):
        prefix = 'tag' + '_' + str(i)
        vector[i] = model.docvecs[prefix]
    return vector
I concatenate two Doc2Vec models (d2v_dm, d2v_dbow); both work perfectly through the whole code and have no problems with the function build_vectors():
d2v_combined = ConcatenatedDoc2Vec([d2v_dm, d2v_dbow])
But if I run the function build_vectors() with the concatenated model:
#Compute combined Vector size
d2v_combined_vector_size = d2v_dm.vector_size + d2v_dbow.vector_size
d2v_combined_vec= build_vectors(d2v_combined, len(X_tagged), d2v_combined_vector_size)
I receive this error (but only when I run this in Jupyter Notebook via Anaconda; there is no problem with this code in Google Colab):
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [20], in <cell line: 4>()
1 #Compute combined Vector size
2 d2v_combined_vector_size = d2v_dm.vector_size + d2v_dbow.vector_size
----> 4 d2v_combined_vec= build_vectors(d2v_combined, len(X_tagged), d2v_combined_vector_size)
Input In [11], in build_vectors(model, length, vector_size)
3 for i in range(0, length):
4 prefix = 'tag' + '_' + str(i)
----> 5 vector[i] = model.docvecs[prefix]
6 return vector
AttributeError: 'ConcatenatedDoc2Vec' object has no attribute 'docvecs'
This is mysterious to me: it works in Google Colab but not in Jupyter Notebook via Anaconda, and I did not find anything on the web to solve my problem.
If it's working one place, but not the other, you're probably using different versions of the relevant libraries – in this case, gensim.
Does the following show exactly the same version in both places?
import gensim
print(gensim.__version__)
If not, the most immediate workaround is to make the place where it doesn't work match the place where it does, by force-installing the same explicit version – pip install gensim==VERSION (where VERSION is the target version) – and then restarting your notebook so it sees the change.
Beware, though, that unless starting from a fresh environment, this could introduce other library-version mismatches!
Other things to note:
Last I looked, Colab was using an over-4-year-old version of Gensim (3.6.0), despite more-recent releases with many fixes & performance improvements. It's often best to stay at or closer-to the latest versions of any key libraries used by your project; this answer describes how to trigger the installation of a more-recent Gensim at Colab. (Though of course, the initial effects of that might be to cause the same breakage in your code, adapted for the older version, at Colab.)
In more-recent Gensim versions, the property formerly called docvecs is now called just dv - so some older code erroring this way may only need docvecs replaced with dv to work. (Other tips for migrating older code to the latest Gensim conventions are available at: https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4 )
It's unclear where you're pulling the ConcatenatedDoc2Vec class from. A class of that name exists in some Gensim demo/test code, as a very minimal shim that was at one time used in attempts to reproduce the results of the original "Paragraph Vector" (aka Doc2Vec) paper. But beware: that's not a usual way to use Doc2Vec, and the class of that name that I know of barely does anything outside its original narrow purpose.
Further, beware that as far as I know, no one has ever reproduced the full claimed performance of the two-kinds-of-doc-vectors-concatenated approach reported in that paper, even using the same data, described technique, and evaluation. The claimed results likely relied on some other undisclosed techniques, or on some error in the write-up. So if you're trying to mimic that, don't get too frustrated. And know that most uses of Doc2Vec just pick one mode.
If you have your own separate reasons for creating concatenated feature-vectors, from multiple algorithms, you should probably write your own code for that, not limited to the peculiar two-modes-of-Doc2Vec code from that one experiment.
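Rolling your own concatenation is only a few lines. A minimal sketch, using plain dicts in place of trained Doc2Vec models (model_dm and model_dbow here are stand-ins for illustration, not real gensim objects):

```python
def build_combined_vectors(models, length):
    """Concatenate each model's vector for tags tag_0 .. tag_{length-1}."""
    vectors = []
    for i in range(length):
        tag = "tag_" + str(i)
        combined = []
        for m in models:
            combined.extend(m[tag])  # append this model's vector for the tag
        vectors.append(combined)
    return vectors


# Stand-ins for the per-document vectors of two trained models
model_dm = {"tag_0": [0.1, 0.2], "tag_1": [0.3, 0.4]}
model_dbow = {"tag_0": [1.0, 2.0], "tag_1": [3.0, 4.0]}

combined = build_combined_vectors([model_dm, model_dbow], 2)
print(combined[0])  # [0.1, 0.2, 1.0, 2.0]
```

With real models you would index model.dv[tag] (Gensim 4.x) or model.docvecs[tag] (3.x) instead of doing a dict lookup, and likely stack the results into a NumPy array for your classifier.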

Using reticulate with targets

I'm having a weird issue where my target, which interfaces with a slightly customized Python module (installed with pip install --editable) through reticulate, gives different results when it is called from an interactive R session than when targets is started directly from the command line, even when I make sure the other arguments to tar_make are identical (callr_function = NULL, which I use for interactive debugging). The function is deterministic and should return exactly the same result, but doesn't.
It's tricky to provide a reproducible example, but if truly necessary I'll invest the required time in it. I'd like tips on how to debug this and identify the exact issue. I already safeguarded against potential pointer issues: the Python object is not passed around between different targets/environments (anymore); rather, it's immediately used to compute the result of interest. I also checked that the same Python version is being used by printing the result of reticulate::py_config() to screen, and I verified that both approaches use the same version of the customized module.
Thanks in advance!

How can I publish an R function using a Plumber API? I want to call the function and publish it in the form of an HTTP response

I've been successful in publishing a code file using Plumber, but I've been unsuccessful in all my attempts to call a function and publish it in the form of an HTTP response.
library(plumber)
r <- plumb("predictTest.R")
r$run()
Shown above is the code I've been using to publish a single code file.
When I use the same syntax for a function like:
library(plumber)
r <- plumb(predictTest("India","Australia"))
r$run()
The error I get is:
TypeError: Failed to fetch
How can I call a function and publish it as an HTTP response?
Look up the documentation on the plumb function (type ?plumb in R or view it here). You'll see that it expects
plumb(file, dir = ".")
I.e., the first argument is the filename of an R script file. That's why the first code example works and the second doesn't; you cannot provide plumb with the output of a function (unless that output is the filename of an R file).
If you only want to expose a single function to plumber, isolate it in a file and use that. If you meant something else, ask a new question and provide more examples. And if you cannot disclose any of your data or source, try making a minimal working example with base-R material that isn't proprietary to your company.
Finally, read the documentation at https://www.rplumber.io/docs/. You might be interested in chapter 8 and defining end-points.
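For the single-function case, a minimal sketch of such a file might look like this (the endpoint path and parameter names are assumptions based on the predictTest("India", "Australia") call in the question):

```r
# predictTest_api.R — hypothetical plumber wrapper around predictTest()

#* Predict the outcome of a match between two teams
#* @param team1 First team name
#* @param team2 Second team name
#* @get /predict
function(team1, team2) {
  predictTest(team1, team2)
}
```

You would then serve it with plumb("predictTest_api.R")$run(port = 8000) and call it as GET /predict?team1=India&team2=Australia; plumber serializes the function's return value as the HTTP response (JSON by default).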

How can I create a library in julia?

I need to know how to create a library in Julia and where I must keep it in order to call it later. I come from C and MATLAB, and it seems there is no documentation about practical programming in Julia.
Thanks
If you are new to Julia, you will find it helpful to realize that Julia has two mechanisms for loading code. Stating you "need to know how to create a library in Julia" implies you most likely want to create a Julia module (docs) and possibly a package (docs). But the first method listed below may also be useful to you.
The two methods to load code in Julia are:
1. Code inclusion via include("file_path_relative_to_call_or_pwd.jl") (docs)
The expression include("source.jl") causes the contents of the file source.jl to be evaluated in the global scope of the module where the include call occurs.
Regarding where the "source.jl" file is searched for:
The included path, source.jl, is interpreted relative to the file where the include call occurs. This makes it simple to relocate a subtree of source files. In the REPL, included paths are interpreted relative to the current working directory, pwd().
Including a file is an easy way to pull code from one file into another one. However, the variables, functions, etc. defined in the included file become part of the current namespace. On the other hand, a module provides its own distinct namespace.
2. Package loading via import X or using X (docs)
The import mechanism allows you to load a package—i.e. an independent, reusable collection of Julia code, wrapped in a module—and makes the resulting module available by the name X inside of the importing module.
Regarding the difference between these two methods of code loading:
Code inclusion is quite straightforward: it simply parses and evaluates a source file in the context of the caller. Package loading is built on top of code inclusion and is quite a bit more complex.
Regarding where Julia searches for module files, see this docs summary:
The global variable LOAD_PATH contains the directories Julia searches for modules when calling require. It can be extended using push!:
push!(LOAD_PATH, "/Path/To/My/Module/")
Putting this statement in the file ~/.julia/config/startup.jl will extend LOAD_PATH on every Julia startup. Alternatively, the module load path can be extended by defining the environment variable JULIA_LOAD_PATH.
For one of the simplest examples of a Julia module, see Example.jl
module Example
export hello, domath
hello(who::String) = "Hello, $who"
domath(x::Number) = x + 5
end
and for the Example package, see here.
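Once that file is on disk (say as Example.jl), it can be loaded and used like this — a sketch combining both mechanisms above:

```julia
include("Example.jl")   # method 1: evaluate the file, defining module Example
using .Example          # the leading dot refers to the just-included local module

println(hello("Julia"))   # Hello, Julia
println(domath(10))       # 15
```

If the module were instead placed on LOAD_PATH as described above, plain `using Example` (no dot, no include) would find and load it.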
Side Note There is also a planned (future) library capability similar to what you may have used with other languages. See docs:
Library (future work): a compiled binary dependency (not written in Julia) packaged to be used by a Julia project. These are currently typically built in-place by a deps/build.jl script in a project's source tree, but in the future we plan to make libraries first-class entities directly installed and upgraded by the package manager.

Load AND execute R source file [duplicate]

This question already has an answer here:
ggplot's qplot does not execute on sourcing
(1 answer)
Closed 9 years ago.
Consider a source file of this form:
# initialize the function
f = function() {
# ...
}
# call the function
f()
In Python, importing a module loads and executes the file; in R, however, the source command initializes the functions defined in the source file but does not seem to call them when they are invoked in the file.
Is there any R command (or option) to import and execute the file?
Thanks for your help.
?source states:
Description:
‘source’ causes R to accept its input from the named file or URL
or connection. Input is read and ‘parse’d from that file until
the end of the file is reached, then the parsed expressions are
evaluated sequentially in the chosen environment.
Therefore:
source is the function you are looking for.
I refute your claim – source works for me as per the documentation.
If you are not seeing the documented behaviour, there must be a different problem that you are not telling us about. How are you deducing that source is not executing your f?
I can think of one scenario that you may not be aware of, which is documented in ?source, namely:
Details:
Note that running code via ‘source’ differs in a few respects from
entering it at the R command line. Since expressions are not
executed at the top level, auto-printing is not done. So you will
need to include explicit ‘print’ calls for things you want to be
printed (and remember that this includes plotting by ‘lattice’,
FAQ Q7.22).
Not seeing output from objects evaluated by name, or grid-graphics plots not appearing, are symptoms of auto-printing being inactive because evaluation is not occurring at the top level. You need to explicitly print() such objects, including grid-based graphics such as lattice and ggplot2 figures.
Also note the print.eval argument, which will default to
> getOption("verbose")
[1] FALSE
which may also be of use to you.
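A quick way to see both behaviours, assuming a file demo.R that contains a function definition followed by a call to it:

```r
# demo.R
f <- function() {
  "hello from f"
}
f()  # this call IS executed by source(), but its value is not auto-printed

# In the console:
source("demo.R")                     # runs f(), prints nothing
source("demo.R", print.eval = TRUE)  # runs f() and prints its return value
```

If f() had an explicit side effect such as print() or cat(), the first source() call would show it too; only auto-printing of bare expression values is suppressed.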
