Saving an API key as an environment variable (for setting up monkeylearn in R)

I would like to use the named entity recognition functions in the monkeylearn package in R.
As part of the setup process, we need to do the following:
"To get an API key for MonkeyLearn, register at http://monkeylearn.com/. Note that MonkeyLearn supports registration through GitHub, which makes the registration process really easy. For ease of use, save your API key as an environment variable as described at http://stat545.com/bit003_api-key-env-var.html. You might also want to use the usethis::edit_r_environ() function to modify .Renviron.
All functions of the package will conveniently look for your API key using Sys.getenv("MONKEYLEARN_KEY") so if your API key is an environment variable called “MONKEYLEARN_KEY” you don’t need to input it manually.
Please also create a “MONKEYLEARN_PLAN” environment variable indicating whether your Monkeylearn plan is “free”, “team”, “business” or “custom”. If you do not indicate it by default it will be “free” with a message. If your plan is “custom” you’ll need a third environment variable “MONKEYLEARN_RATE” indicating the maximum amount of requests per minute that you can make to the API. If you do not indicate it, by default it will be 120 with a message."
I've gotten the API key, but as a layman, I couldn't understand the guide on saving the API key as an environment variable. Could anyone provide a step-by-step guide for Windows, please?
Thank you.

monkeylearn maintainer here, sorry for the unclear docs.
It means you need to add two lines in an R startup file called .Renviron.
MONKEYLEARN_KEY="blablablayourkey"
MONKEYLEARN_PLAN="free"
The last line of .Renviron has to be an empty line.
To open .Renviron to edit it, you can use usethis::edit_r_environ(). After adding the two lines with your key and plan, don't forget to restart R.
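After the restart, you can quickly confirm that R sees the variables (just a sanity check, not something the package requires):
Sys.getenv("MONKEYLEARN_KEY")   # should return your key, not ""
Sys.getenv("MONKEYLEARN_PLAN")  # should return "free"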
Read more about startup files in general in this resource.
I hope this helps. Note that I don't check Stack Overflow very often, as opposed to rOpenSci's forum.

Related

Optimising network connections of a Firebase Cloud Function

The Firebase documentation recommends including the code snippet given at https://firebase.google.com/docs/functions/networking#https_requests to optimize networking, but a few details are missing. For example:
How exactly does this help?
Are we supposed to call the function defined in the recommendation, or just include the snippet when we deploy?
Any documentation around this would be of great help.
This is an example showing how you would make this request. The key part of this example is the agent field, which by nature isn't normally managed within your app. By injecting a reference to it manually, you are able to micro-manage it and its events directly.
As for the second question, it really depends on your cloud function's needs. Some users set it in a global object that they manage across all their cloud functions, but it's a case-by-case decision, and it ultimately isn't required.
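For illustration, a minimal sketch of that pattern (the function name, host, and path are hypothetical placeholders):
const https = require('https');
// Global scope: a keep-alive agent created once per instance is reused
// across invocations, so TCP connections stay open between requests.
const agent = new https.Agent({ keepAlive: true });
exports.fetchSomething = (req, res) => {
  // Pass the shared agent explicitly so this outbound request reuses connections.
  https.get({ host: 'example.com', path: '/data', agent }, (upstream) => {
    let body = '';
    upstream.on('data', (chunk) => (body += chunk));
    upstream.on('end', () => res.send(body));
  }).on('error', (err) => res.status(500).send(err.message));
};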
You can read more about http.Agent and its usage below:
https://nodejs.org/api/http.html
https://www.tabnine.com/code/javascript/functions/http/Agent
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent

Workaround for output=graph with bazel cquery

I'm trying to obtain a dependency graph (either as an image or in text form) from a bazel cquery. According to the documentation, the option --output=graph is currently only supported by bazel query, but not by cquery. Unfortunately, in our project it's not possible to use query, since it fetches some external dependencies with restricted access. Only using a config (with cquery) prevents fetching these restricted dependencies.
Is there a work-around to somehow get a graph-like structure from cquery? The default output is just a flattened list which seems to contain no information on the inter-dependencies between the targets.
If the inter-dependencies can somehow be printed, I guess it would be quite easy to reconstruct an image from it.
The following works: use query instead of cquery and append the flag --keep_going to ignore errors caused by external dependencies that not everybody can fetch. Then --output=graph can be used.
Note: The result might be different from a configured cquery, but for our purposes, it doesn't matter much.
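Concretely, with a hypothetical target //app:main, the workaround looks like this (the second command renders the graph with Graphviz):
bazel query 'deps(//app:main)' --keep_going --output=graph > graph.dot
dot -Tpng graph.dot -o graph.png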

Where can I get a complete list of variables that PHP automatically makes available to my script by looking at the output of the phpinfo() function?

I'm using PHP 7.2.3 on my machine that runs on Windows 10.
I've installed PHP using the latest version of XAMPP.
I came across the following text in the PHP manual:
$_SERVER is just one variable that PHP automatically makes available to you. A list can be seen in the Reserved Variables section of the manual or you can get a complete list of them by looking at the output of the phpinfo() function.
In the above text, the PHP manual clearly says that I can see the complete list of the variables that PHP automatically makes available to my script.
When I looked at the output of phpinfo(), I could only see the entire array of the $_SERVER superglobal. I couldn't see any of the other predefined superglobal variables in the output of phpinfo().
Can I say this is a mistake in the PHP manual?
Or is the manual correct and I'm simply failing to find the other predefined superglobal variables?
Please help me out in this regard.
Thank you.
There is a function called get_defined_vars() which does exactly what you want.
There are even more like it (see the demo after this list):
get_defined_constants - Gets all constants
get_defined_functions - Gets all functions
get_defined_vars - Gets all variables
get_declared_classes - Gets all declared classes
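For example, a quick demo of a couple of them (plain illustrative code, nothing specific to your setup):
<?php
// Print the names of all variables defined in the current scope.
print_r(array_keys(get_defined_vars()));
// Print all defined constants, grouped by the extension that defines them.
print_r(get_defined_constants(true));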
I couldn't see any other such predefined superglobal variables in the output
Forget what I said about this earlier. You are right, the docs say:
phpinfo() is also a valuable debugging tool as it contains all EGPCS (Environment, GET, POST, Cookie, Server) data.
It should be noted that you find the GET, POST and COOKIE data in $_REQUEST and not in their respective arrays.
If you only want to get the superglobals from phpinfo, try the following:
phpinfo(INFO_VARIABLES);

Prefix/group Symfony console commands of a dependency

With Symfony console commands, you can prefix/group each one by calling setName("group:command"), and this is great.
myown
myown:cool
myown:foo
myown:bar
But the problem is that some external dependencies don't use this format, e.g. Phinx Migrations.
Since I'm importing a dependency that has a console command called migrate, I don't want it to show up without a prefix/group, not just because I might have another command called migrate, but also for readability. I don't even know which one will show if there are two commands with the same name (need to check).
My question is: is there any way for me to force the commands from an external dependency to live inside a prefix/group?
You can achieve that. I wouldn't recommend it, but this would be the approach:
Create a compiler pass that removes the definitions of the commands you don't like
Register all of those commands again while setting the names you like
If you need information on compiler passes:
https://symfony.com/doc/current/components/dependency_injection/compilation.html#components-di-compiler-pass
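For illustration, a rough, untested sketch of such a pass (the class name, service id, and the new command name are hypothetical):
use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;

class PrefixVendorCommandsPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        // Walk every service tagged as a console command.
        foreach ($container->findTaggedServiceIds('console.command') as $id => $tags) {
            // Hypothetical service id of the third-party command to re-group.
            if ($id === 'phinx.command.migrate') {
                // Re-register the bare "migrate" command under a prefix.
                $container->getDefinition($id)
                          ->addMethodCall('setName', ['phinx:migrate']);
            }
        }
    }
}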
Maybe there's an easier way that I'm not aware of, but for now that's my answer. I can't post all the code because it would be a lot of code, and if you work it out maybe you can update with the solution.
Good luck

Can I append or overwrite some bytes to an existing object in OpenStack Swift?

I need to append some bytes to an existing object stored in OpenStack Swift, say a log file object to which I constantly append new logs. Is this possible?
Moreover, can I change (overwrite) some bytes (specified by offset and length) in an existing object?
I believe ZeroVM (zerovm.org) would be perfect for doing this.
Disclaimer: I work for Rackspace, who owns ZeroVM. Opinions are mine and mine alone.
tl;dr: There's no append support currently in Swift.
There's a blueprint for Swift append support: https://blueprints.launchpad.net/swift/+spec/object-append. It doesn't look very active.
user2195538 is correct. Using ZeroVM + Swift (via the ZeroCloud middleware for Swift), you could get a performance boost on large-ish objects by sending deltas to a ZeroVM app and processing them in place. Of course you still have to read/update/write the file, but you can do it in place; you don't need to pipe the entire file over the network, which could be costly for large files.
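Until append support lands, the usual workaround is read/modify/write. A minimal sketch with python-swiftclient (the auth URL, credentials, container, and object names are placeholders):
from swiftclient import client

conn = client.Connection(authurl='https://auth.example.com/v1.0',
                         user='account:user', key='secret')
# Download the existing object, append the new bytes locally,
# then upload the whole object back (a PUT replaces the object).
headers, body = conn.get_object('logs', 'app.log')
conn.put_object('logs', 'app.log', contents=body + b'new log line\n')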
Disclaimer: I also work for Rackspace, and I work on ZeroVM for my day job.
