I am trying to limit access to a single notebook using the supportLinkedSandbox=true parameter, as described here:
http://dev.evernote.com/doc/articles/app_notebook.php
It seems this parameter has no effect in the sandbox environment.
https://www.sandbox.evernote.com/OAuth.action?oauth_token=...&preferRegistration=true&supportLinkedSandbox=true
What could be wrong?
In order to use supportLinkedSandbox=true, the notebook sandboxing feature needs to be enabled for your particular API key. You can request that this feature be enabled by writing to devsupport at evernote dot com.
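For reference, once the feature is enabled for your key, the parameter goes on the authorization URL exactly as you have it. A minimal sketch (the token value is a placeholder):

```typescript
// Minimal sketch: build the authorization URL with the extra parameter.
// The oauth_token value is a placeholder for your temporary request token.
const authorizeUrl = new URL('https://www.sandbox.evernote.com/OAuth.action');
authorizeUrl.searchParams.set('oauth_token', 'YOUR_TEMPORARY_OAUTH_TOKEN');
authorizeUrl.searchParams.set('preferRegistration', 'true');
authorizeUrl.searchParams.set('supportLinkedSandbox', 'true');
console.log(authorizeUrl.toString());
```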
I'm writing to ask for a little support with an action that we are currently developing using Actions Builder and the @assistant/conversation webhook library for Node.js.
In particular, we would like to invoke our webhook's logic using the device location.
We understand that we must ask the user for permission to access the device location using something like this:
https://developers.google.com/assistant/actionssdk/reference/rest/Shared.Types/PermissionValueSpec
but none of the GitHub examples provided by Google show how to do this.
Furthermore, we tried to integrate the PermissionValueSpec into a slot using the notification type actions.type.Notifications, but the value returned for that slot is
PermissionLocation --> ALREADY_GRANTED, without any other information about the device coordinates.
We've read a lot of documentation and also looked for examples to support our development, but found nothing.
How can we get the current device location?
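For reference, here is a sketch of the kind of handler we are trying to write. We are assuming, without having been able to confirm it in the docs or samples, that a granted location is exposed on conv.device.currentLocation:

```typescript
// Sketch of what we are aiming for. The conv.device.currentLocation property
// is our assumption -- we have not been able to confirm it in the docs.
import { conversation } from '@assistant/conversation';

const app = conversation();

app.handle('get_location', (conv) => {
  // Assumed: once the user has granted the permission, the webhook request
  // carries the location here.
  const location = (conv.device as any)?.currentLocation;
  if (location?.coordinates) {
    conv.add(
      `Your device is at ${location.coordinates.latitude}, ` +
      `${location.coordinates.longitude}.`
    );
  } else {
    conv.add('I could not read the device location.');
  }
});

export { app };
```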
Thank you in advance!
I have created a brand new free tier project, cloned the Puppeteer Firebase Functions demo repository, and only changed the default project name in the .firebaserc file.
When I run the simple test or version functions I get the correct result. When I open the .com/screenshot page without any parameter I get the correct ("Please provide a URL...") response.
But when I try any URL, e.g. .com/screenshot?url=https://en.wikipedia.org/wiki/Google, I get Error: net::ERR_NAME_RESOLUTION_FAILED at https://en.wikipedia.org/wiki/Google thrown in the response.
I tried looking for any name resolution errors related to Puppeteer, but I could not find anything. Could this be a problem with using the free tier?
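For reference, the screenshot function boils down to something like this (a simplified sketch from memory, not the demo's exact code):

```typescript
// Simplified sketch of the screenshot function (not the demo's exact code).
import * as functions from 'firebase-functions';
import * as puppeteer from 'puppeteer';

export const screenshot = functions
  .runWith({ memory: '1GB' })
  .https.onRequest(async (req, res) => {
    const url = req.query.url as string;
    if (!url) {
      res.status(400).send('Please provide a URL ...');
      return;
    }
    const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
    try {
      const page = await browser.newPage();
      // This is the call that throws net::ERR_NAME_RESOLUTION_FAILED for me.
      await page.goto(url);
      const buffer = await page.screenshot();
      res.type('image/png').send(buffer);
    } finally {
      await browser.close();
    }
  });
```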
The free Spark payment plan restricts all outgoing connections except those to API endpoints that are fully controlled by Google. As a result, I expect that Puppeteer would not be able to make any outgoing connections to external web sites.
I have an experiment in AzureML which has an R module at its core. Additionally, I have some .RData files stored in Azure blob storage. The blob container is set as private (no anonymous access).
Now, I am trying to make an HTTPS call from inside the R script to the Azure blob storage container in order to download some files. I am using the httr package's GET() function and have properly set up the URL, authentication, etc. The code works in R on my local machine, but the same code gives me the following error when called from inside the R module in the experiment:
error:1411809D:SSL routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat list
Apparently this is an error from the underlying OpenSSL library (which got fixed a while ago). Some suggested workarounds I found here were to set sslversion = 3 and ssl_verifypeer = 1, or to turn off verification with ssl_verifypeer = 0. Both of these approaches returned the same error.
I am guessing that this has something to do with the internal Azure certificate / validation...? Or maybe I am missing or overlooking something?
Any help or ideas would be greatly appreciated. Thanks in advance.
Regards
After a while, an answer came back from the support team, so I am posting the relevant part as an answer for anyone who lands here with the same problem.
"This is a known issue. The container (a sandbox technology known as "drawbridge" running on top of Azure PaaS VM) executing the Execute R module doesn't support outbound HTTPS traffic. Please try to switch to HTTP and that should work."
They also said that a fix is on the way:
"We are actively looking at how to fix this bug. "
Here is the original link as a reference.
hth
I have been searching for an answer on MS, SE and Google and cannot find it. I want to use the GRS option for Azure Storage (Cloud Block Blobs) but I cannot figure out how to properly do that.
I created my storage object in Azure and chose the GRS option.
I get that I have a primary and a secondary connection string and know how to get them from the Azure portal.
What I do not know, in ASP.NET 4.0, is how to set both connection strings in the CloudBlobClient and gracefully handle the primary storage being unavailable.
-- What exception is thrown, and where, when the primary is unavailable? Is it thrown when I create the client, or when I try to get a blob reference?
-- How do I then use the secondary?
Do I have to just test for any old exception and then try using the secondary connection string in a new CloudBlobClient if the primary does not work? Or is there anything in the API for this? I would think there would be, but I cannot find it.
None of the "How to use Azure Storage" tutorials I have seen cover this. Most of the documentation seems to date from before mid-2014, when this feature became generally available.
This blog post should help you. In short, if you want to read from both the primary and the secondary, you want to enable RA-GRS, which gives you read access to the secondary. If you are using our storage client libraries, you can also enable a retry policy that will first try to read from the primary and then from the secondary if the first read fails.
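For illustration, a sketch using the Node storage client; the .NET client you would use from ASP.NET exposes the same switch as LocationMode.PrimaryThenSecondary on BlobRequestOptions. The container and blob names are hypothetical, and the locationMode option reflects my understanding of the Node library, so treat this as a sketch of the idea rather than a drop-in:

```typescript
// Sketch: read with automatic primary-then-secondary failover (RA-GRS).
// Container/blob names are hypothetical; in the .NET client the equivalent
// is BlobRequestOptions.LocationMode = LocationMode.PrimaryThenSecondary.
import * as azure from 'azure-storage';

// Reads AZURE_STORAGE_CONNECTION_STRING from the environment.
const blobService = azure.createBlobService();

const options = {
  locationMode: azure.StorageUtilities.LocationMode.PRIMARY_THEN_SECONDARY,
};

blobService.getBlobToText('mycontainer', 'myblob.txt', options, (err, text) => {
  if (err) {
    // Both the primary and the secondary read failed.
    console.error('Read failed:', err);
    return;
  }
  console.log('Blob contents:', text);
});
```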
My original problem is that I want to increase my DynamoDB write throughput before I run the pipeline, and then decrease it when I'm done uploading (doing it at most once a day, so I'm fine with the limit on daily decreases).
The only way I found to do it is through a shell script that issues the API calls to alter the throughput. How does that work with my IAM access_key and secret_key when it's a resource that the pipeline creates for me? (I can't log in to set the ~/.aws/config file, and I don't really want to create an AMI just for this.)
Should I write the script in bash? Can I use the Ruby/Python AWS SDK packages, for example? (I prefer the latter.)
How do I pass my credentials to the script? Are there runtime variables (like #startedDate) that I can pass as arguments to the activity with my key and secret? Is there any other way to authenticate with either the command-line tools or the SDK packages?
If there is another way to solve my original problem, please let me know. I only arrived at the ShellCommandActivity solution because I couldn't find anything else in the documentation/forums.
Thanks!
OK, found it: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-roles.html
The resourceRole in the default object in your pipeline will be the one assigned to resources (Ec2Resource) that are created as part of the pipeline activation.
The default one is configured to have all your permissions, and the AWS command-line tools and SDK packages automatically look for those credentials, so there is no need to update ~/.aws/config or pass credentials manually.
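For illustration, here is a sketch of the throughput call such a script can make once it is running on the pipeline-created resource; no keys are configured anywhere. The table name and capacity numbers are hypothetical, it is shown with the Node SDK, and the same call exists in the Python/Ruby SDKs and as aws dynamodb update-table in the CLI:

```typescript
// Sketch: running on an Ec2Resource created by the pipeline, the SDK picks up
// the resourceRole credentials from the instance metadata automatically.
// Table name and capacity numbers below are hypothetical.
import * as AWS from 'aws-sdk';

const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' }); // assumed region

async function setWriteThroughput(tableName: string, writeUnits: number): Promise<void> {
  // UpdateTable requires both read and write units in ProvisionedThroughput.
  await dynamodb
    .updateTable({
      TableName: tableName,
      ProvisionedThroughput: {
        ReadCapacityUnits: 10, // keep whatever your table already uses
        WriteCapacityUnits: writeUnits,
      },
    })
    .promise();
}

// Raise throughput before the import; a second call can lower it afterwards.
setWriteThroughput('my-import-table', 1000).catch(console.error);
```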