AWS PDF upload through HTTP POST

I am new to AWS and I am trying to upload a PDF document to S3 through an AWS API. I am using an HTML form with a POST method whose action is the URL of the deployed API, and the API is integrated with a Lambda function. My question is: how can I extract the uploaded file inside the Lambda function, so I can do some processing before uploading to S3? Is it even possible?
I have tried the instructions found in this post:
Passing HTTP Post from AWS API GW to Lambda
However, when I return the event from the Lambda function, this is what I get:
{ file: file.pdf, acl: private, success_action_redirect: http://localhost/, AWSAccessKeyId: my_aws_key }
The file I uploaded is called file.pdf.
Any guidance will be appreciated.

A PDF file is a binary format, and API Gateway does not currently support binary data. We know that binary data does not work and there are no workarounds to make it work reliably. A number of customers have requested that we add binary support to API Gateway, and it is prioritized on our backlog.

Related

Call a REST API from a Kusto function

I have a REST URL for a logs endpoint that I want to call to get its contents from within a function. Simplified, I want to create a function like the one below.
create function getData(url:string)
{
    let data = curl GET url;
    print data
}
// Call it.
getData("<some rest url here>")
The documentation from Microsoft seems to talk about Kusto's own APIs, not how to call an external API. Am I missing something?
The documentation you reference relates to calling Kusto service REST APIs.
Kusto query language is a query language, not an open-ended programming platform.
Call-outs to external sources such as SQL Azure are possible, but subject to certain restrictions, primarily security-oriented in nature.
See the externaldata operator, sql_request plugin, and callout policy articles.

Puppeteer name resolution error on Firebase Cloud Functions

I have created a brand new free tier project, cloned the Puppeteer Firebase Functions demo repository, and only changed the default project name in the .firebaserc file.
When I run the simple test or version functions, I get the correct result. When I open the .com/screenshot page without any parameter, I get the correct ("Please provide a URL...") response.
But when I try any URL, e.g. .com/screenshot?url=https://en.wikipedia.org/wiki/Google, the response is Error: net::ERR_NAME_RESOLUTION_FAILED at https://en.wikipedia.org/wiki/Google.
I tried looking for name resolution errors related to Puppeteer but could not find anything. Could this be a problem with the free tier?
The free Spark payment plan restricts all outgoing connections except those to API endpoints fully controlled by Google. As a result, I would expect that Puppeteer is not able to make any outgoing connections to external web sites.

How to knit dynamic reports with Google Analytics (rga)

I'm using rga to get some data from Google Analytics. From the repo:
The principle of this package is to create an instance of the API Authentication, which is a S4/5-class (utilizing the setRefClass). This instance then contains all the functions needed to extract data, and all the data needed for the authentication and reauthentication. The class is in essence self sustaining.
The package creates and saves a local instance using:
rga.open(instance="ga", where="~/ga.rga")
When I try to knit, however, I get an error that the ga object (what would be the instance) is not found. The code works when I run the chunks in RStudio, so I believe the error is related to this aspect:
[The command above] will check if the instance is already created, and if it is, it'll prepare the token. If the instance is not created [...] it will redirect the client to a browser for authentication with Google.
My guess is that knitr can't perform that last step, and so the object is never created.
How can I make this work? I'm thinking that there might be a way to load the local ga.rga file to bypass browser authentication.
You can bypass browser authentication by passing the client ID and client secret, which you can get from the Google API console. Saving a local auth file in the dev environment is always risky. You can try the code below; it uses the Google API and also saves the local instance:
rga.open(instance = "ga",
         client.id = "<contains apps.googleusercontent.com>",
         client.secret = "<your secret key>",
         where = "~/ga.rga")
Also ensure that the desktop option is enabled in the Google API console.
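Applied to the knitr problem, a minimal sketch is to run this authentication in the first chunk of the .Rmd, before anything references ga. The client ID, secret, and the example query below are placeholders:
library(rga)
# Per the answer above: supplying client.id/client.secret avoids the browser
# redirect, and the `where` file caches the token for non-interactive runs,
# which is what knitting requires.
rga.open(instance = "ga",
         client.id = "<your client id>.apps.googleusercontent.com",
         client.secret = "<your secret key>",
         where = "~/ga.rga")
# Later chunks can then query the instance, for example:
ga$getData(ids = "ga:12345678",
           start.date = "2014-01-01", end.date = "2014-01-31",
           metrics = "ga:sessions")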

Execute R Script on AWS via API

I have an R package that I would like to host on Amazon Web Services and make accessible via an API. The script should take a couple of input values and return the R output in JSON format. The API should also be able to handle multiple requests simultaneously.
So, for example, a call to http://sampleapi.com/?location=USA&state=Florida would run the R package and return the output data to the calling application.
Has anyone done this before or know of resources you can point me to that would explain how to do so? Thanks!
Thanks for all the suggestions. I decided to use Ruby for the API, with the rinruby and rails-api gems, and will host it through AWS Elastic Beanstalk. See this question for how I am setting it up: Ruby API - Accept parameters and execute script
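For comparison, the same shape can be sketched in R alone with the plumber package, a different route than the Ruby one above. The endpoint name, parameters, and return value here are all placeholders:
# api.R -- each annotated function becomes an HTTP endpoint
#* Run the model for a location/state and return the result as JSON
#* @param location e.g. "USA"
#* @param state e.g. "Florida"
#* @get /run
function(location = "", state = "") {
  # call into the R package here; plumber serializes the returned list to JSON
  list(location = location, state = state, output = "model results go here")
}
# launched from a separate script or the console:
# plumber::plumb("api.R")$run(host = "0.0.0.0", port = 8000)
A GET to http://localhost:8000/run?location=USA&state=Florida then returns the list as JSON. Note that a single R process handles one request at a time, so simultaneous requests still need multiple R workers behind a load balancer.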

How do you connect to and retrieve data from Graphite (Whisper)?

Is there an R package to connect to Graphite (Whisper)?
It seems I am looking for the same thing. For now, I see only these ways:
Using jsonlite within R to access the Graphite render URL API and get JSON- or CSV-formatted data.
Getting Whisper data via whisper-fetch (example usage is described in a Russian IT blog, automatically translated to English by Google).
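As a sketch of the first option (the host and metric target below are placeholders; the render URL API itself is standard Graphite):
library(jsonlite)
# Asking the render endpoint for format=json returns one entry per series,
# each with a target name and [value, timestamp] datapoint pairs.
url <- "http://graphite.example.com/render?target=app.server.requests&from=-1h&format=json"
series <- fromJSON(url)
str(series)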
