Paw Extensions: Dynamic Value based on URI

I have an API that includes an account ID as part of the URL (e.g. /account/7319310/report), where 7319310 is the account ID.
There are different credentials for each account, stored in MySQL although they could be stored in another manner if it made it easier.
I'd like Paw to automatically use the correct credentials based on the account parameter in the URI (it's always the second element) - is this possible?

In Paw you can use a Regex dynamic value to extract the data you need from the URL.
Paw does not have a direct connection to MySQL. You can make HTTP requests from a custom dynamic value, but you would need a server running to answer those requests. A better option would be to save the credentials into a flat JSON file:
{
  "1234334": {
    "key1": 123456,
    "key2": 345211
  }
}
With this saved, you can load the JSON file in a Custom dynamic value.
Here you can embed the extracted account ID by using the Regex dynamic value inline in the code. Paw will reload the file on every request, so you could set up a cron job to dump your database to this JSON file.
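As a rough sketch of what that Custom dynamic value could look like (assumptions: the file path is hypothetical, readFile stands in for whatever file-reading helper Paw's JS extension environment provides, and the key names follow the JSON example above):
function evaluate(context) {
  // Hypothetical: load the credentials file on every evaluation.
  // "readFile" is a placeholder for Paw's actual file-reading API.
  var credentials = JSON.parse(readFile('/path/to/credentials.json'));
  // Pull the account ID out of the request URL (always the second path element).
  var url = context.getCurrentRequest().url;
  var match = url.match(/\/account\/(\d+)\//);
  if (!match || !credentials[match[1]]) {
    return '';
  }
  return String(credentials[match[1]]['key1']);
}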

Related

Firebase emulator hitting DB via the REST feature

I'm trying to set up the emulator so I can develop the Firebase functions safely before deploying them. I just noticed that some REST calls I'm doing now fail - does anybody know if it is not possible to use the REST feature of the Realtime DB? https://firebase.google.com/docs/reference/rest/database
I'm trying to hit it with this URL
http://localhost:9000/?ns=<PROJECT ID>-default-rtdb/development/DISHES.json
because this is what I set the firebaseConfig.databaseURL to (suggested here by Google)
Bonus info: If I try to do a GET to the URL via Postman, it creates another database called fake-server (http://localhost:4000/database/fake-server: null) 🤔
According to RFC 3986, the path must come before the query parameters in a URL. Your URL should instead be written as:
http://localhost:9000/development/DISHES.json?ns=<PROJECT ID>-default-rtdb
Note how the corrected URL has the query parameter appended to the very end. (The URL you've provided in the question will be parsed as having one query parameter ns with the value of <PROJECT ID>-default-rtdb/development/DISHES.json, which is not a valid namespace name. That explains the errors you've seen.)
FYI, it looks like you're constructing the URL by concatenating the string databaseURL with the path -- this may lead to surprising results, as you've seen above. Consider using a URL parser / formatter in your language / framework of choice instead, which handles URL parts correctly. For example, in JavaScript you can use the snippet below:
const url = new URL(databaseURL); // Parse URL (including query params, if any)
url.pathname = '/development/DISHES.json';
yourFetchingLogic(url.toString()); // Reconstruct the URL with path replaced
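To see why the original URL misbehaves, you can parse it the same way (a quick check runnable in any modern JS environment; the project ID is a placeholder):
const bad = new URL('http://localhost:9000/?ns=my-project-default-rtdb/development/DISHES.json');
// The entire remainder of the string lands in the ns parameter:
console.log(bad.pathname);               // "/"
console.log(bad.searchParams.get('ns')); // "my-project-default-rtdb/development/DISHES.json"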

How to call the Neo4j database API in RStudio

Please, I created a Neo4j database instance, and I am trying to call it in RStudio using the neo4r and neo4jshell packages. After running the API call, I still get a 404 even though I correctly specified the URL, username, and password. Please find my code below:
library(neo4r)
library(neo4jshell)
myTwitter <- neo4j_api$new(
  url = "http://54.152.83.7:7474",
  user = "neo4j",
  password = "mypassword"
)
myTwitter$ping()
When I run the last line of code, I get a 404 instead of a 200, which obviously means my API call was not successful. I would appreciate your helpful suggestions. Thank you.
The HTTP endpoints changed in version 4 of Neo4j:
Neo4j v3 had the endpoint http://localhost:7474/db/data
Neo4j v4 uses http://localhost:7474/db/{databaseName}/tx instead.
It seems the Neo4j library for R needs to be updated...
I'm not familiar with R, but you could try any available HTTP client for R that supports Basic authentication and send POST requests to the Neo4j API with a JSON payload. I also see you use the http scheme, which means your credentials will be sent as plain text over the network - that is not good.
Payload for such requests should be in form of:
{
  "statements": [
    {
      "statement": "MATCH (n) RETURN n"
    }
  ]
}
(adjust the Cypher query to your needs)
The response will be a JSON object with a data section containing the actual results.
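As an illustration of such a request outside R (a sketch using JavaScript's fetch, available in Node 18+; the host and credentials come from the question, and "neo4j" as the database name is an assumption since it is the default):
// Sketch: POST a Cypher statement to the Neo4j v4 transactional HTTP endpoint.
const auth = Buffer.from('neo4j:mypassword').toString('base64');
fetch('http://54.152.83.7:7474/db/neo4j/tx/commit', {
  method: 'POST',
  headers: {
    'Authorization': 'Basic ' + auth,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ statements: [{ statement: 'MATCH (n) RETURN n' }] }),
})
  .then((res) => res.json())
  .then((json) => console.log(json.results)); // each result has "columns" and "data"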

Automate exporting CSV report in Kibana

I am trying to automate the CSV export in Kibana. I know we can always send a POST request to generate the report, but the file will be available in the Reporting tab and not downloaded automatically.
Is there any way an application can automatically download the file and save it locally, i.e. without any manual intervention?
I am trying to make an application which will automatically download the report file weekly for a particular object.
Send the POST request to generate the CSV report.
It will return a response like the one below:
{
  "path": "/api/reporting/jobs/download/kiivr09200121bb65cdzn8p3",
  "job": {
    "id": "kiivr09200121bb65cdzn8p3",
    .............
  }
}
We can easily download the file using the URL in the path field.
For example, if Kibana is running on localhost:5601, we can download the report from the following URL:
http://localhost:5601/api/reporting/jobs/download/kiivr09200121bb65cdzn8p3
We need to set the "kbn-xsrf" header to true. We also need to provide a username and password via Basic authorization in case Kibana requires authentication.
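A minimal sketch of that flow (JavaScript on Node 18+; the credentials are placeholders, and the report-generation path depends on your Kibana version and saved object):
// Sketch: trigger a CSV report, then poll the download path until it is ready.
const base = 'http://localhost:5601';
const headers = {
  'kbn-xsrf': 'true',
  'Authorization': 'Basic ' + Buffer.from('elastic:changeme').toString('base64'), // placeholder credentials
};

async function downloadReport(generatePath) {
  // 1. Ask Kibana to generate the report; the response body carries the download path.
  const res = await fetch(base + generatePath, { method: 'POST', headers });
  const { path } = await res.json();
  // 2. Poll the download URL; Kibana answers 503 while the job is still running.
  for (;;) {
    const file = await fetch(base + path, { headers });
    if (file.status === 200) return Buffer.from(await file.arrayBuffer());
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}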

How to upload files or images on Hasura GraphQL Engine

Example:
Upload a file to the server and save the resulting path to the database; only authenticated users should be able to upload files.
How can I implement this?
To summarize, we have three ways:
the client uploads to S3 (or a similar service), gets the file URL, then makes an insert/update mutation to the right table
custom uploader - write an application/server that uploads files and mutates the DB, and use nginx routing to redirect some requests to it
custom resolver using schema stitching (example)
If you are uploading files to AWS S3, there is a simple way that doesn't require launching another server to process the file upload or creating a handler for a Hasura action.
Basically, when you upload files to S3, it's better to get a signed URL from the backend and upload to S3 directly. Incidentally, for hosting multiple image sizes, this approach is easy and painless.
The critical point is how to get an S3 signed URL for the upload.
In Node.js, you can do:
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const signedUrl = s3.getSignedUrl("putObject", {
  Bucket: "my-bucket",
  Key: "path/to/file.jpg",
  Expires: 600,
});
console.log("signedUrl", signedUrl);
An example signedUrl looks like https://my-bucket.s3.amazonaws.com/path/to/file.jpg?AWSAccessKeyId=AKISE362FGWH263SG&Expires=1621134177&Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D.
Normally, you would put the above code into a handler hosted on AWS Lambda or Glitch, add some logic for authorization, and perhaps also insert a row into a table.
You can see that the most important part is Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D. How can we make it easier to get the Signature?
After digging into the AWS JS SDK, we can find that the signature is computed as:
return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);
with
fn = 'sha1'
string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
digest = 'base64'
It's just an HMAC-SHA1 over a string in a specific format. This means we can use Hasura computed fields and Postgres crypto functions to achieve the same result.
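You can reproduce that computation outside the SDK with Node's built-in crypto module (a sketch; the secret and object path are placeholders):
const crypto = require("crypto");
// Same string format the SDK signs: method, (empty) MD5 and content-type, expiry, resource.
const expires = 1621135558;
const stringToSign = "PUT\n\n\n" + expires + "\n/my-bucket/path/to/file.jpg";
const signature = crypto
  .createHmac("sha1", "AWS_SECRET") // placeholder secret
  .update(stringToSign)
  .digest("base64");
console.log(signature); // URL-encode this to get the Signature query parameter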
So if you have a table "files"
CREATE TABLE files (
  id SERIAL,
  created_at timestamp,
  filename text,
  user_id integer
);
you can create a SQL function:
CREATE OR REPLACE FUNCTION public.file_signed_url(file_row files)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  SELECT ENCODE(HMAC(
    'PUT' || E'\n' || E'\n' || E'\n' ||
    (cast(extract(epoch from file_row.created_at) as integer) + 600)
    || E'\n' || '/my-bucket/' || file_row.filename,
    'AWS_SECRET', 'SHA1'), 'BASE64')
$function$
Finally, follow this to expose the computed field to Hasura.
This approach lets you avoid adding any backend pieces and handle permissions entirely in Hasura.
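The client can then assemble the final upload URL itself from the computed field (a sketch; the names mirror the examples above, file is assumed to be a row fetched from Hasura including the computed field, and the access key ID is a placeholder):
// Sketch: rebuild the presigned PUT URL from the row plus the computed signature.
const expires = Math.floor(new Date(file.created_at).getTime() / 1000) + 600;
const uploadUrl =
  "https://my-bucket.s3.amazonaws.com/" + file.filename +
  "?AWSAccessKeyId=AKIA..." +                               // placeholder access key ID
  "&Expires=" + expires +
  "&Signature=" + encodeURIComponent(file.file_signed_url); // signature from the computed field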

Cosmos DB, generate authentication key on client for Azure Table endpoint

Cosmos DB with the Azure Table API gives you two endpoints in the Overview blade:
Document Endpoint
Azure Table Endpoint
An example of (1) is
https://myname.documents.azure.com/dbs/tempdb/colls
An example of (2) is
https://myname.table.cosmosdb.azure.com/FirstTestTable?$filter=PartitionKey%20eq%20'car'%20and%20RowKey%20eq%20'124'
You can create the authorization code for (1) on the client using the prerequest code from this Postman script: https://github.com/MicrosoftCSA/documentdb-postman-collection/blob/master/DocumentDB.postman_collection.json
That will give you a code like this:
Authorization: type%3Dmaster%26ver%3D1.0%26sig%3DavFQkBscU...
This is useful for playing with the REST URLs.
For (2), the only code I could find that generates a working key was server-side, and it gives you a header like this:
Authorization: SharedKey myname:JXkSGZlcB1gX8Mjuu...
I had to get this out of Fiddler.
My questions:
(i) Can you generate a code for case (2) above on the client, like you can for case (1)?
(ii) Can you securely use Cosmos DB from the client?
If you go to the Azure Portal for a GA Table API account you won't see the document endpoint anymore. Instead only the Azure Table Endpoint is advertised (e.g. X.table.cosmosdb.azure.com). So we'll focus on that.
When using anything but direct mode with the .NET SDK, our existing SDKs talking to the X.table.cosmosdb.azure.com endpoint use the SharedKey authentication scheme. There is also a SharedKeyLight scheme which should also work. Both are documented in https://learn.microsoft.com/en-us/rest/api/storageservices/authentication-for-the-azure-storage-services. Make sure you read the sections specifically on the Table Service.
The thing to notice is that a SharedKey header is directly tied to the request it is associated with, so essentially every request needs a unique header. This is useful for security, because it means that a leaked header can only be used for a limited time to replay a specific request; it can't be used to authorize other requests. But of course that is exactly what you are trying to do.
An alternative is the SharedKeyLight header, which is a bit easier to implement as it just requires a date and the URL.
But we don't have externalized code libraries to really help with either.
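For reference, the SharedKeyLite string-to-sign for the Table service is just the date plus the canonicalized resource, so it can be produced client-side (a sketch with Node's crypto; the account name, table name, and key are placeholders, and the format follows the Azure Storage authentication docs linked above):
const crypto = require("crypto");
const account = "tableaccount";                          // placeholder
const key = Buffer.from("BASE64_ACCOUNT_KEY", "base64"); // placeholder account key
const date = new Date().toUTCString();                   // also goes into the x-ms-date header
// Table service SharedKeyLite: StringToSign = Date + "\n" + CanonicalizedResource
const stringToSign = date + "\n" + "/" + account + "/ATable";
const sig = crypto.createHmac("sha256", key).update(stringToSign, "utf8").digest("base64");
const authorization = "SharedKeyLite " + account + ":" + sig;
// Send with headers: { "x-ms-date": date, "Authorization": authorization, ... }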
But there is another solution that is much friendlier to things like Fiddler or Postman, which is to use a SAS URL as defined in https://blogs.msdn.microsoft.com/windowsazurestorage/2012/06/12/introducing-table-sas-shared-access-signature-queue-sas-and-update-to-blob-sas/.
There are at least two ways to get a SAS token. One way is to generate one yourself. Here is some sample code to do that:
var connectionString = "DefaultEndpointsProtocol=https;AccountName=tableaccount;AccountKey=X;TableEndpoint=https://tableaccount.table.cosmosdb.azure.com:443/;";
var tableName = "ATable";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(tableName);
await table.CreateIfNotExistsAsync();
SharedAccessTablePolicy policy = new SharedAccessTablePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1000),
    Permissions = SharedAccessTablePermissions.Add
        | SharedAccessTablePermissions.Query
        | SharedAccessTablePermissions.Update
        | SharedAccessTablePermissions.Delete
};
string sasToken = table.GetSharedAccessSignature(
    policy, null, null, null, null, null);
This returns the query portion of the URL you will need to create a SAS URL.
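Combining the two parts is then plain string concatenation (a sketch; the account and table names follow the example above, and sasToken stands for the query string returned by GetSharedAccessSignature, which already starts with '?'):
// Sketch: append the SAS query string to the table URL.
const tableUrl = "https://tableaccount.table.cosmosdb.azure.com/ATable";
const sasToken = "?sv=2017-04-17&sig=X...";      // placeholder output of GetSharedAccessSignature
const sasUrl = tableUrl + sasToken
  + "&$filter=" + encodeURIComponent("PartitionKey eq 'Foo'"); // optional OData filter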
Another, code-free way to get a SAS URL is to go to https://azure.microsoft.com/en-us/features/storage-explorer/ and download the Azure Storage Explorer. When you start it up, it will show you the "Connect to Azure Storage" dialog. In that case:
Select "Use a connection string or a shared access signature URI" and click next
Select "Use a connection string" and paste in your connection string from the Azure Portal for your Azure Cosmos DB Table API account and click Next and then click Connect in the next dialog
In the Explorer pane on the left look for your account under "Storage Accounts" (NOT Cosmos DB Accounts (Preview)) and then click on Tables and then right click on the specific table you want to explore. In the right click dialog you will see an entry for "Get Shared Access Signature", click on that.
A new dialog titled "Generate Shared Access Signature" will show up. Unfortunately so will an error dialog complaining about "NotImplemented", you can ignore that. Just click OK on the error dialog.
Now you can choose how to configure your SAS, I usually just take the defaults since that gives the widest access permission. Now click Create.
The result will be a dialog with both a complete URL and a query string.
So now we can take that URL (or create it ourselves using the query output from the code) and create a fiddler request:
GET https://tableaccount.table.cosmosdb.azure.com/ATable?se=2018-01-12T05%3A22%3A00Z&sp=raud&sv=2017-04-17&tn=atable&sig=X&$filter=PartitionKey%20eq%20'Foo'%20and%20RowKey%20eq%20'bar' HTTP/1.1
User-Agent: Fiddler
Host: tableaccount.table.cosmosdb.azure.com
Accept: application/json;odata=nometadata
DataServiceVersion: 3.0
To make the request more interesting, I added a $filter operation. This is an OData filter that lets us explore the content. Note, by the way, that for the filter to work, both the Accept and DataServiceVersion headers are needed. But you can use the base URL (i.e. without the filter parameter) to make any of the REST API calls on a specific table.
Do be aware that the SAS token is scoped to an individual table. So higher level operations won't work with this SAS token.
