OpenLink Virtuoso - Uploading RDF file into a named graph via HTTP API

Relevant docs: http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtRDFInsert
I'm trying to upload an RDF file in Turtle format into Virtuoso with a named graph.
With Sesame/Fuseki/4Store this is a simple API call, but I gather from the docs that I have to upload the file into a user's DAV folder. I managed to do this by making an HTTP PUT request to /DAV/home/{user}/rdf_sink/{randomlygeneratedfilename}.ttl; however, I can't seem to specify the name of the graph to upload the data into.
Any ideas?

An HTTP PUT to the sparql-graph-crud-auth endpoint works, and it accepts a graph-uri query parameter: http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtGraphProtocolCURLExamples
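For instance, a minimal sketch along the lines of the linked cURL examples, assuming Virtuoso listens on localhost:8890 with default dba credentials (adjust the host, credentials, file name, and graph URI to your setup):
curl --digest --user dba:dba --url "http://localhost:8890/sparql-graph-crud-auth?graph-uri=urn:example:graph" -T data.ttl
The endpoint uses HTTP Digest authentication, and the Turtle payload in data.ttl is loaded into the graph named by graph-uri.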

Related

Is there an endpoint to batch get urn:li:digitalmediaAsset in the LinkedIn API?

We are doing a rest/posts?author={MY_ORG} request against the LinkedIn API (version 202211). Some of the posts returned contain content referenced with urn:li:digitalmediaAsset, for which we need the download URL.
When I encounter urn:li:image or urn:li:video I can do a BATCH get to fetch additional details about the assets. I'd like to do the same thing for urn:li:digitalmediaAsset. I haven't seen an endpoint for that - does it exist?
I understand that I can use a projection here, but I'd like to align with the code I have for images and videos, if the endpoint exists. In other words, I am looking for an alternative to using projections.

Injecting an API key into Google Apps Script's UrlFetchApp for an HTTP request

The Problem
I'm trying to use Google Apps Script's UrlFetchApp function to pull data from an email marketing tool (Klaviyo). Klaviyo's documentation, linked here, provides an API endpoint via an HTTP request.
Unfortunately, Klaviyo requires a key parameter in order to access the data, but Google doesn't document whether (or where) I can add a custom parameter (it should look something like "api_key=xxxxxxxxxxxxxxxx"). It's quite easy for me to pull the data into my terminal using the api_key parameter, but ideally I'd pull it via Google Scripts and add it to a Google Sheet. If I can get the JSON into Google Scripts, I can work with the data to output it how I want.
Klaviyo's example request for the terminal:
curl https://a.klaviyo.com/api/v1/metrics -G \
-d api_key=XX_XXXXXXXXXXXXXXX
This outputs the correct data in JSON.
Note: my ultimate end goal is to pipe the data into Google Data Studio on a recurring basis for reporting. I thought I'd get the data into a CSV for download/upload into Google Data Studio on a recurring basis. If I'm thinking about this the wrong way, let me know.
Regarding the -G flag, from the curl man pages (emphasis mine):
When used, this option will make all data specified with -d, --data, --data-binary or --data-urlencode to be used in an HTTP GET request instead of the POST request that otherwise would be used. The data will be appended to the URL with a '?' separator.
Given that the default HTTP method for UrlFetchApp.fetch() is "GET", your request will be very simple:
UrlFetchApp.fetch("https://a.klaviyo.com/api/v1/metrics?api_key=XX_XXXXXXXXXXXXXXX");
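If the end goal is a Google Sheet, you can parse the response and append rows. A minimal sketch; the sheet name and the id/name fields are assumptions about Klaviyo's v1 response shape, so check them against the actual JSON you get back:
function pullKlaviyoMetrics() {
  var response = UrlFetchApp.fetch("https://a.klaviyo.com/api/v1/metrics?api_key=XX_XXXXXXXXXXXXXXX");
  var json = JSON.parse(response.getContentText());
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Metrics"); // hypothetical sheet name
  // Klaviyo's v1 list responses put the items in a "data" array (verify against your payload)
  json.data.forEach(function (metric) {
    sheet.appendRow([metric.id, metric.name]);
  });
}
You can then attach a time-driven trigger to the function so the sheet refreshes on a schedule for Data Studio.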

What status code to use when certifying an upload?

I'm working on a direct-to-S3 upload service that operates in two parts described below. This service would not be used by browsers, but would be a RESTful API used by other software clients.
1. Make a request to an endpoint that certifies and validates the upload, returning an upload URL if all's well.
2. Make a PUT request to the URL returned from #1 to actually do the upload to S3.
How should the server structure the response for the first endpoint?
The first option I am considering is to use GET and return status code 302 with a Location header containing the URL to upload to. However, the intent behind the redirect descriptions in the spec seems to be focused on redirecting after a form submission.
The other option I'm considering is to use POST for the first endpoint and return a Location header with the URL, as described here:
If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, and a Location header. (RFC 2616 §9.5)
What have other people used in such circumstances?
I think it mainly depends on whether your API itself will have a resource referencing the uploaded file or not. Is S3 the only party with knowledge of the uploaded file, or does your API keep something that references it?
In the first case, where only S3 knows about it, it's OK to use GET, since the endpoint acts merely as a generator for the upload parameters, including the URI.
In the second case, it shouldn't be a GET, since you're changing something on your side. You should make a POST, but the Location header should be used to return the URI of the created resource that references the uploaded file. That resource may hold the upload URI and act like a state machine, tracking whether the file has been uploaded. To avoid the need for clients to GET that resource before being able to upload, you may return the upload URI in the Link header, with a rel reflecting that purpose.
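To make the POST option concrete, here is a sketch of the exchange; the paths, header values, and the rel name are all illustrative, not prescribed:

POST /uploads HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"filename": "report.pdf"}

HTTP/1.1 201 Created
Location: https://api.example.com/uploads/42
Link: <https://example-bucket.s3.amazonaws.com/report.pdf?X-Amz-Signature=abc123>; rel="upload"

The client then PUTs the file to the Link target (step 2 above), and a later GET on the Location URI can report whether the upload has completed.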

CDN behavior for multiple URLs mapping to the same result file

Assume that I have an HTTP service which serves some content, and I want to place a CDN in front of it to serve cached content. The issue is that the URL can take parameters, and multiple parameter values map to one result file. Will the CDN be efficient in this case, or will it cache a different copy of the file for each of the query strings that map to the same file?
For example:
http://myservice.com/getlogfile?time=10000
to
http://myservice.com/getlogfile?time=19999
all of the above map to log.1
The Akamai cache configuration can be altered to adhere to specified rules. If you would like for one file to be cached for all query strings, simply set your Akamai configuration to ignore query strings. The Akamai Knowledge base found within the Akamai control panel contains a document called 'Edge Server Configuration Guide' that explains this configuration in detail. Start with the section called 'Ignore Query Strings' (page 104).
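Akamai's switch lives in its own configuration, but to illustrate the general idea outside Akamai: any caching proxy that drops the query string from its cache key stores a single copy for all such URLs. A hypothetical nginx equivalent (nginx is only an illustration here, not part of the Akamai setup):

proxy_cache_path /var/cache/nginx keys_zone=logs:10m;
server {
  listen 80;
  location /getlogfile {
    proxy_cache logs;
    # $uri excludes the query string, so ?time=10000 and ?time=19999 share one cache entry
    proxy_cache_key $scheme$host$uri;
    proxy_pass http://myservice.com;
  }
}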

How do I prevent unauthorized attempts to access a specific file type?

This is really a couple of questions about preventing unauthorized attempts to access a specific file type:
How do I prevent users from directly requesting a type of file? Do I write an HTTP handler?
After preventing a direct download, can my app still explicitly serve that file type? How?
The way to do this is to:
1. Put all your tif files in a location that is not publicly accessible.
2. Create an IHttpHandler to serve these tif files based on authentication (or whatever limitation you choose); a sketch follows this list.
3. (Optional) Set up a rewrite rule so that all tif requests go through your IHttpHandler. This keeps the URLs nice.
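A minimal sketch of such a handler in classic ASP.NET; the folder path is hypothetical and the IsAuthenticated check is a stand-in for whatever rule you choose:

using System.Web;

public class TifHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Turn away unauthenticated callers (substitute your own check)
        if (!context.Request.IsAuthenticated)
        {
            context.Response.StatusCode = 403;
            return;
        }

        // Resolve the requested file name inside the non-public folder
        string fileName = System.IO.Path.GetFileName(context.Request.Path);
        string path = context.Server.MapPath("~/App_Data/tifs/" + fileName); // hypothetical location

        context.Response.ContentType = "image/tiff";
        context.Response.TransmitFile(path);
    }
}

You would then map *.tif requests to this handler in web.config so they never reach the files directly.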
