I created a Neo4j database instance and am trying to call it from RStudio using the neo4r and neo4jshell packages. After running the API call, I still get a 404 even though I correctly specified the URL, username, and password. Please find my code below:
library(neo4r)
library(neo4jshell)
myTwitter <- neo4j_api$new(
  url = "http://54.152.83.7:7474",
  user = "neo4j",
  password = "mypassword"
)
myTwitter$ping()
When I run the last line of code, I get a 404 instead of a 200, which means my API call was not successful. I would appreciate any helpful suggestions. Thank you.
The HTTP endpoints changed in version 4 of Neo4j.
Neo4j v3 exposed the endpoint http://localhost:7474/db/data
Neo4j v4 uses http://localhost:7474/db/{databaseName}/tx instead.
It seems the Neo4j library for R needs to be updated...
I'm not familiar with R, but you could try any available HTTP client for R that supports Basic authentication and send POST requests to the Neo4j API with a JSON payload. I also see you use the http scheme, which means your credentials are sent as plain text over the network; that is not good.
The payload for such requests should be of the form:
{
  "statements": [
    {
      "statement": "MATCH (n) RETURN n"
    }
  ]
}
(adjust the Cypher query to your needs)
The response will be a JSON object with a data section containing the actual results.
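For example, a minimal sketch with the httr package, assuming a v4 server and the default database name neo4j (the /tx/commit form of the endpoint executes and commits the statement in a single request):

library(httr)

# POST the Cypher payload to the v4 transaction-commit endpoint with Basic auth
res <- POST(
  "http://54.152.83.7:7474/db/neo4j/tx/commit",
  authenticate("neo4j", "mypassword"),
  content_type_json(),
  body = '{"statements": [{"statement": "MATCH (n) RETURN n LIMIT 5"}]}'
)
status_code(res)  # 200 if the endpoint and credentials are correct
content(res)      # parsed JSON; each result carries its data section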
I am trying to download data in R via an API using the OData protocol. I am new to APIs, so please bear with me.
I am trying to send a GET command using the following from the API specs:
https://address/v1/meter?$filter=GroupGUID eq guid'ID'
where address is replaced by the URL and ID is replaced by the ID of the meter.
The API uses basic authorization, so I am using the following code from the httr library:
MyUrl <- "https://address/v1/meter?$filter=GroupGUID eq guid'ID'"
MyData <- GET(MyUrl, authenticate("Username", "access code", "basic"))
This gives a status code 400 - "The request is badly formed".
The authentication works when I am not using the $filter command.
I have been in touch with the developers of the API and they have confirmed that my GET command is correct. But they are not familiar with R, so they cannot help. I suspect that it may be the spaces before and after "eq" that are causing the problem.
Can anyone help or point me to a description of using the OData protocol in R?
Best regards, Rune
I finally figured it out. If I replace each space " " with "%20", it works.
This solution is also described here: passing odata $filter in httr GET request
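For example, with the placeholder address and credentials from the question, you can encode the spaces yourself or let URLencode do it (a sketch using the httr package):

library(httr)

# Percent-encode the spaces in the $filter expression as %20
MyUrl <- "https://address/v1/meter?$filter=GroupGUID%20eq%20guid'ID'"
# or let R do it:
# MyUrl <- URLencode("https://address/v1/meter?$filter=GroupGUID eq guid'ID'")
MyData <- GET(MyUrl, authenticate("Username", "access code", "basic"))
status_code(MyData)  # 200 instead of 400 once the spaces are encoded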
Using the Autodesk Forge API, I am trying to create a webhook on a folder with the following information; unfortunately, I am receiving this response:
{
  "id": "xxxx-xxxx-xxx-xxxxx",
  "status": 400,
  "code": "VALIDATION_ERROR",
  "detail": ["Payload is not valid for serialization"]
}
URL:
https://developer.api.autodesk.com/webhooks/v1/systems/data/events/dm.folder.added/hooks (for the specific folder-added event) or https://developer.api.autodesk.com/webhooks/v1/systems/data/hooks (for all events). Both return the same error.
Header:
{
  "Content-Type": "application/json",
  "Authorization": "<MY_TOKEN>",
  "x-ads-region": "US"
}
Data:
{
  "callbackUrl": "<MY_DOMAIN>:<MY_PORT>/callback",
  "scope": {
    "folder": "urn:adsk.wipprod:fs.folder:co.xxxxxxxxxxxx-xxxxx"
  }
}
Troubleshooting:
I've tried different folders, root and non-root. I can access all the folders I tried using the API.
I am sure that my account is in the US region.
I've tried to add hubId and/or projectId, but I received the same error.
<MY_DOMAIN>:<MY_PORT>/callback is configured and working fine.
Headers and data serialize and deserialize normally using json.loads and json.dumps.
Any suggestion/help?
Answering myself :)
I've discovered that my issue is not related to the Forge API; it's a general one related to Python Requests. The payload (data) of a request cannot be a nested dictionary: only one-level dictionaries are accepted, and nested ones fail. The solution is to stringify the dict (json.dumps) and use that string as the request payload.
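A minimal sketch of the fix, reusing the placeholder token, callback, and folder URN from the question:

import json
import requests

url = "https://developer.api.autodesk.com/webhooks/v1/systems/data/events/dm.folder.added/hooks"
headers = {
    "Content-Type": "application/json",
    "Authorization": "<MY_TOKEN>",
    "x-ads-region": "US",
}
payload = {
    "callbackUrl": "<MY_DOMAIN>:<MY_PORT>/callback",
    "scope": {"folder": "urn:adsk.wipprod:fs.folder:co.xxxxxxxxxxxx-xxxxx"},
}
# data= accepts a string or a flat dict; serialize the nested dict first
response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.status_code, response.text)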
Cosmos DB with the Azure Tables API gives you two endpoints in the Overview blade:
Document Endpoint
Azure Table Endpoint
An example of (1) is
https://myname.documents.azure.com/dbs/tempdb/colls
An example of (2) is
https://myname.table.cosmosdb.azure.com/FirstTestTable?$filter=PartitionKey%20eq%20'car'%20and%20RowKey%20eq%20'124'
You can create the authorization code for (1) on the client using the prerequest code from this Postman script: https://github.com/MicrosoftCSA/documentdb-postman-collection/blob/master/DocumentDB.postman_collection.json
Which will give you a code like this:
Authorization: type%3Dmaster%26ver%3D1.0%26sig%3DavFQkBscU...
This is useful for playing with the REST URLs.
For (2), the only code I could find that generates a working code was on the server side; it gives you a header like this:
Authorization: SharedKey myname:JXkSGZlcB1gX8Mjuu...
I had to get this out of Fiddler
My questions:
(i) Can you generate a code for case (2) above on the client, like you can for case (1)?
(ii) Can you securely use Cosmos DB from the client?
If you go to the Azure Portal for a GA Table API account you won't see the document endpoint anymore. Instead only the Azure Table Endpoint is advertised (e.g. X.table.cosmosdb.azure.com). So we'll focus on that.
When using anything but direct mode with the .NET SDK, our existing SDKs talking to the X.table.cosmosdb.azure.com endpoint use the SharedKey authentication scheme. There is also a SharedKeyLite scheme which should also work. Both are documented at https://learn.microsoft.com/en-us/rest/api/storageservices/authentication-for-the-azure-storage-services. Make sure you read the sections specifically on the Table service. The thing to notice is that a SharedKey header is directly tied to the request it is associated with, so essentially every request needs a unique header. This is useful for security because it means that a leaked header can only be used for a limited time to replay a specific request; it can't be used to authorize other requests. But of course that is exactly what you are trying to do.
An alternative is the SharedKeyLite header, which is a bit easier to implement as it just requires a date and the URL.
But we don't have externalized code libraries to really help with either.
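For illustration, here is a minimal sketch of building a SharedKeyLite header by hand, assuming the Table-service string-to-sign from the Azure Storage REST docs (the date followed by the canonicalized resource); the account, key, and table name are placeholders:

using System;
using System.Security.Cryptography;
using System.Text;

static string SharedKeyLiteHeader(string account, string base64Key, string tableName)
{
    // RFC 1123 date; send the same value in the x-ms-date request header
    string date = DateTime.UtcNow.ToString("R");
    // Table-service string-to-sign: Date + "\n" + CanonicalizedResource
    string stringToSign = date + "\n/" + account + "/" + tableName;
    using var hmac = new HMACSHA256(Convert.FromBase64String(base64Key));
    string signature = Convert.ToBase64String(
        hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
    return "SharedKeyLite " + account + ":" + signature;
}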
But there is another solution that is much friendlier to things like Fiddler or Postman, which is to use a SAS URL as defined in https://blogs.msdn.microsoft.com/windowsazurestorage/2012/06/12/introducing-table-sas-shared-access-signature-queue-sas-and-update-to-blob-sas/.
There are at least two ways to get a SAS token. One way is to generate one yourself. Here is some sample code to do that:
// Connection string from the Azure Portal (account key truncated here)
var connectionString = "DefaultEndpointsProtocol=https;AccountName=tableaccount;AccountKey=X;TableEndpoint=https://tableaccount.table.cosmosdb.azure.com:443/;";
var tableName = "ATable";

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(tableName);
await table.CreateIfNotExistsAsync();

// Grant add/query/update/delete rights for the next 1000 minutes
SharedAccessTablePolicy policy = new SharedAccessTablePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1000),
    Permissions = SharedAccessTablePermissions.Add
        | SharedAccessTablePermissions.Query
        | SharedAccessTablePermissions.Update
        | SharedAccessTablePermissions.Delete
};

// Returns the query-string portion of the SAS URL
string sasToken = table.GetSharedAccessSignature(
    policy, null, null, null, null, null);
This returns the query portion of the URL you will need to create a SAS URL.
Another, code-free way to get a SAS URL is to go to https://azure.microsoft.com/en-us/features/storage-explorer/ and download the Azure Storage Explorer. When you start it up, it will show the "Connect to Azure Storage" dialog. In that case:
Select "Use a connection string or a shared access signature URI" and click next
Select "Use a connection string" and paste in your connection string from the Azure Portal for your Azure Cosmos DB Table API account and click Next and then click Connect in the next dialog
In the Explorer pane on the left, look for your account under "Storage Accounts" (NOT "Cosmos DB Accounts (Preview)"), click on Tables, and then right-click on the specific table you want to explore. In the context menu you will see an entry for "Get Shared Access Signature"; click on that.
A new dialog titled "Generate Shared Access Signature" will show up. Unfortunately, so will an error dialog complaining about "NotImplemented"; you can ignore that. Just click OK on the error dialog.
Now you can choose how to configure your SAS; I usually just take the defaults, since that gives the widest access permissions. Then click Create.
The result will be a dialog with both a complete URL and a query string.
So now we can take that URL (or create it ourselves using the query output from the code) and create a Fiddler request:
GET https://tableaccount.table.cosmosdb.azure.com/ATable?se=2018-01-12T05%3A22%3A00Z&sp=raud&sv=2017-04-17&tn=atable&sig=X&$filter=PartitionKey%20eq%20'Foo'%20and%20RowKey%20eq%20'bar' HTTP/1.1
User-Agent: Fiddler
Host: tableaccount.table.cosmosdb.azure.com
Accept: application/json;odata=nometadata
DataServiceVersion: 3.0
To make the request more interesting I added a $filter operation. This is an OData filter that lets us explore the content. Note, by the way, that for the filter to work both the Accept and DataServiceVersion headers are needed. But you can use the base URL (i.e., without the filter parameter) to make any of the REST API calls on a specific table.
Do be aware that the SAS token is scoped to an individual table, so higher-level operations won't work with this SAS token.
I'm trying to build a web application for a customer. It implements a simple remote search on this site:
https://www.handelsregister.de/rp_web/mask.do?Typ=n
All I need to do is insert some value in the input field labeled Company or keywords, perform the search, and get the HTTP response.
The problem is that I am not familiar with this kind of architecture; I have always worked with APIs that have URLs, etc. Is it possible to perform this operation programmatically?
Probably. You could send POST requests and parse the response. Here is a basic example in Python with the requests module:
import requests

query = "test"
post_fields = {
    'suchTyp': 'n',
    'registerArt': '',
    'registerNummer': '',
    'registergericht': '',
    'schlagwoerter': query,
    'schlagwortOptionen': 2,
    'ergebnisseProSeite': 100,
    'btnSuche': 'Rechercher',
}
response = requests.post("https://www.handelsregister.de/rp_web/search.do", data=post_fields)
print(response.status_code)
print(response.text)
I have the following issue when trying to connect to the DocumentDB web API with R and Postman.
According to the DocumentDB documentation, the way to ask something of the web API is to compose an Authorization header with a base64 hash.
In R I'm trying to compute the signature and test the header directly with Postman.
But every time I get an HTTP 401.
Here is my R code:
toHash <- enc2utf8("get\ncolls\ndbs/toto/colls/testtoto\nsun, 08 may 2016 06:43:05 gmt\n\n")
hash <- hmac(key, toHash, "sha256")
base64(hash)
the "key" is the primary key got from the portal.
And then, following the Azure documentation, my header is:
type=master&ver=1.0&sig=< thebase64(hash) >
I'm pasting that into Postman along with the x-ms-version, date, and x-ms-date headers.
But it is not working...
I'm stuck now; does anyone have an idea? Am I using a wrong R function? A wrong key? Is there a way to get more information about the mismatch?
The web API response is:
{
  "code": "Unauthorized",
  "message": "The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ncolls\ndbs/toto/colls/testtoto\nsun, 08 may 2016 06:43:05 gmt\n\n'\r\nActivityId: fadbfc0b-e298-418a-b56c-8114699fff91"
}
I found what was wrong by myself.
The key given in the Azure portal is base64 encoded, so it is mandatory to decode it:
RCurl::base64Decode(key, mode = "raw")
in order to use it with the digest::hmac function. It is also mandatory to specify raw = TRUE in that hmac call.
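Putting both corrections together, a minimal sketch (the key is a placeholder, the string to sign is the one from the question, and URL-encoding the final header value follows the Cosmos DB REST docs):

library(digest)  # for hmac()
library(RCurl)   # for base64Decode() and base64()

key <- "<primary key from the Azure portal>"  # base64-encoded master key

toHash <- enc2utf8("get\ncolls\ndbs/toto/colls/testtoto\nsun, 08 may 2016 06:43:05 gmt\n\n")

rawKey <- RCurl::base64Decode(key, mode = "raw")            # decode the key first
hash   <- digest::hmac(rawKey, toHash, algo = "sha256", raw = TRUE)
sig    <- RCurl::base64(hash)                               # base64 of the raw HMAC bytes

# Authorization header value; Cosmos DB expects it URL-encoded
auth <- URLencode(paste0("type=master&ver=1.0&sig=", sig), reserved = TRUE)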