Why "Invalid snak data" on updating wikibase? - wikidata

I am trying to learn how to update values on Wikidata using the API. Login cookies and the CSRF token work fine, but when I try to update a value I get an invalid-snak error.
The request body looks like this:
POST /w/api.php HTTP/1.1
Accept-Encoding: gzip
Content-Length: 178
User-Agent: Mojolicious (Perl)
Host: test.wikidata.org
Content-Type: application/x-www-form-urlencoded
Cookie: [omitted]
action=wbcreateclaim&bot=1&entity=Q3345&format=json&property=P9876&snaktype=value&token=[omitted]&value=%7B%22entity-type%22%3A%22Q1917%22%7D
and the response is:
{
"error": {
"code": "invalid-snak",
"info": "Invalid snak data.",
"messages": [
{
"name": "wikibase-api-invalid-snak",
"parameters": [],
"html": {
"*": "Invalid snak data."
}
}
],
"*": "See https://test.wikidata.org/w/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes."
},
"servedby": "mw1386"
}
I've tried various ways of changing the value, with no success. The only update that succeeded was one with snaktype=novalue, which suggests the issue is with the snak value alone.
So the question is: what is the right way to update a snak value?

The problem is that you are stating value={"entity-type":"Q1917"}, but Q1917 is not an entity-type!
You should instead state value={"entity-type":"item","numeric-id":1917}.
To dig deeper into the topic, see the Wikidata API's documentation.
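For reference, here is a minimal sketch of the corrected call using Python's requests library. It assumes the session already carries the login cookies and that the CSRF token has been fetched beforehand (as in the original post); the entity and property IDs are the ones from the question.

import json
import requests

session = requests.Session()   # assumed to already carry the login cookies
csrf_token = "<csrf token>"    # assumed to be fetched via action=query&meta=tokens

resp = session.post(
    "https://test.wikidata.org/w/api.php",
    data={
        "action": "wbcreateclaim",
        "format": "json",
        "entity": "Q3345",
        "property": "P9876",
        "snaktype": "value",
        # The value describes the target item; "item" is the entity type,
        # and the numeric ID (1917 for Q1917) goes in numeric-id.
        "value": json.dumps({"entity-type": "item", "numeric-id": 1917}),
        "token": csrf_token,
        "bot": 1,
    },
)
print(resp.json())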

Related

Getting a 400 Error when doing a simple text post

I'm trying to migrate our calls away from the /v2/ugcPosts endpoint to the new /rest/posts endpoint. I started by trying to create a text-only post as per the Text-Only Post Creation sample request.
I copied and pasted the request body, changed the organization ID to my own, and included the headers Authorization: Bearer {My Authorization Token}, X-Restli-Protocol-Version: 2.0.0, Content-Type: application/json, and LinkedIn-Version: 202207.
POST https://api.linkedin.com/rest/posts
{
"author": "urn:li:organization:***",
"commentary": "Sample text Post",
"visibility": "PUBLIC",
"distribution": {
"feedDistribution": "NONE",
"targetEntities": [],
"thirdPartyDistributionChannels": []
},
"lifecycleState": "PUBLISHED",
"isReshareDisabledByAuthor": false
}
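For reference, the same request expressed as a minimal Python requests sketch; the token, organization URN, and version value are placeholders taken from the description above, so this reproduces the call being discussed rather than fixing it.

import requests

headers = {
    "Authorization": "Bearer <My Authorization Token>",
    "X-Restli-Protocol-Version": "2.0.0",
    "Content-Type": "application/json",
    "LinkedIn-Version": "202207",
}
body = {
    "author": "urn:li:organization:***",
    "commentary": "Sample text Post",
    "visibility": "PUBLIC",
    "distribution": {
        "feedDistribution": "NONE",
        "targetEntities": [],
        "thirdPartyDistributionChannels": [],
    },
    "lifecycleState": "PUBLISHED",
    "isReshareDisabledByAuthor": False,
}

resp = requests.post("https://api.linkedin.com/rest/posts", headers=headers, json=body)
print(resp.status_code, resp.text)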
However, when trying to post, I keep getting this 400 error.
{
"errorDetailType": "com.linkedin.common.error.BadRequest",
"code": "MISSING_REQUIRED_FIELD_FOR_DSC",
"message": "Field /adContext/dscAdAccount is required when the post is a Direct Sponsored Content, but missing in the request",
"errorDetails": {
"inputErrors": [
{
"description": "Field /adContext/dscAdAccount is required when the post is a Direct Sponsored Content, but missing in the request",
"input": {
"inputPath": {
"fieldPath": "/adContext/dscAdAccount"
}
},
"code": "MISSING_REQUIRED_FIELD_FOR_DSC"
}
]
},
"status": 400
}
I see that lifecycleState, distribution, visibility, commentary, and author are all required, but adContext should be an optional field. I'm not sure what part of my request indicates I'm trying to make a Direct Sponsored Content post - can someone take a look?
I've already tried:
Removing the targetEntities and thirdPartyDistributionChannels parameters.
Removing isReshareDisabledByAuthor as well as the above.
Changing feedDistribution to MAIN_FEED and to NONE.
Creating the post via the /v2/ugcPosts endpoint, which works - so the authorization token and organization URN are correct.

Amazon Advertising API: ASINs report request returns “Missing campaign type”

Request to API-Endpoint:
POST https://advertising-api-eu.amazon.com/v2/asins/report
Official documentation:
https://advertising.amazon.com/API/docs/en-us/reference/sponsored-products/2/reports
Headers:
Authorization: Bearer Atza|xxxxxxxxxxxxxxxxxxxxx
Content-Type: application/json
Amazon-Advertising-API-ClientId: xxxxxxxxxxxxxxxxxxxxxxxxxx
Amazon-Advertising-API-SCOPE: xxxxxxxxxxxxxxxxxxxxxxx
Request:
{
"segment": "query",
"reportDate":"20200201",
"metrics": "campaignName,campaignId,adGroupName,adGroupId,keywordId,keywordText,asin,otherAsin,currency,matchType,attributedUnitsOrdered30d,attributedUnitsOrdered30dOtherSKU,attributedSales30dOtherSKU"
}
Response:
{
"code": "400",
"details": "Missing campaign type",
"requestId": "7Q8PMWM2618KAS0VEG87"
}
Question:
I think the error message is misleading (because I checked the documentation twice, and because of my previous experience with the API).
But what is the real error? How can I get an ASINs report?
I asked Amazon Support and they replied:
"Asin report would need campaignType in the payload to succeed the operation. We have a documentation update pending on this. Please be noted that query segmentation is only allowed for keyword, target and productAds report. An example payload:-
GET https://advertising-api.amazon.com/v2/asins/report"
{
"reportDate": "20200201",
"campaignType": "sponsoredProducts",
"metrics": "campaignName,campaignId,adGroupName,adGroupId,keywordId,keywordText,asin,otherAsin,currency,matchType,attributedUnitsOrdered30d,attributedUnitsOrdered30dOtherSKU,attributedSales30dOtherSKU"
}
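Putting that together, a minimal Python requests sketch of the corrected call might look like the following. It keeps the POST endpoint and headers from the original question (token, client ID, and scope are placeholders), adds campaignType, and drops the segment parameter per the support reply, since query segmentation is not allowed for the ASINs report.

import requests

headers = {
    "Authorization": "Bearer Atza|xxxxxxxxxxxxxxxxxxxxx",
    "Content-Type": "application/json",
    "Amazon-Advertising-API-ClientId": "xxxxxxxxxxxxxxxxxxxxxxxxxx",
    "Amazon-Advertising-API-SCOPE": "xxxxxxxxxxxxxxxxxxxxxxx",
}
payload = {
    "reportDate": "20200201",
    "campaignType": "sponsoredProducts",  # the field the error message is really about
    "metrics": "campaignName,campaignId,adGroupName,adGroupId,keywordId,keywordText,"
               "asin,otherAsin,currency,matchType,attributedUnitsOrdered30d,"
               "attributedUnitsOrdered30dOtherSKU,attributedSales30dOtherSKU",
}

resp = requests.post(
    "https://advertising-api-eu.amazon.com/v2/asins/report",
    headers=headers,
    json=payload,
)
print(resp.status_code, resp.json())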

Ad Targeting - Find Entities by URNs API ClassCastException error message

I'm having a problem with the "Find Entities by URNs" API, which I use to retrieve the metadata and value information for a collection of URNs.
If I use the URL described in the doc (Sample request) with a valid access token:
https://api.linkedin.com/v2/adTargetingEntities?q=urns&urns=List(urn%3Ali%3AfieldOfStudy%3A100990,urn%3Ali%3Aorganization%3A1035,urn%3Ali%3Aseniority%3A9)&locale=(language:en,country:US)&oauth2_access_token=<a-valid-token>
I receive the message:
{
"serviceErrorCode": 0,
"message": "java.lang.ClassCastException",
"status": 500
}
Has anyone experienced the same issue? Any idea how to fix it?
Also: how can I contact technical support in a case like this?
UPDATE:
I experimented a bit and got it working with the following version:
https://api.linkedin.com/v2/adTargetingEntities?q=urns&urns=urn%3Ali%3AfieldOfStudy%3A100990&urns=urn%3Ali%3Aorganization%3A1035&urns=urn%3Ali%3Aseniority%3A9&locale.language=it&locale.country=IT&oauth2_access_token=<a-valid-token>
BUT the locale/language translation is not working. Could this be a working solution?
From the support team:
Our docs are missing 1 critical piece of information. Whenever using LIST and encoded URNs in the URL, we expect an additional header 'x-restli-protocol-version: 2.0.0'.
The correct API call would be:
Request:
curl -X GET \
  'https://api.linkedin.com/v2/adTargetingEntities?q=urns&urns=List(urn%3Ali%3Aindustry%3A1,urn%3Ali%3Aseniority%3A9)&locale=(language:it,country:IT)' \
  -H 'x-restli-protocol-version: 2.0.0' \
  -H 'Authorization: Bearer <Token>'
Response:
{
"elements": [
{
"facetUrn": "urn:li:adTargetingFacet:industries",
"name": "Difesa e spazio",
"urn": "urn:li:industry:1"
},
{
"facetUrn": "urn:li:adTargetingFacet:seniorities",
"name": "Partner",
"urn": "urn:li:seniority:9"
}
],
"paging": {
"count": 2147483647,
"links": [],
"start": 0
}
}
Yes, it does return the response in the requested locale.
Hope this helps others in the future.
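For anyone scripting this instead of using curl, here is a rough Python requests equivalent of the same call (the token is a placeholder). The URL is passed pre-encoded so the Rest.li List(...) syntax and percent-escaped URNs are preserved.

import requests

headers = {
    "Authorization": "Bearer <Token>",
    # Required whenever List(...) and encoded URNs appear in the URL.
    "x-restli-protocol-version": "2.0.0",
}
url = (
    "https://api.linkedin.com/v2/adTargetingEntities"
    "?q=urns"
    "&urns=List(urn%3Ali%3Aindustry%3A1,urn%3Ali%3Aseniority%3A9)"
    "&locale=(language:it,country:IT)"
)

resp = requests.get(url, headers=headers)
print(resp.json())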

CosmosDB UserPermission with ResourcePartitionKey not Enforced

I'm trying to implement the behaviour described in this CosmosDB document, with the additional ResourcePartitionKey restriction on the user permissions to constrain a resource token to accessing only documents that belong to the specified partition key; however, I can't make it work.
With the SQL REST API, the POST that creates the UserPermission object with the resourcepartitionkey key/value pair returns no errors, and both the initially returned object and subsequent GETs show "resourcepartitionkey" present when fetching the resource token.
Using the resource token against the collection specified in the permission, I can list all documents in the collection. With an "x-ms-documentdb-partitionkey" header, I can target any partition key I like; without it, the request simply returns the whole collection.
The collection is brand new, Unlimited, 1000 RUs, with a partition key of '/rpk'. Querying the collection after creation shows the partition key configured as follows:
"partitionKey": {
"paths": [
"\/rpk"
],
"kind": "Hash"
}
Below is the UserPermission returned during creation, showing "resourcepartitionkey" present:
{
"resource": "dbs/dbName/colls/collectionName/",
"id": "read-collection",
"resourcepartitionkey": "rpk1",
"permissionMode": "read",
"_rid": "lH9FACGGKwAhslfihB0pAA==",
"_self": "dbs\/lH9FAA==\/users\/lH9FACGGKwA=\/permissions\/lH9FACGGKwAhslfihB0pAA==\/",
"_etag": "\"0000ba07-0000-0000-0000-5b7418770000\"",
"_ts": 1534335095,
"_token": "type=resource&ver=1&sig=<resource token signature>"
}
The following is the request for documents using the resource token above. I would expect this to fail against a partitioned collection due to the missing "x-ms-documentdb-partitionkey" header, but it both succeeds and returns records from all partition keys in the collection (only 2 in my test dataset).
GET https://accountname.documents.azure.com/dbs/dbName/colls/collectionName/docs HTTP/1.1
authorization: type%3dresource%26ver%3d1%26sig<resource token signature>
x-ms-version: 2017-02-22
x-ms-max-item-count: -1
x-ms-date: Wed, 15 Aug 2018 12:11:35 GMT
User-Agent: Mozilla/5.0 (Windows NT; Windows NT 10.0; en-AU) WindowsPowerShell/5.1.17134.165
Content-Type: application/json
Host: accountname.documents.azure.com
Response body from the request above, showing documents from partition keys rpk1 and rpk2 even though the user permission is configured for rpk1:
{
"_rid": "lH9FAKbDh4c=",
"Documents": [
{
"id": "blue",
"rpk": "rpk1",
"_rid": "lH9FAKbDh4cCAAAAAAAAAA==",
"_self": "dbs\/lH9FAA==\/colls\/lH9FAKbDh4c=\/docs\/lH9FAKbDh4cCAAAAAAAAAA==\/",
"_etag": "\"ec012ca1-0000-0000-0000-5b73ab440000\"",
"_attachments": "attachments\/",
"_ts": 1534307140
},
{
"id": "red",
"rpk": "rpk2",
"_rid": "lH9FAKbDh4cDAAAAAAAAAA==",
"_self": "dbs\/lH9FAA==\/colls\/lH9FAKbDh4c=\/docs\/lH9FAKbDh4cDAAAAAAAAAA==\/",
"_etag": "\"ec012da1-0000-0000-0000-5b73ab580000\"",
"_attachments": "attachments\/",
"_ts": 1534307160
}
],
"_count": 2
}
I'm assuming I've missed something obvious, or am using an incorrect value for 'resourcepartitionkey' in the UserPermission, but I can't determine what. Any thoughts greatly appreciated.
After many more hours of trial and error, I have finally resolved my issue which is caused during the POST creation of the user permission.
Firstly, whilst the creation of the user permission validates the name "resourcePartitionKey", it does not check its case. With the wrong casing, the returned UserPermission object has the value present but provides no security control at all (dangerous situation #1).
Secondly, the input value is not validated as being of type array. Once again it is accepted and returned to you in the UserPermission object, but again provides no security control (dangerous situation #2).
A full working example is below, where the permission ID is "read-collection" and resourcePartitionKey is configured to 'rpk1'. This finally exhibits the expected behaviour: the request requires the "x-ms-documentdb-partitionkey" header and only returns values from the specified partition key.
POST https://accountname.documents.azure.com/dbs/dbName/users/userName/permissions HTTP/1.1
authorization: type%3dmaster%26ver%3d1.0%26sig<signature>
x-ms-version: 2017-02-22
x-ms-date: Thu, 16 Aug 2018 04:09:44 GMT
User-Agent: Mozilla/5.0 (Windows NT; Windows NT 10.0; en-AU) WindowsPowerShell/5.1.17134.165
Content-Type: application/json
Host: accountname.documents.azure.com
Content-Length: 215
{
"resource": "dbs/dbName/colls/collectionName/",
"id": "read-collection",
"resourcePartitionKey": [
"rpk1"
],
"permissionMode": "read"
}
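A minimal Python sketch of the same creation request follows. The master-key authorization signature and RFC 1123 date are assumed to be generated exactly as for the raw request above and are shown here only as placeholders; the point of the sketch is the correctly cased resourcePartitionKey name with an array value.

import json
import requests

auth_header = "<master key auth signature>"        # placeholder; generated as for the raw request
rfc1123_date = "Thu, 16 Aug 2018 04:09:44 GMT"     # must match the date used in the signature

permission = {
    "resource": "dbs/dbName/colls/collectionName/",
    "id": "read-collection",
    # Exact casing plus an array value is what actually makes the
    # partition-key restriction take effect.
    "resourcePartitionKey": ["rpk1"],
    "permissionMode": "read",
}

resp = requests.post(
    "https://accountname.documents.azure.com/dbs/dbName/users/userName/permissions",
    headers={
        "authorization": auth_header,
        "x-ms-version": "2017-02-22",
        "x-ms-date": rfc1123_date,
        "Content-Type": "application/json",
    },
    data=json.dumps(permission),
)
print(resp.status_code, resp.json())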
If someone knows where to log a DCR or bug for the Cosmos DB SQL REST API, please let me know. Without proper validation during permission creation, resource tokens may be distributed to low-trust clients that can gain unexpected full access to collection data.

Google Calendar API: 400 Bad Request when updating an event via the developer console

I've created a project and registered it using https://code.google.com/apis/console
(I chose the "other" application type).
Then I got a "Client ID" for installed applications.
Then I went to the console, created an event, and authorised using OAuth 2.
https://developers.google.com/google-apps/calendar/v3/reference/events/insert
Everything worked fine.
POST https://www.googleapis.com/calendar/v3/calendars/primary/events?key={YOUR_API_KEY}
Content-Type: application/json
Authorization: xxx
X-JavaScript-User-Agent: Google APIs Explorer
{
"end": {
"dateTime": "2013-1-16T10:00:00.000-07:00"
},
"start": {
"dateTime": "2013-1-16T10:00:00.000-07:00"
}
}
Response
200 OK
{
"kind": "calendar#event",
"etag": "\"WANTVF5ixxZ04U_VtQ0AZ3MbAlM/Z2NhbDAwMDAxMzY1NjU0MzAwNTk1MDAw\"",
"id": "2nsuis19mkp2q0uef54tl5nk68",
"status": "confirmed",
"htmlLink": "https://www.google.com/calendar/event?eid=Mm5zdWlzMTlta3AycTB1ZWY1NHRsNW5rNjggaXZhbjEzMzEzM0Bt",
"created": "2013-04-11T04:25:00.000Z",
"updated": "2013-04-11T04:25:00.595Z",
"creator": {
"email": "ivan133133#gmail.com",
"displayName": "ivan rozhcov",
"self": true
},
"organizer": {
"email": "ivan133133#gmail.com",
"self": true
},
"start": {
"dateTime": "2013-01-16T21:00:00+04:00"
},
"end": {
"dateTime": "2013-01-16T21:00:00+04:00"
},
"iCalUID": "2nsuis19mkp2q0uef54tl5nk68#google.com",
"sequence": 0,
"reminders": {
"useDefault": true
}
}
Then I copied the id and updated this event:
https://developers.google.com/google-apps/calendar/v3/reference/events/update
The first time, I got a 200 response.
PUT https://www.googleapis.com/calendar/v3/calendars/primary/events/2nsuis19mkp2q0uef54tl5nk68?key={YOUR_API_KEY}
Content-Type: application/json
Authorization: Bearer ya29.AHES6ZRmo6pdxj8pY4NmzdI1estRNB-v87XV7xQHgyhrWHk2rzs3Ke8
X-JavaScript-User-Agent: Google APIs Explorer
{
"end": {
"dateTime": "2013-1-16T10:00:00.000-07:00"
},
"start": {
"dateTime": "2013-1-16T10:00:00.000-07:00"
}
}
But when I tried this again, I always got a 400 error; the error text is written below.
https://developers.google.com/google-apps/calendar/v3/reference/events/update
400 Bad Request
cache-control: private, max-age=0
content-encoding: gzip
content-length: 123
content-type: application/json; charset=UTF-8
date: Wed, 10 Apr 2013 12:49:29 GMT
expires: Wed, 10 Apr 2013 12:49:29 GMT
server: GSE
{
"error": {
"errors": [
{
"domain": "global",
"reason": "invalid",
"message": "Invalid Value"
}
],
"code": 400,
"message": "Invalid Value"
}
}
Can anyone explain whether that's a Google API bug, or maybe I'm mistaken somewhere?
I've tried it from a different account and PC (through the Google APIs Explorer and the Python library) with the same result.
Today I tried to reproduce the bug, but everything works fine now.
I've created an issue in Google Code:
http://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=3371&thanks=3371&ts=1365599378
Still no answer.
I think it was a temporary bug.
400 almost always means a simple syntax error in your auth URL. The most common cause is that you’ve either failed to URL-escape your scope or redirect, or alternatively URL-escaped it more than once.
Check this: Google Calendar API - can only update event once
and this: Google Calendar api v3 re-update issue
The sequence number needs to be increased when you change the date/time of an event; that is why you are getting 400 Bad Request.
This is how I solved it using the Java client library.
mGoogleCalendarEvent contains the updated information about the event. First, get the sequence number of this event from Google Calendar:
mGoogleCalendarEvent.setSequence(mService.events().get(mCalendarId,mGoogleCalendarEvent.getId()).execute().getSequence());
And then perform update:
mService.events().update(mCalendarId, mGoogleCalendarEvent.getId(),mGoogleCalendarEvent).execute();
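For anyone using the Python client library, a roughly equivalent sketch is below. The credentials, calendar ID, and event ID are placeholders; fetching the current event first means the body sent to update already carries the latest sequence number, which avoids the 400.

from googleapiclient.discovery import build

# Credentials are assumed to be obtained via the usual OAuth 2.0 flow.
service = build("calendar", "v3", credentials=creds)
calendar_id = "primary"
event_id = "2nsuis19mkp2q0uef54tl5nk68"

# Fetch the current version of the event; it includes the latest sequence number.
event = service.events().get(calendarId=calendar_id, eventId=event_id).execute()

# Change the start/end as needed, then send the whole event back.
event["start"] = {"dateTime": "2013-01-16T10:00:00-07:00"}
event["end"] = {"dateTime": "2013-01-16T11:00:00-07:00"}

updated = service.events().update(
    calendarId=calendar_id, eventId=event_id, body=event
).execute()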
I used patch instead of update and it works fine for now. But I don't really understand why update succeeded the first time and failed the second and subsequent times. Total mystery.
I discovered the exact same problem when accessing the Google Calendar API with the Google APIs Client Library for JavaScript. It happens if I try to update the start date of a calendar event; updating the end date is no problem. Really weird.
Fortunately, I was able to fix the issue by replacing update with patch too, as sketched below.
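In the Python client library the same workaround looks roughly like this (service, calendar_id, and event_id set up as in the previous sketch); patch sends only the changed fields instead of the full event:

# Sending only the changed fields via patch sidesteps the sequence problem.
service.events().patch(
    calendarId=calendar_id,
    eventId=event_id,
    body={"start": {"dateTime": "2013-01-16T10:00:00-07:00"}},
).execute()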
