Meteor: Match error: Failed Match.OneOf or Match.Optional validation (websocket) - meteor

I have a website that uses Meteor 0.9. I have deployed this website on OpenShift (http://www.truthpecker.com).
The problem I'm experiencing is that when I go to a path on my site (/discover), then sometimes (though not always), the data needed are not fetched by Meteor. Instead I get the following errors:
On the client side:
WebSocket connection to 'ws://www.truthpecker.com/sockjs/796/3tfowlag/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400
And on the server side:
Exception from sub rD8cj6FGa6bpTDivh Error: Match error: Failed Match.OneOf or Match.Optional validation
at checkSubtree (packages/check/match.js:222)
at check (packages/check/match.js:21)
at _.extend._getFindOptions (packages/mongo-livedata/collection.js:216)
at _.extend.find (packages/mongo-livedata/collection.js:236)
at Meteor.publish.Activities.find.user [as _handler] (app/server/publications.js:41:19)
at maybeAuditArgumentChecks (packages/livedata/livedata_server.js:1492)
at _.extend._runHandler (packages/livedata/livedata_server.js:914)
at _.extend._startSubscription (packages/livedata/livedata_server.js:764)
at _.extend.protocol_handlers.sub (packages/livedata/livedata_server.js:577)
at packages/livedata/livedata_server.js:541
Sanitized and reported to the client as: Match failed [400]
Can anyone help me to eliminate this error and get the site working? I'd be very grateful!
Tony
P.S.: I never got this error using localhost.
EDIT:
The line causing the problem is this (line 41):
return Activities.find({user: id}, {sort: {timeStamp: -1}, limit:40});
One document in the activities collection looks like this:
{
"user" : "ZJrgYm34rR92zg6z7",
"type" : "editArg",
"debId" : "wtziFDS4bB3CCkNLo",
"argId" : "YAnjh2Pu6QESzHQLH",
"timeStamp" : ISODate("2014-09-12T22:10:29.586Z"),
"_id" : "sEDDreehonp67haDg"
}
When I run the query done in line 41 in mongo shell, I get the following error:
error: { "$err" : "Unsupported projection option: timeStamp", "code" : 13097 }
I don't really understand why this is, though. Can you help me with that as well? Thank you.

Make sure that you are passing an integer to skip and limit. Use parseInt() if need be.
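For instance, skip and limit often come from a route or query-string parameter, which arrives as a string. A minimal plain-Node sketch of coercing them before building the find options (the helper name is made up; the default of 40 mirrors the query in the question):

```javascript
// Query-string and route parameters arrive as strings; MongoDB's
// skip/limit options must be integers, otherwise the check() inside
// collection.find() rejects them with a Match error.
function toFindOptions(rawLimit, rawSkip) {
  const limit = parseInt(rawLimit, 10);
  const skip = parseInt(rawSkip, 10);
  return {
    limit: Number.isNaN(limit) ? 40 : limit, // fall back to a sane default
    skip: Number.isNaN(skip) ? 0 : skip,
  };
}

// e.g. options from a route like /discover?limit=20
const options = toFindOptions('20', undefined);
// These can then be passed safely, e.g.:
// Activities.find({ user: id }, { sort: { timeStamp: -1 }, ...options });
```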

You have a document in your collection that does not match your check validation.
The validation you have is in app/server/publications.js:41.
The attribute in question is validated with something like Match.Optional(Match.OneOf(...)), but the attribute's value matches none of the alternatives in Match.OneOf.
You would have to go through the documents of the collection causing this and remove or correct the offending attribute so that it matches your check statement.
Update for your updated question.
You're running Meteor-style queries in the meteor mongo/mongo shell. The error you get there is unrelated to the problem in Meteor: in the mongo shell the second argument to find() is a projection, not an options object, so to sort you would use `db.activities.find({...}).sort({timeStamp: -1})` instead of `activities.find(.., { sort: {..} })`. This is unrelated to the issue.
The issue is most likely that your id is not actually a string. It's supposed to be "ZJrgYm34rR92zg6z7" (the value of the user field) for the document you're looking for. You might want to use the debugger to see what it actually is.
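In Meteor you would typically guard the subscription argument with check(id, String) before calling find. Since check/Match are Meteor-only packages, here is a plain-JavaScript stand-in sketch that fails the same way when id is undefined (both helper names are hypothetical):

```javascript
// Mimics check(value, String): throw if the value is not a string.
function assertIsString(value, name) {
  if (typeof value !== 'string') {
    throw new Error(
      'Match error: expected ' + name + ' to be a string, got ' + typeof value
    );
  }
}

// Builds the selector handed to Activities.find(...) only if id is valid.
function activitiesSelector(id) {
  assertIsString(id, 'id');
  return { user: id };
}
```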

I don't think you can use limit in client-side find queries; removing limit from my query solves the problem. If you're looking for pagination, you can roll your own by passing a parameter to your Activities publication, so that the limit is applied to the server-side query along with an offset. There is also this pagination package.
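A sketch of the roll-your-own approach: compute skip/limit on the server from a page number passed by the client. The function name and the page-size cap are assumptions, not from the original answer:

```javascript
// Turn a client-supplied page number into server-side find options.
function pagedOptions(page, pageSize) {
  const p = Math.max(0, page | 0);                      // reject negative pages
  const size = Math.min(100, Math.max(1, pageSize | 0)); // cap the page size server-side
  return { sort: { timeStamp: -1 }, skip: p * size, limit: size };
}

// In Meteor this would back a publication roughly like:
// Meteor.publish('activities', function (page) {
//   check(page, Number);
//   return Activities.find({ user: this.userId }, pagedOptions(page, 40));
// });
```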

Firestore REST API batchGet returns status 400

I am trying to get data from Firestore in a batch using the batchGet method, but I get an error like the one below:
Error: JSON error response from server: [{"error":{"code":400,"message":"Document name "["projects/MYPROJECT/databases/(default)/documents/MYCOLLECTION/DOCUMENTID"]" lacks "projects" at index 0.","status":"INVALID_ARGUMENT"}}].status: 400
I have searched for questions about this and found this one. They are trying to add documents rather than using batchGet; I followed the solution anyway, but still no luck.
Note: I am integrating Firebase to AppGyver.
I am new to Firebase and still learning. I need all the help I can get. Thank you in advance.
I have found what I did wrong.
In AppGyver you can only set your parameters as text or number, while the documents parameter needs to be an array. Hence I converted the array to a string using ENCODE_JSON, but that was wrong.
The correct way is to send the documents as an array without converting it. Use the formula:
Just hit "Save" even if there is an error telling you that it is not text.
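For reference, batchGet expects documents to be a JSON array of full resource names, not a JSON-encoded string of that array. A sketch of building the request body (MYPROJECT/MYCOLLECTION and the helper name are placeholders):

```javascript
// Build the batchGet request body: "documents" must be a real JSON
// array of full Firestore resource names.
function batchGetBody(projectId, collection, ids) {
  const prefix = `projects/${projectId}/databases/(default)/documents`;
  return {
    documents: ids.map((id) => `${prefix}/${collection}/${id}`),
  };
}

const body = batchGetBody('MYPROJECT', 'MYCOLLECTION', ['DOCID1', 'DOCID2']);
// JSON.stringify(body) is what should reach the REST endpoint; wrapping
// the array itself in quotes reproduces the 400 INVALID_ARGUMENT error.
```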

Exception in callback of async function: TypeError: callback is not a function

I can't figure out why my insert query code located in server/main.js is causing this TypeError: callback is not a function error message.
The following code is located at: server/main.js
var businessCard = [{
  PostedDate: moment().add(0, "days").format('MM/DD/YYYY'),
  sentBy: currentUserId,
  recipientName: recipientName
}];
Next line is the insert query:
Messages.insert({businessCard: businessCard}, {multi:true});
When I run the code, no inserts into the Messages collection are carried out, nor do I get any error messages in the browser console;
however, when I check the terminal I see the following error message:
When I comment out the insert query, the error message disappears, leading me to think there is something wrong in how I have written this insert code.
Kindly help me figure out what I am doing wrong here.
Looking forward to your help.
The error is because you use the multi: true option; the insert method does not have this option, so your options object is treated as the callback.
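A toy stand-in for the insert signature shows why: collection.insert takes (document, callback), so whatever sits in the second position is assumed to be a function. insertLike below is a mock for illustration, not Meteor's implementation:

```javascript
// Mimics how a (doc, [callback]) signature reacts to an options object.
function insertLike(doc, callback) {
  if (callback !== undefined && typeof callback !== 'function') {
    throw new TypeError('callback is not a function'); // the error from the question
  }
  return { doc: doc, called: typeof callback === 'function' };
}

// insertLike({ businessCard: [] }, { multi: true })  -> throws TypeError
// insertLike({ businessCard: [] })                   -> fine, no callback
```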

AmazonServiceException: Supplied AttributeValue is empty, must contain exactly one of the supported datatypes

I am trying to import data through the DynamoDB console interface, but without success.
The data is:
{"_id":{"s":"d9922db0-83ac-11e6-9263-cd3ebf92dec3"},"applicationId":{"S":"2"},"applicationName":{"S":"Paperclip"},"ip":{"S":"127.0.0.1"},"objectInfo":{"S":"elearning_2699"},"referalUrl":{"S":"backported data"},"url":{"S":""},"userAgent":{"S":""},"userEmail":{"S":"karthick.shivanna#test.com"},"userId":{"S":"508521"},"userName":{"S":"Karthik"},"created":{"S":"1486983137000"},"verb":{"S":"submitproject"},"dataVals":{"S":"{\"projectid\":5,\"name\":\"Test 1\",\"domain\":\"apparel\",\"submittype\":[\"Writeup\",\"Screenshots\"],\"passcriteria\":\"Percentage\",\"taemail\":\"bhargava.gade#test.com\",\"attemptNo\":1,\"submitDate\":1467784988}"},"eventTime":{"S":"1467784988000"}}
I am getting the error below:
Error: java.lang.RuntimeException: com.amazonaws.AmazonServiceException: Supplied AttributeValue is empty, must contain exactly one of the supported datatypes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: GECS2L57CG9ANLKCSJSB8EIKVRVV4KQNSO5AEMVJF66Q9ASUAAJG)
  at org.apache.hadoop.dynamodb.DynamoDBFibonacciRetryer.handleException(DynamoDBFibonacciRetryer.java:107)
  at org.apache.hadoop.dynamodb.DynamoDBFibonacciRetryer.runWithRetry(DynamoDBFibonacciRetryer.java:83)
  at org.apache.hadoop.dynamodb.DynamoDBClient.writeBatch(DynamoDBClient.java:220)
  at org.apache.hadoop.dynamodb.DynamoDBClient.putBatch(DynamoDBClient.java:170)
  at org.apache.hadoop.dynamodb.write.AbstractDynamoDBRecordWriter.write(AbstractDynamoDBRecordWriter.java:91)
  at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.collect(MapTask.java:844)
  at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:596)
  at org.apache.hadoop.dynamodb.tools.ImportMapper.map(ImportMapper.j
errorStackTrace
amazonaws.datapipeline.taskrunner.TaskExecutionException: Failed to complete EMR transform.
  at amazonaws.datapipeline.activity.EmrActivity.runActivity(EmrActivity.java:67)
  at amazonaws.datapipeline.objects.AbstractActivity.run(AbstractActivity.java:16)
  at amazonaws.datapipeline.taskrunner.TaskPoller.executeRemoteRunner(TaskPoller.java:136)
  at amazonaws.datapipeline.taskrunner.TaskPoller.executeTask(TaskPoller.java:105)
  at amazonaws.datapipeline.taskrunner.TaskPoller$1.run(TaskPoller.java:81)
  at private.com.amazonaws.services.datapipeline.poller.PollWorker.executeWork(PollWorker.java:76)
  at private.com.amazonaws.services.datapipeline.poller.PollWorker.run(PollWorker.java:53)
  at java.lang.Thread.run(Thread.java:745)
Caused by: amazonaws.datapipeline.taskrunner.TaskExecutionException: Error: java.lang.RuntimeException: com.amazonaws.AmazonServiceException: Supplied AttributeValue is empty, must contain exactly one of the supported datatypes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: GECS2L57CG9ANLKCSJSB8EIKVRVV4KQNSO5AEMVJF66Q9ASUAAJG)
  at org.apache.hadoop.dynamodb.DynamoDBFibonacciRetryer.handleException(DynamoDBFibonacciRetryer.java:107)
  at org.apache.hadoop.dynamodb.DynamoDBFibonacciRetryer.runWithRetry(DynamoDBFibonacciRetryer.java:83)
  at org.apache.hadoop.dynamodb.DynamoDBClient.writeBatch(DynamoDBClient.java:220)
  at org.apache.hadoop.dynamodb.DynamoDBClient.putBatch(DynamoDBClient.java:170)
  at org.apache.hadoop.dynamodb.write.AbstractDynamoDBRecordWriter.write(AbstractDynamoDBRecordWriter.java:91)
  at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.collect(MapTask.java:844)
  at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:596)
  at org.apache.hadoop.dynamodb.tools.ImportMapper.map(ImportMapper.java:26)
  at org.apache.hadoop.dynamodb.tools.ImportMapper.map(ImportMapper.java:13)
  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:65)
  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
Caused by: com.amazonaws.AmazonServiceException: Supplied AttributeValue is empty, must contain exactly one of the supported datatypes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: GECS2L57CG9ANLKCSJSB8EIKVRVV4KQNSO5AEMVJF66Q9ASUAAJG)
  at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
  at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1772)
  at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.batchWriteItem(AmazonDynamoDBClient.java:730)
  at amazonaws.datapipeline.cluster.EmrUtil.runSteps(EmrUtil.java:286)
  at amazonaws.datapipeline.activity.EmrActivity.runActivity(EmrActivity.java:63)
Am I doing anything wrong?
Error: java.lang.RuntimeException: com.amazonaws.AmazonServiceException: Supplied AttributeValue is empty, must contain exactly one of the supported datatypes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException). This is the error you are getting.
Below are the possible reasons:
1. DynamoDB does not support empty values, so you should remove those fields (agreeing with @notionquest)
2. The field's value should have the proper data type for the table
Just updating here in case someone comes across this again: empty String and Binary attribute values are now allowed. Attribute values of type String and Binary must still have a length greater than zero if the attribute is used as a key attribute for a table or index.
I'm using Data Pipeline with release label emr-5.23.0 and also encountered the same problem. I solved it by using lowercase letters instead of capital letters for the type keys in the DynamoDB item, e.g. 's' instead of 'S' and 'n' instead of 'N'.
We have to go through this step by step. The above error occurred because the values of some attributes (for example, url and userAgent) are empty, and DynamoDB doesn't support empty values for attributes.
Please remove the empty attributes and try again; I can assure you that the above issue will be resolved. However, something else could be wrong as well.
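One way to follow that advice programmatically is to strip attributes whose string value is empty before writing the item. A hedged sketch in plain JavaScript (stripEmptyAttributes is a made-up helper, not part of any AWS SDK):

```javascript
// Drop DynamoDB-JSON attributes whose S/B value is the empty string,
// which is what triggers "Supplied AttributeValue is empty".
function stripEmptyAttributes(item) {
  const cleaned = {};
  for (const [name, typed] of Object.entries(item)) {
    const [type, value] = Object.entries(typed)[0];
    if ((type === 'S' || type === 'B') && value === '') continue; // skip empty values
    cleaned[name] = typed;
  }
  return cleaned;
}

const item = {
  ip: { S: '127.0.0.1' },
  url: { S: '' },       // would trigger the ValidationException
  userAgent: { S: '' }, // same
};
const safeItem = stripEmptyAttributes(item); // keeps only "ip"
```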
In my case, I got the same issue because of an invalid parameter sent from the mapping template:
#set($inputRoot = $input.path('$'))
{
  "userId": "$input.params('userId')",
  "userEmail": "$input.params('userEmail')",
  "userName": "$input.params('userName')",
  "userPassword": "$input.params('userPassword')"
}
Here I sent the extra parameter userId; that's why the error occurred.
As for blank values: when you add an item, the primary key attribute(s) are the only required attributes. Attribute values cannot be null, String and Binary type attributes must have lengths greater than zero, and Set type attributes cannot be empty. Requests with empty values will be rejected with a ValidationException.
Please check this document:
http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html
I hope it will help you.
I had the same problem running a restore pipeline in AWS. After looking for a while, I found the problem: the AMI version of the restore pipeline was different from that of the export pipeline.
I have other pipelines that work fine, and I still don't know why this one didn't. Basically, I checked the AMI versions: they were 3.8.0 for the export and 3.9.0 for the restore. I changed the restore to 3.8.0 and it worked.
Here you will find a better explanation.
My two cents: use lowercase type keys. For example, change
{"id":{"S":"123xyz"},"ip":{"S":"127.0.0.1"},"attempt":{"N":"10"},"allowed":{"BOOL":true}}
to
{"id":{"s":"123xyz"},"ip":{"s":"127.0.0.1"},"attempt":{"n":"10"},"allowed":{"bool":true}}
This is also one of the possible causes of the above error.

R Bigquery Error : attempt to apply non-function API Data Failed to Parse

I am trying to do a simple query with R and bigQueryR (https://github.com/cloudyr/bigQueryR).
I am not sure what is wrong, but I keep getting an error (this might just be my lack of knowledge of R).
It returns the list of projects and datasets correctly, so I know it is connected.
I followed the guides on querying:
https://rdrr.io/cran/bigQueryR/man/bqr_query.html
bqr_query(projectId, datasetId, query, maxResults = 1000)
This is the command I put in:
result <- bqr_query("bigqyerytestproject2", "TestDataSet1", "SELECT * FROM TestTable3", maxResults = 1000)
and I get the error:
Error : attempt to apply non-function
Warning message:
In q(the_body = body, path_arguments = list(projects = projectId)) :
API Data failed to parse. Returning parsed from JSON content.
Use this to test against your data_parse_function.
But then I checked BigQuery, and the query goes through successfully.
I am only querying a small amount of data before I move to a larger data set, but the results are:
[
{
"Name": "Season",
"Date": "2010-06-30",
"ID": "1"
},
{
"Name": "Test",
"Date": "2010-06-30",
"ID": "2"
}
]
Thanks in advance for your help
Depending on your app's needs, you might consider using bigrquery instead of bigQueryR. The difference is explained here.
Beyond that, I'd suggest filing an issue with the developer of the bigQueryR library.
I'll describe how I got past that same error in case it's helpful, though I think mine might be a slightly different problem, as it was caused by a permissions error with Google Sheets.
To reproduce the error, I can:
1. Create a Google Sheets spreadsheet
2. Use the spreadsheet as the source of a BigQuery table
3. Query the BigQuery table using bqr_query(); at this point an error pops up due to insufficient permissions
4. Grant my app permission to edit the Google Sheet after seeing the error
From then on, the error "Error : attempt to apply non-function" appears and I can't get rid of it. If I don't query the table until after granting my app permission, the error doesn't appear in the first place, so I just had to recreate the Google Sheet and the BigQuery table.
Hope this helps!

getting 409 error when trying to call table.CreateIfNotExists() for the first time

When starting my program for the first time since the associated table was deleted, I get this error:
An exception of type 'Microsoft.WindowsAzure.Storage.StorageException' occurred in Microsoft.WindowsAzure.Storage.dll but was not handled in user code
Additional information: The remote server returned an error: (409) Conflict.
However, if I refresh the crashed page, the table is created successfully.
Here is the code just in case:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("tableTesting");
table.CreateIfNotExists();
I don't really understand how or why I'd be getting a conflict error if there's nothing there.
These errors appear elsewhere in my code as well when I'm working with blob containers, but I can't reproduce them as easily.
If you look at the status codes here: http://msdn.microsoft.com/en-us/library/azure/dd179438.aspx, you will notice that you get a 409 error code in two scenarios:
1. Table already exists
2. Table is being deleted
If I understand correctly, table.CreateIfNotExists() only handles the first scenario, not the second. Please check whether that is what is happening in your situation. One way to check would be to look at the details of the StorageException; somewhere you should find an error code that matches the list in the link above.
Also, one important thing to understand is that when you delete a table, it is only marked for deletion and is actually deleted later by a background process (much like garbage collection). If you try to recreate the table between these two steps, you will get the second error.
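That race can be worked around by retrying on 409 until the background delete completes. A language-agnostic sketch in JavaScript (createTable stands in for whatever SDK call you use; none of these names come from the Azure SDK):

```javascript
// Retry an async create call while it keeps failing with 409 Conflict,
// waiting delayMs between attempts; rethrow any other error immediately.
async function createWithRetry(createTable, attempts = 5, delayMs = 2000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await createTable();
    } catch (err) {
      if (err.statusCode !== 409 || i === attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```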
