Karate multipart field; possible to pass in function? - automated-tests

I have some test cases where I need to upload a file and give it a name. To save time, I want to upload the same file multiple times, but randomly generate a new name for it with every pass.
What I have so far for a scenario:
* def randomFile =
"""
function randString(length,chars) {
var result = '';
for (var i = length; i > 0; --i) result += chars[Math.round(Math.random() * (chars.length - 1))];
return result;
}
"""
* def getFilename = randomFile(6, "abcdefgh")
Given url
And request ''
And multipart fields { "profile": "Smoke Test Uploads", "filename": getFilename, "url": "https://s3.file.foo.bar" }
When method post
Then status 201
When I look at my uploaded file, its filename is literally "getFilename".
Is it possible for me to call a function within a post request like this, or some other way of doing it?

Use a Karate embedded expression:
And string getFilename = java.util.UUID.randomUUID()
And multipart fields { "profile": "Smoke Test Uploads", "filename": "#(getFilename)", "url": "https://s3.file.foo.bar" }
Note: UUID.randomUUID() is a more convenient way of generating random filenames; if that doesn't work, you can use your custom JS function instead.
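For completeness, a minimal sketch of the custom-function route, reusing the randString function from the question (the function body and field values are the question's own; only the embedded expression is new):
* def randString =
"""
function(length, chars) {
  var result = '';
  for (var i = length; i > 0; --i) result += chars[Math.round(Math.random() * (chars.length - 1))];
  return result;
}
"""
* def getFilename = randString(6, 'abcdefgh')
And multipart fields { "profile": "Smoke Test Uploads", "filename": "#(getFilename)", "url": "https://s3.file.foo.bar" }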

Related

Karate Bearer token from Background is coming null in Scenarios [duplicate]

I am having some problems passing the correct headers to my GraphQL endpoints.
The use case in Postman:
call requestToken endpoint to obtain sessionToken value
The requestToken response contains a Key value and a Token value.
For subsequent calls, I set postman headers as:
Key = X_SESSION_TOKEN Value = Token Value
The use case in Karate
1st feature 'requestToken.feature' successfully calls and stores key + tokenValue
2nd feature successfully defines and prints the token value
Here is my 2nd request:
Feature: version
Background:
* url 'http://api-dev.markq.com:5000/'
* def myFeature = call read('requestToken.feature')
* def authToken = myFeature.sessionToken
* configure headers = { 'X_SESSION_TOKEN': authToken , 'Content-Type': 'application/json' }
Scenario: get version
Given path 'query'
Given text query =
"""
query {
version
}
"""
And request { query: '#(query)' }
When method POST
Then status 200
And print authToken
And print response
I am not sure I am sending the headers right. It's coming back 200, but I keep getting an error 'token malformed' in the response message.
Any suggestions? New at this, thanks!
Honestly this is hard to answer, a LOT depends on the specific server.
EDIT: most likely it is this change needed, explained here: https://github.com/intuit/karate#embedded-expressions
* configure headers = { 'X_SESSION_TOKEN': '#(authToken)' , 'Content-Type': 'application/json' }
Two things from experience:
should it be X-SESSION-TOKEN?
add an Accept: 'application/json' header
And try to hard-code the headers before attempting the call, etc.
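For instance, a hedged sketch of the Background with those suggestions applied (the corrected header name, the extra Accept header, and the hard-coded token are all assumptions, meant only to isolate the problem):
Background:
* url 'http://api-dev.markq.com:5000/'
* def authToken = 'paste-a-known-good-token-here'
* configure headers = { 'X-SESSION-TOKEN': '#(authToken)', 'Content-Type': 'application/json', 'Accept': 'application/json' }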
Here is an example that works for me:
* url 'https://graphqlzero.almansi.me/api'
* text query =
"""
{
user(id: 1) {
posts {
data {
id
title
}
}
}
}
"""
* request { query: '#(query)' }
* method post
* status 200

How do I add/delete array elements on a Firestore document via REST?

I want to use an array to store temperature readings from an Arduino on a Firestore database. My (probably terrible) way of thinking it out so far is to read the document, do my array operations on the Arduino, and send the whole array back to Firestore. I don't know how to write to Firestore via REST at all so I haven't implemented it yet. Here is my code:
void writeTemp(String url, int temperature) {
// writeTemp() appends the given temperature to the array. temperature[0]
// holds the oldest temperature while temperature[9] holds the newest.
// When a new temperature is put in, the oldest one is taken out.
HTTPClient http;
http.begin(url);
int httpCode = http.GET();
// Gets the current temperature array from the provided URL.
String payload = http.getString();
Serial.println(httpCode); // Prints HTTP response code.
// Calculates the size of the JSON buffer. This is big enough for 11
// temperature values that are all 3 digits so as long as you're not using
// this on the Sun you're probably fine.
const size_t capacity = JSON_ARRAY_SIZE(11) + 14 * JSON_OBJECT_SIZE(1) +
JSON_OBJECT_SIZE(4) + 440;
DynamicJsonDocument doc(capacity); // Makes the JSON document
DeserializationError err = deserializeJson(doc, payload);
// Prints out the deserialization error if an error occurred
if (err) {
Serial.print("JSON DESERIALIZE ERROR: ");
Serial.println(err.c_str());
}
// Sets up the array from the JSON
JsonArray temperatureArray =
doc["fields"]["Temperature"]["arrayValue"]["values"];
// Creates a new array object to store the new temperature
JsonObject newTemp = temperatureArray.createNestedObject();
// Puts the new temperature in the new array object. For some reason,
// Firestore stores numbers as strings so the temperature is converted into
// a string.
newTemp["integerValue"] = String(temperature);
// Removes the first (oldest) array object.
temperatureArray.remove(0);
// Removes irrelevant data that we got from the Firestore request
doc.remove("name");
doc.remove("createTime");
doc.remove("updateTime");
String newJson;
serializeJson(doc, newJson);
Serial.println(newJson);
}
How would I send this new JSON back to Firestore? Am I even doing this right? I've heard of transactions, which sounds like the theoretically better way to do what I'm trying to do but I can't find any guides or readable documentation on how to do it. My database is in test mode right now so no need to worry about authentication.
The documentation for the Firestore REST API is here.
To create a document, you need to issue a POST request to a URL with the following format:
https://firestore.googleapis.com/v1/{parent=projects/*/databases/*/documents/**}/{collectionId}
With an instance of a Document in the Request body.
To be more concrete, below is an example in a simple HTML page (using the Axios library to issue the HTTP Request). This code will create a new document in the collection1 Firestore collection.
Just save this file on your local disk, adapt the value of <yourprojectID>, and open this page in a browser, directly from your local disk.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<script src="https://unpkg.com/axios/dist/axios.min.js"></script>
</head>
<body>
<script>
var firebaseProjectId = '<yourprojectID>';
var collectionId = 'collection1';
var url =
'https://firestore.googleapis.com/v1/projects/' +
firebaseProjectId +
'/databases/(default)/documents/' +
collectionId;
var writeObj = {
fields: {
name: {
stringValue: 'theName'
},
initialBudget: {
doubleValue: 1200
}
}
};
axios.post(url, writeObj).catch(function(error) {
console.log(error);
});
</script>
</body>
</html>
In order to update an Array in an existing document, you have to use a FieldTransform with appendMissingElements elements.
Excerpt of this doc on appendMissingElements elements:
appendMissingElements: Append the given elements in order if they are not already present in the current field value. If the field is not an array, or if the field does not yet exist, it is first set to the empty array.
You will find below an example of FieldTransform value containing appendMissingElements elements.
{
"transform": {
"document": "projects/" + firebaseProjectId + "/databases/(default)/documents/....,
"fieldTransforms": [
{
"setToServerValue": "REQUEST_TIME",
"fieldPath": "lastUpdate"
},
{
"appendMissingElements": {
"values": [
{
"stringValue": "...."
}
]
},
"fieldPath": "fieldName"
}
]
}
}
UPDATE FOLLOWING YOUR COMMENT
The following should work (tested positively):
var collectionId = 'SensorData';
var url =
'https://firestore.googleapis.com/v1/projects/' +
firebaseProjectId +
'/databases/(default)/documents:commit';
var writeObj = {
writes: {
transform: {
document:
'projects/' +
firebaseProjectId +
'/databases/(default)/documents/' +
collectionId +
'/Temperature',
fieldTransforms: [
{
setToServerValue: 'REQUEST_TIME',
fieldPath: 'lastUpdate'
},
{
appendMissingElements: {
values: [
{
integerValue: 25
}
]
},
fieldPath: 'Temperature'
}
]
}
}
};
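To actually send the commit, POST it the same way as in the first example (a sketch reusing the axios pattern from above):
axios.post(url, writeObj).catch(function(error) {
  console.log(error);
});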

Pattern for rich error handling in gRPC

What is the pattern for sending more details about errors to the client using gRPC?
For example, suppose I have a form for registering a user, that sends a message
message RegisterUser {
string email = 1;
string password = 2;
}
where the email has to be properly formatted and unique, and the password must be at least 8 characters long.
If I was writing a JSON API, I'd return a 400 error with the following body:
{
"errors": [{
"field": "email",
"message": "Email does not have proper format."
}, {
"field": "password",
"message": "Password must be at least 8 characters."
}]
}
and the client could provide nice error messages to the user (e.g. by highlighting the password field and specifically telling the user that there's something wrong with their input to it).
With gRPC is there a way to do something similar? It seems that in most client languages, an error results in an exception being thrown, with no way to grab the response.
For example, I'd like something like
message ValidationError {
string field = 1;
string message = 2;
}
message RegisterUserResponse {
repeated ValidationError validation_errors = 1;
...
}
or similar.
Include additional error details in the response Metadata. However, still make sure to provide a useful status code and message. In this case, you can add RegisterUserResponse to the Metadata.
In gRPC Java, that would look like:
Metadata.Key<RegisterUserResponse> REGISTER_USER_RESPONSE_KEY =
ProtoUtils.keyForProto(RegisterUserResponse.getDefaultInstance());
...
Metadata metadata = new Metadata();
metadata.put(REGISTER_USER_RESPONSE_KEY, registerUserResponse);
responseObserver.onError(
Status.INVALID_ARGUMENT.withDescription("Email or password malformed")
.asRuntimeException(metadata));
Another option is to use the google.rpc.Status proto which includes an additional Any for details. Support is coming to each language to handle the type. In Java, it'd look like:
// This is com.google.rpc.Status, not io.grpc.Status
Status status = Status.newBuilder()
.setCode(Code.INVALID_ARGUMENT.getNumber())
.setMessage("Email or password malformed")
.addDetails(Any.pack(registerUserResponse))
.build();
responseObserver.onError(StatusProto.toStatusRuntimeException(status));
google.rpc.Status is cleaner in some languages as the error details can be passed around as one unit. It also makes it clear what parts of the response are error-related. On-the-wire, it still uses Metadata to pass the additional information.
You may also be interested in error_details.proto which contains some common types of errors.
I discussed this topic during CloudNativeCon. You can check out the slides and linked recording on YouTube.
We have 3 different ways we could handle errors in gRPC. For example, let's assume the gRPC server does not accept values above 20 or below 2.
Option 1: Using gRPC status codes.
if(number < 2 || number > 20){
Status status = Status.FAILED_PRECONDITION.withDescription("Not between 2 and 20");
responseObserver.onError(status.asRuntimeException());
}
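On the client side, that status surfaces as a StatusRuntimeException; a minimal sketch of reading it back (blocking-stub style, the RPC call itself is assumed):
try {
    // response = stub.someRpc(request);
} catch (StatusRuntimeException e) {
    io.grpc.Status status = e.getStatus();
    System.out.println(status.getCode());        // FAILED_PRECONDITION
    System.out.println(status.getDescription()); // "Not between 2 and 20"
}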
Option 2: Metadata (we can pass objects via metadata)
if(number < 2 || number > 20){
Metadata metadata = new Metadata();
Metadata.Key<ErrorResponse> responseKey = ProtoUtils.keyForProto(ErrorResponse.getDefaultInstance());
ErrorCode errorCode = number > 20 ? ErrorCode.ABOVE_20 : ErrorCode.BELOW_2;
ErrorResponse errorResponse = ErrorResponse.newBuilder()
.setErrorCode(errorCode)
.setInput(number)
.build();
// pass the error object via metadata
metadata.put(responseKey, errorResponse);
responseObserver.onError(Status.FAILED_PRECONDITION.asRuntimeException(metadata));
}
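The client can then pull the ErrorResponse back out of the trailers; a sketch assuming the same proto key as above (trailersFromThrowable may return null, and the key may be absent):
} catch (StatusRuntimeException e) {
    Metadata trailers = io.grpc.Status.trailersFromThrowable(e);
    if (trailers != null) {
        ErrorResponse errorResponse = trailers.get(
                ProtoUtils.keyForProto(ErrorResponse.getDefaultInstance()));
        // errorResponse is null if the server did not attach it
    }
}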
Option 3: Using oneof - we can also use oneof to send the error response inside the regular response message:
// the name of the enclosing message is illustrative
message Response {
oneof response {
SuccessResponse success_response = 1;
ErrorResponse error_response = 2;
}
}
client side:
switch (response.getResponseCase()){
case SUCCESS_RESPONSE:
System.out.println("Success Response : " + response.getSuccessResponse().getResult());
break;
case ERROR_RESPONSE:
System.out.println("Error Response : " + response.getErrorResponse().getErrorCode());
break;
}
Check here for the detailed steps - https://www.vinsguru.com/grpc-error-handling/
As mentioned by @Eric Anderson, you can use metadata to pass error details. The problem with metadata is that it can contain other attributes (for example, content-type). To handle that you need to add custom logic to filter the error metadata.
A much cleaner approach is of using google.rpc.Status proto (as Eric has mentioned).
If you can convert your gRPC server application to Spring Boot using yidongnan/grpc-spring-boot-starter, then you can write a @GrpcAdvice, similar to Spring Boot's @ControllerAdvice, as:
@GrpcAdvice
public class ExceptionHandler {
@GrpcExceptionHandler(ValidationErrorException.class)
public StatusRuntimeException handleValidationError(ValidationErrorException cause) {
List<ValidationError> validationErrors = cause.getValidationErrors();
RegisterUserResponse registerUserResponse =
RegisterUserResponse.newBuilder()
.addAllValidationErrors(validationErrors)
.build();
var status =
com.google.rpc.Status.newBuilder()
.setCode(Code.INVALID_ARGUMENT.getNumber())
.setMessage("Email or password malformed")
.addDetails(Any.pack(registerUserResponse))
.build();
return StatusProto.toStatusRuntimeException(status);
}
}
On the client side, you can catch this exception and unpack the registerUserResponse as:
} catch (StatusRuntimeException error) {
com.google.rpc.Status status = io.grpc.protobuf.StatusProto.fromThrowable(error);
RegisterUserResponse registerUserResponse = null;
for (Any any : status.getDetailsList()) {
if (!any.is(RegisterUserResponse.class)) {
continue;
}
registerUserResponse = any.unpack(RegisterUserResponse.class);
}
log.info(" Error while calling product service, reason {} ", registerUserResponse.getValidationErrorsList());
//Other action
}
In my opinion, this can be a much cleaner approach provided you can run your gRPC server application as Spring Boot.
I was struggling with similar questions - so I decided to compile everything in a blog post
Here is a test in GoLang:
func TestNewStatusError_WhenBuildingFromStatus_WithDetails(t *testing.T) {
details1 := &errdetails.BadRequest{}
details1.FieldViolations = append(details1.FieldViolations, &errdetails.BadRequest_FieldViolation{
Field: "site_id",
Description: "bad format, not an UUID",
})
details2 := &common.ResourceNotFound{
Title: "site",
Description: "not found",
}
statusErr := status.New(codes.Internal, "something went wrong")
statusErrWithDetails, err := statusErr.WithDetails(details1, details2)
require.Nil(t, err)
assert.EqualValues(t, codes.Internal, statusErrWithDetails.Code())
assert.EqualValues(t, "something went wrong", statusErrWithDetails.Message())
assert.EqualValues(t, 2, len(statusErrWithDetails.Details()))
}
When rendered, something similar to this would be produced:
{
"code": 3,
"message": "SiteID not valid: bad uuid",
"details": [
{
"#type": "type.googleapis.com/google.rpc.BadRequest",
"field_violations": [
{
"field": "site_id",
"description": "Site ID not valid"
}
]
}
]
}

How to issue a subsequent API request only after the first was done?

I need to call an API to upload a photo, this API returns an ID of the photo. Then, I need to get that ID and use it as a parameter for another API.
The problem is, the second API gets called before the first API has a chance to complete (and return the ID). What can be done about this?
I'm using Alamofire 4 and Swift 3.
My code:
// First API - Upload file
func uploadFile(url: String, image: UIImage, callback: @escaping (JSON) -> ()) {
let imageData = UIImagePNGRepresentation(image)
let URL2 = try! URLRequest(url: url, method: .post, headers: header)
Alamofire.upload(multipartFormData: { (multipartFormData) in
multipartFormData.append(imageData!, withName: "photo", fileName: "picture.png", mimeType: "image/png")
}, with: URL2, encodingCompletion: { (result) in
switch result {
case .success(let upload, _, _):
upload.responseJSON
{
response in
switch response.result {
case .success(let value):
let json = JSON(value)
callback(json)
case .failure(let error):
print(error)
}
}
case .failure(let encodingError):
print(encodingError)
}
})
}
// Second API
func Post(url: String, parameters: [String:Any], callback: @escaping (JSON) -> ()) {
Alamofire.request(url, method: .post, parameters: parameters.asParameters(), encoding: ArrayEncoding(), headers: header).responseData { (response) in
switch response.result {
case .success(let value):
let json = JSON(value)
callback(json)
case .failure(let error):
print(error)
}
}
}
// Calling the First API
var uploadedImage: [String:String]!
uploadFile(url: baseUrl, image: image, callback: { (json) in
DispatchQueue.main.async {
uploadedImage = ["filename": json["data"]["photo"]["filename"].stringValue, "_id": json["data"]["photo"]["_id"].stringValue]
}
})
// Calling the Second API
Post(url: baseUrl, parameters: uploadedImage) { (json) in
DispatchQueue.main.async {
self.activityIndicator.stopAnimating()
}
}
In order to avoid the race condition that you're describing, you must be able to serialize the two calls somehow. The solutions that come to mind are either barriers, blocking calls, or callbacks. Since you're already using asynchronous (non-blocking) calls, I will focus on the last item.
I hope that a pseudocode solution would be helpful to you.
Assuming the 1st call is always performed before the 2nd, you would do something like:
firstCall(paramsForFirstOfTwo){
onSuccess {
secondCall(paramsForSuccess)
}
onFailure {
secondCall(paramsForFailure)
}
}
However, if the 1st call is optional, you could do:
if (someCondition){
// The above example where 1->2
} else {
secondCall(paramsForNoFirstCall)
}
If you must perform the 1st call a certain number of times before the 2nd is performed, you can:
let n = 3; var v = 0; // assuming these are accessible from within completedCounter()
firstCall1(...){/* after running code specific to onSuccess or onFailure, call completedCounter() */}
firstCall2(...){/* same as previous */}
firstCall3(...){/* same as previous */}
function completedCounter(){
// v must be locked (synchronized) or accessed atomically! See:
// http://stackoverflow.com/q/30851339/3372061 (atomics)
// http://stackoverflow.com/q/24045895/3372061 (locks)
lock(v){
if (v < n)
v += 1
else
secondCall(paramsBasedOnResultsOfFirstCalls);
}
}
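Applied to the code in the question, the callback approach means issuing the second request from inside the first one's callback; a rough sketch using the uploadFile and Post functions defined above:
uploadFile(url: baseUrl, image: image) { json in
    let uploadedImage = [
        "filename": json["data"]["photo"]["filename"].stringValue,
        "_id": json["data"]["photo"]["_id"].stringValue
    ]
    // The second call is made only after the first has returned the ID.
    Post(url: baseUrl, parameters: uploadedImage) { _ in
        DispatchQueue.main.async {
            self.activityIndicator.stopAnimating()
        }
    }
}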

How can I do a replace with gridfs-stream?

I'm using this code to do a file update:
app.post("/UploadFile", function(request, response)
{
var file = request.files.UploadedFile;
var name = request.param("Name");
var componentId = request.param("ComponentId");
console.log("Uploading: " + name);
var parameters =
{
filename: name,
metadata:
{
Type: "Screenshot",
ComponentId: componentId
}
};
grid.files.findOne( { "metadata.ComponentId" : componentId }, function(error, existing)
{
console.log("done finding");
if (error)
{
common.HandleError(error);
}
else
{
if (existing)
{
console.log("Exists: " + existing._id);
grid.remove({ _id: existing._id }, function(removeError)
{
if (removeError)
{
common.HandleError(removeError, response);
}
else
{
SaveFile(file, parameters, response);
}
});
}
else
{
console.log("new");
SaveFile(file, parameters, response);
}
}
});
});
function SaveFile(file, parameters, response)
{
console.log("Saving");
var stream = grid.createWriteStream(parameters);
fs.createReadStream(file.path).pipe(stream);
}
Basically I'm checking for a file that has an ID stored in metadata. If it exists, I delete it before my save, and if not I just do the save. It seems to work only sporadically. I sometimes see two erroneous behaviors:
The file will be deleted, but not recreated.
The file will appear to be updated, but it won't actually be replaced until I call my code again. So basically I need to do two file uploads for it to register the replace.
It's very sketchy, and I can't really determine a pattern for when it's going to work and when it isn't.
So I'm assuming I'm doing something wrong. What's the right way to replace a file using gridfs-stream?
It's difficult to say for sure from just the code you've provided (i.e. you don't show how the response to the app.post is ultimately handled), but I see several red flags to check:
Your SaveFile function above will return immediately after setting up the pipe between your file and the gridFS store. That is to say, the caller of the code you provide above will likely get control back well before the file has been completely copied to the MongoDB instance if you are moving around large files, and/or if your MongoDB store is over a relatively slow link (e.g. the Internet).
In these cases it is very likely that any immediate check by the caller will occur while your pipe is still running, and therefore before the gridFS store contains the correct copy of the file.
The other issue is you don't do any error checking or handling of the events that may be generated by the streams you've created.
The fix probably involves creating appropriate event handlers on your pipe, along the lines of:
function SaveFile(file, parameters, response)
{
console.log("Saving");
var stream = grid.createWriteStream(parameters);
var pipe = fs.createReadStream(file.path).pipe(stream);
pipe.on('error', function (err) {
console.error('The write of ' + file.path + ' to gridFS FAILED: ' + err);
// Handle the response to the caller, notifying of the failure
});
pipe.on('finish', function () {
console.log('The write of ' + file.path + ' to gridFS is complete.');
// Handle the response to the caller, notifying of success
});
}
The function handling the 'finish' event will not be called until the transfer is complete, so that is the appropriate place to respond to the app.post request. If nothing else, you should get useful information from the error event to help in diagnosing this further.
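For instance, the two handlers might respond roughly like this (a sketch assuming an Express-style response object; the exact payload is up to you):
pipe.on('finish', function () {
    response.send({ ok: true, filename: parameters.filename });
});
pipe.on('error', function (err) {
    response.status(500).send({ ok: false, error: String(err) });
});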
