Are callbacks needed to ensure transaction-like behavior on insert? - meteor

Let's say I have a method which looks like:
commentsInsert: (comment) ->
  Comments.insert comment, (err) ->
    throw err if err
    updateCommentCounts()
The goal here is to avoid calling updateCommentCounts if the insert failed. If this code runs only on the server could I skip the callback and the throw? For example:
commentsInsert: (comment) ->
  Comments.insert comment
  updateCommentCounts()
The meteor docs say:
On the server, if you don't provide a callback, then insert blocks until the database acknowledges the write, or throws an exception if something went wrong.
I'm assuming this means it will bail out of the function and return an error code to the caller. Is that right?

So to test this I added a unique index to the comments collection and did two identical inserts. The second insert threw an error (as expected), sent a 500 to the client, and bailed out of the method. So it seems as if the answer to my question is: yes, I can skip the callback.
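The control flow being tested can be sketched in a self-contained way. This is a toy model, not the real Meteor API: `insert` here simulates a collection with a unique index by throwing on a duplicate, mimicking Meteor's synchronous server-side insert.

```typescript
// Toy model: a "collection" with a unique index, NOT the Meteor API.
const stored = new Set<string>();
let commentCount = 0;

function insert(id: string): void {
  if (stored.has(id)) {
    // Mimics a write error from the database on the server
    throw new Error("E11000 duplicate key error");
  }
  stored.add(id);
}

function updateCommentCounts(): void {
  commentCount++;
}

function commentsInsert(id: string): void {
  insert(id); // throws on failure, so the next line is skipped
  updateCommentCounts();
}

commentsInsert("c1"); // succeeds, commentCount becomes 1
try {
  commentsInsert("c1"); // duplicate: throws before updateCommentCounts runs
} catch (e) {
  // the method bailed out; commentCount is still 1
}
```

Because the throw happens synchronously inside `commentsInsert`, the count update is never reached on failure, which is exactly the behavior observed in the unique-index test above.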


Exception when GetItemQueryIterator() can't find a matching document in Cosmos DB

So, I'm trying to query cosmos collection for a specific document, with the following line of code:
FeedIterator<dynamic> querieditem = container.GetItemQueryIterator<dynamic>(mysqlquery);
When there exists an item in the database, this goes without any problems. However, when there doesn't exist any match in the database, I get the following exception:
System.Private.CoreLib: Exception while executing function: TestFunction. Microsoft.Azure.Cosmos.Client: Response status code does not indicate success: NotFound (404); Substatus: 0; ActivityId:123123; Reason: (Message: {"Errors":["Resource Not Found. Learn more: https://aka.ms/cosmosdb-tsg-not-found"]}
Does this really mean that I need to add a try/catch in case GetItemQueryIterator() can't find anything in Cosmos? And if it is necessary, why does that make sense?
Some of the methods in the SDK throw exceptions on 404 as a legacy holdover. The better alternative is to use the Stream variants of the methods, which don't throw; instead they expose HTTP status codes that you can check to evaluate success. You just need an extra step to deserialize the response stream yourself.
See the docs and examples for GetItemQueryStreamIterator
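The pattern the answer describes, checking a status code instead of catching an exception, can be sketched independently of the SDK. The response shape below is a stand-in for illustration, not the real Cosmos SDK type; the real C# API is `GetItemQueryStreamIterator` as linked above.

```typescript
// Sketch of the "check the status code instead of catching" pattern.
// StreamResponse is a hypothetical stand-in for an SDK stream response.
interface StreamResponse {
  statusCode: number;
  body: string; // raw stream content; serialized JSON in the real SDK
}

function readItems(response: StreamResponse): unknown[] | null {
  if (response.statusCode === 404) {
    // "Not found" is treated as an expected outcome, not an exception
    return null;
  }
  if (response.statusCode < 200 || response.statusCode >= 300) {
    throw new Error(`Unexpected status: ${response.statusCode}`);
  }
  // The extra step: deserialize the stream yourself
  return JSON.parse(response.body) as unknown[];
}

const found = readItems({ statusCode: 200, body: '[{"id":"1"}]' });
const missing = readItems({ statusCode: 404, body: "" });
```

The caller decides what an empty result means, rather than wrapping every query in a try/catch.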

Axon - How to get a @QueryHandler handle method to return an Optional<MyType>

Note:
The point of this question is not just to get back the value I ultimately want.
I can do that by simply not using Optional.
I would like an elegant solution so I can start returning Optional.
Explanation of what I tried to do:
I used the QueryGateway with a signature that will query my handler.
I broke out my code so you can see that on my CompletableFuture I will do a blocking get in order to retrieve the Optional that contains the object I really want.
Note that I am not looking for a class that holds my optional.
If this is not elegant then I may as well just do my null check.
The call to the query works, but I get the following error:
org.axonframework.axonserver.connector.query.AxonServerQueryDispatchException: CANCELLED: HTTP/2 error code: CANCEL
Received Rst Stream
AXONIQ-5002
58484#DESKTOP-CK6HLMM
Example of code that initiates the query:
UserProfileOptionByUserQuery q = new UserProfileOptionByUserQuery(userId);
CompletableFuture<Optional<UserProfile>> query =
    queryGateway.query(q, ResponseTypes.optionalInstanceOf(UserProfile.class));
Optional<UserProfile> optional = query.get();
Error occurs on the query.get() invocation.
Example of my Query Handler:
@QueryHandler
Optional<UserProfile> handle(UserProfileOptionByUserQuery query, @MetaDataValue(USER_INFO) UserInfo userInfo) {
    assertUserCanQuery(query, userInfo);
    return userProfileRepository.findById(query.getUserId());
}
The query handler works fine.
Other efforts such as using OptionalResponseType would not initiate my query as desired.
I think the key lies with the exception you are receiving Stephen.
Just to verify for my own good, I've tested the following permutations when it comes to Optional query handling:
Query Handler returns Optional, Query Dispatcher uses OptionalResponseType
Query Handler returns MyType, Query Dispatcher uses OptionalResponseType
Query Handler returns Optional, Query Dispatcher uses InstanceResponseType
Additionally, I've tried out these permutations with both the SimpleQueryBus and Axon Server. All three options worked completely fine for me on both buses.
This suggests to me that you should dive into the AxonServerQueryDispatchException you are receiving.
Hence, I am going to give you a couple of follow-up questions to further deduce what the problem is. I'd suggest updating your original question with the response(s) to them.
Do you have a more detailed stack trace, by chance?
And, what versions of Axon Framework and Axon Server are you using?
Are you on the Standard Edition or the Enterprise Edition?
Does this behavior only happen for this exact Optional query handler you've shared with us?

In Disassembler pipeline component - Send only last message out from GetNext() method

I have a requirement where I will be receiving a batch of records. I have to disassemble the batch and insert the data into a DB, which I have completed. But I don't want any message to come out of the pipeline except a last, custom-made message.
I have extended FFDasm and called Disassemble(), but then GetNext() returns every debatched message, and they fail as there are no subscribers. I want to send nothing out from GetNext() until the last message.
Please help if anyone has already implemented this requirement. Thanks!
If you want to send only one message from GetNext, you have to call the base Disassemble in your Disassemble method and drain all the messages (you can enqueue these messages to manage them in GetNext):
public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    try
    {
        base.Disassemble(pContext, pInMsg);
        IBaseMessage message = base.GetNext(pContext);
        while (message != null)
        {
            // Only store one message
            if (this.messagesCount == 0)
            {
                // _messages is a Queue<IBaseMessage>
                this._messages.Enqueue(message);
                this.messagesCount++;
            }
            message = base.GetNext(pContext);
        }
    }
    catch (Exception ex)
    {
        // Manage errors
    }
}
Then in the GetNext method you have the queue, and you can return whatever you want:
public new IBaseMessage GetNext(IPipelineContext pContext)
{
    // Return null when the queue is empty to signal the end of the interchange
    return _messages.Count > 0 ? _messages.Dequeue() : null;
}
The recommended approach is to publish the messages from the disassemble stage to the BizTalk MessageBox database and use a DB adapter to insert them into the database. Publishing to the MessageBox and using an adapter gives you more design and performance options and decouples your DB insert from the receive logic. Also, if in future you want to reuse the same messages for something else, you would be able to do so.
Even then for any reason if you have to insert from pipeline component then do the following:
Please note, the GetNext() method of the IDisassembler interface is not invoked until the Disassemble() method is complete. Based on this, you can use the following approach, assuming you have encapsulated FFDASM within your own custom component:
Insert all disassembled messages in the Disassemble method itself and enqueue only the last message into a Queue class variable. In GetNext(), return the dequeued message; when the queue is empty, return null. You can optimize the DB insert by inserting multiple rows at a time and saving them in batches, depending on volume. Please note this approach may run into performance issues depending on the size of the file and the number of rows being inserted into the DB.
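The drain-then-publish-last approach above can be sketched as a small state machine. The names below are illustrative, not the BizTalk API: `disassemble` drains a batch, inserts each record, and keeps only the last message; `getNext` returns it once and then `null` to end the stage.

```typescript
// Illustrative sketch of the queue-based disassembler approach.
class LastMessageDisassembler {
  private queue: string[] = [];

  disassemble(batch: string[]): void {
    let last: string | undefined;
    for (const message of batch) {
      insertIntoDb(message); // hypothetical per-record DB insert
      last = message;
    }
    if (last !== undefined) {
      this.queue.push(last); // only the last message gets published
    }
  }

  getNext(): string | null {
    // null signals that there are no more messages to emit
    return this.queue.length > 0 ? this.queue.shift()! : null;
  }
}

const inserted: string[] = [];
function insertIntoDb(record: string): void {
  inserted.push(record); // stand-in for the real DB call
}

const component = new LastMessageDisassembler();
component.disassemble(["r1", "r2", "r3"]);
```

After `disassemble` runs, every record has been inserted, but only `"r3"` is ever handed downstream; the second `getNext` call returns `null`.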
I am calling DBInsert SP from GetNext()
Oh...so...sorry to say, but you're doing it wrong and actually creating a bunch of problems doing this. :(
This is a very basic scenario to cover with BizTalk Server. All you need is:
A Pipeline Component to promote BTS.InterchangeID
A Sequential Convoy Orchestration Correlating on BTS.InterchangeID and using Ordered Delivery.
In the Orchestration, call the SP, transform to SOAP, call the SOAP endpoint, whatever you need.
As you process the Messages, check for BTS.LastInterchangeMessage, then perform your close-out logic.
To be 100% clear, there are no practical 'performance' issues here. By guessing about 'performance', you've actually created the problem you were trying to solve, and created a bunch of support issues for later on, sorry again. :( There is no reason not to use an Orchestration.
As noted, 25K records isn't a lot. Be sure to have the Receive Location and Orchestration in different Hosts.

Determine whether a file was deleted or overwritten - Firebase Storage OnDelete()

Is there a way to determine whether a file was truly deleted, or whether it was updated (overwritten) when OnDelete() is triggered, other than calling file.exists()?
OnDelete() is known to be triggered when a file was deleted or overwritten - this is an expected behavior, mentioned in the documentation.
I use it in my app when a user deletes their profile picture. So when it's triggered, I can't tell whether the picture was deleted or updated, because it would be triggered in both cases. So I have to manually check what happened, and this is exactly my problem.
I can obviously use file.exists() for the job, but I consider it an expensive call (or am I wrong?) because it accesses Storage again, and it seems like there must be a way to avoid it, but I'm not sure.
If there is no way, can I safely assume that exists() will never precede the overwrite event, causing a false negative? (I'm worried it would run before the overwrite completes, find no existing file, and return false.)
My solution (written in Go, but the principle behind it remains the same) for when not using versioning:
func main(ctx context.Context, event GCSEvent) error {
	obj := bucket.Object(event.Name)
	_, err := obj.Attrs(ctx) // the attributes are discarded; only the error matters
	if err != nil {
		if err == storage.ErrObjectNotExist {
			// The object is gone, so the file was truly deleted
			onFileDeleted()
			return nil
		}
		// Something else went wrong, report it
		log.Printf("Getting attributes error: %v", err)
		return err
	}
	// The object still exists, so the file was merely overwritten
	return nil
}

func onFileDeleted() {
	log.Println("File deleted")
}
And a short explanation:
Based on the file name, first get a reference to the object.
From the object reference, try reading its attributes.
If you get an error, check whether it is because the object doesn't exist.
If it is, do whatever you want to happen when the file is deleted.
According to the documentation, if you use a versioning bucket, OnDelete() is only triggered "when a version is permanently deleted (but not when an object is archived)".
In order to transform your bucket to a versioning one, follow this documentation: https://cloud.google.com/storage/docs/using-object-versioning#enable

Meteor publications auth error or this.ready

In my publish methods, should I throw an error if an unauthenticated user tries to subscribe to a publication, or should I return this.ready()? (in CoffeeScript)
this:
Meteor.publish "secretInfo", ->
  return @error(new Meteor.Error(422, "Permission denied")) unless @userId
  return Secrets.find({})
or that:
Meteor.publish "secretInfo", ->
  return @ready() unless @userId
  return Secrets.find({})
The generally accepted solution is the latter:
return @ready() unless @userId
I try to avoid throwing errors in publications because the UI can (depending on how you implemented it) get stuck in a loading state unless your publishers eventually call ready() (either explicitly, or implicitly if you return a cursor or other valid value).
This is also pointed out in the guide:
In the case of a logged-out user, we explicitly call this.ready(), which indicates to the subscription that we’ve sent all the data we are initially going to send (in this case none). It’s important to know that if you don’t return a cursor from the publication or call this.ready(), the user’s subscription will never become ready, and they will likely see a loading state forever.
