Firebase Transactions null or not updating the first time through - firebase

I have clients connecting to the database with javascript.
I also have code running on my server, and I'm trying to do a transaction following the example shown here:
https://firebase.google.com/docs/database/server/save-data#section-transactions
Here's a simplified structure of my data
users:
  userguid
    resource : "room1"
    printer : "printer1"
resources
  rooms
    room1
  printers
    printer1
      counter : 15
The web client would write a request to their own node under "users".
The server watches for those requests and updates the counter for that resource.
If I run the transaction from a child_added listener, I get null for the counter, so I can't increment the number. If I also listen for child_changed, then I get the correct counter value.
I understand from the documentation that the value in the transaction can be null, but I'm not sure how to fix my use case to do what I need.
Basically I don't want the client touching the counter, I want the server to read and update that value.
I've gone through this post
Firebase runTransaction not working
but I'm not clear on how to structure my code to deal with this.
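For reference, here is a minimal sketch of the pattern the Firebase documentation describes for this case: handle the null that the update function may receive on its first run (treat the missing counter as 0) and let the SDK retry once it has the server value. The initialization details (credentials file, database URL) and the exact counter path are placeholders inferred from the sample structure above, not code from the question.

var firebase = require('firebase');

// Placeholder initialization, following the server setup in the linked docs;
// the credentials file name and database URL are assumptions.
firebase.initializeApp({
  serviceAccount: 'serviceAccount.json',
  databaseURL: 'https://your-app.firebaseio.com'
});

var db = firebase.database();

// The server watches for new client requests written under /users.
db.ref('users').on('child_added', function(snapshot) {
  var request = snapshot.val(); // e.g. { resource: "room1", printer: "printer1" }

  // Counter path assumed from the sample structure above; adjust to your layout.
  var counterRef = db.ref('resources/printers/' + request.printer + '/counter');

  counterRef.transaction(function(current) {
    // On the first run the update function may be called with null (no local
    // cache yet); returning a value anyway lets the SDK retry against the
    // server value and commit the real increment.
    return (current || 0) + 1;
  }, function(error, committed, counterSnap) {
    if (error) {
      console.error('Transaction failed:', error);
    } else if (committed) {
      console.log('Counter is now', counterSnap.val());
    }
  });
});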

Related

Getting Firebase timestamp before pushing data

I have a chat app powered by Firebase, and I'd like to get a timestamp from Firebase before pushing any data.
Specifically, I'd like to get the time that a user pushes the send button for a voice message. I don't actually push the message to Firebase until the upload has succeeded (so that the audio file is guaranteed to be there when a recipient receives the message). If I were to simply use Firebase.ServerValue.TIMESTAMP, there could be an ordering issue due to different upload durations (a very short message following a very long one, for example).
Is there any way to ping Firebase for a timestamp that I'm not seeing in the docs? Thank you!
If you want to separate the click from the actual writing of the data:
var newItemRef = ref.push();
uploadAudioAndThen(audioFile, function(downloadURL) {
  newItemRef.set({
    url: downloadURL,
    savedTimestamp: Firebase.ServerValue.TIMESTAMP
  });
});
This does a few things:
it creates a reference for the item before uploading. This reference will have a push ID based on when the upload started. Nothing is written to the database at this point, but the key of the new location is determined.
it then does the upload and "waits for it" to complete.
in the completion handler of the upload, it writes to the new location it determined in step 1.
it writes the server timestamp at that moment, which is when the upload has finished.
So you now have two timestamps. One is when the upload started and is encoded into the key/push ID of the new item; the other is when the upload completed and is in the savedTimestamp property.
To get the 3 most recently started uploads that have already completed:
ref.orderByKey().limitToLast(3).on(...
To get the 3 most recently finished uploads:
ref.orderByChild('savedTimestamp').limitToLast(3).on(...
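As a hypothetical illustration of consuming the second query (the callback body and field names are just for illustration, not from the original answer):

ref.orderByChild('savedTimestamp').limitToLast(3).on('child_added', function(snapshot) {
  // Fires once for each of the three most recently finished uploads,
  // and again as newer completions move into the window.
  var item = snapshot.val();
  console.log(item.url, item.savedTimestamp);
});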

How to execute a query in eval.xqy file with app-server id

I need to run a query that imports modules from the pod.
If I run a simple query without importing modules, passing the database Id as below, it works.
let $queryParam := fn:concat("?query=",xdmp:url-encode($query),"&amp;eval=",$dataBaseId,":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
If the query has import module statements, then it throws an error (File Not Found).
So I tried getting the app-server id and passing that instead of the database id, as below:
let $queryParam := fn:concat("?query=",xdmp:url-encode($query),"&amp;eval=",$serverId,":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
How do I pass the server id so that the query executes against a particular app server?
Is this MarkLogic 8 or earlier? I ask because the rewrite options in 8 allow for dynamic switching of module databases before execution (among lots of other amazing goodies). This may be what you want, because you can look at the query parameters at that point and build logic into the rewrite rules.
Otherwise, can you explain in more detail what you are trying to accomplish in the end? By the time your code ran, it had already executed in the context of a particular app server, so asking to execute against another app server by analysing the query parameters is a bit too late (because you are already using the app server).
[edit] The following is in response to the comments since provided. This is a messy response because the actual ticket and comments still do not paint a completely clear picture, but if you stitch them together, a problem statement emerges that I can respond to.
The original author of the question confirmed via comments that they are "trying to hit an app server on a different node than the one that you actually posted to"
OK.. This is the response to that clarification:
That is not possible. Your request is already being processed by a thread on the node that you hit with your HTTP request. MarkLogic is a cluster, but it does not share threads (or anything else, for that matter). The choices are:
a redirect to the proper node
possibly using the current node to make the request to the other app server on your behalf.
But that ties up the first thread as well as a thread on the other node, adds the HTTP communication overhead, and you need to have an app server listening for this purpose.
If this is a fire-and-forget type of situation, then you can hit any node and save the data/request in a document in the DB using a URI naming convention that indicates what app server it is for, and by way of insert triggers (with a URI-prefix for their server id), pick up the request from the DB and process it.

BizTalk TPE continuations and uncompleted activities

Within my BizTalk 2010 solution I have the following orchestration that is started by the receipt of a courier update message. It then makes a couple of calls to AX's WCF AIF via two solicit-response ports: a Find request and an Update request.
For this application we are meeting audit requirements by using the tracking database to store the message body. We are able to link to this from references provided in BAM when we use the TPE. The result for the customer is nice: they get a web portal from which they can view BAM data such as message timings, but they can also click a link to pull up a copy of the message payloads from the tracking db. Although this works well and makes use of out-of-the-box functionality for payload storage, it has led to relatively complex jobs for archiving the tracking db (but that's another story!).
My problem relates to continuation. I have created the following Tracking Profile:
I have associated the first of the orchestration's two solicit-response ports with the continuation RcvToOdx, based on the interchange Id, and this works; I get the following single record written to the Completed activity table:
So, in this case we can assume that an entry was first written on receipt of the inbound message, with the TimeReceivedIntoBts column populated by the physical file receive port. The FindRequestToAx column was then populated by the physical WCF send port. Because this was bound to the RcvToOdx continuation Id and used the same interchange Id as the previously mentioned file receive message, the update was made to the same activity. Notification of the resulting response was also written to the same activity record, into the FindResponseFromAx column.
My problem is that I would also like BAM to record a timestamp for the subsequent UpdateRequestToAx. Because this request will have the same interchange Id as the previous messages I would expect to be able to solve this problem by simply binding the AxUpdate send port (both send and receive parts of it) to the same continuation id, as seen in the following screen grab:
I also map the UpdateRequestToAx milestone to the physical Ax_TrackAndTraceUpdate_SendPort (Send) and the OrchestrationCompleted milestone to Ax_TrackAndTraceUpdate_SendPort (Receive)
Unfortunately, when I try this I get the following result:
Two problems can be seen from the above db screen grab:
1. The date for the update send port was inserted into a new activity record.
2. The record was never completed.
I was surprised by this because I'd thought that, since the update port was enlisted to use the same continuation and the single InterchangeId was being used by all ports as the continuation Id, all the data milestones would be applied to a single activity.
In looking for a solution to this problem I came across the following post on Stack Overflow suggesting that the continuation must be closed from the BAM API: BAM Continuation issue with TPE. So, I tried this by calling the following method from an expression shape in my orchestration:
public static void EndBAMContinuation(string continuationId)
{
    // Close the BAM activity for this continuation so it does not remain open.
    OrchestrationEventStream.EndActivity(CARRIER_ORDER_ACTIVITY_NAME, continuationId);
}
I can be sure the method was called because I wrapped the call with a log entry from the CAT framework, which I could see in the debug view:
I checked the RcvToOdx{867… continuation Id against the non-closed activity and confirmed they matched:
This would suggest that perhaps the request to end the continuation is being processed before the milestone of the received message from the UpdateAx call?
When I query the Relationships tables I get the following results:
Could anyone please advise why the UpdateToAx activity is not being completed?
I realise that I may be able to solve the problem using only the BAM API but I really want to exhaust any possibility of the TPE being fit for purpose first since the TPE is widely used in other BizTalk solutions of the organisation.
To solve this problem I created a second continuation in the TPE:
a "RcvToOdx" continuation for the Find and an "OdxToUpdate" continuation for the Update. The source is the InterchangeId on the initial receive port, UPS_TrackAndTrace (the same as for the "RcvToOdx" continuation), and the destination Id is the InterchangeId mapped to the update send port.

Symfony, Swift Mailer, CRON JOBS, & Shared Hosting Server

Before I tackle this solution, I wanted to run it by the community to get feedback.
Questions:
Is my approach feasible? i.e. can it even be done this way?
Is it the right/most efficient solution?
If it isn’t the right solution, what would be a better approach?
Problems:
Need to send mass emails through the application.
The shared hosting server only permits a maximum of 500 emails to be sent per hour before the sender gets labeled a spammer.
The server times out while sending batch emails.
Proposed Solution:
Upon task submittal (i.e. the user provides all the necessary email information using a form and frontend template, selects the target audience, etc.), the action will then:
Determine how many records (from a stored DB of contacts) the email will be sent to.
If the number of records in #1 above is more than 400:
Assign a batch number to all these records in the DB.
Run a CRON job that:
Every hour, selects 400 records in batch “X” and sends the saved email template until there are no more records with batch “X”. Each time a batch of 400 is sent, its batch number is erased (so it won’t be selected again the following hour).
If there is an unfinished CRON JOB scheduled ahead of it (i.e. currently running), it will be placed in a queue.
Other clarification:
To send these emails I simply iterate over the records and send with Swift Mailer using the following code:
foreach ($list as $record)
{
    mailers::sendMemberSpam($record, $emailParamsArray);
    // where the above simply contains: sfContext::getInstance()->getMailer()->send($message);
}
where $list is the list of records with a batch_number of “X”.
I’m not sure this is the most efficient of solutions, because it seems to be bogging down the server, and will eventually time out if the list or email is long.
So, I’m just looking for opinions at this point... thanks in advance.

Send notification if expected message did not arrive in BizTalk

I have a BizTalk receive port monitoring an FTP location. I expect a file to arrive at least once per day in that location and for BizTalk to pick it up and kick off an orchestration. This part is working fine.
However, sometimes the sender fails to send a message during a day, in which case I want an email to be sent to notify the users that something is amiss.
I could solve this outside of BizTalk, by creating a daily job that looks in our database for processed files and makes sure there is at least one in any given day. However, I'd prefer to solve this "in line" with the BizTalk solution that is already in place, and not deploy a separate, unrelated job which will increase maintenance headaches.
Is there any functionality in BizTalk that would allow me to send a notification if a receive port doesn't receive something in a given timeframe?
Short answer: Not really.
The logic you want to implement would require a customised version of the FTP adapter. It depends on how comfortable you are with rolling up your sleeves and getting into the Adapter SDK.
If you wanted to keep your solution "Purely BizTalk", you could set up a secondary Orchestration using a SQL Receive Location tied to a stored procedure. This stored procedure executes regularly and looks for records in your "Processed File" table received in the past (business) day. If none are found, it fabricates a record and returns it via the SQL Receive Location. This would be your trigger to send the email notification.
One solution, though not elegant, is to have a secondary FILE receive location with a schedule window outside your cutoff time.
Failure scenario:
In this FILE receive location, you have an intelligent/dummy message conforming to the same schema as the FTP receive. The intelligent part is that one of the fields in the message tells us when we last received the file from FTP. The rest of the content is dummy.
Within your orchestration, you check where you received your file from. If it's the secondary receive location (using the context property BTS.ReceiveLocationName), you check the date field of this dummy/intelligent message, and if it shows that no file has arrived from FTP within the past 24 hours (or similar logic), you send an email notifying that you did not receive the file from the upstream FTP process, and also save a copy of the dummy message (received) back to the secondary FILE receive location unchanged.
Success Scenario:
Apart from normal processing, you save a copy of the dummy/intelligent message to the secondary FILE receive location, with the datetime field reflecting when you processed the file received from the FTP receive location.
Initialising:
You start with a dummy/intelligent message in the secondary FILE receive location with the datetime field value well in the past (assuming we never received the file from FTP), or with yesterday's date (assuming we received a file successfully from FTP the day before).
Overview:
Your orchestration has two trigger points.
When you receive a file via FTP
A scheduled FILE receive location, triggered after the cut-off time.
