The usual Date object in Cloudflare Workers always returns 1 Jan 1970...
What is the proper way to get the current datetime in a Worker's code?
Thanks,
G
The Date object only returns 1970-01-01 when executed at the global scope. If you use it during the event handler for a request, it will correctly return the current date.
let globalDate = Date.now(); // always zero

addEventListener("fetch", event => {
  let localDate = Date.now(); // will return the actual current date
});
Background
The reason for this is that Cloudflare Workers runs the global scope at an unspecified time. It might be on-demand when a request arrives, but it could be earlier. In theory, Workers could even execute the global scope only once ever, and then snapshot the state and start from the snapshot when executing on the edge. In order to ensure that such different implementation options do not affect the behavior of deployed workers, the Workers Runtime must ensure that the global scope's execution is completely deterministic. Among other things, that means Date.now() must always return the same value -- zero -- when executed at the global scope.
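The practical consequence can be sketched in plain Node (not the Workers runtime itself; `loadTime`, `handleRequest`, and the shape of the return value are illustrative): anything captured at module/global scope is evaluated exactly once at load time, so time-dependent values must be read inside the handler.

```javascript
// Plain-Node sketch (not the Workers runtime): a value captured at module
// scope is evaluated once at load time, while a value read inside the
// handler reflects each individual request.
const loadTime = Date.now(); // "global scope": evaluated exactly once

function handleRequest() {
  const requestTime = Date.now(); // per-request: always current
  return { loadTime, requestTime };
}

const first = handleRequest();
const second = handleRequest();
// loadTime is identical across requests; requestTime tracks each call.
console.log(second.loadTime === first.loadTime); // true
```

In Workers the same principle holds, with the additional twist that the frozen global-scope clock reads as zero rather than the deploy time.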
I have written a flow which creates a transaction that outputs a new state (TransactionBuilder.signInitialTransaction), and then passes it to FinalityFlow to notarize/record/broadcast it. My client application starts this flow over RPC with CordaRPCOps.startFlowDynamic and waits on the returned CordaFuture's getOrThrow(). This is rather slow, since FinalityFlow only returns once it has delivered the transaction to all other parties/nodes (in fact, if a remote node is down, it seems to never return).
I figured I could speed things up by letting my application wait only for FinalityFlow to have completed notarizeAndRecord(), as I should then have the tx/states in my node's vault and can safely assume that the other nodes will eventually have the tx delivered and accept it. I implemented this using ProgressTracker, waiting only until FinalityFlow sets currentStep to BROADCASTING.
However, what I'm observing is that if I query the vault (using CordaRPCOps.vaultQueryByCriteria) for the new state very shortly after notarizeAndRecord has returned, I sometimes do not yet get it back. Is this a bug, or rather some deliberate asynchronous behavior where the database is not immediately written to?
To work around this I then tried to synchronize with the vault inside my flow, in order to update the progressTracker only after the tx/state was actually written to the vault:
val stx = serviceHub.signInitialTransaction(tx)
serviceHub.vaultService.rawUpdates.subscribe {
    logger.info("receiving update $it")
    if (it.produced.any { it.ref.txhash == stx.id }) {
        progressTracker.currentStep = RECORDED
    }
}
subFlow(FinalityFlow(stx))
I can see the update in the node-logs, yet a subsequent vault-query by the RPC-Client (which also shows in the node-logs, after the update) for that very state still does not return anything if executed immediately afterwards...
I am running Corda v2.0.
I do not know whether vault writes are synchronous.
However, you can side-step this issue by creating an observable on the vault so that you are notified when the new state is recorded. Here's an example where we update a state using its linear ID, then wait for vault updates matching that linear ID:
proxy.startFlowDynamic(UpdateState::class.java, stateLinearId)
val queryCriteria = QueryCriteria.LinearStateQueryCriteria(linearId = listOf(stateLinearId))
val (snapshot, updates) = proxy.vaultTrackBy<MyLinearState>(queryCriteria)
updates.toBlocking().subscribe { update ->
    val newVaultState = update.produced.single()
    // Perform action here.
}
I've spent a fair amount of time looking into the Realm database mechanics, and I can't figure out if Realm is using row-level read locks under the hood for data selected during write transactions.
As a basic example, imagine the following "queue" logic
assume the queue has an arbitrary number of jobs (we'll say 5 jobs)
async getNextJob() {
  let nextJob = null;
  this.realm.write(() => {
    let jobs = this.realm.objects('Job')
      .filtered('active == FALSE')
      .sorted([['priority', true], ['created', false]]);
    if (jobs.length) {
      nextJob = jobs[0];
      nextJob.active = true;
    }
  });
  return nextJob;
}
If I call getNextJob() twice concurrently and row-level read blocking isn't occurring, there's a chance that both calls will return the same job object when we query for jobs.
Furthermore, if I have outside logic that relies on up-to-date data in read logic (i.e. reading job.active == false when it is actually true at the current time), I need the read to block until update transactions complete. MVCC reads returning stale data do not work in this situation.
If read locks are being set in write transactions, I could make sure I'm always reading the latest data like so
let active = null;
this.realm.write(() => {
  const job = this.realm.pseudoQueryToGetJobByPrimaryKey();
  active = job.active;
});
// Assuming the above write transaction blocked the read until
// any concurrent updates touching the same job committed,
// the value of active can be trusted at this point in time.
if (active === false) {
  // code to start the job here
}
So basically, TL;DR does Realm support SELECT FOR UPDATE?
PostgreSQL
https://www.postgresql.org/docs/9.1/static/explicit-locking.html
MySQL
https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html
So basically, TL;DR does Realm support SELECT FOR UPDATE?
Well if I understand the question correctly, the answer is slightly trickier than that.
If there is no Realm Object Server involved, then realm.write(() => { ... }) disallows any other writes at the same time, and updates the Realm to its latest version when the transaction is opened.
If a Realm Object Server is involved, then I think this still stands locally, but Realm Sync manages the updates from remote, in which case the conflict-resolution rules apply for remote data changes.
Realm does not allow concurrent writes. There is at most one ongoing write transaction at any point in time.
If the async getNextJob() function is called twice concurrently, one of the invocations will block on realm.write().
SELECT FOR UPDATE then works trivially, since there are no concurrent updates.
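The single-writer guarantee can be illustrated with a toy model (plain JavaScript, not the Realm API; `write`, `getNextJob`, and the `jobs` array below are stand-ins): if every write transaction runs to completion before the next one begins, two concurrent getNextJob() calls cannot claim the same job.

```javascript
// Toy model of a single-writer store (not Realm itself): writes are
// serialized through a promise chain, so the check-and-claim is atomic
// with respect to other writers.
const jobs = [
  { id: 1, active: false },
  { id: 2, active: false },
  { id: 3, active: false },
];

let writeLock = Promise.resolve(); // the "one ongoing write transaction"

function write(fn) {
  const run = writeLock.then(fn);
  writeLock = run.catch(() => {}); // the next write waits for this one
  return run;
}

function getNextJob() {
  return write(() => {
    const next = jobs.find(j => !j.active); // "SELECT ... WHERE active = false"
    if (next) next.active = true;           // "... FOR UPDATE": claim inside the txn
    return next || null;
  });
}

// Two concurrent callers are serialized and claim distinct jobs.
Promise.all([getNextJob(), getNextJob()]).then(([a, b]) => {
  console.log(a.id !== b.id); // true
});
```

The interesting property is that the claim (`next.active = true`) happens inside the same serialized section as the query, which is exactly what SELECT FOR UPDATE buys you in SQL.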
I have a micro-service which is involved in an OAuth 1 interaction. I'm finding myself in a situation where two runs of the Lambda function with precisely the same starting state have very different outcomes (where state is considered the "event" passed in, the environment variables, and the "stageParameters" from the API Gateway).
Here's a CloudWatch log that shows two back-to-back runs:
You can see that while the starting state is identical, the execution path changes pretty quickly. In the second case (the failure case), you see the log entry "Auth state changed: null" ... which is very odd indeed, because it is logged before even the first line of code of the handler is executed. Here's the beginning of the function's handler:
export const handler = (event, context, cb) => {
console.log('EVENT:\n', JSON.stringify(event, null, 2));
So where is this premature logging entry coming from? Well, one must assume that it somehow is left over from prior executions. Let me demonstrate ... it is in fact an event listener that was setup in the prior execution. This function interacts with a Firebase DB and the first time it connects it sets the following up:
auth.signInWithEmailAndPassword(username, password)
.then((result) => {
auth.onAuthStateChanged(this.watchAuthState);
where the watchAuthState function is simply:
watchAuthState(user) {
console.log(`Auth state changed:\n`, JSON.stringify(user, null, 2));
}
This seems to mean that when I run the function a second time I am already "initialized" with the Firebase DB, but apparently the authentication has been invalidated. My number one aim is to just get back to a predictive state model and have it execute precisely the same each time.
If there are sneaky ways to reuse cached state between Lambda executions in resource-efficient ways, then I guess that too would be interesting, but only if we can do it while achieving the predictive state machine.
Regarding the order of the logs, look at the ID that comes after each timestamp at the beginning of each line. I believe this is the invocation ID. The two lines you have highlighted in orange are from different invocations of the function. The EVENT log is the first line logged by the invocation with the ID ending in 754ee. The "Auth state changed: null" line is a log entry from the earlier invocation, with the invocation ID ending in c40d5.
It looks like you are setting auth state to null at the end of an invocation, but the Firebase connection is global, so the second function invocation thinks the Firebase connection is already initialized, but then it throws errors because the authentication was nulled out.
My number one aim is to just get back to a predictive state model and
have it execute precisely the same each time.
Then you need to be aware of Lambda container reuse, and not use any global variables.
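Container reuse can be sketched by calling a handler twice in plain Node (a simulation, not the real Lambda runtime; `warmState` and `handler` are illustrative names): module-scope state survives across invocations in a warm container, while handler-scope state is rebuilt every time.

```javascript
// Simulating a warm Lambda container: module-scope state persists across
// invocations; handler-scope state is fresh for every invocation.
let warmState = 0; // module scope: initialized once per container, not per invocation

function handler(event) {
  warmState += 1;        // leaks across invocations in a warm container
  let perInvocation = 1; // handler scope: rebuilt on every invocation
  return { warmState, perInvocation };
}

// Two "invocations" hitting the same warm container:
console.log(handler({})); // { warmState: 1, perInvocation: 1 }
console.log(handler({})); // { warmState: 2, perInvocation: 1 }  <- warmState leaked
```

The Firebase listener in the question is exactly such module-scope state: it outlives the invocation that registered it, so its callbacks fire into later invocations.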
I have an EC2 instance running a small node script connecting to Firebase. Strangely enough, it happens quite often on a small instance that the set operation gets executed immediately but the callback function only gets called much later (between 30s and 2 minutes). Do you see any reason why it would happen that way?
var start = new Date().getTime();
console.log('creating');
// Create workspace
rootRef.child('spaces').child(chid).set(req.space, function(error) {
  var end = new Date().getTime();
  var time = end - start;
  console.log('- created', error, time);
});
The bug is directly related to Node 0.11 (the set() callback is only called the first time in my scenario). Just revert to 0.10.x and it's all fixed!
I've been facing the same issue. The set() callback is not being invoked at all. I noticed, however, that if I run a code snippet similar to yours in a standalone file, the callback is invoked very quickly.
It turned out that if you're installing listeners on the same node you're calling set() on (i.e., on('child_added'), on('child_removed'), etc.) and that node has a huge number of records, it'll simply take ages.
I removed the listeners (to test) and set() started to invoke the callback very quickly.
I hope this helps!
I was playing around with asynchronous features of .NET a little bit and came up with a situation that I couldn't really explain. When executing the following code inside a synchronous ASP.NET MVC controller
var t = Task.Factory.StartNew(()=>{
var ctx = System.Web.HttpContext.Current;
//ctx == null here
},
CancellationToken.None,
TaskCreationOptions.None,
TaskScheduler.FromCurrentSynchronizationContext()
);
t.Wait();
ctx is null within the delegate. Now to my understanding, the context should be restored when you use the TaskScheduler.FromCurrentSynchronizationContext() task scheduler. So why isn't it here? (I can, btw, see that the delegate gets executed synchronously on the same thread).
Also, from msdn, a TaskScheduler.FromCurrentSynchronizationContext() should behave as follows:
All Task instances queued to the returned scheduler will be executed
through a call to the Post method on that context.
However, when I use this code:
var wh = new AutoResetEvent(false);
SynchronizationContext.Current.Post(s => {
    var ctx = System.Web.HttpContext.Current;
    // ctx is set here
    wh.Set();
}, null);
wh.WaitOne();
The context is actually set.
I know that this example is little bit contrived, but I'd really like to understand what happens to increase my understanding of asynchronous programming on .NET.
Your observations seem to be correct; it is a bit puzzling.
You specify the scheduler as TaskScheduler.FromCurrentSynchronizationContext(). This wraps the current context in a SynchronizationContextTaskScheduler, which simply posts its queued tasks back to that captured SynchronizationContext.
So the task scheduler has access to the same SynchronizationContext, which should reference LegacyAspNetSynchronizationContext. So surely it appears that HttpContext.Current should not be null.
In the second case, when you use a SynchronizationContext directly (refer to the MSDN article), the thread's context is shared with the posted callback:
"Another aspect of SynchronizationContext is that every thread has a
“current” context. A thread’s context isn’t necessarily unique; its
context instance may be shared with other threads."
SynchronizationContext.Current is provided by LegacyAspNetSynchronizationContext in this case, and internally it has a reference to the HttpApplication.
When the Post method has to invoke the registered callback, it calls HttpApplication.OnThreadEnter, which ultimately results in the current thread's context being set as HttpContext.Current.
All the classes referenced here are defined as internal in the framework, which makes it a bit difficult to investigate further.
PS: Inspecting both SynchronizationContext objects in the debugger shows that they do in fact point to LegacyAspNetSynchronizationContext.
I was googling for HttpContext info some time ago, and I found this:
http://odetocode.com/articles/112.aspx
It's about threading and HttpContext. There is a good explanation:
The CallContext provides a service extremely similar to thread local storage (except CallContext can perform some additional magic during a remoting call). Thread local storage is a concept where each logical thread in an application domain has a unique data slot to keep data specific to itself. Threads do not share the data, and one thread cannot modify the data local to a different thread. ASP.NET, after selecting a thread to execute an incoming request, stores a reference to the current request context in the thread’s local storage. Now, no matter where the thread goes while executing (a business object, a data access object), the context is nearby and easily retrieved.
Knowing the above we can state the following: if, while processing a request, execution moves to a different thread (via QueueUserWorkItem, or an asynchronous delegate, as two examples), HttpContext.Current will not know how to retrieve the current context, and will return null. You might think one way around the problem would be to pass a reference to the worker thread.
So you have to create a reference to HttpContext.Current in some variable, and that variable can then be accessed from the other threads you create in your code.
Your results are odd - are you sure there's nothing else going on?
Your first example (with Task) only works because Task.Wait() can run the task body "inline".
If you put a breakpoint in the task lambda and look at the call stack, you will see that the lambda is being called from inside the Task.Wait() method - there is no concurrency. Since the task is being executed with just normal synchronous method calls, HttpContext.Current must return the same value as it would from anywhere else in your controller method.
Your second example (with SynchronizationContext.Post) will deadlock, and your lambda will never run.
This is because you are using an AutoResetEvent, which doesn't "know" anything about your Post. The call to WaitOne() will block the thread until the AutoResetEvent is Set. At the same time, the SynchronizationContext is waiting for the thread to be free in order to run the lambda.
Since the thread is blocked in WaitOne, the posted lambda will never execute, which means the AutoResetEvent will never be set, which means the WaitOne will never be satisfied. This is a deadlock.
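The deadlock mechanism can be mimicked with a toy single-threaded message loop (plain JavaScript, not .NET; `post`, `pump`, and `flag` are illustrative stand-ins for Post and the AutoResetEvent): posting a callback and then busy-waiting for its effect, without ever letting the loop run, hangs forever.

```javascript
// Toy message loop standing in for the SynchronizationContext: callbacks
// posted to the queue only run when the loop is pumped.
const queue = [];
function post(cb) { queue.push(cb); }                 // like SynchronizationContext.Post
function pump() { while (queue.length) queue.shift()(); }

let flag = false;                                     // like the AutoResetEvent
post(() => { flag = true; });                         // the "lambda" that would set it

// Blocking wait without pumping the loop (bounded so the sketch terminates):
let spins = 0;
while (!flag && spins < 1000000) spins++;
console.log(flag); // false: the posted callback never ran -> deadlock analog

pump();            // once the loop is allowed to run, the callback fires
console.log(flag); // true
```

The bounded busy-wait plays the role of WaitOne(): as long as the "thread" is spinning, the queue is never drained, so the flag it is waiting for can never be set.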