Meteor.setTimeout() memory leak? - meteor

I've created a new project with just one file (server.js) on the server, containing this tiny piece of code that does nothing. But after running it, my node process is using about 1 GB of memory. Does anyone know why?
for (var i = 1000000; i >= 0; i--) {
  Meteor.setTimeout(function () {}, 1000);
}
Apparently the Meteor.setTimeout() function does or uses something (a closure?) that prevents the GC from reclaiming memory after the callback has executed. Any ideas?

Since you are calling this on the server side, Meteor.setTimeout is a lot more complex than it appears on the surface. It wraps setTimeout with Meteor.bindEnvironment(), which essentially binds the context of the current environment to the timeout callback. When that timeout triggers, it pulls in the context from when it was originally called.
A good example would be if you called a Meteor.method() on the server and used a Meteor.setTimeout() within it. Meteor.method() will keep track of the user who called the method. If you use Meteor.setTimeout() it will bind that environment to the callback for the timeout, increasing the amount of memory needed for an empty function().
As to why there isn't any garbage collection occurring on your server, it may not have hit its buffer yet. I tried running your test and my virtual memory hit around 1.2 GB, but it never went any higher, even after subsequent tests. Try running that code multiple times to see if memory consumption continues to increase linearly, or if it hits a ceiling and stops growing.

Related

How to replace WAIT, to avoid locked error and to free the memory

I have a question today about the usage of WAIT.
I work with an internal source-code quality team, in charge of reviewing our code and approving it. Unfortunately, the usage of the WAIT UP TO x SECONDS instruction is now forbidden and not negotiable.
Issue
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK'.
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK_2'.
If I execute this pseudo-code, I get an error (locked object) because I use shared objects (I call the function modules without specifying sync/async mode).
I can resolve this issue by the usage of WAIT.
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK'.
WAIT UP TO 5 SECONDS. " or 1, 2, 3, 4, 5, ... seconds <------------
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK_2'.
With this method (and standard functions), the system stops and waits for a fixed time.
But sometimes the system needs 1 second ... or more. We can't know the exact time needed.
And if we execute this code inside a loop over a large number of objects, the system can end up waiting almost indefinitely, until a memory dump.
(Impacted functions are related to VL32N, QA11, ... and their objects)
Need
How can the WAIT instruction be replaced?
We need to find a solution/function that has the same behavior as WAIT UP TO, but without the impact on memory (dumps, excessive consumption of resources, ...), or at least with less impact.
In fact, we need something like COMMIT WORK AND WAIT, but waiting on the result of a function rather than on the database.
Solutions?
Use a loop with a timestamp comparison and use ENQUEUE_READ to get the list of locked objects, checking whether the needed object is in this list, for up to X seconds.
It seems that this solution needs the same level of resources as WAIT.
ENQUE_SLEEP seems to have the same behavior as WAIT regarding memory (How to make an ABAP program pause?).
Refactor all the code already written, and use synchronous functions.
Anything else? Any ideas? Is it even possible?
Thanks in advance :)
Why not just put a check for the lock between the two function modules? You could put that inside a loop and exit the loop as soon as the lock from FM 1 is cleared.
I use ENQUE_SLEEP when I want to wait for a specified amount of time and then recheck something. For example you could wait 5 seconds and then check for the existence of the locks. If the objects are no longer locked, then proceed. If the locks are still there, sleep again. To avoid an infinite loop, you must have some limit on the number of times you are willing to sleep before you give up and log some kind of an error.
The problem with WAIT is that it triggers an implicit commit; ENQUE_SLEEP will not do that.

How exactly does set_timeout() work on a TcpStream?

I'm working with TcpStream. The basic structure I'm working with is:
loop {
    if /* new data in the stream */ { /* handle it */ }
    /* do a lot of other stuff */
}
So set_timeout() appears to be what I need, but I'm a little puzzled about how it works. The documentation says:
This function will set a timeout for all blocking operations (including reads and writes) on this stream. The timeout specified is a relative time, in milliseconds, into the future after which point operations will time out. This means that the timeout must be reset periodically to keep it from expiring.
So I would expect to have to reset the timeout each time before checking if new data is available, otherwise I would only have Err(TimeOut) after some time.
But that appears not to be the case: if I set a very low timeout (like 10 ms) once and for all, the loop does exactly what I want. It returns new data if there is some, and returns Err(TimeOut) if there is none.
Am I misunderstanding the documentation? Is it safe for me to rely on this behavior?
I would have expected it to work like a socket timeout, the kind you have as a property of sockets in most operating systems and which is exposed in programming languages as SO_TIMEOUT or similar. With such a socket timeout, the timer is started whenever you begin a blocking operation on the socket, like read, write, or connect. Either the operation succeeds within the time frame or the timer fires and the operation fails with a timeout. The timeout is a property of the socket, not of the operation, so there is no need to set it again before each operation.
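For comparison, here is a minimal sketch of that conventional per-operation socket timeout using the POSIX SO_RCVTIMEO option (plain C/C++ sockets, not Rust's API; fd is assumed to be an already-connected socket):
#include <sys/socket.h>
#include <sys/time.h>

// Give the socket a 10-second receive timeout. Each blocking recv() then
// either completes within 10 seconds or fails with EAGAIN/EWOULDBLOCK;
// the timer applies per operation, so it never needs to be re-armed.
void set_receive_timeout(int fd)
{
    struct timeval tv;
    tv.tv_sec = 10;
    tv.tv_usec = 0;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}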
But according to the documentation, Rust implemented something completely different. If I interpret the documentation correctly, they don't set a timeout per operation; instead they set a deadline for all operations of this type on the socket. That is, when the timer is set to 10 seconds you can have multiple reads within that time, but if a read is still active after 10 seconds it will be stopped.
When one is used to working with socket timeouts in other languages, this behavior is not the expected one, and it looks like the Rust developers have similar objections to this (experimental) API. In https://github.com/rust-lang/rust/issues/15802 they suggest renaming these kinds of functions from set..timeout to set..deadline to make the name reflect the behavior.

Qt application crashes when making 2 network requests from 2 threads

I have a Qt application that launches two threads from the main thread at start-up. Both these threads make network requests using distinct instances of the QNetworkAccessManager object. My program keeps crashing about 50% of the time and I'm not sure which thread is crashing.
There is no data sharing or signalling occurring directly between the two threads. When a certain event occurs, one of the threads signals the main thread, which may in turn signal the second thread. However, based on printed logs, I am pretty certain the crash doesn't occur during the signalling.
The structure of both threads is as follows. There's hardly any difference between the threads except for the URL etc.
MyThread::MyThread() : QThread() {
    moveToThread(this);
}

MyThread::~MyThread() {
    delete m_manager;
    delete m_request;
}

void MyThread::run() {
    m_manager = new QNetworkAccessManager();
    m_request = new QNetworkRequest(QUrl("..."));
    makeRequest();
    exec();
}

void MyThread::makeRequest() {
    m_reply = m_manager->get(*m_request);
    connect(m_reply, SIGNAL(finished()), this, SLOT(processReply()));
    // my log line
}

void MyThread::processReply() {
    if (!m_reply->error()) {
        QString data = QString(m_reply->readAll());
        emit signalToMainThread(data);
    }
    m_reply->deleteLater();
    exit(0);
}
Now the weird thing is that if I don't start one of the threads, the program runs fine, or at least doesn't crash in around 20 invocations. If both threads run one after the other, the program doesn't crash. The program only crashes about half the time when I start and run both threads concurrently.
Another interesting thing I gathered from logs is that whenever the program crashes, the line labelled with the comment my log line is the last to be executed by both the threads. So I don't know which thread causes the crash. But it leads me to suspect that QNetworkAccessManager is somehow to blame.
I'm pretty blank about what's causing the crash. I will appreciate any suggestions or pointers. Thanks in advance.
First of all, you're doing it wrong! Fix your threading first.
// EDIT
From my own experience with this pattern, I know that it can lead to many unclear crashes. I would start by cleaning this up, as it may straighten things out and make the real problem easier to find. I also don't know how you invoke makeRequest. As for QNetworkRequest: it is only a data structure, so you don't need to create it on the heap; stack construction is enough. You should also remember to protect the m_reply pointer from being overwritten (or guard it somehow). Do you call makeRequest more than once? If you do, it can lead to processReply operating on a reply that has already been deleted.
Here is what happens if you call makeRequest twice:
The first call of makeRequest assigns the m_reply pointer.
The second call of makeRequest assigns the m_reply pointer again (replacing the stored pointer but not deleting the object it pointed to).
The second request finishes before the first, so processReply is called and deleteLater() is queued for the second reply.
Somewhere in the event loop the second reply is deleted, so from now on the m_reply pointer points at freed memory.
The first reply finishes, so processReply is called again, but it operates on m_reply, which now points at garbage, so any access through m_reply can crash.
That is just one possible scenario, which is why you don't get a crash every time.
I'm not sure why you call exit(0) when the reply finishes. It is also incorrect here if you call makeRequest more than once. Remember that QThread is an interface to a single thread, not a thread pool, so you can't call start() a second time on a thread instance while it is still running. Also, if you create the network access manager in the entry point run(), you should delete it in the same place, after exec(). Remember that exec() blocks, so your objects won't be deleted before your thread exits.
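For what it's worth, here is a minimal sketch of the worker-object pattern hinted at above. The class and signal names (Worker, dataReady) are illustrative, not taken from the question, and it assumes Qt 5-style connects:
#include <QObject>
#include <QThread>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>

class Worker : public QObject {
    Q_OBJECT
public slots:
    void makeRequest() {
        // The manager is parented to the worker, so it lives and is used
        // in the worker's thread, and is cleaned up with the worker.
        if (!m_manager)
            m_manager = new QNetworkAccessManager(this);
        QNetworkRequest request(QUrl("https://example.com"));  // stack-allocated is enough
        QNetworkReply *reply = m_manager->get(request);
        connect(reply, &QNetworkReply::finished, this, [this, reply]() {
            if (reply->error() == QNetworkReply::NoError)
                emit dataReady(QString::fromUtf8(reply->readAll()));
            reply->deleteLater();  // each reply cleans itself up; no shared m_reply
        });
    }
signals:
    void dataReady(const QString &data);
private:
    QNetworkAccessManager *m_manager = nullptr;
};

// Usage sketch: no QThread subclass, no moveToThread(this)
// QThread *thread = new QThread;
// Worker *worker = new Worker;
// worker->moveToThread(thread);
// QObject::connect(thread, &QThread::started, worker, &Worker::makeRequest);
// thread->start();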

Qt QSemaphore's release() does not immediately notify waiters?

I've written a Qt console application to try out QSemaphores and noticed some strange behavior. Consider a semaphore with 1 resource and two threads getting and releasing a single resource. Pseudocode:
QSemaphore sem(1); // init with 1 resource available

thread1()
{
    while (1)
    {
        if ( !sem.tryAcquire(1 resource, 1 second timeout) )
        {
            print "thread1 couldn't get a resource";
        }
        else
        {
            sem.release(1);
        }
    }
}

// basically the same thing
thread2()
{
    while (1)
    {
        if ( !sem.tryAcquire(1 resource, 1 second timeout) )
        {
            print "thread2 couldn't get a resource";
        }
        else
        {
            sem.release(1);
        }
    }
}
Seems straightforward, but the threads will often fail to get a resource. A way to fix this is to put the thread to sleep for a bit after sem.release(1). What this tells me is that release() does not give other threads waiting in tryAcquire() access to the semaphore before the current thread loops around to the top of while(1) and grabs the resource again.
This surprises me because similar testing with QMutex showed proper behavior... i.e. another thread hanging out in QMutex::tryLock(timeout) gets notified properly when QMutex::unlock() is called.
Any ideas?
I'm not able to fully test this out or find all of the supporting links at the moment, but here are a few observations...
First, the documentation for QSemaphore.tryAcquire indicates that the timeout value is in milliseconds, not seconds. So your threads are only waiting 1 millisecond for the resource to become free.
Secondly, I recall reading somewhere (I unfortunately can't remember where) a discussion about what happens when multiple threads are trying to acquire the same resource simultaneously. Although the behavior may vary by OS and situation, it seemed that the typical result is that it is a free-for-all with no one thread being given any more priority than another. As such, a thread waiting to acquire a resource would have just as much chance of getting it as would a thread that had just released it and is attempting to immediately reacquire it. I'm unsure if the priority setting of a thread would affect this.
So, why might you get different results for a QSemaphore versus a QMutex? Well, I think a semaphore may be a more complicated system resource that would take more time to acquire and release than a mutex. I did some simple timing recently for mutexes and found that on average it was taking around 15-25 microseconds to lock or unlock one. In the 1 millisecond your threads are waiting, this would be at least 20 cycles of locking and unlocking, and the odds of the same thread always reacquiring the lock in that time are small. The waiting thread is likely to get at least one bite at the apple in the time that it is waiting, so you won't likely see any acquisition failures when using mutexes in your example.
If, however, releasing and acquiring a semaphore takes much longer (I haven't timed them but I'm guessing they might), then it's more likely that you could just by chance get a situation where one thread is able to keep reacquiring the resource repeatedly until the wait condition for the waiting thread runs out.
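To illustrate both points, here is a minimal sketch (assuming Qt's QSemaphore and QThread; the 1000 ms timeout and the yield after release are the two adjustments suggested above, and the loop count is arbitrary):
#include <QSemaphore>
#include <QThread>
#include <QDebug>

QSemaphore sem(1);  // one resource available

class Worker : public QThread {
    void run() override {
        for (int i = 0; i < 1000; ++i) {
            // tryAcquire takes its timeout in milliseconds, so 1000 == 1 second
            if (!sem.tryAcquire(1, 1000)) {
                qDebug() << "couldn't get a resource";
            } else {
                // ... use the resource ...
                sem.release(1);
                // give a waiting thread a chance before re-acquiring
                QThread::yieldCurrentThread();
            }
        }
    }
};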

Qt: setting an override cursor from a non-GUI thread

A while ago I wrote a little RAII class to wrap the setOverrideCursor() and restoreOverrideCursor() methods on QApplication. Constructing this class would set the cursor and the destructor would restore it. Since the override cursor is a stack, this worked quite well, as in:
{
    CursorSentry sentry;
    // code that takes some time to process
}
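(For reference, one plausible shape of such a sentry, since the class body isn't shown in the question; the implementation below is a guess based on the description:)
#include <QApplication>
#include <QCursor>

// RAII wrapper: the constructor pushes the wait cursor onto the override
// stack, the destructor pops it again.
class CursorSentry {
public:
    CursorSentry()  { QApplication::setOverrideCursor(Qt::WaitCursor); }
    ~CursorSentry() { QApplication::restoreOverrideCursor(); }
};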
Later on, I found that in some cases the processing code would take a perceptible time to run (say more than half a second) and in other cases it would be near-instantaneous (because of caching). It is difficult to determine beforehand which case will happen, so it still always sets the wait cursor by making a CursorSentry object. But this could cause an unpleasant "flicker" where the cursor would quickly turn from the wait cursor back to the normal cursor.
So I thought I'd be smart and I added a separate thread to manage the cursor override. Now, when a CursorSentry is made, it puts in a request to the cursor thread to go to the wait state. When it is destroyed it tells the thread to return to the normal state. If the CursorSentry lives longer than some amount of time (50 milliseconds), then the cursor change is processed and the override cursor is set. Otherwise, the change request is discarded.
The problem is, the cursor thread can't technically change the cursor because it's not the GUI thread. In most cases, it does happen to work, but sometimes, if I'm really unlucky, the call to change the cursor happens when the GUI thread gets mixed in with some other X11 calls, and the whole application gets deadlocked. This usually only happens if the GUI thread finishes processing at nearly the exact moment the cursor thread decides to set the override cursor.
So, does anyone know of a safe way to set the override cursor from a non-GUI thread? Keep in mind that most of the time the GUI thread is going to be busy processing stuff (that's why the wait cursor is needed, after all), so I can't just put an event into the GUI thread's queue, because it won't be processed until it's too late. Also, it is impractical to move the processing I'm talking about to a separate thread, because it happens during a paint event and it needs to do GUI work when it's done (figuring out what to draw).
Any other ideas for adding a delay to setting the override cursor would be good, too.
I don't think there is any other way besides a Signal-Slot connection going to the GUI thread followed by a qApp->processEvents() call, but like you said, this would probably not work well when the GUI thread is tied up.
The documentation for QCoreApplication::processEvents also has some recommended usages for long-running processing:
This function overloads processEvents(). Processes pending events for the calling thread for maxtime milliseconds or until there are no more events to process, whichever is shorter.
You can call this function occasionally when your program is busy doing a long operation (e.g. copying a file).
Calling this function processes events only for the calling thread.
If possible, break up the long calls in the paint event and have it periodically check how long it has been taking. In any of those checks, set the override cursor then, from the GUI thread, as in the sketch below.
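A minimal sketch of that idea (the helpers moreWorkToDo() and doChunkOfWork() are hypothetical stand-ins for slices of the real processing, and the 50 ms threshold is the one mentioned in the question):
#include <QApplication>
#include <QElapsedTimer>
#include <QCursor>

bool moreWorkToDo();   // hypothetical: is there another slice of work left?
void doChunkOfWork();  // hypothetical: performs one slice of the long processing

void processWithDeferredWaitCursor()
{
    QElapsedTimer timer;
    timer.start();
    bool overrideSet = false;

    while (moreWorkToDo()) {
        doChunkOfWork();
        // Once the work has demonstrably taken a while, set the wait cursor
        // here, in the GUI thread, where it is safe to do so.
        if (!overrideSet && timer.elapsed() > 50) {
            QApplication::setOverrideCursor(Qt::WaitCursor);
            overrideSet = true;
        }
    }

    if (overrideSet)
        QApplication::restoreOverrideCursor();
}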
Often a QProgressBar can go a long way to convey the same information to the user.
Another option that could help quite a bit would be to render outside of the GUI thread onto a QImage buffer and then post it to the GUI when it is done.
