I was investigating a memory leak in an application written in C# and C++. Once I had isolated it to a couple of C++ components using PerfMon logs and WinDbg/SOS debugging, I tried to use UMDH (with gflags +ust enabled) to compare snapshots and find out which heap allocations were leaking memory.
In the end the leak was found by a manual review of the code. A sample code snippet is below.
char *p = new char[size];
// use the pointer
delete p; // <---- MEMORY LEAK
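(For what it's worth, the matching release for the array allocation would be the array form of delete; a minimal sketch of the corrected snippet:)
char *p = new char[size];
// use the pointer
delete[] p; // array delete matches new[]; plain delete here is undefined behavior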
I was wondering why UMDH didn't catch this leak. UMDH never reported it as an issue in the comparison log. Would the WinDbg heap commands have helped to point it out?
Does anyone understand what this RocksDB error refers to?
/column_family.cc:275: rocksdb::ColumnFamilyData::~ColumnFamilyData():
Assertion `refs_ == 0' failed. Aborted (core dumped)
This is an assertion failure raised by RocksDB, and it intentionally terminates the execution of the program.
In general, assertions are used by programmers to ensure certain invariants in the program. Assertions have some runtime overhead, and therefore can be completely disabled. Often they are compiled into development or debug builds, but are omitted for production builds.
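For illustration, here is a minimal generic C++ sketch of that mechanism (not RocksDB code): the assertion aborts the program when the invariant is violated, and compiling with NDEBUG removes the check entirely.

#include <cassert>

// refs_-style invariant: the counter must be back to zero before teardown.
struct Resource {
    int refs = 0;
    ~Resource() {
        assert(refs == 0 && "all references must be released before destruction");
    }
};

int main() {
    Resource r;
    r.refs = 1;  // simulate a reference that was never released
    return 0;    // ~Resource() fails the assertion and the program aborts
}                // (unless the build defines NDEBUG, which removes the check)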
When an assertion fails, program execution is intentionally aborted immediately by calling std::abort. This may lead to your OS writing a core dump (as it evidently did here, judging by the message above), but whether and where core dumps are written depends on the OS configuration.
In the case of this specific assertion, the destructor of rocksdb::ColumnFamilyData raised the assertion because it requires its refs_ member to have a value of 0. refs_ is a reference counter, and it makes sense to assert that no references are still held when the object's destructor is called.
From just looking at the destructor code, it is unclear whether this is a bug in the RocksDB library itself, or an error caused by using it the wrong way, e.g. destroying column family objects when they are still in use by other objects.
For reference, here's the code part that raised the assertion (currently on line 365 in file rocksdb/db/column_family.cc):
ColumnFamilyData::~ColumnFamilyData() {
  assert(refs_.load(std::memory_order_relaxed) == 0);
  // ...
}
If the error persists, it would be helpful to post the code that uses RocksDB here; otherwise it may be impossible to find the source of the error.
The core dump may also provide useful information, because it contains the stack trace of the code that actually invoked the object's destructor.
I noticed that all of these column_family.cc errors (core dumped, memory_order_relaxed, etc.) occurred after an incorrect RocksDB installation. For my Vagrant setup I found a working approach.
Instead of following
https://github.com/facebook/rocksdb/blob/master/INSTALL.md
I use this script:
cd /opt
git clone https://github.com/facebook/rocksdb.git
cd rocksdb
git checkout tags/v4.1
PORTABLE=1 make shared_lib
export LD_LIBRARY_PATH=/opt/rocksdb
It is better to add LD_LIBRARY_PATH permanently to your environment (.bashrc or /etc/environment).
The assertion refs_ == 0 failing in ~ColumnFamilyData() means the reference count of a column family is not zero when the column family is deleted. Most likely you have some undeleted column family handles before closing the DB. Note that all column family handles must be deleted before closing the DB; otherwise the assertion will fail.
// Before delete DB, you have to close All column families by calling
// DestroyColumnFamilyHandle() with all the handles.
static Status Open(const DBOptions& db_options, const std::string& name,
                   const std::vector<ColumnFamilyDescriptor>& column_families,
                   std::vector<ColumnFamilyHandle*>* handles, DB** dbptr);
To fix this assertion failure, make sure you delete all column family handles before closing the DB.
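For illustration, here is a minimal sketch of that shutdown order (the DB path, the options, and the extra column family name are placeholders, not code from the question):

#include <rocksdb/db.h>
#include <vector>

int main() {
    rocksdb::DBOptions db_options;
    db_options.create_if_missing = true;
    db_options.create_missing_column_families = true;

    std::vector<rocksdb::ColumnFamilyDescriptor> descriptors = {
        {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
        {"my_cf", rocksdb::ColumnFamilyOptions()}};  // "my_cf" is a placeholder name

    std::vector<rocksdb::ColumnFamilyHandle*> handles;
    rocksdb::DB* db = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(db_options, "/tmp/rocksdb_example",
                                          descriptors, &handles, &db);
    if (!s.ok()) return 1;

    // ... read and write through db and handles ...

    // Delete every column family handle first, so the reference counts drop to zero ...
    for (rocksdb::ColumnFamilyHandle* handle : handles) {
        db->DestroyColumnFamilyHandle(handle);
    }
    delete db;  // ... and only then close the DB itself
    return 0;
}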
Realm 0.95.0 sometimes crashes when loading the default Realm following a migration. This happens infrequently, and we haven't yet been able to reproduce it in a debugging environment. We are using Realm-Cocoa, but calling from a Swift endpoint.
var config = RLMRealmConfiguration.defaultConfiguration()
config.schemaVersion = 3
config.migrationBlock = { (migration, oldVersion) -> Void in
...
}
RLMRealmConfiguration.setDefaultConfiguration(config)
RLMRealm.defaultRealm()
Here is the backtrace
0x100313ae0 [void realm::util terminate<unsigned long, unsigned long>(char const*, char const*, long, unsigned long, unsigned long) ] (terminate.hpp:45)
...
...
0x10030c44c [realm::SharedGroup SharedGroup(realm::Replication&, realm::SharedGroup::DurabilityLevel, char const*) ] (group_shared.hpp:975)
0x1003073a0 [RLMRealm initWithPath:key:readOnly:inMemory:dynamic:error:] (RLMRealm.mm:235)
0x10030821c [RLMRealm realmWithConfiguration:error:] (RLMRealm.mm:400)
0x100307a98 [RLMRealm defaultRealm] (RLMRealm.mm:302)
...
Is there anything we can do to safeguard against this crash? Does the migration function need to be wrapped in an autoreleasepool block, as per issue #1589?
Whenever you see realm::util terminate in your stack trace, it's likely because an internal consistency assertion in Realm has failed, and generally indicates either a corrupt file or a bug in Realm itself. If you have access to the device logs (for example if you received this crash report using a service like Crashlytics or Hockey), you should see a message printed by the assertion failure.
The best thing you can do in these cases is to report the issue to the Realm team (help@realm.io) with as much information as possible to allow us to reproduce the issue and investigate further. We're generally pretty responsive.
I have 2 processes.
The first one creates a QSharedMemory, with a key.
The creation is successful, as no error is returned.
In the second process, I call setKey() with the same key name as the first process and then try to attach() to the shared memory.
The attach() fails. Calling errorString() on the shared memory returns the following string:
QSharedMemory::handle: doesn't exist
Platform is Windows.
What could I be missing here? Kindly advise, thanks.
Have you looked at the shared memory example?
http://doc-snapshot.qt-project.org/4.8/ipc-sharedmemory.html
Below are some code snippets from that example.
Here is what the first process does to put a buffer of "size" into the shared memory:
if (!sharedMemory.create(size)) {
    ui.label->setText(tr("Unable to create shared memory segment."));
    return;
}
sharedMemory.lock();
char *to = (char*)sharedMemory.data();
const char *from = buffer.data().data();
memcpy(to, from, qMin(sharedMemory.size(), size));
sharedMemory.unlock();
Here is what happens when the second process wants to access the shared memory:
if (!sharedMemory.attach()) {
    ui.label->setText(tr("Unable to attach to shared memory segment.\n" \
                         "Load an image first."));
    return;
}
QBuffer buffer;
QDataStream in(&buffer);
QImage image;
sharedMemory.lock();
buffer.setData((char*)sharedMemory.constData(), sharedMemory.size());
buffer.open(QBuffer::ReadOnly);
in >> image;
sharedMemory.unlock();
sharedMemory.detach();
ui.label->setPixmap(QPixmap::fromImage(image));
Note also that in the example, both processes must be running and still have their instance of QSharedMemory. Here is how it is described in the documentation:
Windows: QSharedMemory does not "own" the shared memory segment. When
all threads or processes that have an instance of QSharedMemory
attached to a particular shared memory segment have either destroyed
their instance of QSharedMemory or exited, the Windows kernel releases
the shared memory segment automatically.
Hope that helps.
I encountered the same problem. Make sure the QSharedMemory object in the first process is still alive when the second binary tries to attach.
If you want to prevent running two instances of the same Qt binary, allocate the QSharedMemory object dynamically so that it lives until the application exits.
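For example, a minimal sketch of that single-instance guard (the key string is just a placeholder; any string unique to your application works):

#include <QCoreApplication>
#include <QSharedMemory>
#include <QDebug>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    // Allocated on the heap and parented to the application object, so the
    // segment stays attached for the whole lifetime of this process.
    QSharedMemory *guard =
        new QSharedMemory(QStringLiteral("my_app_single_instance"), &app);

    if (!guard->create(1)) {
        // create() fails (e.g. with AlreadyExists) if another instance
        // already holds a segment with the same key.
        qWarning() << "Another instance is already running:" << guard->errorString();
        return 1;
    }

    return app.exec();
}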
I am trying to make a multithreaded Qt application that uses QGLWidgets, and I keep getting this error (I am trying to paint from another thread using QPainter).
It also looks like I have a huge memory leak because of it.
The error is "QGLContext::makeCurrent() : wglMakeCurrent failed: The operation completed successfully"
I believe this is related to a rather old issue from the Qt mailing list as described here. In short, if the thread calling makeCurrent() does not equal the thread where the device context was retrieved, GetDC() is called. As outlined in the linked thread, the problem is that ReleaseDC() is not called accordingly, resulting in a handle leak, and triggering Windows to return NULL in the call to GetDC() at some point, which makes wglMakeCurrent() fail. I don't know, however, why GetLastError() claims "The operation completed successfully" in this case.
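For illustration, here is a hedged Win32 sketch of the pairing the linked thread says is missing (this is not Qt's actual internal code): every GetDC() needs a matching ReleaseDC(), otherwise the process eventually exhausts its DC handles, GetDC() starts returning NULL, and wglMakeCurrent() fails.

#include <windows.h>

// Hypothetical helper, only to show the balanced acquire/release pattern.
bool renderOnce(HWND hwnd, HGLRC ctx) {
    HDC dc = GetDC(hwnd);                  // acquire a device context for the window
    if (dc == nullptr)
        return false;                      // handle exhaustion would show up here
    BOOL ok = wglMakeCurrent(dc, ctx);     // bind the GL context to this DC
    if (ok) {
        // ... issue GL calls ...
        wglMakeCurrent(nullptr, nullptr);  // unbind before giving the DC back
    }
    ReleaseDC(hwnd, dc);                   // the release the linked report says Qt skipped
    return ok == TRUE;
}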
I have a Qt application for Symbian that receives GPS data, stores it in a database, and tries to post it to a server. The first two steps work fine, but continuous posting either crashes my application or kills my internet connection.
I have modified my application for debugging purposes so that it only posts data to the server every 10 seconds. The application runs fine for about 45-90 minutes without any significant memory increase.
After that I get an error from QNetworkReply saying "Cannot allocate memory".
At the same time, memory usage increases by approximately 63500 (bytes?).
On the next upload I get a reply that says "Invalid socket descriptor", and after that my Qt Creator debug output is filled with "exception on 7 [will do setdefaultif(0) - hack]".
Does anyone know what is going wrong here? I can't find anything in my upload code that could be causing this.
Here is my upload script.
void MainWindow::upload() {
    // Content of postData below. Using same data on every upload now when tracking the bug
    // [{"timestamp":"2010-10-01T17:10:27","latitude":62.1823321,"longitude":25.73226825,"user":6}]
    QByteArray postData;
    QNetworkRequest request;
    request.setUrl(uploadUrl);
    this->qnam->post(request, postData);
}

void MainWindow::serviceRequestFinished(QNetworkReply* reply) {
    QByteArray bytes = reply->readAll();
    if (reply->error() == QNetworkReply::NoError) {
        // nothing in here when debugging
    } else {
        qDebug() << "-------Reply error: " + reply->errorString();
    }
    reply->deleteLater();
    updateHeapStats();
}

void MainWindow::updateHeapStats() {
#ifdef Q_OS_SYMBIAN
    TInt mem, size, limit;
    User::Heap().AllocSize(mem);
    size = User::Heap().Size();
    limit = User::Heap().MaxLength();
    qDebug() << "**DEBUG MEMORY - > Memory: " << QString::number(mem);
    qDebug() << "**DEBUG MEMORY - > Heap limit: " << QString::number(limit);
    qDebug() << "**DEBUG MEMORY - > Heap size: " << QString::number(size);
#endif
}
Almost forgot: I have tested this with a Nokia N97 mini, a 5230, and a 5800, and they all behave the same way.
Edit: Forgot to mention that when the internet connection "dies", I can still see that 3G is on, but connecting to the internet with the web browser fails. When I close the application and try to connect to the internet with the browser, it says "Web: Memory full, ..." (web requests from apps work fine). I'm using the Nokia Energy Profiler and it doesn't show any signs of memory being full. I even tested this by starting 2 games, Ovi Maps, and tons of other applications, and they worked fine even though they consumed over 40 MB of memory.
With the caveat that the only networking code I do in Qt is on a desktop platform, and even then I need to look it up, I don't see anything obvious. I also know that in my own code deleteLater() sometimes has a different idea of what "later" is than I do. I don't have time to look it up and may be wrong here, but I think deleteLater() is actually handled by the event loop, and if your event loop is always busy, when will it have time to delete the object? For debugging purposes, I would replace deleteLater() with delete (and really, there's no reason to use deleteLater() unless you've got a parent/child relationship that you need to clean up, and there might be a way to manually remove the child from the parent so you don't need to worry about dangling pointers when you call delete).
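For illustration, here is a minimal sketch of the deleteLater() timing being described (the explicit sendPostedEvents() call just stands in for a busy event loop finally getting around to the deferred deletion):

#include <QCoreApplication>
#include <QObject>
#include <QDebug>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QObject *obj = new QObject;
    QObject::connect(obj, &QObject::destroyed,
                     [] { qDebug() << "object destroyed"; });

    obj->deleteLater();
    qDebug() << "after deleteLater()";  // printed first: obj is still alive here

    // The deferred deletion only happens once the event loop processes it;
    // here we trigger it explicitly instead of entering app.exec().
    QCoreApplication::sendPostedEvents(nullptr, QEvent::DeferredDelete);
    return 0;
}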
I also don't know how accurate your memory consumption test is. Does the allocated memory figure refer to the current thread? The current process? Does a program receive a "chunk" of memory from the heap that it simply manages on its own and isn't permitted to exceed? I think you know this framework much better than I do; these are just some thoughts for you to try.