I have been working with VC++ for a few months now. I had never come across a 'Stack Overflow' error until today, when I tried to pass a structure to a function.
This is my code:
#include <fstream>
#include <string>
#include <shlwapi.h> // PathFileExists
using namespace std;

const int MAX_CRASH_FILE_SIZE = 100000;

struct FILE_DATA
{
    int SIZE;
    int GOOD[MAX_CRASH_FILE_SIZE];
    int BAD[MAX_CRASH_FILE_SIZE];
    string CRASH_VALUES[MAX_CRASH_FILE_SIZE]; // used below; restored so the sample compiles
};

int bReadFileData(string sFile, struct FILE_DATA *File_Data);
int bReadFileData(string sFile, struct FILE_DATA *File_Data)
{
    File_Data->SIZE = 0;
    if(PathFileExists(Convert.StringToCstring(sFile)) == 1)
    {
        string sLine = "";
        int iLine = 0;
        std::ifstream File(sFile);
        while(getline(File, sLine))
        {
            if(sLine.find(":") != std::string::npos)
            {
                File_Data->CRASH_VALUES[iLine] = sLine.substr(0, sLine.find(":"));
                File_Data->CRASH_VALUES[iLine] = sLine.substr(sLine.find(":") + 1, sLine.length());
            }
            else
            {
                File_Data->CRASH_VALUES[iLine] = (sLine);
            }
            iLine++;
        }
        File_Data->SIZE = iLine;
    }
    return 1;
}
From the main function I am calling the method below.
void ReadFiles()
{
    FILE_DATA Files[3];
    bReadFileData("C:\\Test1.txt", &Files[0]);
    bReadFileData("C:\\Test2.txt", &Files[1]);
    bReadFileData("C:\\Test3.txt", &Files[2]);
}
Is there anything wrong with this code? Why is a stack overflow error thrown as soon as execution enters ReadFiles()?
Why is a stack overflow error thrown as soon as execution enters ReadFiles()?
That's because FILE_DATA[3] allocates too many bytes on the stack. The default stack size is about 1 MB, while FILE_DATA[3] takes roughly 2.4 MB for the int arrays alone (two arrays of 100,000 4-byte ints per struct, about 800,000 bytes each, times three).
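You can verify the arithmetic by printing the sizes; a quick sketch, assuming the FILE_DATA definition from the question is in scope:

#include <cstdio>

int main()
{
    // the two int arrays alone are ~800,000 bytes per struct;
    // the string array adds even more on top of that
    std::printf("sizeof(FILE_DATA)    = %zu bytes\n", sizeof(FILE_DATA));
    std::printf("sizeof(FILE_DATA[3]) = %zu bytes\n", 3 * sizeof(FILE_DATA));
    return 0;
}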
If a struct is this large, allocate it on the heap instead:
void ReadFiles()
{
    FILE_DATA* Files = new FILE_DATA[3];
    bReadFileData("C:\\Test1.txt", &Files[0]);
    bReadFileData("C:\\Test2.txt", &Files[1]);
    bReadFileData("C:\\Test3.txt", &Files[2]);
    delete [] Files;
    Files = nullptr;
}
This is not just bad, but terrible design. You should:
Use std::vector for GOOD, BAD, and CRASH_VALUES.
Use vector::push_back instead of assigning to File_Data->CRASH_VALUES[iLine] by index.
Use dynamic allocation only if you don't use vector. If you must allocate dynamically, I recommend std::make_unique (C++14) over new, like this:
void ReadFiles()
{
    auto Files = std::make_unique<FILE_DATA[]>(3); // 3 elements, since Files[2] is used below
    bReadFileData("C:\\Test1.txt", &Files[0]);
    bReadFileData("C:\\Test2.txt", &Files[1]);
    bReadFileData("C:\\Test3.txt", &Files[2]);
    // no delete[] and no nulling needed; the unique_ptr frees the array automatically
}
If you can simply use vector, you can have this (the three FILE_DATA objects then live on the heap, so the stack stays small):

void ReadFiles()
{
    std::vector<FILE_DATA> Files(3);
    bReadFileData("C:\\Test1.txt", &Files[0]);
    bReadFileData("C:\\Test2.txt", &Files[1]);
    bReadFileData("C:\\Test3.txt", &Files[2]);
}
I have an input QString that contains HTML 4 entities, like &otilde;, that I'd like to decode. But I can't find any facilities in Qt to do so. Is there a way to do this in Qt? If possible I'd like to avoid QTextDocument so I don't have to bring in QtGui.
The HTML 4 entities are listed in this link:
https://www.w3schools.com/charsets/ref_html_entities_4.asp
Out of curiosity, I have looked around a bit.
I found this SO question: How can i convert entity character(Escape character) to HTML in QT?. However, it uses QTextDocument (which is part of Qt GUI), which OP wants to avoid.
The documentation of QTextDocument::setHtml() doesn't mention whether it relies on anything that could be accessed directly (let alone something that is part of Qt Core). Hence, I had a look into the source code. I started with QTextDocument::setHtml() on woboq.org and followed the breadcrumbs.
Finally, I ended up in qtbase/src/gui/text/qtexthtmlparser.cpp:
QString QTextHtmlParser::parseEntity()
{
    const int recover = pos;
    int entityLen = 0;
    QStringRef entity;
    while (pos < len) {
        QChar c = txt.at(pos++);
        if (c.isSpace() || pos - recover > 9) {
            goto error;
        }
        if (c == QLatin1Char(';'))
            break;
        ++entityLen;
    }
    if (entityLen) {
        entity = QStringRef(&txt, recover, entityLen);
        QChar resolved = resolveEntity(entity);
        if (!resolved.isNull())
            return QString(resolved);
        if (entityLen > 1 && entity.at(0) == QLatin1Char('#')) {
            entity = entity.mid(1); // removing leading #
            int base = 10;
            bool ok = false;
            if (entity.at(0).toLower() == QLatin1Char('x')) { // hex entity?
                entity = entity.mid(1);
                base = 16;
            }
            uint uc = entity.toUInt(&ok, base);
            if (ok) {
                if (uc >= 0x80 && uc < 0x80 + (sizeof(windowsLatin1ExtendedCharacters)/sizeof(windowsLatin1ExtendedCharacters[0])))
                    uc = windowsLatin1ExtendedCharacters[uc - 0x80];
                QString str;
                if (QChar::requiresSurrogates(uc)) {
                    str += QChar(QChar::highSurrogate(uc));
                    str += QChar(QChar::lowSurrogate(uc));
                } else {
                    str = QChar(uc);
                }
                return str;
            }
        }
    }
error:
    pos = recover;
    return QLatin1String("&");
}
A table of named entities can be found in the same source file:
static const struct QTextHtmlEntity { const char name[9]; quint16 code; } entities[] = {
    { "AElig", 0x00c6 },
    { "AMP", 38 },
    ...
    { "zwj", 0x200d },
    { "zwnj", 0x200c }
};
Q_STATIC_ASSERT(MAX_ENTITY == sizeof entities / sizeof *entities);
This is bad news for OP:
The API of the QTextHtmlParser is private:
//
// W A R N I N G
// -------------
//
// This file is not part of the Qt API. It exists purely as an
// implementation detail. This header file may change from version to
// version without notice, or even be removed.
//
// We mean it.
//
and it's part of Qt GUI.
If OP insists on avoiding GUI dependencies, the only other option I see is to duplicate the code (or re-implement it from scratch).
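If re-implementing from scratch is acceptable, a minimal Qt-Core-only decoder could look like the sketch below. This is an illustration under assumptions, not Qt API: decodeHtmlEntities is a made-up name, and the named-entity table holds only a hypothetical handful of entries that would have to be filled in from the W3Schools list above.

#include <QHash>
#include <QString>

QString decodeHtmlEntities(const QString &in)
{
    // named entities; extend this table as needed (hypothetical selection)
    static const QHash<QString, QChar> named = {
        { QStringLiteral("amp"),    QLatin1Char('&') },
        { QStringLiteral("lt"),     QLatin1Char('<') },
        { QStringLiteral("gt"),     QLatin1Char('>') },
        { QStringLiteral("quot"),   QLatin1Char('"') },
        { QStringLiteral("otilde"), QChar(0x00F5) },
    };

    QString out;
    out.reserve(in.size());
    for (int i = 0; i < in.size(); ++i) {
        if (in.at(i) != QLatin1Char('&')) {
            out += in.at(i);
            continue;
        }
        const int semi = in.indexOf(QLatin1Char(';'), i + 1);
        if (semi < 0 || semi - i > 10) {          // too long to be an entity
            out += in.at(i);
            continue;
        }
        const QString body = in.mid(i + 1, semi - i - 1);
        if (body.startsWith(QLatin1Char('#'))) {  // numeric entity, decimal or hex
            bool ok = false;
            const bool hex = body.size() > 1 && body.at(1).toLower() == QLatin1Char('x');
            const uint uc = body.mid(hex ? 2 : 1).toUInt(&ok, hex ? 16 : 10);
            if (ok) {
                if (QChar::requiresSurrogates(uc)) {
                    out += QChar(QChar::highSurrogate(uc));
                    out += QChar(QChar::lowSurrogate(uc));
                } else {
                    out += QChar(static_cast<ushort>(uc));
                }
                i = semi;                         // skip past the ';'
                continue;
            }
        } else if (named.contains(body)) {
            out += named.value(body);
            i = semi;
            continue;
        }
        out += in.at(i);                          // unknown entity: keep the '&' literally
    }
    return out;
}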
I found many answers to my question and they all work. My question is: are they all equal in speed and memory use? How can I tell which is faster and which uses less memory? I don't normally use the Marshal and GCHandle classes, so I am totally green.
public static object RawDeserializer(byte[] rawData, int position, Type anyType)
{
    int rawsize = Marshal.SizeOf(anyType);
    if (rawsize > rawData.Length)
        return null;
    IntPtr buffer = Marshal.AllocHGlobal(rawsize);
    Marshal.Copy(rawData, position, buffer, rawsize);
    object retobj = Marshal.PtrToStructure(buffer, anyType);
    Marshal.FreeHGlobal(buffer);
    return retobj;
}
public static T RawDeserializer<T>(byte[] rawData, int position = 0)
{
    int rawsize = Marshal.SizeOf(typeof(T));
    if (rawsize > rawData.Length)
    {
        throw new DataMisalignedException("byte array is not the correct size for the requested type");
    }
    IntPtr buffer = Marshal.AllocHGlobal(rawsize);
    Marshal.Copy(rawData, position, buffer, rawsize);
    T retobj = (T)Marshal.PtrToStructure(buffer, typeof(T));
    Marshal.FreeHGlobal(buffer);
    return retobj;
}
public static T RawDeserializer<T>(byte[] bytes) where T : struct
{
    T stuff;
    GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    try
    {
        stuff = Marshal.PtrToStructure<T>(handle.AddrOfPinnedObject());
    }
    finally
    {
        handle.Free();
    }
    return stuff;
}
I am getting the desired results from all 3 implementations.
The first and second are almost identical: the difference is that you do not unbox (cast to T : struct) the result in the first example; I'd assume you'll unbox it later, though.
The third option does not copy the memory to the unmanaged heap; it just pins it in the managed heap, so I'd assume it will allocate less memory and be faster. I don't pretend to be a golden source of truth, though, so just go and performance-test these options :) BenchmarkDotNet is a great framework for performance testing and may help you a lot.
Also, the third option could be more concise:
public static unsafe T RawDeserializer<T>(byte[] bytes) where T : struct
{
    fixed (byte* p = bytes)
        return Marshal.PtrToStructure<T>((IntPtr)p);
}
You need to change the project settings to allow unsafe code, though (in Visual Studio: Project Properties > Build > 'Allow unsafe code', which sets <AllowUnsafeBlocks>true</AllowUnsafeBlocks> in the project file).
To stop being totally green, I'd strongly recommend reading the book CLR via C#, Chapter 21, 'The Managed Heap and Garbage Collection'.
I'm trying to take an nginx buffer chain and work with it in some experimental code. In order to do so, I need to first flatten the chain into a single block of memory. Here's what I've got so far (actual production code is a bit different, so this is untested):
u_char *flatten_chain(ngx_chain_t *out) {
    off_t bsize;
    ngx_chain_t *out_ptr;
    u_char *ret, *ret_ptr;
    uint64_t flattenSize = 0;

    /* first pass: total size of all in-memory buffers */
    out_ptr = out;
    while (out_ptr) {
        if(!out_ptr->buf->in_file) {
            bsize = ngx_buf_size(out_ptr->buf);
            flattenSize += bsize;
        }
        out_ptr = out_ptr->next;
    }

    ret = malloc(flattenSize);
    ret_ptr = ret;

    /* second pass: copy each in-memory buffer into the flat block */
    out_ptr = out;
    while (out_ptr) {
        if(!out_ptr->buf->in_file) {
            bsize = ngx_buf_size(out_ptr->buf);
            memcpy(ret_ptr, out_ptr->buf->pos, (size_t)bsize);
            ret_ptr += bsize;
        }
        out_ptr = out_ptr->next;
    }

    return(ret);
}
However, it doesn't seem to work. Disclaimer: it's possible that it does work and my data is getting corrupted somewhere else... but while I look into that, can someone please confirm or deny that the above should work?
Thanks!
Consider these C functions:
#define INDICATE_SPECIAL_CASE -1
void prepare (long *length_or_indicator);
void execute ();
The prepare function stores a pointer to a long output variable that is only written later, when execute () is called.
It can be used in C like this:
int main (void) {
    long length_or_indicator;
    prepare (&length_or_indicator);
    execute ();

    if (length_or_indicator == INDICATE_SPECIAL_CASE) {
        // do something to handle special case
    }
    else {
        long length = length_or_indicator;
        // do something to handle the normal case which has a length
    }
}
I am trying to achieve something like this in Vala:
int main (void) {
    long length;
    long indicator;
    prepare (out length, out indicator);
    execute ();

    if (indicator == INDICATE_SPECIAL_CASE) {
        // do something to handle special case
    }
    else {
        // do something to handle the normal case which has a length
    }
}
How to write the binding for prepare () and INDICATE_SPECIAL_CASE in Vala?
Is it possible to split the variable into two?
Is it possible to avoid using pointers even though the out variable is written to after the call to prepare () (in execute ())?
The problem with using out is that Vala is going to generate lots of temporary variables along the way, which will make the reference wrong. What you probably want to do is create a method in your VAPI that hides all this:
[CCode(cname = "prepare")]
private void _prepare (long *length_or_indicator);

[CCode(cname = "execute")]
private void _execute ();

[CCode(cname = "prepare_and_exec")]
public bool execute (out long length) {
    long length_or_indicator = 0;
    _prepare (&length_or_indicator);
    _execute ();
    if (length_or_indicator == INDICATE_SPECIAL_CASE) {
        length = 0;
        return false;
    } else {
        length = length_or_indicator;
        return true;
    }
}
I wrote a program to test my binary tree and when I run it, the program seems to crash (btree.exe has stopped working, Windows is checking for a solution ...).
When I ran it through my debugger and placed a breakpoint on the function I suspect is causing it, destroy_tree(), it seemed to run as expected and returned back to the main function. Main, in turn, returned from the program, but then the cursor jumped back to destroy_tree() and looped recursively within itself.
The minimal code sample is below so it can be run instantly. My compiler is MinGW and my debugger is gdb (I'm using Code::Blocks).
#include <iostream>
using namespace std;

struct node
{
    int key_value;
    node *left;
    node *right;
};

class Btree
{
public:
    Btree();
    ~Btree();
    void insert(int key);
    void destroy_tree();

private:
    node *root;
    void destroy_tree(node *leaf);
    void insert(int key, node *leaf);
};

Btree::Btree()
{
    root = NULL;
}

Btree::~Btree()
{
    destroy_tree();
}

void Btree::destroy_tree()
{
    destroy_tree(root);
    cout<<"tree destroyed\n"<<endl;
}

void Btree::destroy_tree(node *leaf)
{
    if(leaf!=NULL)
    {
        destroy_tree(leaf->left);
        destroy_tree(leaf->right);
        delete leaf;
    }
}

void Btree::insert(int key, node *leaf)
{
    if(key < leaf->key_value)
    {
        if(leaf->left!=NULL)
            insert(key, leaf->left);
        else
        {
            leaf->left = new node;
            leaf->left->key_value = key;
            leaf->left->left = NULL;
            leaf->left->right = NULL;
        }
    }
    else if (key >= leaf->key_value)
    {
        if(leaf->right!=NULL)
            insert(key, leaf->right);
        else
        {
            leaf->right = new node;
            leaf->right->key_value = key;
            leaf->right->left = NULL;
            leaf->right->right = NULL;
        }
    }
}

void Btree::insert(int key)
{
    if(root!=NULL)
    {
        insert(key, root);
    }
    else
    {
        root = new node;
        root->key_value = key;
        root->left = NULL;
        root->right = NULL;
    }
}

int main()
{
    Btree tree;
    int i;
    tree.insert(1);
    tree.destroy_tree();
    return 0;
}
As an aside, I'm planning to switch from Code::Blocks' built-in debugger to DDD for debugging these problems. I heard DDD can display pointers to objects visually instead of just showing the pointer's address. Do you think making the switch will help with solving these types of problems (data structure and algorithm problems)?
Your destroy_tree() is called twice: you call it once yourself, and then the destructor calls it again after execution leaves main().
You may think it should work anyway, because you check whether leaf != NULL, but delete does not set the pointer to NULL. So root is not NULL when destroy_tree() runs the second time, and you delete the same nodes again.
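The minimal fix along those lines, sketched against the code above, is to reset root after tearing the tree down, so the destructor's second call sees an empty tree:

void Btree::destroy_tree()
{
    destroy_tree(root);
    root = NULL; // the second call, from the destructor, now does nothing
    cout<<"tree destroyed\n"<<endl;
}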
Not directly related (or maybe it is) to your problem, but it's good practice to give structs a constructor. For example:

struct node
{
    int key_value;
    node *left;
    node *right;
    node( int val ) : key_value( val ), left(NULL), right(NULL) {}
};

If you do this, your code becomes simpler, because you don't need to worry about setting the pointers when you create a node, and it is impossible to forget to initialise them.
Regarding DDD: it's a fine debugger, but frankly the secret of debugging is to write correct code in the first place, so you don't have to do it. C++ gives you a lot of help in this direction (such as the use of constructors), but you have to understand and use the facilities it provides.
Btree::destroy_tree doesn't set root to 0 after successfully nuking the tree. As a result, the destructor calls destroy_tree() again and you try to destroy already-destroyed objects.
That'll be undefined behaviour then :).
Once you destroy the root, make sure it is set to NULL so the destructor does not try to destroy it again. Note that assigning NULL to a plain node * parameter only changes a local copy, so pass the pointer by reference (the declaration inside the class must be updated to match):

void Btree::destroy_tree(node *&leaf)
{
    if(leaf!=NULL)
    {
        destroy_tree(leaf->left);
        destroy_tree(leaf->right);
        delete leaf;
        leaf = NULL; // add this line; it now really nulls the caller's pointer, including root
    }
}
}