ftruncate fails the second time - unix

I'm trying to extend a shared memory object after a first successful shm_open and ftruncate. Here is the code:
char *uuid = GenerateUUID();
int fd = shm_open(uuid, O_RDWR|O_CREAT|O_EXCL, S_IRUSR|S_IWUSR);
if(fd == -1) perror("shm_open");
size_t shmSize = sizeof(container);
int ret = ftruncate(fd, shmSize);
perror("ftruncate first");
ret = ftruncate(fd, shmSize * 2);
perror("ftruncate second");
The first ftruncate succeeds, but the second one fails with errno=22, "Invalid argument".
I also tried calling ftruncate after mmap; according to ftruncate's man page, the shared memory should be zero-filled up to the new length.
I also tried calling ftruncate on the memory object in the child process (this is an IPC setup between two processes); there ftruncate fails with "no such file or directory", even though I can shm_open and mmap successfully in the child process.
Any ideas? Thanks!

I think this is a known "feature" of shm_open(), ftruncate(), mmap().
You have to call ftruncate() the first time through to give the shared memory a length, but on subsequent calls ftruncate() gives that error number 22, which you can simply ignore.

The implementation in use seems to conform to an older specification, in which returning an error is allowed behavior for ftruncate(fd, length) when length exceeds the previous length:
If the file previously was smaller than this size, ftruncate() shall either increase the size of the file or fail.
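
Putting both answers together, a practical pattern is to size the object to its final length once, right after a successful O_CREAT|O_EXCL shm_open, and to verify the result with fstat rather than relying on a later grow. A minimal sketch under those assumptions (the name "/demo_shm" and the sizes are illustrative; on older glibc, link with -lrt):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = shm_open("/demo_shm", O_RDWR | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR);
    if (fd == -1) { perror("shm_open"); return 1; }

    size_t shmSize = 4096;   /* stand-in for sizeof(container) */

    /* Size the object to its final length up front; growing it later
       with a second ftruncate may fail with EINVAL, as described above. */
    if (ftruncate(fd, 2 * shmSize) == -1) { perror("ftruncate"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == 0)
        printf("shm object size: %lld bytes\n", (long long)st.st_size);

    shm_unlink("/demo_shm");
    return 0;
}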

Related

allocate memory using huge page and numa_tonode_memory giving "Bus Error"

I am trying to allocate a 2 GB buffer using huge TLB pages (1 GB) and bind the memory region to a specific NUMA node.
To allocate the buffer using huge TLB page, I am using the following code:
shmid = shmget(IPC_PRIVATE, buf_size,
               SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
buf = (uint64_t *) shmat (shmid, 0, 0);
Then, I called:
numa_tonode_memory (buf, buf_size, 3);
to move the buffer to a specific node.
When I run the program, as soon as I access buffer offset larger than 1GB, the program would stop with "Bus error (core dumped)".
Removing numa_tonode_memory avoids the error; however, it also defeats the purpose of allocating the memory on a specific node.
I am wondering if there is any workaround for this problem.
Thank you,
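
For reference, a self-contained sketch of the allocation sequence from the question, with error checking added (node 3 and the 2 GB size come from the question; everything else is illustrative). One thing worth checking under these assumptions: with SHM_HUGETLB, touching a page that cannot be backed by a reserved huge page on the bound node raises SIGBUS, so the 1 GB huge-page pool on that node (nr_hugepages) must cover the whole buffer. Compile with -lnuma:

#include <stdio.h>
#include <stdint.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <numa.h>

int main(void) {
    size_t buf_size = 2UL * 1024 * 1024 * 1024;   /* 2 GB (assumes 64-bit) */

    int shmid = shmget(IPC_PRIVATE, buf_size,
                       SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
    if (shmid == -1) { perror("shmget"); return 1; }

    uint64_t *buf = (uint64_t *) shmat(shmid, 0, 0);
    if (buf == (uint64_t *) -1) { perror("shmat"); return 1; }

    if (numa_available() == -1) { fprintf(stderr, "NUMA not available\n"); return 1; }

    /* Bind the region to node 3 before first touch, as in the question. */
    numa_tonode_memory(buf, buf_size, 3);

    buf[0] = 42;   /* first touch is placed on node 3 */
    return 0;
}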

Prefix Sum with global memory and an error with local memory

I have a Mali GPU which does not support local memory at all.
Every time I run code that uses local memory, I get errors from the device.
So I want to convert my code to a version that uses only global memory.
I was wondering whether it is possible to run a prefix sum / parallel reduction algorithm on a GPU using global memory only.
EDIT: I was debugging the error and found a strange thing: one particular line is giving the error.
I have a line like this:
`#define LOG_LSIZE 8`
`#define LSIZE_SHIFT_VALUE 4`
`#define LOG_NUM_BANKS 2`
`#define GET_CONFLICT_OFFSET(lid) ((lid) >> LOG_NUM_BANKS)`
`#define LSIZE 32`
`__local int lm_sum[2][LSIZE + LOG_LSIZE];`
`lm_sum[lid >> LSIZE_SHIFT_VALUE][bi] += lm_sum[lid >> LSIZE_SHIFT_VALUE][ai];`
lid is the local id and I used a work-group size of 32. I found that the last line above is the cause of the error. I tried using fixed values and found that I cannot use lm_sum on the right-hand side of a statement; if I do, I get an error. For example, this line also gives me the error:
`int temp = lm_sum[0][0];`
Any idea on what is going on?
Error:
In initial.cpp: [14100.684249] Mali<ERROR, BASE_MMU>: In file: /home/jbmaster/work/01.LPD_OpenCL_RFS/01.arm_work_3_0_31/SEC_All_EVT0_TX013-BU-00001-r2p0-00rel0/TX013-BU-00001-r2p0-00rel0/driver/product/kernel/drivers/gpu/arm/t6xx/kbase/src/common/mali_kbase_mmu.c line: 1240 function:kbase_mmu_report_fault_and_kill
[14100.709724] Unhandled Page fault in AS0 at VA 0x00000002000EC1A0
[14100.709728] raw fault status 0x500003C3
[14100.709730] decoded fault status: SLAVE FAULT
[14100.709733] exception type 0xC3: TRANSLATION_FAULT
[14100.709736] access type 0x3: WRITE
[14100.709738] source id 0x5000
[14100.734958]
[14100.736432] Mali<ERROR, BASE_JD>: In file: /home/jbmaster/work/01.LPD_OpenCL_RFS/01.arm_work_3_0_31/SEC_All_EVT0_TX013-BU-00001-r2p0-00rel0/TX013-BU-00001-r2p0-00rel0/driver/product/kernel/drivers/gpu/arm/t6xx/kbase/src/common/mali_kbase_jm.c line: 899 function:kbase_job_slot_hardstop
[14100.761458] Issueing GPU soft-reset instead of hard stopping job due to a hardware issue
[14100.769517]
Since even lm_sum[0][0] doesn't work, the memory for the array was never allocated. You said your GPU doesn't support local memory, yet you are trying to use lm_sum, which is declared to live in local memory (`__local int lm_sum[2][LSIZE + LOG_LSIZE];`).
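
One workaround consistent with this answer is to give each work-group its own slice of a __global scratch buffer (allocated by the host with num_groups * local_size ints) instead of the __local array. A hedged sketch in OpenCL C, matching the question's kernel snippets; the kernel and buffer names are illustrative, and the scan is the simple Hillis-Steele form rather than the bank-conflict-avoiding version from the question:

__kernel void scan_global_only(__global int *data,
                               __global int *scratch)  /* replaces __local lm_sum */
{
    const int lid = get_local_id(0);
    const int lsz = get_local_size(0);
    /* Each work-group operates on its own slice of the global scratch buffer. */
    __global int *sums = scratch + get_group_id(0) * lsz;

    sums[lid] = data[get_global_id(0)];
    barrier(CLK_GLOBAL_MEM_FENCE);   /* fence global memory, not local */

    /* Hillis-Steele inclusive scan within one work-group. */
    for (int offset = 1; offset < lsz; offset <<= 1) {
        int val = (lid >= offset) ? sums[lid - offset] : 0;
        barrier(CLK_GLOBAL_MEM_FENCE);
        sums[lid] += val;
        barrier(CLK_GLOBAL_MEM_FENCE);
    }
    data[get_global_id(0)] = sums[lid];
}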

Querying large database using read.odbc.ffdf exhausts memory in R

I'm trying to query a large database and then save the returned result as an ff object using the read.odbc.ffdf function. The following code should allow me to pull in a thousand rows at a time, save the data as an ff object, and then move on to the next thousand rows, appending these to the previous file, so that I can preserve memory:
library(ff);
library(ffbase);
library(ETLUtils);
library(RODBC);
sqlcode <- "SELECT f.*
FROM table1 AS f;";
data <- read.odbc.ffdf(query = sqlcode,
                       odbcConnect.args = list('data1; db=research'),
                       first.rows = 1000,
                       next.rows = 1000,
                       BATCHBYTES = 100000);
dim(data);
However, when I do this, R gradually eats up all my RAM, and eventually the process terminates before the object "data" is completely filled. Inspecting "data" reveals the following error message:
read.odbc.ffdf 1.. () odbc-read=822.06secError in if (nrow(dat) == 0) { : argument is of length zero
Any ideas what this error message means? How can I query this database without exhausting my memory (4 GB RAM)? I expected the BATCHBYTES option, in combination with first.rows and next.rows, to keep my memory usage low (within 100,000 bytes, which should be plenty for my system).
Am I just misunderstanding how read.odbc.ffdf works with the first.rows, next.rows, and BATCHBYTES options?

Qt QSharedMemory Segmentation Faults after Several Successful Writes

I'm using QSharedMemory to store some data and want to subsequently append data to what is contained there. So I call the following code several times with new data; "audioBuffer" is the new data given to this function. I can call this function about 4-7 times (it varies) before it seg faults on the memcpy operation. The QSharedMemory segment is huge, so in the few calls before the seg fault there is no issue of memcpy copying data beyond its boundaries. Also, m_SharedAudioBuffer.errorString() reports no errors up to the memcpy operation. Currently, only one process uses this QSharedMemory segment. I also tried writing continually without appending each time, and that works fine, so something happens when I try to append more data to the shared memory segment. Any ideas? Thanks!
// Get the buffer size for the current audio buffer in shared memory
int bufferAudioDataSizeBytes = readFromSharedAudioBufferSizeMemory(); // This in number of bytes
// Create a bytearray with our data currently in the shared buffer
char* bufferAudioData = readFromSharedAudioBufferMemory();
QByteArray currentAudioStream = QByteArray::fromRawData(bufferAudioData,bufferAudioDataSizeBytes);
QByteArray currentAudioStreamDeepCopy(currentAudioStream);
currentAudioStreamDeepCopy.append(audioBuffer);
dataSize = currentAudioStreamDeepCopy.size();
//#if DEBUG
qDebug() << "Inserting audio buffer, new size is: " << dataSize;
//#endif
writeToSharedAudioBufferSizeMemory( dataSize ); // Just the size of what we received
// Write into the shared memory
m_SharedAudioBuffer.lock();
// Clear the buffer and define the copy locations
memset(m_SharedAudioBuffer.data(), '\0', m_SharedAudioBuffer.size());
char *to = (char*)m_SharedAudioBuffer.data();
char *from = (char*)audioBuffer.data();
// Now perform the actual copy operation to store the buffer
memcpy( to, from, dataSize );
// Release the lock
m_SharedAudioBuffer.unlock();
EDIT: Perhaps this is due to my target embedded device, which is very small. The available RAM is large when I am trying to write to shared memory, but I notice the following entries in the /tmp directory (which is only given 4 MB). The space in /tmp is nowhere near consumed, though, so I'm not sure why I couldn't allocate more memory; also, the QSharedMemory::create method never fails for my maximum size of 960000:
# cd /tmp/
# ls
QtSettings
lib
qipc_sharedmemory_AudioBufferData2a7d5f1a29e3d27dac65b4f350d76a0dfd442222
qipc_sharedmemory_AudioBufferSizeData6b7acc119f94322a6794cbca37ed63df07b733ab
qipc_systemsem_AudioBufferData2a7d5f1a29e3d27dac65b4f350d76a0dfd442222
qipc_systemsem_AudioBufferSizeData6b7acc119f94322a6794cbca37ed63df07b733ab
qtembedded-0
run
The problem seemed to be that I was using QByteArray::fromRawData on the pointer returned by the shared memory segment. When I copied that data explicitly using memcpy on this pointer, and constructed my QByteArray from the copied data, the seg faults stopped.
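
A hedged sketch of that fix, reusing the helper names from the question (readFromSharedAudioBufferSizeMemory and readFromSharedAudioBufferMemory): QByteArray::fromRawData does not copy, so the array keeps pointing into the segment that the later memset/memcpy overwrite; the deep-copying QByteArray(const char *, int) constructor avoids that:

// Before (dangling view into shared memory):
//   QByteArray currentAudioStream =
//       QByteArray::fromRawData(bufferAudioData, bufferAudioDataSizeBytes);
int bufferAudioDataSizeBytes = readFromSharedAudioBufferSizeMemory();
char *bufferAudioData = readFromSharedAudioBufferMemory();
// After: copy the bytes out of the segment before appending, so later
// writes to the shared memory cannot invalidate the QByteArray's storage.
QByteArray currentAudioStream(bufferAudioData, bufferAudioDataSizeBytes);
currentAudioStream.append(audioBuffer);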

ActionScript 3.0 integer overflow?

I have an old AIR file that works fine. I tried to recompile it, but the resulting AIR file is buggy.
After digging in the code, I found that in some places strings are parsed to ints, and that the resulting int does not correspond to the string.
So I made a simple ActionScript file and executed the code:
var test:int = parseInt("3710835714");
and the variable will have the value
-584131582
So this looks like an overflow. But I'm surprised that the AIR file I have (which I didn't compile myself) runs just fine. So I wonder: does the internal representation of int somehow depend on which version of the Flex or AIR SDK libraries one uses for compiling?
EDIT: it seems it boils down to this test:
var obj:Object = new Object();
obj.val="3710835714";
var test1:Boolean = (obj.val==-584131582);
var test2:Boolean = (int(obj.val)==-584131582);
This evaluates for me to
test1=false;
test2=true;
However, the old AIR file seems to evaluate both cases to true. How can that be?
This is happening because the given number exceeds ActionScript's int limit.
The int data type is stored internally as a 32-bit integer and comprises the set of integers from -2,147,483,648 (-2^31) to 2,147,483,647 (2^31 - 1),
and the number 3,710,835,714 exceeds that maximum by 1,563,352,067.
Your parse result shows the value wrapping around the 32-bit range (uint's maximum is 4,294,967,295, i.e. 2^32 - 1):
-584,131,582 = 3,710,835,714 - 4,294,967,296 (2^32)
You should use uint or Number for big whole numbers/integers.
This blog post may help you:
ActionScript 3 Number data type problem with long integer values
Hope that helps.
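
The wraparound itself is just 32-bit two's-complement truncation, which can be reproduced outside ActionScript. A small C sketch of the same arithmetic (the values come from the question):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t parsed = 3710835714ULL;             /* the value in the string */
    int32_t wrapped = (int32_t)(uint32_t)parsed; /* reduce modulo 2^32, reinterpret as signed */
    printf("%d\n", wrapped);                     /* prints -584131582 */
    return 0;
}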
