I've created a custom pipeline component that decodes Excel files to XML files. These Excel files contain a huge number of records, and each file also has a header row. I'm using OpenXML to process the Excel files into XML.
I need to optimize the memory consumption.
VirtualStream Memory = new VirtualStream();
byte[] buffer;
buffer = System.Text.Encoding.UTF8.GetBytes("<ns0:Catalogue xmlns:ns0=\"" + NameSpace + "\">\r\n");
Memory.Write(buffer, 0, buffer.Length);
----
buffer = System.Text.Encoding.UTF8.GetBytes("</ns0:Catalogue>\r\n");
Memory.Write(buffer, 0, buffer.Length);
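One way to keep memory flat is to read the worksheet with the OpenXML SDK's streaming OpenXmlReader and write each record to the VirtualStream as it is produced, instead of loading the whole sheet as a DOM. Below is a minimal sketch of that idea; the element names (ns0:Record, ns0:Field) and the WriteRecords helper are made up for illustration and are not the pipeline's real schema:

using System.IO;
using System.Linq;
using System.Text;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

static class ExcelStreamingSketch
{
    // Streams worksheet rows one at a time and writes each record straight to the
    // output stream, so only the current row is held in memory.
    public static void WriteRecords(Stream excelStream, Stream output, string ns)
    {
        byte[] open = Encoding.UTF8.GetBytes("<ns0:Catalogue xmlns:ns0=\"" + ns + "\">\r\n");
        output.Write(open, 0, open.Length);

        using (SpreadsheetDocument doc = SpreadsheetDocument.Open(excelStream, false))
        {
            // Simplified: takes the first worksheet part.
            WorksheetPart sheetPart = doc.WorkbookPart.WorksheetParts.First();
            using (OpenXmlReader reader = OpenXmlReader.Create(sheetPart))
            {
                while (reader.Read())
                {
                    if (reader.ElementType != typeof(Row) || !reader.IsStartElement) continue;

                    Row row = (Row)reader.LoadCurrentElement();
                    StringBuilder record = new StringBuilder("<ns0:Record>");
                    foreach (Cell cell in row.Elements<Cell>())
                    {
                        // Real code would XML-escape the value and resolve shared-string cells;
                        // InnerText is only the raw stored value.
                        record.Append("<ns0:Field>").Append(cell.InnerText).Append("</ns0:Field>");
                    }
                    record.Append("</ns0:Record>\r\n");

                    byte[] buffer = Encoding.UTF8.GetBytes(record.ToString());
                    output.Write(buffer, 0, buffer.Length);
                }
            }
        }

        byte[] close = Encoding.UTF8.GetBytes("</ns0:Catalogue>\r\n");
        output.Write(close, 0, close.Length);
    }
}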
I am trying to extract a .DAT file using the QuaZip library in Qt 5.14. I integrated the QuaZip library into my project and tried to extract files with it. A .txt file gets extracted, but a .DAT file does not: the .DAT files are created, but they don't contain any data.
Here is my source code:
bool fileHelper::extractAll( QString folderPath, QString filePath ) {
    QuaZip zip(filePath);
    zip.open(QuaZip::mdUnzip);
    bool isSuccess = false;
    for (bool f = zip.goToFirstFile(); f; f = zip.goToNextFile())
    {
        // set source file in archive
        QString filePath = zip.getCurrentFileName();
        QuaZipFile zFile( zip.getZipName(), filePath );
        // open the source file
        zFile.open( QIODevice::ReadOnly );
        // create a byte array and write the file data into it
        //QByteArray ba = zFile.read()
        QByteArray ba = zFile.readAll();
        // close the source file
        zFile.close();
        // set destination file
        //QFile dstFile( getfileStoreRootDir()+filePath );
        QFile dstFile( folderPath+filePath );
        qDebug() << "dstFile :" << dstFile.fileName();
        // open the destination file
        dstFile.open( QIODevice::WriteOnly | QIODevice::Text );
        // write the data from the byte array into the destination file
        dstFile.write( ba.data() );
        // close the destination file
        dstFile.close();
        // mark extraction success
        isSuccess = true;
    }
    zip.close();
    return isSuccess;
}
Please tell me: am I doing something wrong, or is some extra flag or something else required for this?
I am trying to upload a file and store its contents in a byte array sized from the file's size, which is a long. Below is the code snippet:
long fileSize = uploadedFile.getSize();
byte techGuide[] = new byte[fileSize];
I got the build error:
error: incompatible types: possible lossy conversion from long to int
Please suggest what I am missing and what I should try.
Path path = uploadedFile.toPath(); // File.toPath()
Repair of your code (not needed if you use readAllBytes below):
long fileSize = Files.size(path);
if (fileSize > Integer.MAX_VALUE) {
    throw new IllegalArgumentException("File too large");
}
byte[] techGuide = new byte[(int) fileSize];
New code:
byte[] techGuide = Files.readAllBytes(path);
Arrays are limited by their int index, so you would need to cast fileSize to an int (and check for overflow). However, Files.readAllBytes does that for you, throwing an OutOfMemoryError if the file is larger than Integer.MAX_VALUE - 8 bytes.
I'm trying to read an OpenCL kernel from a file "kernel.cl", but the source I read in ends up with unknown symbols at the end once I have read it. The number of unknown symbols is the same as the number of lines in the kernel file.
The code I am using to get the kernel:
FILE *fp;
char *source_str;
size_t source_size, program_size;
fp = fopen("kernel.cl", "r");
if (!fp) {
    printf("Failed to load kernel\n");
    return 1;
}
fseek(fp, 0, SEEK_END);
program_size = ftell(fp);
rewind(fp);
source_str = (char*)malloc(program_size + 1);
source_str[program_size] = '\0';
fread(source_str, sizeof(char), program_size, fp);
fclose(fp);
This code works on another project, so I don't know what's wrong. Also it seems to work if all the code in the kernel is on one line.
Any help would be appreciated, thanks! :)
The MSDN page for fopen() mentions that when files are opened with "r" as the mode string, some translations will happen with regards to line-endings. This means that the size of the file you query may not match the amount of data read by fread(). This explains why the number of invalid characters was equal to the number of lines in the file (and why it worked with all the code on one line).
The solution is to open the file with the "rb" flag:
fp = fopen("kernel.cl", "rb");
If using C++ is an option, take a look at the program::create_with_source_file() method provided by the Boost.Compute library. It simplifies the process of opening a file, reading the contents, and creating the OpenCL program object with the source code.
For example, you could simply do:
boost::compute::program my_program =
boost::compute::program::create_with_source_file("kernel.cl");
Using .NET 4.0 and IIS 7.5 (Windows Server 2008 R2), I would like to stream out binary content of about 10 MB. The content is already in a MemoryStream. I wonder if IIS7 automatically chunks the output stream. From the point of view of the client receiving the stream, is there any difference between these two approaches:
//#1: Output the entire stream in one single chunk
Response.OutputStream.Write(memoryStr.ToArray(), 0, (int) memoryStr.Length);
Response.Flush();
//#2: Output in 4K chunks
byte[] buffer = new byte[4096];
int byteReadCount;
while ((byteReadCount = memoryStr.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, byteReadCount);
    Response.Flush();
}
Thanks in advance for any help.
I didn't try your second suggestion of passing the original data stream. The memory stream was indeed populated from the response stream of a web request. Here is the code:
HttpWebRequest webreq = (HttpWebRequest) WebRequest.Create(this._targetUri);
using (HttpWebResponse httpResponse = (HttpWebResponse) webreq.GetResponse())
{
    using (Stream responseStream = httpResponse.GetResponseStream())
    {
        byte[] buffer = new byte[4096];
        int byteReadCount = 0;
        MemoryStream memoryStr = new MemoryStream(4096);
        while ((byteReadCount = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            memoryStr.Write(buffer, 0, byteReadCount);
        }
        // ... etc ... //
    }
}
Do you think the responseStream can safely be passed on to Response.OutputStream.Write()? If yes, can you suggest an economical way of doing so? How do I send the byte array plus the exact stream length to Response.OutputStream.Write()?
The second option is the best one as ToArray will in fact create a copy of the internal array stored in the MemoryStream.
But you can preferably use memoryStr.GetBuffer(), which returns a reference to this internal array. In this case, you need to use the memoryStr.Length property, because the buffer returned by GetBuffer() is in general bigger than the actual stored data (it's allocated chunk by chunk, not byte by byte).
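For instance, keeping the names from the snippets above, that looks like:

// GetBuffer() avoids the copy made by ToArray(); Length bounds the write because
// the internal buffer is usually larger than the data actually stored.
Response.OutputStream.Write(memoryStr.GetBuffer(), 0, (int)memoryStr.Length);
Response.Flush();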
Note that it would be best to pass the original data as a stream directly to the ASP.NET output stream, instead of going through an intermediary MemoryStream. It depends on how you get your binary data in the first place.
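For example, with the web-request code you posted, a rough sketch that skips the MemoryStream entirely (Stream.CopyTo is available in .NET 4.0):

HttpWebRequest webreq = (HttpWebRequest) WebRequest.Create(this._targetUri);
using (HttpWebResponse httpResponse = (HttpWebResponse) webreq.GetResponse())
using (Stream responseStream = httpResponse.GetResponseStream())
{
    // Copies from the source stream to the ASP.NET output stream in small blocks,
    // so the 10 MB payload is never held in memory all at once.
    responseStream.CopyTo(Response.OutputStream, 4096);
    Response.Flush();
}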
Another option, if you serve the exact same content often, is to save this MemoryStream to a physical file (using a FileStream) and use Response.TransmitFile on all subsequent requests. Response.TransmitFile uses low-level Windows socket layers, and there's nothing faster to send a file.
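A minimal sketch of that caching idea, assuming a cachePath variable (made up here) that points to where the file is persisted:

if (!File.Exists(cachePath))
{
    using (FileStream fs = new FileStream(cachePath, FileMode.Create, FileAccess.Write))
    {
        // Persist the already-built content once.
        memoryStr.WriteTo(fs);
    }
}
// Later requests hand the file straight to IIS; TransmitFile does not buffer it in managed memory.
Response.TransmitFile(cachePath);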
I am trying to read a very large file in AS3 and am having problems with the runtime just crashing on me. I'm currently using a FileStream to open the file asynchronously. This does not work (it crashes without an exception) for files bigger than about 300 MB.
_fileStream = new FileStream();
_fileStream.addEventListener(IOErrorEvent.IO_ERROR, loadError);
_fileStream.addEventListener(Event.COMPLETE, loadComplete);
_fileStream.openAsync(myFile, FileMode.READ);
In looking at the documentation, it sounds like the FileStream class still tries to read the entire file into memory (which is bad for large files).
Is there a more suitable class to use for reading large files? I really would like something like a buffered FileStream class that only loads the bytes from the file that are going to be read next.
I'm expecting that I may need to write a class that does this for me, but then I would need to read only a piece of a file at a time. I'm assuming that I can do this by setting the position and readAhead properties of the FileStream to read a chunk out of a file at a time. I would love to save some time if there is a class like this that already exists.
Is there a good way to process large files in AS3, without loading entire contents into memory?
You can use the fileStream.readAhead and fileStream.position properties to set how much of the file data you want read, and where in the file you want it to be read from.
Let's say you only want to read megabyte 152 of a gigabyte file. Do this:
(A gigabyte file consists of 1073741824 bytes)
(Megabyte 152 starts at 158334976 bytes)
var _fileStream = new FileStream();
_fileStream.addEventListener(Event.COMPLETE, loadComplete);
_fileStream.addEventListener(ProgressEvent.PROGRESS, onBytesRead);
_fileStream.readAhead = (1024 * 1024); // Read only 1 megabyte
_fileStream.openAsync(myFile, FileMode.READ);
_fileStream.position = 158334976; // Read at this position in file
var megabyte152:ByteArray = new ByteArray();
function onBytesRead(e:ProgressEvent)
{
    e.currentTarget.readBytes(megabyte152);
    if (megabyte152.length == (1024 * 1024))
    {
        chunkReady();
    }
}
function chunkReady()
{
    // 1 megabyte has been read successfully \\
    // No more data from the hard drive file will be read unless _fileStream.position changes \\
}
Can't you create a stream, and read a chunk of bytes at a given offset, a chunk at a time... so:
function readPortionOfFile(starting:int, size:int):ByteArray
{
    var bytes:ByteArray = new ByteArray();
    var fileStream:FileStream = new FileStream();
    fileStream.open(myFile, FileMode.READ);
    fileStream.position = starting;       // seek to the requested offset in the file
    fileStream.readBytes(bytes, 0, size); // the 2nd argument is an offset into the ByteArray, not the file
    fileStream.close();
    return bytes;
}
and then repeat as required. I haven't tested it, but I was under the impression that this works.