Using cell caching after creating the .xlsx file - PHPExcel

I have one .xlsx file with 20 sheets; the file is approximately 500 KB.
While creating the .xlsx file I did not use any caching method, so my worksheets were created using 'cache_in_memory'.
I am now running out of memory (my server has approximately 500 MB of RAM).
Can I cache the worksheets' cells to disk when memory is not available?
I read in the documentation that you can't change the caching method after the worksheet has been created.
Please help me: I want to use the disk when memory is not available to the PHP script. Is that possible?

Caching isn't a feature of the Excel workbook itself, but of PHPExcel. Just because you created a workbook once without cell caching doesn't mean you can't enable it when you read that workbook again.
You need to enable cell caching before either loading the workbook or instantiating a new workbook within your script.
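A minimal sketch of that setup, assuming the classic PHPExcel 1.8 API; the include path and file name are placeholders:

require_once 'Classes/PHPExcel.php';

// Select a disk-backed cache BEFORE loading or instantiating any workbook.
$cacheMethod   = PHPExcel_CachedObjectStorageFactory::cache_to_discISAM;
$cacheSettings = array('dir' => '/tmp');   // directory that will hold the cache files
if (!PHPExcel_Settings::setCacheStorageMethod($cacheMethod, $cacheSettings)) {
    die('Requested cell caching method is not available' . PHP_EOL);
}

// Only now load the existing 20-sheet workbook; its cells are cached to disk, not RAM.
$reader   = PHPExcel_IOFactory::createReader('Excel2007');
$workbook = $reader->load('myworkbook.xlsx');

cache_to_phpTemp (with a setting such as array('memoryCacheSize' => '8MB')) is a common middle ground: small workbooks stay in memory and only spill to php://temp on disk once the threshold is exceeded.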

Related

Why is read_excel very slow when the Excel file being read from R is also open in Excel?

The environment is:
R: 3.6.1
readxl version: ‘1.3.1’
When I close the Excel program, read_excel takes a second or two, but when I have the file open in Excel, read_excel in R can take a few minutes.
I wonder why that is?
Some programs, like Excel, put access restrictions on files while the files are open. This prevents accidental conflicts from external changes to the same file while it is open.
I don't know specifically why it would slow down other tools reading the file, or why the effect manifests as slower speed rather than a complete inability to read. Maybe Excel is monitoring access to the file and comparing it to the content it has loaded.

Julia: Loading an rds file using RData.jl takes up a huge amount of memory

I'm loading an R rds file into Julia with
using RData
objs = load(rds, convert=true)
The original rds file is ~3 GB. When I run the load call above, memory usage spikes to ~40 GB.
Any ideas what's going on?
The rds files are actually compressed using gzip. Try unzipping your file and see how big it actually is (on Windows you could use 7-Zip for that). The compression ratio for a data frame can easily be around 80-90%, so your numbers look fine.

ASP.NET MVC 4 Get file input stream before completely filling the server's memory

I am having a hard time figuring out how to get the file InputStream from a file upload POST request on the server before it gets completely loaded into memory.
This is not a problem for smaller files, but I am worried about what happens when someone tries to upload a larger file (1 GB or more). I found a possible solution using HttpContext.Request.GetBufferlessInputStream(true), but this stream contains the whole request, not just the uploaded file, so if I use it to upload a file to, for example, Azure Blob Storage or anywhere else, I end up with a corrupted file. I also lose all the information about the file (file name, size, etc.).
Is there any convenient way of uploading a large file to the server without filling its memory? I would like to get the stream and then use it to upload the file anywhere in chunks.
Thank you.
I used the DevExpress UploadControl for a similar task. It supports large file uploads in chunks. A temporary file is saved on the server's hard drive, and you can read it through a FileStream without loading it fully into server memory. It also supports direct upload to Azure, Amazon and Dropbox.
The same is true for their MVC Upload control.

OutOfMemory issue while creating XSSFWorkbook instance to read XLSX file

As per the business functionality, we need to read multiple Excel files (in both .xls and .xlsx formats) from different locations in a multi-threaded environment. Each thread is responsible for reading one file. To test performance, we created two file sets in both .xls and .xlsx formats. One file set has just 20 rows of data, while the other contains 300,000 rows. We are able to successfully read both files in .xls format and load the data into the table. Even for the 20-row .xlsx file, our source code works fine.
But when the execution flow starts reading the large .xlsx file, the application server is terminated abruptly. While tracing down the issue, I ran into a strange problem while creating the XSSFWorkbook instance. Refer to the code snippet below:
OPCPackage opcPackage = OPCPackage.open(FILE);
System.out.println("Created OPCPackage instance.");
XSSFWorkbook workbook = new XSSFWorkbook(opcPackage);
System.out.println("Created XSSFWorkbook instance.");
SXSSFWorkbook sxssfWorkbook = new SXSSFWorkbook(workbook, 1000);
System.out.println("Created SXSSFWorkbook instance.");[/code]
Output
Process XLSX file EXCEL_300K.xlsx start.
Process XLSX file EXCEL.xlsx start.
Created OPCPackage instance.
Created OPCPackage instance.
Created XSSFWorkbook instance.
Created SXSSFWorkbook instance.
Process XLSX file EXCEL.xlsx end.
For the larger file set, execution hangs at
XSSFWorkbook workbook = new XSSFWorkbook(opcPackage);
causing a heap space issue. Please help me fix this issue.
Thanks in advance.
Thanks,
Sankar.
After trying a lot of solutions, I found out that processing XLSX files requires a huge amount of memory. But using the POI 3.12 library has multiple advantages:
It processes Excel files faster.
It has more APIs for handling Excel files, such as closing a workbook, opening an Excel file from a File instance, etc.
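For reading, the usual low-memory route in POI is the XSSF event (SAX) API rather than building an XSSFWorkbook at all: new XSSFWorkbook(opcPackage) parses the whole workbook into memory, and wrapping it in SXSSFWorkbook only helps with writing, not reading. A hedged sketch of that streaming read follows, with a placeholder file name and empty handler bodies; the exact SheetContentsHandler signatures vary slightly between POI versions:

import java.io.File;
import java.io.InputStream;
import java.util.Iterator;
import javax.xml.parsers.SAXParserFactory;
import org.apache.poi.openxml4j.opc.OPCPackage;
import org.apache.poi.openxml4j.opc.PackageAccess;
import org.apache.poi.xssf.eventusermodel.ReadOnlySharedStringsTable;
import org.apache.poi.xssf.eventusermodel.XSSFReader;
import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler;
import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler.SheetContentsHandler;
import org.apache.poi.xssf.model.StylesTable;
import org.apache.poi.xssf.usermodel.XSSFComment;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;

public class StreamingXlsxRead {
    public static void main(String[] args) throws Exception {
        // Open the package read-only; nothing is parsed into an in-memory DOM here.
        OPCPackage pkg = OPCPackage.open(new File("EXCEL_300K.xlsx"), PackageAccess.READ);
        try {
            ReadOnlySharedStringsTable strings = new ReadOnlySharedStringsTable(pkg);
            XSSFReader xssfReader = new XSSFReader(pkg);
            StylesTable styles = xssfReader.getStylesTable();

            SAXParserFactory saxFactory = SAXParserFactory.newInstance();
            saxFactory.setNamespaceAware(true);

            Iterator<InputStream> sheets = xssfReader.getSheetsData();
            while (sheets.hasNext()) {
                try (InputStream sheetStream = sheets.next()) {
                    XMLReader parser = saxFactory.newSAXParser().getXMLReader();
                    // Cells are delivered one at a time; only the current row needs to be kept.
                    parser.setContentHandler(new XSSFSheetXMLHandler(styles, strings,
                            new SheetContentsHandler() {
                                public void startRow(int rowNum) { }
                                public void endRow(int rowNum) { /* e.g. flush this row to the table */ }
                                public void cell(String cellRef, String value, XSSFComment comment) {
                                    // collect the cell value for the current row
                                }
                                public void headerFooter(String text, boolean isHeader, String tagName) { }
                            }, false));
                    parser.parse(new InputSource(sheetStream));
                }
            }
        } finally {
            pkg.revert();   // release the read-only package without attempting to save it
        }
    }
}

With this approach the heap stays roughly constant regardless of row count, because only one row's worth of cell values is held at a time.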

File uploading in ASP.NET

I'm wondering in what way HttpContext.Request keeps uploaded files in memory.
Does it hold them in RAM only, or does it write them to some temp directory on the HDD?
How can I control this process?
It writes the file to a temp directory. You see almost no increase in memory use when uploading a file. When using this control, you can't choose any other way to do it.
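For the "how can I control this process" part, and assuming classic ASP.NET (System.Web): request buffering can be tuned in web.config. requestLengthDiskThreshold (in KB) is the size above which the buffered request input is kept in a temporary file instead of RAM, and maxRequestLength caps the total upload size. The values below are illustrative only:

<system.web>
  <!-- allow uploads up to 1 GB; buffer anything larger than 8 MB to disk instead of RAM -->
  <httpRuntime maxRequestLength="1048576" requestLengthDiskThreshold="8192" />
</system.web>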

Resources