Why are the SRT file size and the program output file (.prn) size different? - PROGRESS 4GL - OpenEdge

I have written a program which caused the SRT file in the tmp folder to keep growing, and at a certain point I got SYSTEM ERROR: I/O due to running out of disk space (the SRT file was deleted automatically). So I asked a team to increase the disk space from 14 GB to 19 GB in order to fix the system error.
After that I executed the same program in my home path; the output file was generated (its size is only 5 GB) and then the connection aborted. I am not sure how. Could somebody explain to me what actually happens while writing to the SRT file and to the output file, and why the file sizes were so different?

The SRT file is used internally by Progress as a scratch area when it needs to perform client-side activities such as accumulating or sorting a result set.
The amount of space needed varies with the size of the result set and can be heavily influenced by the efficiency of your query. Bad WHERE clauses, the use of functions such as CAN-DO() that must be resolved on the client, and scrolling dynamic queries all drive large SRT files.
Much of this is invisible and fairly arcane. You cannot easily predict the size of the SRT file for a given query just by looking at it.
Your application's output file size depends on what your code is doing. That output may be the result of many queries executed over a long period of time. It does not necessarily correspond to the size of the SRT file in any predictable way.

Related

Why are uncompressed binary objects in R acting compressed and slow to read?

I need to frequently read large objects, so I want the highest possible read rate. I save the objects with save(..., compress = F). A few things have me wondering whether R is really saving the objects as binary, uncompressed files:
The bottleneck for both read and write rates is the CPU, running at 100%, not my disk's maximum rates.
The objects appear to be smaller on disk than in memory (e.g. 944,696,761 B on disk, 973,970,512 B in R according to object.size()).
Uncompressed files read at 75-85 MB/s, compared to ~30 MB/s for reading compressed files; this rate is the same from my external hard drive and my SSD, well under the capacity for each.
So my question is: does this make sense? Shouldn't the read/write run at the disk's limit? Is there another option that will?
Details:
The compress = F argument does, as expected, increase the file size on disk (about threefold), and it definitely improves both write and read speed (20-fold and 2.5-fold, respectively); yet the CPU remains the bottleneck, grinding away at 100% for the duration of the read or write. This feels to me like light compression is still being applied.
For the files saved with compress = F, Nautilus's properties window reports the files' type to be "Binary (application/octet-stream)".
The read takes more than twice as long as the write (e.g. 50.5 s vs. 22 s for 4.3 GB). I don't know why that would be.
I have a new SSD (sequential non-cached read speed (hdparm -t) of at least 265 MB/s).

Is there a way to treat a cl_image as a cl_mem?

I've been working on speeding up some image processing code written in OpenCL, and I've found that for my kernel, buffers (cl_mem) are significantly faster than images (cl_image).
So I want to process my images as cl_mem, but unfortunately I'm stuck with an API that only spits out cl_images. I'm using an OS X API, clCreateImageFromIOSurface2DAPPLE, that creates an image for me.
Is there any way to take a cl_image and treat it as a cl_mem? When I've tried to do that I get an error when running my kernel.
I've tried copying the image to a buffer using clEnqueueCopyImageToBuffer but that's also too slow. Any ideas? Thanks in advance
PS: I believe my kernel operating on a buffer is much faster because I can do a vload4 and load 4 pixels at a time, vs read_imagei which does just one.
You cannot treat an OpenCL image as memory. The memory layout of an image is private to the implementation and should be considered unknown.
If your code creates the image, however, you could create a buffer and then use cl_khr_image2d_from_buffer. Otherwise write a kernel that copies the data from image to buffer and see if it is faster than clEnqueueCopyImageToBuffer (unlikely).
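As a rough sketch of the first suggestion, here is host-side C for creating a buffer and then an image that aliases it via the cl_khr_image2d_from_buffer extension (OpenCL 1.2). The variable names (ctx, width, height, row_pitch) and the RGBA 8-bit format are placeholders for illustration, and error checking is omitted:

    cl_int err;
    /* Create the buffer first; it will back the image's pixels. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                row_pitch * height, NULL, &err);

    cl_image_format fmt  = { CL_RGBA, CL_UNSIGNED_INT8 };
    cl_image_desc   desc = { 0 };
    desc.image_type      = CL_MEM_OBJECT_IMAGE2D;
    desc.image_width     = width;
    desc.image_height    = height;
    desc.image_row_pitch = row_pitch; /* must meet the device's pitch alignment */
    desc.buffer          = buf;       /* ties the image to the existing buffer  */

    /* Kernels can now read 'img' as an image while 'buf' sees the same bytes. */
    cl_mem img = clCreateImage(ctx, CL_MEM_READ_WRITE, &fmt, &desc, NULL, &err);

And here is a minimal copy kernel for the second suggestion, assuming a CL_RGBA / CL_UNSIGNED_INT8 image and one work-item per pixel (the kernel name and argument layout are illustrative, not from the question):

    /* Copy an image2d_t into a plain uchar4 buffer, one work-item per pixel. */
    __kernel void image_to_buffer(__read_only image2d_t src,
                                  __global uchar4 *dst,
                                  const int width)
    {
        const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                              CLK_ADDRESS_CLAMP_TO_EDGE |
                              CLK_FILTER_NEAREST;

        int x = get_global_id(0);
        int y = get_global_id(1);

        /* read_imageui returns a uint4 for unsigned-integer channel types. */
        uint4 px = read_imageui(src, smp, (int2)(x, y));
        dst[y * width + x] = convert_uchar4(px);
    }

Enqueue the kernel with a 2D global size of (width, height); the resulting buffer can then be fed to the existing vload4-based kernel. Whether this actually beats clEnqueueCopyImageToBuffer is driver-dependent, as noted above.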

Is there a way to check how much memory an R statement is going to allocate?

I am tuning a data import script, and occasionally I find an approach puts too much into memory in one call (usually this is because I am writing inefficient code). The "failed to allocate" message is only sort of useful in that it tells you how much memory was needed (without an informative traceback) and only if the allocation fails. Regular profiling requires that enough memory be available for allocation (and contiguously placed), which changes depending on the circumstances under which the code is run, and is very slow.
Is there a function that simulates a call to see how much memory will be used, or that otherwise efficiently profiles how much memory a line of R will need whether it succeeds or fails? Something that could wrap an existing line of code in a script, like system.time(), would be ideal.
Edit: lsos() does not work for this because it only describes what is stored after a command is run. (see: Reserved memory of R is twice the size of an allocated array)

Are there differences in browser loading speeds for different image extensions (jpg, png)?

I have an image that I want to convert to png / jpg.
The file size difference between the two types is very small (Photoshop Save for Web & Devices, optimized):
155 bytes for image.png
357 bytes for image.jpg
I don't care too much about the file type either, because there is no opacity.
Is there any difference in loading speeds between image types for browsers?
Say, for example, maybe Firefox handles .png files faster than .jpg, so it loads faster (just a thought).
thanks
The loading speed of an image depends on the size of the file and the connection speed of the user. When a site is loaded, the browser downloads a temporary copy of the page, including its images. You may not be able to tell the difference, but the JPG, at 357 bytes, will take roughly twice as long to transfer. These files are so small that it really should not make a difference in the loading speed unless you had, say, 100 unique images on a single page.
You would need to test a very large sample to find a difference between rendering the two different file types. Even then, the small difference in file size would probably have a greater effect than the file type itself.
The only way to know for sure would be to try it and measure it - then repeat a load of times to get a good sample of data.
If you are really looking to ultra-optimise, reducing the number of requests would give you more benefit than either the file size (based on the numbers you supplied) or the file type - so you could opt for image-sprites to reduce n images to 1 image.

Give CPU more power to plot in Octave

I made a function in Octave which plots fractals. It takes a long time to plot all the points I've calculated. I've made my function as efficient as possible; the only way I think I can make it plot faster is by having my CPU focus completely on the function, or somehow telling it that it should prioritise my plot.
Is there a way I can do this or is this really the limit?
To determine how much CPU is being consumed by your plot, run your plot and, in a separate window (assuming you're on Linux/Unix), run the top command. (For Windows, launch Task Manager, switch to the 'Processes' tab, and click on the CPU column header to sort by CPU.)
(The rollover description for Octave on the tag on your question says that Octave is a scripting language. I would expect it's calling gnuplot to create the plots. Look for this as the highest CPU consumer).
You should see that your Octave/gnuplot cmd is near the top of the list, and for top there is a column labeled %CPU (or similar). This will show you how much CPU that process is consuming.
I would expect to see that process consuming 95% or more of the CPU. If the number is significantly lower, then you need to check the processes below it: are they consuming the remaining CPU (some sort of virus scan (on a PC), or a DB or server)? If a competing program is the problem, then you'll have to decide whether you can wait until it is finished, or whether you can kill it and restart it later. (For Linux, use kill -15 pid first, and only use kill -9 pid as a last resort. Search here for articles on the correct order of signals to try.)
If there are no competing processes AND Octave/gnuplot is using less than 95%, then you'll have to find other tools to see what is holding up the process. (This is unlikely, but it's possible some part of your overall plotting process is either disk I/O or network I/O bound.)
So, it depends on the timescale you're currently experiencing versus the time you "want" to experience.
Does your system have multiple CPUs? Then you'll need to study the octave/gnuplot documentation to see if it supports a switch to indicate "use $n available CPUs for processing". (Or find a plotting program that does support using $n multiple CPUs).
Realistically, if your process now takes 10 minutes and, by eliminating competing processes, you can go from 60% to 90% CPU, that is a 50% increase in CPU, but the run time only scales by 60/90, so it drops to roughly 6-7 minutes rather than half. Being able to divide the task over 5-10-?? CPUs will be the most certain path to faster turn-around times.
So, to go further with this, you'll need to edit your question with some data points. How long is your plot taking? How big is the file it's processing? Is there something especially math-intensive about the plotting you're doing? Could a pre-processed data file speed up the calculations? Also, if the results of top don't show gnuplot running at 99% CPU, then edit your posting to show the top output; that will help us understand your problem. (Paste in your top output, select it with your mouse, and then use the formatting tool {} at the top of the input box to keep the formatting and avoid having the output wrap in your posting.)
IHTH.
P.S. Note the number of followers for each of the tags you've assigned to your question by rolling over them. You might get more useful "eyes" on your question by including a tag for the OS you're using and a tag related to performance measurement/testing. (Go to the tags tab and type in various terms to see how many followers they have. One bit of S.O. etiquette is to only specify one programming language (if appropriate), and that may apply to OSes too.)
