How does RStudio determine the console width, and why does it seem to be getting it consistently wrong?

I just discovered wid <- options()$width in RStudio, and it seems to be the source (or rather, much closer to the source) of much irritation in my everyday console usage. I should say up front that I'm currently on R 3.2.2, RStudio 0.99.491, on Linux Mint 17.3 (built over Ubuntu 14.04.3 LTS).
As I understand it, wid should be measured in characters -- if wid is equal to 52, say, then one should be able to fit the alphabet on the screen twice (given the fixed-width default font), but this doesn't appear to be the case:
As you can see, despite having wid equal to 52, I am unable to fit the alphabet twice -- I come up 6 characters short. I also note that this means it is not simply due to the presence of the command prompt arrow and space (> ).
The problem seems proportional -- if I have wid up to 78, I can only fit 70 characters; up to 104, only 93. So the usable width is consistently only about 88-90% of wid (side note: this also suggests my assumption that wid is measured in characters is probably right).
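For anyone who wants to reproduce the test without the screenshots, here is a minimal sketch (assuming the 52-character example above; the exact numbers will differ per machine):
getOption("width")                            # what R believes the width is, e.g. 52
test <- paste(rep(letters, 2), collapse = "") # exactly 52 fixed-width characters
cat(test, "\n")                               # any wrapping you see is done by the console pane itself,
                                              # so if the line breaks early the pane is really narrower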
The problem that this engenders is that oftentimes console output overflows beyond its intended line, making the output ugly and hard to digest; take, for example, the simple snippet setDT(lapply(1:30, function(x) 1:3))[], which produces for me:
It seems clear to me that the output was attempted on a screen width which was not available in practice -- that internally, a larger screen width than actually exists was used for printing.
This leaves me with three questions:
How is options()$width determined?
Why is it so consistently wrong?
What can we do to override this error?
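(In the meantime, a blunt workaround -- not a fix -- is to override the value by hand; the sketch below assumes you have counted how many characters really fit on one line, 46 in my 52-wide example.)
options(width = 46)  # the width that actually fits, counted by hand
getOption("width")   # printed output (vectors, data.frames, data.tables)
                     # should now wrap at the real edge of the pane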

Found a post about this on RStudio support, and it seems the issue has to do with high-DPI displays; there is a claimed bug fix in RStudio version 0.99.878 (released just today! as luck would have it), according to the release notes:
Bug Fixes
...
Correct computation of getOption("width") on high DPI displays
Hope this helps anyone else experiencing this! I'm tempted to post about this on /r/oddlysatisfying B-)
Would love to see the relevant commit on the RStudio GitHub page if anyone can track it down (I had no luck).

Related

How to stop Atom from changing != into a different symbol

I have started using Atom to make clean screen shots of my code for presentation purposes, primarily because it has the indent guides for clear syntax visibility.
This is the first time I have used a program with a != operator, and Atom keeps changing the != into a weird-looking, crossed-out, elongated equals sign. I have looked through the preferences and packages, but I cannot find anything that might control symbols being altered.
Would anyone be able to point me to where this automatic alteration can be turned off?

Data corruption when uploading a texture (Direct3D11)

I have encountered the following issue in my Direct3D11-based application:
Sometimes (and on some machines) I get a texture which is seemingly corrupted; a couple of lines are just black. Something like this (showing one texture):
Sometimes it is as bad as this, sometimes it is only one or two lines. I made sure that it is not a rendering issue; all observations indicate that the texture-data is not correct.
So far this has been observed only under the following conditions:
we are creating an immutable texture (CPUAccessFlags=None, Usage=Default, BindFlags=ShaderResource)
using a laptop with "Intel HD Graphics 520" and "AMD Radeon R7 M360" with Windows 7 (with "AMD Enduro"-technology)
only with texture-formats "R16G16B16A16_UNorm", "R8G8B8A8_UNorm", "R32_Float", not with "R8_UNorm" and "R16_UNorm" (for the rest it is unknown to me whether they work or not)
and it seems to occur only intermittently (seems to be more likely if the GPU is quite busy)
I tried putting together a small sample which reproduces the issue: https://github.com/ptahmose/ImmutableTextureUploadTest
In this code, I upload an immutable texture, copy it to a staging texture, and then compare the staging data to what was uploaded.
Now, with this repro-project I get the following:
If this repro-application is running alone on the machine, all is fine.
If another application which uses D3D11 is running on the machine, the test application reports errors.
That includes running two instances of the test application itself (or running it alongside the actual D3D11-based application).
Did somebody run across something similar? Can this issue be reproduced? Am I missing something? ...and of course: what are my options to solve/work around this issue?
Thanks for reading!

julia-client: can't render lazy

Could somebody please explain to me what this message might mean?
I have the Julia client running in Atom, and my code works properly and gets me the results, but for some line executions (Ctrl+Enter) the instant eval gives me "julia-client: can't render lazy".
It appears that behind the scenes the code is executed, but the inline evaluation prefers not to output anything.
The lines corresponding to these messages should return a two-dimensional array or a dataframe; in Julia the type and the dimensions are usually printed in the inline eval, but for these specific lines it can't render.
I could not find similar reports anywhere else.
julia version 0.5.0-rc3
This is a problem with package versions being out of sync. If you're on the Julia release (v0.5), it will be fixed by a Pkg.update(). In the future, this kind of question is better suited for the Juno discussion board.

Give CPU more power to plot in Octave

I made this function in Octave which plots fractals. Now, it takes a long time to plot all the points I've calculated. I've made my function as efficient as possible; the only way I think I can make it plot faster is by having my CPU completely focus itself on the function, or by telling it somehow that it should focus on my plot.
Is there a way I can do this or is this really the limit?
To determine how much CPU is being consumed for your plot, run your plot, and in a separate window (assuming you're on Linux/Unix) run the top command. (For Windows, launch the Task Manager, switch to the 'Processes' tab, and click the CPU header to sort by CPU.)
(The rollover description for the Octave tag on your question says that Octave is a scripting language; I would expect it calls gnuplot to create the plots. Look for that as the highest CPU consumer.)
You should see that your Octave/gnuplot cmd is near the top of the list, and for top there is a column labeled %CPU (or similar). This will show you how much CPU that process is consuming.
I would expect to see that process consuming 95% or more CPU. If you see a significantly lower number, then check the processes below it: are they consuming the remaining CPU (some sort of virus scan on a PC, or a DB or server)? If a competing program is the problem, you'll have to decide whether you can wait until it is finished, or whether you can kill it and restart it later. (For Linux, use kill -15 pid or kill -11 pid; only use kill -9 pid as a last resort. Search here for articles on the correct order for trying kill -$n.)
If there are no competing processes AND octave/gnuplot is using less than 95%, then you'll have to find other tools to see what is holding up the process. (This is unlikely; it's possible some part of your overall plotting process is disk-I/O or network-I/O bound.)
So, it depends on the timescale you're currently experiencing versus the time you "want" to experience.
Does your system have multiple CPUs? Then you'll need to study the octave/gnuplot documentation to see if it supports a switch to indicate "use $n available CPUs for processing". (Or find a plotting program that does support using $n multiple CPUs).
Realistically, if your process now takes 10 minutes and, by eliminating competing processes, you can go from 60% to 90% CPU, that is a 50% increase in CPU, but it only cuts the run time to roughly 6-7 minutes (10 * 60/90). Being able to divide the task over 5, 10, or more CPUs will be the most certain path to faster turn-around times.
So, to go further with this, you'll need to edit your question with some data points. How long is your plot taking? How big is the file it's processing? Is there something especially math-intensive about the plotting you're doing? Could a pre-processed data file speed up the calculations? Also, if the results of top don't show gnuplot running at 99% CPU, then edit your posting to include the top output; that will help us understand your problem. (Paste in your top output, select it with your mouse, and then use the formatting tool {} at the top of the input box to keep the formatting and avoid having the output wrap in your posting.)
IHTH.
P.S. Note the number of followers for each of the tags you've assigned to your question by rolling over them. You might get more useful "eyes" on your question by including a tag for the OS you're using and a tag related to performance measurement/testing. (Go to the tags tab and type in various terms to see how many followers each has. One bit of S.O. etiquette is to specify only one programming language (if appropriate), and that may apply to OSes too.)

R error allocMatrix

Hi all,
I was trying to load a number of Affymetrix CEL files with the standard Bioconductor command (R 2.8.1 on 64-bit Linux, 72 GB of RAM):
abatch<-ReadAffy()
But I keep getting this message:
Error in read.affybatch(filenames = l$filenames, phenoData = l$phenoData, :
allocMatrix: too many elements specified
What's the general meaning of this allocMatrix error? Is there some way to increase its maximum size?
Thank you
The problem is that all the core functions use INTs instead of LONGs for generating R objects. For example, your error message comes from array.c in /src/main:
if ((double)nr * (double)nc > INT_MAX)
    error(_("too many elements specified"));
where nr and nc are integers generated before, standing for the number of rows and columns of your matrix:
nr = asInteger(snr);
nc = asInteger(snc);
So, to cut it short, everything in the source code should be changed to LONG, possibly not only in array.c but in most core functions, and that would require some rewriting. Sorry for not being more helpful, but I guess this is the only solution. Alternatively, you may wait for R 3.x next year, and hopefully they will implement this...
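For illustration, the ceiling is .Machine$integer.max (2^31 - 1) total matrix elements; a minimal sketch of what trips it, assuming an R build without long-vector support (long vectors did eventually arrive in R 3.0.0):
.Machine$integer.max               # 2147483647, i.e. 2^31 - 1
50000 * 50000                      # 2.5e9 elements -- over the limit
m <- matrix(0, nrow = 50000, ncol = 50000)
# on R < 3.0.0 this stops with "too many elements specified"; newer builds
# with long-vector support (and ~20 GB of free RAM) can actually allocate it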
If you're trying to work on huge affymetrix datasets, you might have better luck using packages from aroma.affymetrix.
Also, Bioconductor is a (particularly) fast-moving project, and you'll typically be asked to upgrade to the latest version of R in order to get any continued "support" (help on the BioC mailing list). I see that Thrawn also mentions having a similar problem with R 2.10, but you still might think about upgrading anyway.
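For what it's worth, the aroma.affymetrix entry point looks roughly like this (a sketch from memory of its vignettes; the chip type and data-set name are placeholders, and the CEL files must already sit in the package's annotationData/ and rawData/ directory layout):
library(aroma.affymetrix)
cdf <- AffymetrixCdfFile$byChipType("HG-U133_Plus_2")  # placeholder chip type
cs <- AffymetrixCelSet$byName("MyDataSet", cdf = cdf)  # placeholder data-set name
print(cs)  # the CEL files stay on disk and are read in chunks, not loaded wholesale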
I bumped into this thread by chance. No, the aroma.* framework is not limited by the allocMatrix() limitation of ints and longs, because it does not address data using the regular address space alone; instead it also subsets via the file system. It never holds, and never loads, the complete data set in memory at any time. Basically the file system sets the limit, not the RAM nor the address space of your OS.
/Henrik
(author of aroma.*)
