awesome-wm: how do I change the tiling layout to prefer 3x2 (cols x rows) instead of 2x3?

How can I change awesome to prefer narrower windows over wide ones? I have a fancy 4K monitor with plenty of space, but when I have, for example, 6 windows open, all the layouts show me a 2x3 arrangement instead of 3x2.
awful.tag.incncol(1, nil, true)
This does nothing for me.
Thank you in advance!

Related

How do I reduce the noise of an image?

I am extracting text from some images. With some of them I am having problems, for example with this type of image:
library(magick)
library(tesseract)
image_read("fichero.jpg") %>%
  tesseract::ocr(engine = tesseract("eng")) %>%
  cat()
Result
I am assuming (correct me if not) that tesseract fails because of the low quality of the image (it is a scanned document), and I don't know if there is a way to improve the image.
I also tried some convolution methods with several kernels, trying to reduce the noise of the photo, but the result was worse.
Is there a way to handle this, or do I have to assume that it is not possible to extract the text from images of this quality?
Regards
Looking at this with the experience of a photographer rather than as a programmer, I would guess that the poor focus and camera jiggle make this image pretty well unreadable by most OCR options. I just used the OCR in Adobe Acrobat to play with it on my own PC; I could get "FECHA" to be recognized, but not "NUMERO" and not any of the numbers.
I pulled it into a photo editor and messed around with the contrast, as sometimes it's possible to convert a grayscale image such as this to pure black-and-white and get rid of some of the fuzziness, but I couldn't produce a readable image in my quick-and-dirty experiment.
So realistically, you'll need images that are scanned/photographed with higher resolution and better contrast to get reliable OCR.
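If better scans aren't an option, a little preprocessing before OCR sometimes helps at the margins. This is only a rough sketch with magick (the particular calls are my own guess at a reasonable pipeline, and with an image this blurred they may still not be enough):
library(magick)
library(tesseract)

# Guessed preprocessing pipeline: grayscale, upscale, boost contrast,
# knock out speckle noise, then hand the cleaned image to tesseract.
image_read("fichero.jpg") %>%
  image_convert(colorspace = "gray") %>%
  image_resize("2000x") %>%           # upscale so strokes span several pixels
  image_contrast(sharpen = 1) %>%     # increase local contrast
  image_despeckle(times = 2) %>%      # reduce scanner noise
  tesseract::ocr(engine = tesseract("eng")) %>%
  cat()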
It looks like you are trying to create a cow from ground beef. The big problem is that JPEG is not suited to this type of non-photographic image. Your PNG looks fine because PNG is a lossless format.
If you don't want this problem, do not save the files as JPEG.
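For what it's worth, a minimal sketch of that advice with magick in R ("scan.tif" is a made-up input name): read the source once and write it out as lossless PNG rather than JPEG, so text edges are not smeared by compression artefacts.
library(magick)
img <- image_read("scan.tif")
image_write(img, "scan.png", format = "png")     # lossless, OCR-friendly
# image_write(img, "scan.jpg", format = "jpeg")  # lossy; avoid for text scans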

How to save a pdf in R with a lot of points

So I have to save a PDF plot with a lot of points in it. That in itself is not a problem. The problem is that when I open it, it takes forever to draw all those points. How can I save this PDF in such a way that it doesn't have to draw point by point when someone opens it? I'm OK if the quality of the picture goes down a bit.
Here's a sample. I don't think this will crash your computer, but be careful with the length parameter if you have an old machine. By the way, I am using many more points than that in my real problem.
pdf("lots of points.pdf")
x <- seq(0,100, length = 100000)
y <- 0.00001 * x
plot(x, y)
dev.off()
I had a similar problem and there is a sound solution. The drawback is that this solution is not generic and does not involve programming (always bad).
For draft purposes, png or any other graphics format may be sufficient, but for presentation purposes this is often not the case. So the way to go is to combine vector graphics for fonts, axes, etc. and a bitmap for your zillions of points:
1) save as pdf (huge and nasty)
2) load into illustrator or likewise ( must have layers )
3) separate points from all other stuff by dragging other stuff to new layer - save as A
4) delete other stuff and export points only as bitmap (png, jpg) and save as B
5) load B into A; scale and move B to exact overlap; delete vector points layer, and export as slender pdf.
done. takes you 30 minutes.
As said, this has nothing to do with programming, but there is simply no way around the fact that in vector graphics each and every point (even those that are not visible because they are covered by others) is a single element, and it's a pain handling PDFs with thousands of elements.
So there is a need for postprocessing. I know ImageMagick can do a lot, but AFAIK the above can't be done by an algorithm.
The only programming way to (partly) solve this is to eliminate those points that will not display because they are covered by others. But that's beyond me.
Only go this way if you really and desperately need extreme scalability; otherwise go with @Ben and @inform and use a bitmap -- in whatever container you need it (png, pdf, bmp, jpg, tif, even eps).
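As a middle ground, you can get most of that manual workflow without leaving R: render only the points to a bitmap, then embed that bitmap in a PDF that keeps vector axes and labels. A rough sketch, assuming the png package is installed and reusing the question's toy data (the overlap relies on xaxs = "i" / yaxs = "i" and is close, though not guaranteed pixel-perfect):
library(png)   # for readPNG()

x <- seq(0, 100, length.out = 100000)
y <- 0.00001 * x

# 1) Render only the points (no axes, no margins) to a temporary PNG.
tmp <- tempfile(fileext = ".png")
png(tmp, width = 2000, height = 2000, res = 300)
par(mar = c(0, 0, 0, 0))
plot(x, y, axes = FALSE, xlab = "", ylab = "", xaxs = "i", yaxs = "i")
dev.off()

# 2) Open the PDF, draw an empty plot with vector axes and labels,
#    then paste the bitmap into the plot region with rasterImage().
pts <- readPNG(tmp)
pdf("lots_of_points_raster.pdf")
plot(range(x), range(y), type = "n", xlab = "x", ylab = "y",
     xaxs = "i", yaxs = "i")
rasterImage(pts, min(x), min(y), max(x), max(y))
dev.off()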

How to do a ridiculously wide plot

I have a long time series of 10000 observations that I want to visualize. The problem is, if I just plot it normally the time-dimension will be squished and none of the fine detail of the time-series that I want to visualize will be apparent. For example:
plot((sin(1:10000/100)+rnorm(10000)/5),type='l')
What I would like is to somehow plot the following together, side by side, in one gigantically long plot, without using par(mfrow=c(1,100)). I then want to export this very wide plot and simply scroll across it to visualise the whole series.
plot((sin(1:10000/100)+rnorm(10000)/5)[1:100],type='l')
plot((sin(1:10000/100)+rnorm(10000)/5)[101:200],type='l')
plot((sin(1:10000/100)+rnorm(10000)/5)[201:300],type='l')
.....
Eventually I would like to have 3 or 4 of these gigantically wide plots on top of each other with a par(mfrow=c(4,1)).
I know that the answer has something to do with the pin setting in par, but I keep getting Error in plot.new() : plot region too large. I'm guessing this has something to do with the interaction of pin with the other par parameters
Bonus points are awarded if we can get the pixel height and width exactly right. It is preferable that the plot doesn't skip random pixels due to the export sizing being imperfect.
Further bonus points if the image can be encoded in a .html and viewed that way.
An alternative that you might consider is svg, which will produce something of better quality than png/jpeg in any case.
Something like
svg(width = 2000, height = 7)
par(mfrow = c(4, 1), mar = c(4, 4, 0, 2))
for (i in 1:4) {
  plot((sin(1:10000/100) + rnorm(10000)/5), type = 'l',
       bty = "l", xaxs = "i")
}
dev.off()
will produce a very wide plot, just over 1MB in size, which renders quite nicely in Chrome.
Note the width and height are in inches here.
P.S. svg also offers the potential for interactive graphics. I've just seen a nice example allowing the user to select a region of a long time series to zoom in on; see Figure 22 in Dynamic and Interactive R Graphics for the Web: The gridSVG Package, a draft paper by Paul Murrell and Simon Potter.
It could be a Cairo-specific problem, or it could be a lack of RAM on your machine. The following code works fine for me on a Windows 7 machine with 8GB RAM.
png("wide.png", width = 1e5, height = 500)
plot((sin(1:10000/100)+rnorm(10000)/5),type='l')
dev.off()
If I change the width to 1e6 pixels, then R successfully creates the file (it took about a minute), but no image viewing software that I have available can display an image that large.
I would go an alternative route. First of all, what exactly is the point of viewing the entire plot at hi-res? If you're searching for some sort of anomalies or irregularities, well, that's what data processing is for :-). Think about something like finding all x > 3 sigma, or doing an FFT, etc.
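For the first suggestion, here is a minimal sketch with the question's toy series (the 3-sigma threshold is just one possible screening criterion, not something from the original answer):
set.seed(1)
x   <- sin(1:10000/100) + rnorm(10000)/5
out <- which(abs(x - mean(x)) > 3 * sd(x))   # flag points beyond 3 standard deviations
plot(x, type = "l")
points(out, x[out], col = "red", pch = 19)   # mark the flagged observations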
Next, if you really want to examine the whole thing by eye, how about writing some R/Tcl-Tk code, or using dynamicGraph, iplots, or zoom to produce an interactive graph that you can scroll through "live"?
ETA: IIRC RStudio has tools for interactive graph scrolling and zoom as well.

Creating high-resolution figures in R

This is such a basic problem that's driving me crazy. When generating a figure in R it looks great on the screen. But when I try to generate it directly onto a file using png(), tiff(), etc. by setting the resolution to 300 and the width and height to reasonable values that would suit a journal paper well, there are 2 problems:
All lines are made super thick
All letters are in huge font.
This has been really annoying. I've tried playing with the pointsize option; it helps make the font size smaller, but the line widths are still thick and ugly. Can you please suggest what's going wrong in R and how I can fix it? I've looked around and most solutions involve using other image-processing software. I'd rather figure out why R does this when increasing the resolution and why it makes the figures so ugly. Here's an example:
png(file="test.png",width=5,height=5,units="cm",res=300)
plot(rnorm(1000),rnorm(1000),xlab="some text")
dev.off()
Thanks!
I think the issue is with the default point size (see parameter pointsize in ?png):
Here's what you had with the default of 12: the labels and symbols take up a large share of the small figure. But if you lower the pointsize to 6:
png(file="test.png",width=5,height=5,units="cm",res=300, pointsize=6)
plot(rnorm(1000),rnorm(1000),xlab="some text")
dev.off()
The way I understand it, a pointsize of 12 means that text at cex=1 is 12/72 (i.e. 1/6) of an inch tall. Your png being ca. 2 inches wide, your text is therefore about 1/12th of the plot width with the default pointsize.
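If you want to check that arithmetic on your own device, par() can report the device and default character sizes while the png device is open; a small sketch (the reported character cell includes line spacing, so it comes out a bit larger than pointsize/72):
png("pointsize_check.png", width = 5, height = 5, units = "cm", res = 300)
din <- par("din")   # device size in inches: c(width, height)
cin <- par("cin")   # default character cell in inches: c(width, height)
cat("device width (in):", round(din[1], 2), "\n")
cat("char height (in): ", round(cin[2], 2), "\n")
cat("height / width:   ", round(cin[2] / din[1], 2), "\n")
dev.off()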

Create CSS sprites based on colour?

I have a large set of thumbnails I wish to display on a page (over 200). I'd like to use CSS sprites to load them to minimise the HTTP requests. I think putting all of them in one massive file is a bad idea, but splitting them into about 6 files of 40-50 thumbnails should work nicely.
All of the thumbnails are fairly low colour (can be reduced to 256 colours without quality drop), but in total all the thumbnails cover a lot more colours.
So, is there an easy way to group them based on their colour? Putting each group of files in a separate folder is fine, since I can stitch them together with ImageMagick or an online sprite tool later. But doing all of that in one go (with CSS) would be nice too.
Update: the reason for grouping by colour:
The idea is to save more bandwidth. If I have 10 mostly-blue thumbnails, 10 green and 10 red, I can combine them into 3 images, reducing each to 256 colours. If I mix the thumbnails, then reducing to 256 colours makes the images poorer quality.
Firstly, I would suggest not worrying too much and saving as a 24-bit PNG. It may seem that the image gets a lot bigger by doing this, but if the thumbnails are small you'll probably find that a large amount of bandwidth is currently being used just on HTTP headers; spriting makes that overhead go away, and you can spend the savings on making your images look better.
However, if you want to automate the process, you could try working out the average colour of each thumbnail (one way of doing something close to this is to resize it to 1x1 and then look at the RGB colour of that pixel). Once you have a colour per image, convert to HSV and sort by hue. You can then bundle them based on that sort order. I've not actually tried this, but it may produce acceptable results.
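In that spirit, here is a rough sketch of the average-colour / sort-by-hue idea using the magick package in R; the folder name and the number of groups are made up for illustration, and the approach is as untested as the paragraph above:
library(magick)

files <- list.files("thumbs", pattern = "\\.png$", full.names = TRUE)

# Average colour of an image: shrink it to a single pixel and read that pixel's RGB.
avg_rgb <- function(path) {
  px <- image_data(image_resize(image_read(path), "1x1!"), channels = "rgb")
  as.integer(px[1:3, 1, 1])                  # c(r, g, b) in 0..255
}

rgbs <- sapply(files, avg_rgb)               # 3 x n matrix of average colours
hues <- rgb2hsv(rgbs)["h", ]                 # hue component in [0, 1]

# Sort by hue and split into, say, 6 sprite groups of roughly equal size.
ord    <- order(hues)
groups <- split(files[ord], cut(seq_along(ord), breaks = 6, labels = FALSE))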
Adjusting the number of images that get bundled will also affect the output quality. If it sucks with 30 images per file, try 25 and see how much difference it makes. Actually, it might be smarter to think about the number of files:
1) Put them all into a single file.
2) Does it look bad because there aren't enough colours?
3) Add one extra file, split them equally across all the files, and go to step 2.
Well, I did some testing by grabbing a sample by hand of one "tint" and comparing it to a montage created by just taking the first N images. There was only a difference of a few kilobytes, which was reduced to about 30 bytes after I found PNGcrush. Fantastic tool!
So in short, my crackpot idea has been disproven. :p
Now, this is nothing more than theoretical blabbering, but I understand that animated GIFs support a distinct color palette per frame. Theoretically, you could place each image on a separate frame of the animation (leaving most of that frame transparent), and set the pause time between frames to 1ms. Set the animation to only go through once, and you could (potentially) have an effective CSS sprite with up to 256 colors per image.
Like I said, theoretical blabbering.
