The .xnb files that the MonoGame content pipeline creates are far too big. I have a folder of images totalling 8 MB; when passed through the pipeline and published for installation, it produces 2 GB of .xnb files. That's a bit of a problem.
Example: a 32.6 kB PNG gets turned into a .xnb.deploy file of 10.1 MB.
I'm not sure what to do about this. There must be something obvious I'm missing, because it shouldn't be this way at all. Am I missing some compression setting in the pipeline, and if so, where do I find it? Thanks.
Edit: I've tried both Debug and Release; same result.
Edit 2: I've tried many of the solutions I've found on the internet, without any luck. Setting the content pipeline's Compress property to True did nothing.
I've also tried going through every image (around 500) and setting TextureFormat to DxtCompressed. That produced an error saying ResizeToPowerOfTwo needed to be True, so I set it and tried again. This gave me the error "processor TextureProcessor had unexpected failure", and I could not find a solution to that... so I gave up on that idea.
I also tried following this link http://rbwhitaker.wikidot.com/forum/t-904011/xnb-compression, but could not find the "compress content pipeline output files" option anywhere.
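For reference, both the project-wide Compress flag and the per-texture processor parameters live as plain text in the Content.mgcb file, so they can be checked and batch-edited there rather than through the Pipeline Tool UI. The sketch below shows roughly what the relevant entries look like; the DesktopGL platform and the Images/player.png path are placeholders for illustration, and the exact set of properties should be checked against your MonoGame version.

```text
#----------------------------- Global Properties ----------------------------#
/outputDir:bin/$(Platform)
/intermediateDir:obj/$(Platform)
/platform:DesktopGL
/compress:True

#begin Images/player.png
/importer:TextureImporter
/processor:TextureProcessor
/processorParam:TextureFormat=DxtCompressed
/processorParam:ResizeToPowerOfTwo=True
/build:Images/player.png
```

Because the file is plain text, a search-and-replace over all `#begin` blocks is usually faster than changing ~500 textures one by one in the editor.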
Since I started using breakpoints, I find that setting breakpoints in an R file sometimes works and sometimes doesn't, in a seemingly unsystematic way. My most recent observation is that moving an R file to a different path can affect whether breakpoints can be set. More specifically: I was already debugging some files in path A, where I was able to set breakpoints. In another file, in path B, I was not able to set breakpoints (I get a warning that the package should be built and reloaded, although I am not building a package, and nothing happens when I source the file). However, when I moved that file from path B to path A, I could suddenly set breakpoints without any problems.

I am really struggling to find systematic behavior in when setting breakpoints works and when it doesn't. Any help is much appreciated.
Seemingly out of nowhere, one of my Jupyter Notebook .ipynb files now takes forever to open, to the point where I receive Chrome notifications saying that the page is unresponsive. If I wait it out it does open, however. Once it does open, though, it fails to have any of its nbextensions applied. For example, I can now no longer use codefolding. The notebook also seems to have a fair amount of lag when I try to scroll or edit cells.
The file is 556 kB and has its output cleared (I've noticed that being an issue for others), though it does contain a few thousand lines of code. I'm able to open smaller files (391 kB) with far fewer lines of code quickly, with all my selected nbextensions active and no lag.
Why would this marginally larger file have such issues? What can I do about this?
EDIT:
I have noticed that when opening the file in question, my Anaconda Prompt outputs the following:
404 GET /nbextensions/widgets/notebook/js/extension.js
This error does not appear when I open the smaller file. I'm confused as to why it would depend on which file is being run.
EDIT 2:
The following link seems to be quite related to my issue:
https://github.com/ipython-contrib/jupyter_contrib_nbextensions/issues/822
Basically, it seems the issue is simply that nbextensions don't play well with larger files. I'm going to try splitting the code into multiple files and hope the core file becomes small enough for things to work out better.
See Edit 2.
Simple solution: just make the file smaller. Everything is working as expected now.
I have been using RDCOMClient for a while now to interact with vendor software, and for the most part it has worked fine. Recently, however, I have needed to loop through many operations (several hundred), and I am running into problems with the RDCOM.err file growing very large (easily gigabytes). The file is written to C:\ with no apparent option to change that. Is there some way to suppress this output, or to specify another location for the file? I don't need any of the output, so suppressing it would be best.
EDIT: I tried adding a file.remove() call to my script, but R has the file locked. The only way I can release the lock is to restart R.
Thanks.
Setting the permissions to read only was going to be my suggested hack.
A slightly more elegant approach is to edit one line of the C code in the package, in src/RUtils.h, from
#define errorLog(a,...) fprintf(getErrorFILE(), a, ##__VA_ARGS__); fflush(getErrorFILE());
to
#define errorLog(a, ...) {}
However, I've pushed some simple updates to the package on GitHub that add a writeErrors() function for toggling whether errors are written, so logging can be turned on and off dynamically.
So
library(RDCOMClient)
writeErrors(FALSE)
will turn off the error logging to the file.
I found a workaround for this: I created the files C:\RDCOM.err and C:\RDCOM_server.err and marked them both read-only. I am not sure whether there is a better way to accomplish this, but for now I am running without logging.
When using dput (or dump) to share objects, I get output that is truncated very early. This also happens when dumping to a file.
I haven't been able to find the setting that governs this, but this didn't happen to me in the past and I'm not aware of having changed any settings. Unfortunately, I don't recall when exactly this started happening.
dput(rnorm(20))
c(0.178996565881475, -0.0979247582427519, -0.722093025014011,
0.88981201078104, 0.997508460579067, 0.416896899499781, 1.09045614607683,
...
I'm using RStudio 1.1.442; I don't know if that is relevant. This setting in RStudio does not affect the truncation of dput.
I'm building my project with Grunt, using grunt-contrib-less, grunt-contrib-concat, grunt-replace, and a few others. Currently, when I build, a ton of output flies through the command window, especially from grunt-replace, which runs on a lot of files.
What I would like is the equivalent of quiet mode in VS, where the only real output is whether the build succeeded, but verbose error information when errors occur.
I tried using grunt-verbosity with logs set to hidden, but that seems to squash error output as well. I also tried setting grunt.log.write = function(){}, but that likewise stops errors from being printed.
Has anyone dealt with a problem like this?