Reducing console output from GruntJS

I'm building my project with Grunt, using grunt-contrib-less, grunt-contrib-concat, grunt-replace, and a few others. Currently, when I build, a ton of output flies through the command window, especially from grunt-replace as it runs on a lot of files.
What I would like is the equivalent of Visual Studio's quiet build mode, where the only output during a successful build is whether it succeeded, but verbose error information is shown when something fails.
I tried using grunt-verbosity with logs set to hidden, but that seems to squash error output as well. I also tried setting grunt.log.write = function(){}, but that likewise prevents errors from being printed.
Has anyone dealt with a problem like this?

Related

Why is Jupyter Notebook taking so long to load?

Seemingly out of nowhere, one of my Jupyter Notebook .ipynb files now takes forever to open, to the point where I receive Chrome notifications saying that the page is unresponsive. If I wait it out it does open, however. Once it does open, though, it fails to have any of its nbextensions applied. For example, I can now no longer use codefolding. The notebook also seems to have a fair amount of lag when I try to scroll or edit cells.
The file is 556 kB and does have its output cleared (as I've noticed that being an issue for others), though it does contain a few thousand lines of code. I'm able to open smaller files (391 kB) with far fewer lines of code quickly, with all my selected nbextensions active, and with no lag.
Why would this marginally larger file have such issues? What can I do about this?
EDIT:
I have noticed that when opening the file in question, my Anaconda Prompt outputs the following:
404 GET /nbextensions/widgets/notebook/js/extension.js
This error does not pop up when I'm running the smaller file. I'm confused as to why this error would be conditional on the file being run.
EDIT 2:
The following link seems to be quite related to my issue:
https://github.com/ipython-contrib/jupyter_contrib_nbextensions/issues/822
Basically, it seems like the issue is simply that nbextensions don't play well with larger files. I'm going to try splitting up the code into multiple files and hope the core file becomes small enough for this to work out better.
See Edit 2.
Simple solution: just make the file smaller. Everything is working as expected now.

Monogame Pipeline file size too big

The .xnb files that the MonoGame pipeline creates are way too big. I have a folder of images that adds up to 8 MB; when it's passed through the pipeline and published for installation, it becomes 2 GB of .xnb files... that's a little bit of a problem.
Example: a PNG image of 32.6 kB gets turned into a .xnb.deploy file of 10.1 MB.
I'm not sure what to do about this. There must be something obvious I'm missing, because I feel it shouldn't be this way at all. Am I missing some compression setting in the pipeline, and if so, where do I find it? Thanks.
Edit: Tried in both Debug and release, same result.
Edit 2: I've tried many of the solutions that I've found on the internet without any luck. I've tried to set the content pipeline settings to Compress - True but that did nothing.
I've tried going through every image (around 500) and setting the TextureFormat to DxtCompressed. I got an error saying I needed to set ResizeToPower to True, so I did that and tried again. This gave me the error "processor textureprocessor had unexpected failure" and I could not find a solution to that... so I gave up on that idea.
I also tried following this link http://rbwhitaker.wikidot.com/forum/t-904011/xnb-compression but could not find the "compress content pipeline output files" option anywhere.

How do I get pretty error messages in RStudio?

When working in RStudio (versions 0.99.482 and 1.0.136), if I source a file and an error occurs, I just get the error message without any context. If the file is longer than a few lines of code, that information is largely useless. What I want to know when an error occurs is the following:
What function threw the error? Preferably the function I called from my script, not something buried deep inside a package.
On what line of my file did the error occur?
This seems like a very basic thing for a scripting language to do, yet I am unable to find an easy solution. Yes, I am aware of the traceback() function, but (a) it is a hassle to call it every time there is an error, and (b) the output is massive and not that helpful.
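One workaround I can sketch (untested; my_script.R is just a placeholder name): set R's error option once so that a call stack, and with source references the offending line, gets printed automatically instead of having to call traceback() by hand. show.error.locations, keep.source, sys.calls() and limitedLabels() are all base R.

options(show.error.locations = TRUE, keep.source = TRUE)  # report the source line of errors

options(error = function() {
  # print a compact call stack automatically on every uncaught error
  calls <- sys.calls()
  cat("Call stack at error:\n")
  print(limitedLabels(calls))
})

source("my_script.R")  # placeholder for the file being sourced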

How to open up a matrix that's running into an error

I am running into an error on a big job in R, which I am running as an R script. I keep getting this error:
Error in chol.default(F.mat) :
  the leading minor of order 1 is not positive definite
I normally run my job via qsub, but that only gives me an error output and I can't poke around. I then tried running the job locally, but my 4 GB MacBook was completely overwhelmed.
Now I am trying to run it inside a screen session with options(error=recover). I run into the same error as above, but I don't know how to access the data frames. I get "recover called non-interactively; frames dumped, use debugger() to view", but then I get put back at my bash prompt and I don't know how to open up the data frame.
Any ideas?
This is a bit awkward since (1) it's more or less remote debugging and (2) I don't actually ever try to debug non-interactively myself, but: it seems that
options(error=function() dump.frames(to.file=TRUE)) might be worth trying?
After your frames dump to a file (last.dump.rda in the working directory, by default), you should be able to run load("last.dump.rda"); debugger(last.dump) to get back to the debugging environment.
Two caveats:
I haven't actually tested this, just read & interpreted ?dump.frames;
I strongly recommend that you test this with short test runs, either running your original code on a small subset of your data or setting up a mini-test script, something like:
options(error=function() dump.frames(to.file=TRUE))
Sys.sleep(60)
stop("testing error exit")

R - "Browser()" on Error?

I have been using some R libraries to analyze some large data recently, and I find myself frustrated by waiting several hours from the start of an analysis, only to have it die at the end with some trivial error, such as a prerequisite library not being installed or one of my parameters being wrong. So then I have to start all over, do the exact same analysis, regenerate the same variables it had when it died, and wait a long time. Please note that these are not handled exceptions--they are fatal errors from R.
This is just a thought--and perhaps it is too good to be true, so please at least explain why it wouldn't work--but is there any way to cause R to execute "browser()" in the environment whenever it has a fatal error? For example, say it is executing a script, and encounters "require(notInstalledYet)". Instead of just dying, and losing all the variables in the memory, it would be great if it would give me a browser() at the place it died, so that I could at least save the variables, and at best, fix the problem (e.g. install the library) and try again.
You can change the error option to open a browser on error:
options(error=browser)
The default is:
options(error=NULL)
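A small illustration of what that looks like in an interactive session (just a sketch; the stop() call stands in for the fatal error, and crash_state.RData is an arbitrary file name):

options(error = browser)       # drop into an interactive browser on any uncaught error

x <- replicate(3, rnorm(10))   # stand-in for hours of computed results
stop("simulated fatal error")  # instead of exiting, R opens a Browse[1]> prompt here

# at the Browse[1]> prompt the workspace is still available, so you can, e.g.,
#   ls(); str(x); save.image("crash_state.RData")
# then fix the problem (install the missing package) and re-run.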
