Error: C stack usage when compiling R Markdown - r

I get a new error when I try to compile an R Markdown file; the following message appears:
Error: C stack usage 7971408 is too close to the limit
Execution halted
I did some research and I found some people with the same error:
Error: C stack usage is too close to the limit
C stack usage 7970960 is too close to the limit
GenomicRanges: C stack usage ... is too close to the limit
R mapping (C stack usage 7971616 is too close to the limit)
C stack usage 7972356 is too close to the limit #335
But these guys have problems with some function or something like that.
The actions I took in order to try to solve this:
Uninstalled R and RStudio, reinstalled the latest versions of both, rebooted my computer... nothing.
Tried to change ulimit -s, and this point is interesting, because this is my ulimit -a in the R terminal:
geomicrobio-mac:~ geomicrobio$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1392
virtual memory (kbytes, -v) unlimited
When I try to change ulimit -s to unlimited or 65532 in the R terminal, it doesn't change.
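Roughly what I ran (a minimal sketch; as far as I understand, the stack limit is inherited when the R process starts, so I raised it in the shell and then launched R from that same shell):
ulimit -s 65532                          # raise the soft stack limit for this shell session
ulimit -s                                # confirm the new value
R --vanilla -e 'Cstack_info()["size"]'   # start R from the same shell and check the C stack size it sees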
The ulimit -a of my terminal (macOS Monterey v12.0.1) is:
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 65532
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 1392
-n: file descriptors 2560
This only happens with R Markdown; I can build Shiny apps, run scripts, etc., but I can't compile any R Markdown file even when it contains only text.
This is the output when I run base::Cstack_info() in the console:
size current direction eval_depth
7969177 14032 1 2
My version of R:
platform x86_64-apple-darwin17.0
arch x86_64
os darwin17.0
system x86_64, darwin17.0
status
major 4
minor 1.2
year 2021
month 11
day 01
svn rev 81115
language R
version.string R version 4.1.2 (2021-11-01)
nickname Bird Hippie
If you know how to solve this, I would really appreciate your help.
Thank you.

I just deleted the .Rprofile, and that solved it.
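In case it helps anyone else, a minimal sketch of how to find and sideline a stray startup profile (renaming instead of deleting is just a precaution so it can be restored; the paths are R's standard per-user and per-project locations):
ls -la ~/.Rprofile               # per-user startup profile
ls -la .Rprofile                 # per-project startup profile, if one exists
mv ~/.Rprofile ~/.Rprofile.bak   # move it aside, then try knitting the R Markdown file again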

Related

Cstack size limit issue - r

Not able to read the MNIST dataset (from the dslabs package) due to a C stack limit. The R console output is:
> mnist <- read_mnist()
Error: C stack usage 15923024 is too close to the limit
Unable to implement a solution from forum searches. The closest I can determine is to:
(preferable) increase the stack size in the terminal, but I cannot find the correct syntax for the command line (Windows terminal) or within the RStudio terminal;
reduce the size of the raw file (loaded from the gz download rather than via read_mnist() from the dslabs package).
System: Windows 10, 64-bit, RStudio 3.6.3
> Cstack_info()["size"]
size
15922790
... I seem to have changed the limit in RTools:
Dav@DESKTOP-V8I63M1 MINGW64 ~
$ ulimit -s
2032
Dav@DESKTOP-V8I63M1 MINGW64 ~
$ ulimit -s 16384
Dav@DESKTOP-V8I63M1 MINGW64 ~
$ ulimit -s
16384
... with no effect on the Cstack size in RStudio.
Any thoughts?

How to list available resources per node in MPI?

I have access to an MPI cluster. It is a pure, clean LAN cluster with nothing except Open MPI (mpicc, mpirun) installed, no SLURM or anything else. I have sudo rights. The accessible and configured MPI nodes are all listed in /etc/hosts. I can compile and run MPI programs, but how do I get information about the cluster's capabilities: total cores available, processor info, total memory, currently running tasks?
Generally, I am looking for an analog of sinfo and squeue that would work in an MPI environment.
Total cores available:
Total memory:
You can try using Portable Hardware Locality (hwloc) to see the hardware topology and get information about total cores and total memory.
Additionally, you can get information about the CPU using lscpu or cat /proc/cpuinfo.
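A minimal per-node sketch of those commands, assuming hwloc is installed on the node (free -h is added here only to show total memory):
hwloc-info            # summary counts of packages, cores, PUs and caches
lstopo --of console   # full hardware topology printed as a text tree
lscpu                 # CPU model, sockets, cores per socket, threads per core
free -h               # total and available memory
To collect the same inventory from every node, you can run these over ssh against each host listed in /etc/hosts.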
Currently running tasks:
You can use the monitoring tool nmon from IBM (it's free).
The -t option of nmon reports the top running processes (like the top command). You can use nmon in online or offline mode.
The following example is from IBM developerWorks:
nmon -fT -s 30 -c 120
It takes one "snapshot" every 30 seconds until it has collected 120 snapshots; then you can examine the output.
If you run it without -f, you will see the results live.

Set a memory limit for Valgrind

I'm trying to run Valgrind on a MIPS32 machine in order to detect a memory leak. The total available memory is 32 MB (without swap). The problem is that Valgrind itself is not able to allocate the amount of memory it needs and always generates an "out of memory" error.
root@babidi# valgrind --leak-check=yes grep -r "foo" /etc/config/
==9392== Memcheck, a memory error detector
==9392== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==9392== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==9392== Command: grep -r foo /etc/config/
==9392==
==9392==
==9392== Valgrind's memory management: out of memory:
==9392== initialiseSector(TC)'s request for 27597024 bytes failed.
==9392== 20516864 bytes have already been allocated.
==9392== Valgrind cannot continue. Sorry.
==9392==
==9392== There are several possible reasons for this.
==9392== - You have some kind of memory limit in place. Look at the
==9392== output of 'ulimit -a'. Is there a limit on the size of
==9392== virtual memory or address space?
==9392== - You have run out of swap space.
==9392== - Valgrind has a bug. If you think this is the case or you are
==9392== not sure, please let us know and we'll try to fix it.
==9392== Please note that programs can take substantially more memory than
==9392== normal when running under Valgrind tools, eg. up to twice or
==9392== more, depending on the tool. On a 64-bit machine, Valgrind
==9392== should be able to make use of up 32GB memory. On a 32-bit
==9392== machine, Valgrind should be able to use all the memory available
==9392== to a single process, up to 4GB if that's how you have your
==9392== kernel configured. Most 32-bit Linux setups allow a maximum of
==9392== 3GB per process.
==9392==
==9392== Whatever the reason, Valgrind cannot continue. Sorry.
What I'm wondering is whether it is possible to limit the amount of memory that Valgrind allocates. I tried playing with --max-stacksize and --max-stackframe, but the result is always the same.
As mentioned in the comments, 32 MB is not much; it must also cover the OS and other necessary processes. When you analyze a program with Valgrind/Memcheck, it requires more than twice as much memory as the program would use by itself, because Memcheck stores shadow values for every allocated bit so that it can recognize uninitialized variables.
I think the best solution would be to compile your program for your desktop computer and run Memcheck there. If you have leaks, uninitialized variables, etc. in your program, you will have them on your desktop computer as well.
If you are curious how your program will behave on the MIPS, analyze it with other Valgrind tools, such as Massif (measuring heap usage over time) and Cachegrind (cache performance). Those are much more lightweight than Memcheck.
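A minimal sketch of how those tools are invoked (./myprog stands in for your binary; the output file names are Valgrind's defaults, suffixed with the process id):
valgrind --tool=massif ./myprog        # heap usage over time; inspect with ms_print massif.out.<pid>
valgrind --tool=cachegrind ./myprog    # cache simulation; inspect with cg_annotate cachegrind.out.<pid>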

Wget in Parallel: What can I do to improve the download speed?

I'm trying to make a web crawler using wget. The crawler only fetches the homepages of subdomains, and I'm running it like this:
cat urls.txt | xargs -n 1 -P 800 -I {} wget {} --max-redirect 3 --tries=1 --no-check-certificate --read-timeout=95 --no-dns-cache --connect-timeout=60 --dns-timeout=45 -q
When I run that, I only get speeds of ~5 Mbps. The server I'm crawling from has a 100 Mbps connection and can download files from individual sites at 20+ Mbps.
What can I do to speed up this crawler?
Note:
The nameserver is Google DNS (8.8.8.8)
I have these ulimits:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 254243
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 100024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 254243
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
and have tried these speed tweaks:
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 1024 65535 > /proc/sys/net/ipv4/ip_local_port_range

riak-admin fails on OS X 10.8.5

I'm trying to install Riak on OS X 10.8.5, but the command riak-admin test always fails. I can't find a solution for it!
Using sudo riak-admin test doesn't help either.
I have installed Riak (1.4.2) through Homebrew.
>riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>riak ping
pong
>riak-admin test
Failed to write test value: {error,timeout}%
I have installed the Riak (1.4.2) precompiled tarball downloaded with curl:
>curl -O http://s3.amazonaws.com/downloads.basho.com/riak/1.4/1.4.2/osx/10.8/riak-1.4.2-OSX-x86_64.tar.gz
>tar xzvf riak-1.4.2-osx-x86_64.tar.gz
>cd riak-1.4.2
>bin/riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>bin/riak ping
pong
>bin/riak-admin test
Failed to write test value: {error,timeout}%
I have installed the Riak (1.4.1) precompiled tarball downloaded with curl:
>curl -O http://s3.amazonaws.com/downloads.basho.com/riak/1.4/1.4.1/osx/10.8/riak-1.4.1-OSX-x86_64.tar.gz
>tar xzvf riak-1.4.1-osx-x86_64.tar.gz
>cd riak-1.4.1
>bin/riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>bin/riak ping
pong
>bin/riak-admin test
Failed to read test value: {error,{insufficient_vnodes,0,need,1}}%
Solution
Following this procedure http://docs.basho.com/riak/... solved my issue.
It has to do with the open files limit on Mac OS X.
Before
To check the current limits on your Mac OS X system, run:
>launchctl limit maxfiles
maxfiles 256 unlimited
Edit (or create) /etc/launchd.conf
Edit (or create) /etc/launchd.conf and increase the limits. Add lines
that look like the following (using values appropriate to your
environment):
limit maxfiles 16384 32768
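One way to append that line from a terminal, since /etc/launchd.conf is root-owned, is:
echo "limit maxfiles 16384 32768" | sudo tee -a /etc/launchd.conf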
Restart the system
Save the file, and restart the system for the new limits to take
effect. After restarting, verify the new limits with the launchctl
limit command:
>launchctl limit maxfiles
maxfiles 16384 32768
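After the reboot, a quick sanity check before retrying (the same commands as above, plus the shell's own open-files limit, which the riak start warning refers to and which can still be raised per shell):
launchctl limit maxfiles   # should now report the new values, e.g. 16384 32768
ulimit -n                  # per-shell soft limit; raise it with: ulimit -n 4096
riak start
riak-admin test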
