What does "soft/hard nproc" mean on Linux (limits.conf)?

What does "hard/soft nproc"mean on CentOS7
What diffrent with 'hard/soft nproc' or 'hard/soft noproc'
THS
/etc/security/limits.conf
* hard nofile 131702
* soft nofile 131702
* hard nproc 131702
* soft nproc 131702
* hard core unlimited
* soft core unlimited
* hard memlock unlimited
* soft memlock unlimited

nofile # maximum number of open file descriptors
nproc # maximum number of processes (on Linux, each thread counts as a process)
core unlimited # no cap on core dump file size
memlock unlimited # no cap on locked-in-memory address space
The soft limit is the value actually enforced for a session; the hard limit is the ceiling up to which an unprivileged user may raise their own soft limit.
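To make the soft/hard distinction concrete, here is a minimal sketch (not from the original answer) using Python's standard resource module; it reads both values and then raises the soft limit, which an unprivileged process may do only up to the hard limit:

import resource

# Soft and hard limits for open file descriptors (nofile in limits.conf)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("nofile: soft=%d hard=%d" % (soft, hard))

# An unprivileged process may raise its soft limit, but only up to the hard limit
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# RLIMIT_NPROC corresponds to nproc in limits.conf
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("nproc: soft=%d hard=%d" % (soft, hard))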

Related

How to make a TCP SYN request with traceroute

I want to make TCP SYN requests using traceroute and discovered the flag -T. However, I don't know which values I have to use in order to make such requests.
Use the -T option if you want to use TCP SYN for probing the remote address, as shown in the example below:
[root@localhost ~]# traceroute -T google.com
traceroute to google.com (172.217.31.206), 30 hops max, 60 byte packets
1 gateway (192.168.0.1) 2.050 ms 3.041 ms 2.819 ms
2 10.234.0.1 (10.234.0.1) 3.771 ms 3.749 ms 3.716 ms
3 * * *
4 * * *
5 * * *
6 14.140.100.6.static-vegas.net.us (14.140.100.6) 12.234 ms 13.070 ms 12.644 ms
7 * * *
8 * * *
9 108.170.253.97 (108.170.253.97) 11.451 ms 108.170.253.113 (108.170.253.113) 13.403 ms 108.170.253.97 (108.170.253.97) 13.886 ms
10 74.125.253.13 (74.125.253.13) 9.523 ms 11.963 ms 11.388 ms
11 maa03s28-in-f14.1e100.net (172.217.31.206) 10.477 ms 10.222 ms 8.391 ms
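As a rough sketch of what traceroute -T does under the hood (an illustration, not from the original answer; it assumes the third-party scapy package and root privileges): send TCP SYN packets with an increasing IP TTL and print whichever hop answers each probe.

from scapy.all import IP, TCP, sr1

target = "google.com"
for ttl in range(1, 31):
    # TCP SYN probe to port 80 with a limited TTL
    reply = sr1(IP(dst=target, ttl=ttl) / TCP(dport=80, flags="S"),
                timeout=2, verbose=0)
    if reply is None:
        print("%2d  * * *" % ttl)            # no answer within the timeout
    elif reply.haslayer(TCP):                # SYN/ACK or RST: reached the target
        print("%2d  %s" % (ttl, reply.src))
        break
    else:                                    # ICMP time-exceeded from an intermediate hop
        print("%2d  %s" % (ttl, reply.src))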

gRPC - Python not able to send messages greater than 4 MB

I am currently using grpc version 1.9.0. The gRPC Python client seems to throw an error when the message size is greater than 4 MB:
Rendezvous of RPC that terminated with (StatusCode.RESOURCE_EXHAUSTED, Received message larger than max
Does anyone know how to handle this?
Specifying the options below does not work:
import grpc

channel = grpc.insecure_channel(conn_str, options=[
    ('grpc.max_send_message_length', 1000000 * 1000),
    ('grpc.max_receive_message_length', 1000000 * 1000),
])
I have googled a lot, but in vain.
I solved it by using the gRPC Python Cython layer: https://github.com/grpc/grpc/tree/master/src/python/grpcio/grpc/_cython
For example, if you want a 100 MB max message_length, the options will be:
from grpc._cython import cygrpc

options = [(cygrpc.ChannelArgKey.max_send_message_length, 100 * 1024 * 1024),
           (cygrpc.ChannelArgKey.max_receive_message_length, 100 * 1024 * 1024)]
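For completeness, here is a sketch using only the public API (assuming a reasonably recent grpcio and a hypothetical address): the same channel arguments are accepted by grpc.server, and since the receiving side enforces its own limit, it usually has to be raised there as well, not just on the client channel.

from concurrent import futures
import grpc

MAX_MESSAGE_LENGTH = 100 * 1024 * 1024  # 100 MB

# Client side: raise both limits on the channel
channel = grpc.insecure_channel(
    "localhost:50051",  # hypothetical address
    options=[
        ("grpc.max_send_message_length", MAX_MESSAGE_LENGTH),
        ("grpc.max_receive_message_length", MAX_MESSAGE_LENGTH),
    ],
)

# Server side: the receiver enforces its own limit, so raise it here too
server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=4),
    options=[
        ("grpc.max_send_message_length", MAX_MESSAGE_LENGTH),
        ("grpc.max_receive_message_length", MAX_MESSAGE_LENGTH),
    ],
)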

IBM BPM 8.5 Process Designer hangs when multiple users log in to the same Process Center

We have an IBM BPM 8.5 Process Center and we are using Process Designer locally. When multiple users try logging in to the Process Center, the local PD stops responding; with only one user, we are able to work properly. We have already tried increasing the heap size. Please advise a solution.
Assuming you are using a Linux environment, try the following:
Increase the allowable stack size in the file "limits.conf", located in the directory "/etc/security". Edit it and add the snippet below:
# - stack - max stack size (KB)
user_name soft stack 32768
user_name hard stack 32768
# - nofile - max number of open files
user_name soft nofile 65536
user_name hard nofile 65536
# - nproc - max number of processes
user_name soft nproc 16384
user_name hard nproc 16384
After editing, log in again and check the current values with ulimit -a (ulimit -n shows just the open-file limit).
Increase the umask value by issuing the command "umask 077" (this makes default file permissions more restrictive).
Hope this helps!
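To verify that the new limits actually took effect for the Process Center JVM (a small illustration, not part of the original answer; limits.conf changes only apply to sessions started after re-login), the effective limits of any running process can be read from /proc:

import sys

def print_limits(pid="self"):
    # /proc/<pid>/limits lists each limit as: name, soft value, hard value, units
    with open("/proc/%s/limits" % pid) as f:
        for line in f:
            if line.startswith(("Max stack size", "Max open files", "Max processes")):
                print(line.rstrip())

# Pass the JVM's PID on the command line, or default to this process
print_limits(sys.argv[1] if len(sys.argv) > 1 else "self")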

R AWS - RAM usage

I am trying an AWS r3.4xlarge instance with the RStudio Server Amazon Machine Image (AMI).
ubuntu@ip:~$ free -m
             total       used       free     shared    buffers     cached
Mem:        122953       8394     114558          0         13        232
-/+ buffers/cache:        8148     114804
Swap:         1023          0       1023
ubuntu@ip:~$
With 122 GB RAM and 16 vCPUs I thought that R would be really fast with medium-sized datasets. However, R uses only 8.3 GB when I run rpart() on a dataset with 10M rows and 21 columns (the german data replicated 1,000 times).
This is the result of ulimit -a:
ubuntu@ip:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 983543
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 983543
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
rpart() has now been running for more than 30 minutes and I wonder why. Is this a CPU-intensive task? It does not seem so from htop.
Update: rpart() finished now! Is there a way to make it faster, or can this slowness not be avoided when using rpart?
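As an aside (an illustration, not part of the original question; it assumes the third-party psutil package), whether a long-running process such as this R session is CPU-bound or memory-bound can be sampled programmatically instead of eyeballing htop:

import psutil

def watch(pid, interval=5):
    p = psutil.Process(pid)
    while p.is_running():
        cpu = p.cpu_percent(interval=interval)  # % of one core over the interval
        rss = p.memory_info().rss / 1024 ** 3   # resident memory in GiB
        print("cpu=%5.1f%%  rss=%.1f GiB" % (cpu, rss))

On a 16-vCPU machine, a reading that stays pinned near 100% of a single core would point to a single-threaded bottleneck rather than a memory problem.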

Conflict between R and Openfoam in Ubuntu

I'm working on a PC with Ubuntu 12.04 LTS that has the OpenFOAM libraries installed. Recently, I installed the R software and didn't have any problems during the installation. The problem appears when running R: when I type R in the terminal, I get this:
luke@glinux:~$ R
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.2.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : 2.2.0-5be49240882f
Exec : R
Date : Mar 15 2014
Time : 10:56:27
Host : "glinux"
PID : 7525
Case : /home/luke
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time
fileName::stripInvalid() called for invalid fileName UbuntuOne
For debug level (= 2) > 1 this is considered fatal
Aborted (core dumped)
Why, when I want to work with R, does the terminal show me the OpenFOAM version?
What can I do to solve this problem?
You can figure out which versions of R the system knows about (and which one it will use by default) by running which -a R or whereis R.
If you want to run the statistical program R instead of the OpenFOAM executable, you can run /usr/bin/R (fill in the real full path for your system), or set up an alias for R: alias R=/usr/bin/R (thanks to @XavierStuvw).
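As a side note (an illustration, not from the original answer), the same PATH lookup that which -a performs can be reproduced in a few lines of Python, which makes it easy to see every executable named R that shadows another:

import os
import shutil

# First match on PATH -- the one the shell would normally run
print(shutil.which("R"))

# All matches, in PATH order (mimics `which -a R`)
for d in os.environ["PATH"].split(os.pathsep):
    candidate = os.path.join(d, "R")
    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
        print(candidate)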
