I am currently using gRPC version 1.9.0. The gRPC Python client seems to throw an error when the message size is greater than 4 MB:
Rendezvous of RPC that terminated with (StatusCode.RESOURCE_EXHAUSTED, Received message larger than max
Does anyone know how to handle this?
Specifying the options below does not work:
channel = grpc.insecure_channel(conn_str, options=[('grpc.max_send_message_length', 1000000 * 1000),
('grpc.max_receive_message_length', 1000000 * 1000)])
I have googled a lot, but in vain.
I solved it by using the gRPC Python Cython layer: https://github.com/grpc/grpc/tree/master/src/python/grpcio/grpc/_cython
For example, if you want a 100 MB max message_length, the options will be:
options = [(cygrpc.ChannelArgKey.max_send_message_length, 100 * 1024 * 1024),
(cygrpc.ChannelArgKey.max_receive_message_length, 100 * 1024 * 1024)]
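For reference, with more recent grpcio releases the plain string keys also work, as long as the limits are set on both the channel and the server (the server keeps its own 4 MB receive default). A minimal sketch, where the address and worker count are placeholders:
from concurrent import futures
import grpc

MAX_MESSAGE_LENGTH = 100 * 1024 * 1024  # 100 MB

# Client side: raise both send and receive limits on the channel.
channel = grpc.insecure_channel(
    'localhost:50051',
    options=[
        ('grpc.max_send_message_length', MAX_MESSAGE_LENGTH),
        ('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH),
    ],
)

# Server side: the same options must be set here as well, otherwise the
# server still rejects incoming messages larger than the 4 MB default.
server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=4),
    options=[
        ('grpc.max_send_message_length', MAX_MESSAGE_LENGTH),
        ('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH),
    ],
)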
To show my configuration: I followed this tutorial up to "Securing the Application":
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uswgi-and-nginx-on-ubuntu-18-04#step-4-%E2%80%94-configuring-uwsgi
So my .ini looks like this:
[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = myproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true
In Flask I'm using SQLAlchemy with this configuration:
app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN'] = True
app.config['SQLALCHEMY_POOL_RECYCLE'] = 299
app.config['SQLALCHEMY_POOL_TIMEOUT'] = 20
My API endpoints are like this, with various themes; they receive parameters via GET, process them, and return JSON (a filled-in sketch follows the stubs below):
@app.route('/api/theme1/subtheme1')
@auth.login_required
def get_test1(): ...
@app.route('/api/theme1/subtheme2')
@auth.login_required
def get_test2(): ...
@app.route('/api/theme1/subtheme3')
@auth.login_required
def get_test3(): ...
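For illustration only, a filled-in version of one of these endpoints might look roughly like the following; the model, parameter names, and to_dict() helper are hypothetical placeholders, not the actual code:
from flask import jsonify, request

# Hypothetical endpoint body: Measurement, 'start', 'end' and to_dict()
# are placeholders standing in for the real model and processing.
@app.route('/api/theme1/subtheme1')
@auth.login_required
def get_test1():
    start = request.args.get('start')  # parameters arrive via GET
    end = request.args.get('end')
    rows = Measurement.query.filter(Measurement.date.between(start, end)).all()
    return jsonify([row.to_dict() for row in rows])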
Now to my problem: when I make, for example, 3 simultaneous calls (and no other calls to the API) to those 3 endpoints, the CPU usage of the uWSGI process spikes to 25% per call.
I'm using a small Compute Engine instance, just running this for now; the RAM is there for other tests: n1-highmem-2 (2 vCPUs, 13 GB memory).
I have googled and searched here, and even after tuning the configuration a little I cannot reduce the CPU usage, so I cannot improve the overall performance of the API.
Any idea what I could be doing wrong? Why the spike in CPU usage?
Thank you!
My inputs are hundreds of big one-line JSON files (~10-20 MB each).
After getting out-of-memory errors with my real setup (with two custom filters), I simplified the setup to isolate the problem.
logstash --verbose -e 'input { tcp { port => 5000 } } output { file { path => "/dev/null" } }'
My test input is a multi-level nested JSON object:
$ ls -sh example_fixed.json
9.7M example_fixed.json
If I send the file once, it works fine. But if I do:
$ repeat 50 cat example_fixed.json|nc -v localhost 5000
I get the error message:
Logstash startup completed
Using version 0.1.x codec plugin 'line'. This plugin isn't well supported by the community and likely has no maintainer. {:level=>:info}
Opening file {:path=>"/dev/null", :level=>:info}
Starting stale files cleanup cycle {:files=>{"/dev/null"=>#<IOWriter:0x6f51765 #active=true, #io=#<File:/dev/null>>}, :level=>:info}
Error: Your application used more memory than the safety cap of 500M.
Specify -J-Xmx####m to increase it (#### = cap size in MB).
Specify -w for full OutOfMemoryError stack trace
I have determined that the error triggers if I send the input more than 30 times with a heap size of 500 MB. If I increase the heap size, this limit goes up accordingly.
However, from the documentation I understand that Logstash should be able to throttle the input when it cannot process events quickly enough.
In fact, if I sleep 0.1 s after each send, it can handle up to 100 repetitions, but not 1000. So I assume the input is not being throttled properly, and whenever the input rate is higher than the processing rate, it is only a matter of time before the heap fills up and Logstash crashes.
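For what it's worth, here is a rough Python equivalent of the zsh one-liner that makes the pause between sends explicit; the file name, host, and port mirror the setup above, and the single-connection handling is only an approximation of what nc does:
import socket
import time

# Rough equivalent of `repeat 50 cat example_fixed.json | nc localhost 5000`,
# with an optional pause between sends to experiment with input throttling.
def send_repeatedly(path='example_fixed.json', host='localhost', port=5000,
                    repeats=50, delay=0.0):
    with open(path, 'rb') as f:
        payload = f.read()
    sock = socket.create_connection((host, port))
    try:
        for _ in range(repeats):
            sock.sendall(payload)
            if delay:
                time.sleep(delay)  # e.g. delay=0.1 mirrors the `sleep 0.1` test
    finally:
        sock.close()

send_repeatedly(repeats=50, delay=0.1)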
I am having problems getting the RCurl function getURL to access an HTTPS URL on a server that uses a self-signed certificate. I'm running R 3.0.2 on Mac OS X 10.9.2.
I have read the FAQ and the curl page on the subject. So this is where I stand:
I have saved a copy of the certificate to disk (~/cert.pem).
I have used this very same file to connect to the server with python-requests and its 'verify' option, and it succeeded.
curl on the command-line seems to be ignoring the --cacert option. I succeeded in accessing the website with it after I flagged the certificate as trusted using the Mac OS X 'Keychain Access' app.
RCurl stubbornly refuses to connect to the website with the following code:
getURL("https://somesite.tld", verbose=T, cainfo=normalizePath("~/cert.pem"))
This is the output I get:
* Adding handle: conn: 0x7f92771b0400
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 38 (0x7f92771b0400) send_pipe: 1, recv_pipe: 0
* About to connect() to somesite.tld port 443 (#38)
* Trying 42.42.42.42...
* Connected to somesite.tld (42.42.42.42) port 443 (#38)
* SSL certificate problem: Invalid certificate chain
* Closing connection 38
When I tested both curl with the --cacert option and the RCurl code above in a Linux VM, with the same cert.pem file and the exact same URL, it worked perfectly.
So, identical tests on Linux and Mac OS X, and they fail only on Mac OS X. Even adding the certificate to the keychain didn't work.
The only thing that does work is using ssl.verifypeer=FALSE, but I don't want to do that for security reasons.
I'm out of ideas here. Anyone else have any suggestions on how to get this to work?
You can try:
library("RCurl")
URL1 <- "https://data.mexbt.com/ticker/btcusd"
getURL(URL1, cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl"))
Coming back to this issue, I just want to point out that if you are still using RCurl, you should switch to httr (which uses curl) instead.
I have confirmed that using config(cainfo="/path/to/certificate") with httr connections will work as intended.
I have installed pgpool 3.2.1 with 2 backends in streaming replication mode, with load balancing and connection pooling. I did some high-load tests trying to exhaust the pgpool connections.
Supposing that this rule is correct: max_pool * num_init_children <= (max_connections - superuser_reserved_connections)
Test 1:
num_init_children = 90
max_pool = 1
(only in the master)
max_connections = 100
superuser_reserved_connections = 3
The result of psql -U postgres -c 'SELECT count(*) FROM pg_stat_activity' was 90.
Test 2:
num_init_children = 90
max_pool = 2
(only in the master)
max_connections = 100
superuser_reserved_connections = 3
The result of psql -U postgres -c 'SELECT count(*) FROM pg_stat_activity' was 91. What happened to the other 6 connections needed to reach 97, which is the maximum number of connections I can get to PostgreSQL?
In both cases pgpoolAdmin showed all connections in use, connections to the database froze, and no new connections were allowed.
Thank you!
pgpool uses the following rules to control the connections:
max_pool*num_init_children <= (max_connections - superuser_reserved_connections) (no query canceling needed)
max_pool*num_init_children*2 <= (max_connections - superuser_reserved_connections) (query canceling needed)
So the problem is that when query cancelling is needed, you must configure PostgreSQL with double the number of connections configured in pgpool.
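As a quick arithmetic check of those rules against the numbers from the question (num_init_children = 90, max_pool = 2, superuser_reserved_connections = 3); this is plain arithmetic, not pgpool code:
# Back-of-the-envelope check of the pgpool sizing rules above.
num_init_children = 90
max_pool = 2
superuser_reserved_connections = 3

no_cancel = max_pool * num_init_children        # 180
with_cancel = max_pool * num_init_children * 2  # 360

# max_connections must cover the requirement plus the reserved slots:
print(no_cancel + superuser_reserved_connections)    # 183
print(with_cancel + superuser_reserved_connections)  # 363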
I'm running an Ubuntu Server 12.04 instance on EC2 with an installation of ircd-hybrid 7.2 on it. Right now I'm trying to load-test the server by making a bunch of connections and seeing how much the server can handle. I have a script that connects to the channel.
My problem is that I can get a maximum of 4026 connections to the server. My other socket connections just don't seem to work. I have max clients set to 100k just to be safe, and 50k for the max number per IP.
When I run:
sysctl fs.file-nr -> fs.file-nr = 4576 0 1513750
Also, my ulimits have been set:
ulimit -S -> 65536
My ulimit -n is 1024, but since I can get 4026 connections, I don't see how that's affecting it.
ulimit -n -> 1024
Memory and CPU are also nowhere even close to maximum when I run into this.
My code is this:
import random
import sys
import socket
import string
import time

# Random 40-character nick/ident/realname so each test client is unique.
n = ''.join(random.choice(string.letters) for i in xrange(40))

HOST = "<MYHOST IS HERE>"
PORT = 6666
NICK = n
IDENT = n
REALNAME = n
readbuffer = ""

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send("NICK %s\r\n" % NICK)
s.send("USER %s %s %s :%s\r\n" % (IDENT, HOST, REALNAME, REALNAME))
s.send('JOIN #foobar\r\n')

while 1:
    # Accumulate data and split it into complete lines.
    readbuffer = readbuffer + s.recv(1024)
    temp = string.split(readbuffer, "\n")
    readbuffer = temp.pop()  # keep any partial trailing line for the next read
    for line in temp:
        line = string.rstrip(line)
        line = string.split(line)
        if not line:
            continue  # skip blank lines
        if 'PRIVMSG' in line:
            print line
        if line[0] == "PING":
            # Answer server keepalives so the connection is not dropped.
            s.send("PONG %s\r\n" % line[1])
Is there a setting in ircd-hybrid that limits this? The terminal window says "Server is full" when I try to connect with a regular client while I already have 4026 connections.
There are two types of ulimit, hard and soft. A ulimit on a particular resource may be increased by the process up to the hard limit, but no further.
On my box (Ubuntu 12.04), the soft file descriptor limit is 1024, but the hard limit is 4096:
$ ulimit -n
1024
moment@moment:~/tmp 20:26:04 0
$ ulimit -n -H
4096
moment@moment:~/tmp 20:26:16 0
$ ulimit -n -S
1024
It's entirely plausible that your IRC server is raising its soft limit up to this hard limit.
A horrible hack to temporarily increase your ulimit works like this:
sudo su
ulimit -n 10000
su USERNAME
Long term, you would need to increase the limit system-wide or, preferably, increase the ulimit for just the process you are running. For daemons I normally do this using the ulimit setting in Upstart configuration files.
In general, strace can be useful for debugging problems like this (it will probably show an earlier call that raises the file descriptor ulimit).
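If it helps, the same check and raise can also be done from inside the Python load-testing script itself with the standard resource module; this is only a sketch, and it can only go up to the hard limit without privileges:
import resource

# Inspect the current file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft=%d hard=%d" % (soft, hard))  # e.g. soft=1024 hard=4096

# An unprivileged process may raise its soft limit up to the hard limit;
# going beyond that needs root (or limits.conf / the daemon's Upstart config).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))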