imap_client_buffer not being obeyed on nginx - nginx
I have imap_client_buffer set to 64k (as required by the IMAP protocol) in the nginx.conf file.
However, when an IMAP client sends a very long command post-authentication, the command gets truncated at 4k (the default page size on Linux).
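For reference, the relevant part of my nginx.conf looks roughly like this (the auth_http URL and the listen port are placeholders, not my real values; proxy_buffer carries the 120000 test value mentioned below):

mail {
    auth_http           localhost:9000/auth;  # placeholder auth server
    imap_client_buffer  64k;                  # the directive I expect to be obeyed
    proxy_buffer        120000;               # test value, visible in gdb below

    server {
        listen    143;
        protocol  imap;
        proxy     on;
    }
}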
How can I debug this problem? I have stepped through the code using gdb. As far as I can see, in the mail proxy module the value from the conf file (120000 for testing) is picked up correctly:
(gdb) p p->upstream.connection->data
$24 = (void *) 0x9e4bd48
(gdb) p s
$25 = (ngx_mail_session_t *) 0x9e4bd48
(gdb) p p->buffer->end
Cannot access memory at address 0x1c
(gdb) p s->buffer->end - s->buffer->last
$26 = 120000
(gdb) p s
$27 = (ngx_mail_session_t *) 0x9e4bd48
(gdb) n
205 pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module);
(gdb) n
207 s->proxy->buffer = ngx_create_temp_buf(s->connection->pool,
(gdb) p pcf
$28 = (ngx_mail_proxy_conf_t *) 0x9e3c480
(gdb) p *pcf
$29 = {enable = 1, pass_error_message = 1, xclient = 1, buffer_size = 120000,
timeout = 86400000, upstream_quick_retries = 0, upstream_retries = 0,
min_initial_retry_delay = 3000, max_initial_retry_delay = 3000,
max_retry_delay = 60000}
When the command below is sent using telnet, only 4k of data is accepted, and nginx then hangs until I hit Enter on the keyboard, after which the truncated command is sent to the upstream IMAP server.
I am using nginx 0.78; is this a known issue?
This is the command that was sent:
HP1L UID FETCH 344990,344996,344998,345004,345006,345010:345011,345015,345020,345043,345046,345049:345050,345053,345057:345059,345071,345080,345083,345085,345090,345092:345093,345096,345101:345102,345106,345112,345117,345136,345140,345142:345144,345146:345147,345150,345161,345163,345167,345174:345176,345195,345197,345203,345205,345207:345209,345214,345218,345221,345224,345229,345231,345233,345236,345239,345248,345264,345267,345272,345285,345290,345292,345301,345305,345308,345316,345320,345322,345324,345327,345358,345375,345384,345386,345391,345409,345427:345428,345430:345432,345434:345435,345437,345439,345443,345448,345450,345463,345468:345470,345492,345494,345499,345501,345504,345506,345515:345519,345522,345525,345535,345563,345568,345574,345577,345580,345582,345599,345622,345626,345630,345632:345633,345637,345640,345647:345648,345675,345684,345686:345687,345703:345704,345714:345717,345720,345722:345724,345726,345730,345734:345737,345749,345756,345759,345783,345785:345787,345790,345806:345807,345812,345816,345720,345722:345724,345726,345730,345734:345737,345749,345756,345759,345783,345785:345787,345790,345806:345807,345812,345817,345902,345919,345953,345978,345981,345984,345990,345997,346004,346008:346009,346011:346012,346022,346039,346044,346050,346061:346062,346066:346067,346075:346076,346081,346088,346090,346093,346096,346098:346099,346110,346140,346170,346172:346174,346187,346189,346193:346194,346197,346204,346212,346225,346241,346244,346259,346323,346325:346328,346331:346332,346337:346338,346342,346346,346353,346361:346362,346364,346420,346426,346432,346447,346450:346451,346453:346454,346456:346457,346459:346460,346466:346468,346474,346476,346479,346483,346496,346498:346501,346504,346530,346539,346546,346576,346589:346590,346594:346595,346601,346607:346609,346612,346614:346615,346617:346618,346625,346631,346638,346641,346654,346657,346661,346665,346671,346687:346688,346693,346695,346734:346735,346741,346747:346748,346755,346757,346777,346779,346786:346788,346791,346793,346801,346815,346821:346822,346825,346828,346837,346839,346843,346848,346857:346858,346860,346862:346863,346866,346868:346869,346877,346883,346895:346897,346899,346923,346927,346945,346948,346961,346964:346966,346968,346970,346974,346987,346989:346990,346992,347000,347003,347008:347011,347013,347021,347028,347032:347034,347036,347049,347051,347058,347076,347079,347081,347083,347085,347092,347096,347108,347130,347145:347148,347150,347155:347158,347161,347163:347164,347181,347184,347187:347189,347204,347210:347211,347215,347217:347220,347227:347228,347234,347244,347246,347251,347253,347263:347264,347266,347268,347275,347292,347294,347304,347308,347317:347320,347322,347325:347327,347340:347341,347346,347352:347353,347357,347360:347361,347375,347379,347382:347386,347389,347392,347402,347405:347406,347411,347433:347434,347438,347440:347441,347443:347444,347448,347459:347460,347465,347468:347469,347476:347479,347490,347497,347506,347526,347530,347545,347547,347555:347556,347601:347605,347632,347634,347641,347643:347646,347649,347653,347660,347668,347676,347707,347719,347722,347724,347727:347732,347735,347746,347754,347756:347757,347761,347776,347779,347791,347798,347800,347805,347816:347817,347822,347837,347841,347843,347846,347848,347851,347879,347885,347892:347894,347903,347907:347911,347915:347916,347918,347950,347952:347953,347981,347986:347988,348001,348037:348038,348049,348052,348056:348058,348061,348072,348074,348077:348078,348080,348082,348100,348105,348109,348111:348116,348119:348123,348131:348132,348138,348150:348151,348153,348157,348161:348163,348166,348168:348169,348171,348173,348176,348178,348180:348181,348201,348204,348208,348218:348219,348222,348226,348229:348230,348235,348238:348240,348244:348247,348249,348251:348253,348256:348257,348263,348285,348288:348289,348293,348298:348299,348301:348302,348305:348306,348310,348327,348332:348337,348340,348342,348344,348348,348351:348353,348356:348357,348360,348366,348377,348386,348390,348398,348400:348401,348406:348407,348419,348422,348424,348427:348428,348430,348432:348433,348439,348444,348447:348448,348450:348451,348454,348456,348459:348460,348473,348493,348497:348498,348504,348506,348508,348516,348520,348527,348530,348532,348546:348547,348551,348560:348563,348567,348570:348572,348574,348577,348581,348588,348595,348610,348632,348636,348642,348646,348667,348672:348673,348679,348703,348713:348714,348716,348718:348722,348728:348729,348731,348735,348743:348745,348749,348751:348752,348759,348768,348773,348780:348781,348784:348791 (UID FLAGS)
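To reproduce this without pasting the command by hand, a short script can generate a UID FETCH line of any desired length (a sketch; the UIDs and the stride are arbitrary, not taken from a real mailbox):

# Sketch: emit an IMAP UID FETCH command of roughly the requested size,
# to probe where the proxy starts truncating (e.g. just above 4096 bytes).
def make_fetch_command(target_len=8192, start_uid=344990):
    uids = []
    uid = start_uid
    cmd = ""
    while len(cmd) < target_len:
        uids.append(str(uid))
        uid += 3  # arbitrary stride, just to vary the UIDs
        cmd = "HP1L UID FETCH " + ",".join(uids) + " (UID FLAGS)"
    return cmd

print(make_fetch_command())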
Related
TRT inference using onnx - Error Code 1: Cuda Driver (invalid resource handle)
Currently I'm trying to convert a given ONNX file to a TensorRT file, and then do inference on the generated TensorRT file. To do so, I used the TensorRT Python binding API, but "Error Code 1: Cuda Driver (invalid resource handle)" happens and there is no helpful description of it. Can anyone help me to overcome this situation? Thanks in advance; below is my code snippet.

def trt_export(self):
    fp_16_mode = True
    ## Obviously, I provided appropriate file names
    trt_file_name = "PATH_TO_TRT_FILE"
    onnx_name = "PATH_TO_ONNX_FILE"

    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    config = builder.create_builder_config()
    config.max_workspace_size = (1 << 30)
    config.set_flag(trt.BuilderFlag.FP16)
    config.default_device_type = trt.DeviceType.GPU
    profile = builder.create_optimization_profile()
    profile.set_shape('input', (1, 3, IMG_SIZE, IMG_SIZE),
                      (12, 3, IMG_SIZE, IMG_SIZE),
                      (32, 3, IMG_SIZE, IMG_SIZE))  # random numbers for min/opt/max batch
    config.add_optimization_profile(profile)
    with open(onnx_name, 'rb') as model:
        if not parser.parse(model.read()):
            for error in range(parser.num_errors):
                print(parser.get_error(error))
    engine = builder.build_engine(network, config)
    buf = engine.serialize()
    with open(trt_file_name, 'wb') as f:
        f.write(buf)

def validate_trt_result(self, input_path):
    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
    trt_file_name = "PATH_TO_TRT_FILE"
    trt_runtime = trt.Runtime(TRT_LOGGER)
    with open(trt_file_name, 'rb') as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)
    cuda.init()
    device = cuda.Device(0)
    ctx = device.make_context()
    inputs, outputs, bindings = [], [], []
    context = engine.create_execution_context()
    stream = cuda.Stream()
    index = 0
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * -1  # assuming one batch
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
            context.set_binding_shape(index, [1, 3, IMG_SIZE, IMG_SIZE])
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
        index += 1
    print(context.all_binding_shapes_specified)

    input_img = cv2.imread(input_path)
    input_r = cv2.resize(input_img, dsize=(256, 256))
    input_p = np.transpose(input_r, (2, 0, 1))
    input_e = np.expand_dims(input_p, axis=0)
    input_f = input_e.astype(np.float32)
    input_f /= 255
    numpy_array_input = [input_f]
    hosts = [input.host for input in inputs]
    trt_types = [trt.int32]
    for numpy_array, host, trt_type in zip(numpy_array_input, hosts, trt_types):
        numpy_array = np.asarray(numpy_array).astype(trt.nptype(trt_type)).ravel()
        print(numpy_array.shape)
        np.copyto(host, numpy_array)
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    #### ERROR HAPPENS HERE ####
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    #### ERROR HAPPENS HERE ####
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()
    print("TRT model inference result : ")
    output = outputs[0].host
    for one in output:
        print(one)
    ctx.pop()
Looks like a ctx.push() line is missing before the line with memcpy_htod_async. Such an error can happen if TensorFlow / PyTorch is also using CUDA in parallel with TensorRT. See the related question/answer: https://stackoverflow.com/a/73996477/5655977
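For illustration, a minimal sketch of that fix inside validate_trt_result (assuming ctx is the context returned by device.make_context() in the question's code):

# Sketch: make the CUDA context current on this thread before the async
# copies and the inference call, and balance it with a pop afterwards.
ctx.push()
try:
    for inp in inputs:
        cuda.memcpy_htod_async(inp.device, inp.host, stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for out in outputs:
        cuda.memcpy_dtoh_async(out.host, out.device, stream)
    stream.synchronize()
finally:
    ctx.pop()  # balances the push so the context stack stays clean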
How do you simulate packet drop caused by UDP flooding in Mininet?
Just to be clear, I am not interested in adding a constant packet drop on a link (as described by this Stack Overflow question). I want to observe packet drop taking place naturally in the network due to congestion. The intention of my project is to observe the packet drop and delay taking place in a network (preferably an SDN) by varying the qdisc buffer size on the router node. I have a basic topology of three nodes h1, h2 and h3 connected to a router r. I am conducting my experiment along the lines of this tutorial, inside a custom environment. My code is shown below:

DELAY = '110ms'   # r--h3 link
BBR = False

import sys
import shelve
import os
import re
import time
import numpy as np
import matplotlib.pyplot as plt

from mininet.term import makeTerm
from mininet.net import Mininet
from mininet.node import Node, OVSKernelSwitch, Controller, RemoteController
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.topo import Topo
from mininet.log import setLogLevel, info

class LinuxRouter( Node ):
    "A Node with IP forwarding enabled."
    def config( self, **params ):
        super( LinuxRouter, self ).config( **params )
        # Enable forwarding on the router
        info( 'enabling forwarding on ', self )
        self.cmd( 'sysctl net.ipv4.ip_forward=1' )

    def terminate( self ):
        self.cmd( 'sysctl net.ipv4.ip_forward=0' )
        super( LinuxRouter, self ).terminate()

class RTopo(Topo):
    def build(self, **_opts):
        defaultIP = '10.0.1.1/24'  # IP address for r0-eth1
        r = self.addNode( 'r', cls=LinuxRouter )  # , ip=defaultIP )
        h1 = self.addHost( 'h1', ip='10.0.1.10/24', defaultRoute='via 10.0.1.1' )
        h2 = self.addHost( 'h2', ip='10.0.2.10/24', defaultRoute='via 10.0.2.1' )
        h3 = self.addHost( 'h3', ip='10.0.3.10/24', defaultRoute='via 10.0.3.1' )
        self.addLink( h1, r, intfName1='h1-eth', intfName2='r-eth1',
                      bw=80, params2={'ip': '10.0.1.1/24'} )
        self.addLink( h2, r, intfName1='h2-eth', intfName2='r-eth2',
                      bw=80, params2={'ip': '10.0.2.1/24'} )
        self.addLink( h3, r, intfName1='h3-eth', intfName2='r-eth3',
                      params2={'ip': '10.0.3.1/24'},
                      delay=DELAY, queue=QUEUE )  # apparently queue is IGNORED here

def main():
    rtopo = RTopo()
    net = Mininet( topo=rtopo,
                   link=TCLink,
                   # switch=OVSKernelSwitch,
                   # controller=RemoteController,
                   autoSetMacs=True )  # --mac
    net.start()
    r = net['r']
    r.cmd('ip route list')
    # r's IPv4 addresses are set here, not above.
    r.cmd('ifconfig r-eth1 10.0.1.1/24')
    r.cmd('ifconfig r-eth2 10.0.2.1/24')
    r.cmd('ifconfig r-eth3 10.0.3.1/24')
    r.cmd('sysctl net.ipv4.ip_forward=1')

    h1 = net['h1']
    h2 = net['h2']
    h3 = net['h3']

    h3.cmdPrint("iperf -s -u -i 1 &")
    r.cmdPrint("tc qdisc del dev r-eth3 root")

    bsizes = []
    bsizes.extend(["1000mb", "10mb", "5mb", "1mb", "200kb"])
    bsizes.extend(["100kb", "50kb", "10kb", "5kb", "1kb", "100b"])
    pdrops = []
    delays = []
    init = 1
    pdrop_re = re.compile(r'(\d+)% packet loss')
    delay_re = re.compile(r'rtt min/avg/max/mdev = (\d+).(\d+)/(\d+).(\d+)/(\d+).(\d+)/(\d+).(\d+) ms')
    bsizes.reverse()

    for bsize in bsizes:
        if init:
            init = 0
            routercmd = "sudo tc qdisc add dev r-eth3 root tbf rate 18mbit limit {} burst 10kb".format(bsize)
        else:
            routercmd = "sudo tc qdisc replace dev r-eth3 root tbf rate 18mbit limit {} burst 10kb".format(bsize)
        r.cmdPrint(routercmd)

        h1.cmd("iperf -c 10.0.3.10 -u -b 20mb -t 30 -i 1 >>a1.txt &")
        h2.cmd("ping 10.0.3.10 -c 30 >> a2.txt")
        print("Sleeping 30 seconds")
        time.sleep(30)

        # Below is the code to analyse the delay and packet drop data
        f1 = open("a2.txt", 'r')
        s = f1.read()
        f1.close()
        l1 = pdrop_re.findall(s)
        pdrop = l1[-1][0]
        pdrops.append(int(pdrop))
        print("Packet Drop = {}%".format(pdrop))
        l2 = delay_re.findall(s)
        delay = l2[-1][4] + '.' + l2[-1][5]
        delays.append(float(delay))
        print("Delay = {} ms".format(delay))

    bsizes = np.array(bsizes)
    delays = np.array(delays)
    pdrops = np.array(pdrops)
    plt.figure(0)
    plt.plot(bsizes, delays)
    plt.title("Delay")
    plt.savefig("delay.png")
    plt.show()
    plt.figure(1)
    plt.plot(bsizes, pdrops, 'r')
    plt.title("Packet Drop %")
    plt.savefig("pdrop.png")
    plt.show()

    for h in [r, h1, h2, h3]:
        h.cmd('/usr/sbin/sshd')
    CLI( net )
    net.stop()

setLogLevel('info')
main()

However, when I run the program, only the delay increases with queue/buffer size, as expected. The packet drop stays constant (apart from the initial 3% packet drop that occurs regardless of the queue size used). I am flummoxed by this since, theoretically, as buffer size decreases, the space to 'store' a packet on the queue decreases, so the chance of a packet being dropped should increase, as per the tutorial. The graphs are shown below:

[Graph depicting an increase in delay]

[Graph depicting a stagnant packet drop]

I need an explanation for this contrary behaviour. I would also appreciate a way to observe packet drop in my example. Could it have something to do with Mininet/SDNs in general prioritising ICMP over UDP packets, leading to a lack of packet drop? Or could it have something to do with controllers (I am using the default OpenFlow controller)?
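One way to observe drops directly (a sketch, not part of the script above; it relies on tc -s printing a "dropped" counter for the qdisc) would be to dump the tbf statistics on the router after each 30-second run, so drops at the qdisc are counted independently of what ping and iperf report end to end:

# Sketch: after each run, print the qdisc statistics for r-eth3.
# The "Sent ... (dropped X, overlimits Y requeues Z)" line counts packets
# dropped by the tbf qdisc itself.
stats = r.cmd("tc -s qdisc show dev r-eth3")
print(stats)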
Test if socket is empty (was: Reading data from a raw socket)
At the start, I thought that the bad performance of my driver was caused by the way in which I read data from a socket. This was the original function I used:

socket_char_reader = function(in_sock) {
  string_read <- raw(0)
  while ((rd <- readBin(in_sock, what = "raw", n = 1)) > 0) {
    string_read <- c(string_read, rd)
  }
  return(string_read %>% strip_CR_NUL() %>% rawToChar())
}

The results from 3 consecutive calls to this function give the expected result. It takes 0.004 seconds to read 29 bytes in total.

My second try reads the socket until it is empty; another function then splits the resulting raw array into 3 separate parts.

get_response = function() {
  in_sock <- self$get_socket()
  BUF_SIZE <- 8
  chars_read <- raw(0)
  while (length(rd <- readBin(in_sock, what = "raw", n = BUF_SIZE)) > 0) {
    chars_read <- c(chars_read, rd)
  }
  return(chars_read)
},

Reading from the socket now takes 2.049 seconds! Can somebody explain to me what the cause of this difference could be (or what the best method is for reading the socket until it is empty)? In the meantime I'll return to my original solution and continue looking for the cause of the bad performance.

Ben

I believe I found the cause (but not the solution). While debugging, I noticed that the delay is caused by the last call to readBin(). In socket_char_reader(), the stop condition for the while loop is based on the value of rd; if that value equals 0, the loop stops. In get_response(), the stop condition is based on the number of bytes in the buffer. Before that number can be determined, readBin() first waits to see whether any more bytes will be sent to the socket. The timeout period is set in the socketConnection() call:

private$conn <- socketConnection(host = "localhost", port,
                                 open = "w+b", server = FALSE,
                                 blocking = TRUE, encoding = "UTF-8",
                                 timeout = 1)

timeout has to be given a value > 0, otherwise it takes days before the loop stops. Is it possible to check whether there are still bytes in the socket without actually reading them?

Ben
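A possible direction for that last question: base R's socketSelect() reports whether a connection has bytes available without consuming them, and a zero timeout makes it a non-blocking poll. A minimal sketch (how it would slot into the driver is an assumption):

# Sketch: poll the connection before each read, so the final readBin()
# never sits waiting for data that will not arrive.
read_available <- function(in_sock, buf_size = 8) {
  chars_read <- raw(0)
  while (socketSelect(list(in_sock), write = FALSE, timeout = 0)) {
    rd <- readBin(in_sock, what = "raw", n = buf_size)
    if (length(rd) == 0) break
    chars_read <- c(chars_read, rd)
  }
  chars_read
}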
Porting to Python3: PyPDF2 mergePage() gives TypeError
I'm using Python 3.4.2 and PyPDF2 1.24 (also using reportlab 3.1.44 in case that helps) on Windows 7. I recently upgraded from Python 2.7 to 3.4, and am in the process of porting my code. This code is used to create a blank pdf page with links embedded in it (using reportlab) and merge it (using PyPDF2) with an existing pdf page. I had an issue with reportlab in that saving the canvas used StringIO, which needed to be changed to BytesIO, but after doing that I ran into this error:

Traceback (most recent call last):
  File "C:\cms_software\pdf_replica\builder.py", line 401, in merge_pdf_files
    input_page.mergePage(link_page)
  File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 2013, in mergePage
    self.mergePage(page2)
  File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 2059, in mergePage
    page2Content = PageObject._pushPopGS(page2Content, self.pdf)
  File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 1973, in _pushPopGS
    stream = ContentStream(contents, pdf)
  File "C:\Python34\lib\site-packages\PyPDF2\pdf.py", line 2446, in __init__
    stream = BytesIO(b_(stream.getData()))
  File "C:\Python34\lib\site-packages\PyPDF2\generic.py", line 826, in getData
    decoded._data = filters.decodeStreamData(self)
  File "C:\Python34\lib\site-packages\PyPDF2\filters.py", line 326, in decodeStreamData
    data = ASCII85Decode.decode(data)
  File "C:\Python34\lib\site-packages\PyPDF2\filters.py", line 264, in decode
    data = [y for y in data if not (y in ' \n\r\t')]
  File "C:\Python34\lib\site-packages\PyPDF2\filters.py", line 264, in <listcomp>
    data = [y for y in data if not (y in ' \n\r\t')]
TypeError: 'in <string>' requires string as left operand, not int

Here is the line the traceback points at, together with the line above it:

link_page = self.make_pdf_link_page(pdf, size, margin, scale_factor, debug_article_links)
if link_page != None:
    input_page.mergePage(link_page)

Here are the relevant parts of that make_pdf_link_page function:

packet = io.BytesIO()
can = canvas.Canvas(packet, pagesize=(size['width'], size['height']))
# ... left-out code here is just reportlab specifics for size and url stuff
can.linkURL(url, r1, thickness=1, color=colors.green)
can.rect(x1, y1, width, height, stroke=1, fill=0)
# create a new PDF with Reportlab that has the url link embedded
can.save()
packet.seek(0)
try:
    new_pdf = PdfFileReader(packet)
except Exception as e:
    logger.exception('e')
    return None
return new_pdf.getPage(0)

I'm assuming it's a problem with using BytesIO, but I can't create the page with reportlab using StringIO. This is a critical feature that used to work perfectly with Python 2.7, so I'd appreciate any kind of feedback on this. Thanks!

UPDATE: I've also tried changing from using BytesIO to just writing to a temp file and then merging. Unfortunately I got the same error. Here is the tempfile version:

import tempfile
temp_dir = tempfile.gettempdir()
temp_path = os.path.join(temp_dir, "tmp.pdf")
can = canvas.Canvas(temp_path, pagesize=(size['width'], size['height']))
# ...
can.showPage()
can.save()
try:
    new_pdf = PdfFileReader(temp_path)
except Exception as e:
    logger.exception('e')
    return None
return new_pdf.getPage(0)

UPDATE: I found an interesting bit of information on this. It seems that if I comment out the can.rect and can.linkURL calls, the pages will merge. So drawing anything on a page and then trying to merge it with my existing pdf is what causes the error.
After digging into the PyPDF2 library code, I was able to find my own answer. For Python 3 users, old libraries can be tricky: even if they say they support Python 3, they don't necessarily test everything. In this case, the problem was with the class ASCII85Decode in filters.py in PyPDF2. For Python 3, this class needs to return bytes. I borrowed the code for this same type of function from pdfminer3k, which is a Python 3 port of pdfminer. If you exchange the ASCII85Decode class for this code, it will work:

import struct

class ASCII85Decode(object):
    def decode(data, decodeParms=None):
        if isinstance(data, str):
            data = data.encode('ascii')
        n = b = 0
        out = bytearray()
        for c in data:
            if ord('!') <= c and c <= ord('u'):
                n += 1
                b = b * 85 + (c - 33)
                if n == 5:
                    out += struct.pack(b'>L', b)
                    n = b = 0
            elif c == ord('z'):
                assert n == 0
                out += b'\0\0\0\0'
            elif c == ord('~'):
                if n:
                    for _ in range(5 - n):
                        b = b * 85 + 84
                    out += struct.pack(b'>L', b)[:n - 1]
                break
        return bytes(out)
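If editing the installed package is not an option, the same fix can be applied as a monkey patch at startup (a sketch; it assumes decodeStreamData resolves ASCII85Decode through the module global, which is what the traceback above suggests):

import PyPDF2.filters

# Sketch: replace the broken decoder class before any PDF is parsed.
# ASCII85Decode is the pdfminer3k-derived class shown above.
PyPDF2.filters.ASCII85Decode = ASCII85Decode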
serial port speed
I am trying to connect a device to my Windows CE device through the serial port using PPP, and I discovered that the other device (arm-linux) changes its port speed continuously. It should be 38400 baud, but it is not constant. Should the speed be constant? What I'm doing to check it is:

stty -F /dev/ttyS3

and the output is:

speed 9600 baud; intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D;
eol = <undef>; eol2 = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-brkint -imaxbel -echo

[some time later]

speed 38400 baud;

[some time later]

speed 0 baud;

On the Windows CE device I get:

Error: 679 (Incorrect connection profile)

The point is: should the speed be constant on my Linux device, or should I be checking other things?
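To rule out the line settings themselves, a sketch of pinning the Linux side to a fixed rate before starting PPP (assuming /dev/ttyS3 is the PPP line; the raw and clocal flags are illustrative, not taken from the setup above):

# Force the line to 38400 baud and raw mode before bringing up PPP
stty -F /dev/ttyS3 38400 raw clocal -echo

# Confirm the setting took effect
stty -F /dev/ttyS3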