Network transfer finished but QNetworkReply reads data very slowly - Qt

I am using QNetworkAccessManager to communicate with my RESTful API server. One of my APIs needs to fetch a file from the server via GET; the file size is typically about 3 MB. Most of the time it works very well and very fast, but sometimes it takes a very long time and ends with RemoteHostClosedError.
My environment:
Qt 5.5 MinGW
Windows 10 64bit
API server run on the same computer
Here is my code to send GET request for the file:
QUrl url(QString("http://%1:%2/audio/channels")
             .arg(m_ip)
             .arg(m_port));
QNetworkRequest request(url);
QNetworkReply *reply = m_http->get(request);
// Use a lambda to handle the fetched file
connect(reply, &QNetworkReply::finished, [this, reply] {
    // Handle fetched file
});
When the slow issue occurred, I captured the traffic with Wireshark, and found that the network transfer finished very quickly and all data was transferred.
I also added a slot for the QNetworkReply::downloadProgress signal, and found that QNetworkReply was still downloading the data, but very slowly, at about 5 KB/s.
Here is the test code:
connect(reply, &QNetworkReply::downloadProgress, [this, reply](qint64 recved, qint64 total) {
    qDebug().noquote() << QDateTime::currentDateTimeUtc()
                       << " Downloading from " << m_ip << ":" << m_port
                       << ": " << recved << "/" << total;
});
Here is part of log output:
QDateTime(2022-04-20 09:06:32.870 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1144846 / 3070158
QDateTime(2022-04-20 09:06:32.969 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1145501 / 3070158
QDateTime(2022-04-20 09:06:33.070 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1146160 / 3070158
QDateTime(2022-04-20 09:06:33.170 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1146837 / 3070158
QDateTime(2022-04-20 09:06:33.271 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1147504 / 3070158
QDateTime(2022-04-20 09:06:33.371 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1148132 / 3070158
QDateTime(2022-04-20 09:06:33.470 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1148751 / 3070158
QDateTime(2022-04-20 09:06:33.571 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1149403 / 3070158
QDateTime(2022-04-20 09:06:33.671 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1150056 / 3070158
QDateTime(2022-04-20 09:06:33.771 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1150729 / 3070158
QDateTime(2022-04-20 09:06:33.871 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1151360 / 3070158
QDateTime(2022-04-20 09:06:33.971 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1152040 / 3070158
QDateTime(2022-04-20 09:06:34.071 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1152707 / 3070158
QDateTime(2022-04-20 09:06:34.171 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1153354 / 3070158
QDateTime(2022-04-20 09:06:34.272 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1153973 / 3070158
QDateTime(2022-04-20 09:06:34.371 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1154618 / 3070158
QDateTime(2022-04-20 09:06:34.471 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1155259 / 3070158
QDateTime(2022-04-20 09:06:34.572 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1155936 / 3070158
QDateTime(2022-04-20 09:06:34.673 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1156592 / 3070158
QDateTime(2022-04-20 09:06:34.772 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1157242 / 3070158
QDateTime(2022-04-20 09:06:34.873 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1157808 / 3070158
QDateTime(2022-04-20 09:06:34.973 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1158460 / 3070158
QDateTime(2022-04-20 09:06:35.073 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1159104 / 3070158
QDateTime(2022-04-20 09:06:35.173 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1159735 / 3070158
QDateTime(2022-04-20 09:06:35.273 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1160407 / 3070158
QDateTime(2022-04-20 09:06:35.373 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1161049 / 3070158
QDateTime(2022-04-20 09:06:35.474 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1161669 / 3070158
QDateTime(2022-04-20 09:06:35.573 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1162266 / 3070158
QDateTime(2022-04-20 09:06:35.673 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1162852 / 3070158
QDateTime(2022-04-20 09:06:35.682 UTC Qt::TimeSpec(UTC)) Downloading from 127.0.0.1 : 8800 : 1162901 / 3070158
Based on my tests, I think the cause is QNetworkReply's data-reading process: it is too slow, even though the actual network transfer finishes very quickly. After about 3 minutes the server closes the connection, QNetworkReply reports RemoteHostClosedError, and the request fails.
I am confused: the network transfer has finished, which means the data is already downloaded, so why does QNetworkReply still take so long to deliver it?
I would really appreciate it if anyone could help me with this. Thanks.
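One way to see where the time goes, sketched with the curl CLI against the same endpoint from the question (host and port assumed from the log above), is to time the raw download outside of Qt:

```shell
# Fetch the same resource and report raw size, speed, and total time.
# If curl is fast here while the Qt client crawls at ~5 KB/s, the bottleneck
# is on the client side (e.g. a starved event loop), not the network.
curl -s -o /dev/null \
     -w 'size: %{size_download} B  speed: %{speed_download} B/s  time: %{time_total}s\n' \
     'http://127.0.0.1:8800/audio/channels'
```

If the raw transfer is fast, it is worth checking whether anything blocks the thread running the QNetworkAccessManager, since QNetworkReply can only drain its socket when the event loop gets control.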

Related

API query to HTTPS endpoint in R works only after running update.packages('openssl')

I'm looking to write some simple ETL scripts that query an API in R. They only authenticate successfully after I have run update.packages('openssl'), and I'm not sure what that means, beyond something being misconfigured.
Edit with runnable sample:
The following test GET request...
library(httr)
library(openssl)
library(tidyverse)
library(DBI)
#### SET QUERY/FORM VARIABLES ####
#### SET URL ENDPOINTS ####
api_base <- 'https://reqres.in'
sub_endpt <- '/api/unknown/2'
sub_url <- paste0(api_base, sub_endpt)
page_req <- GET(sub_url
                , encode = 'json'
                , verbose(info = TRUE)
                )
...generates the following error:
* Trying 172.67.222.36...
* TCP_NODELAY set
* Connected to reqres.in (172.67.222.36) port 443 (#0)
* schannel: SSL/TLS connection with reqres.in port 443 (step 1/3)
* schannel: disabled server certificate revocation checks
* schannel: sending initial handshake data: sending 157 bytes...
* schannel: sent initial handshake data: sent 157 bytes
* schannel: SSL/TLS connection with reqres.in port 443 (step 2/3)
* schannel: failed to receive handshake, need more data
* schannel: SSL/TLS connection with reqres.in port 443 (step 2/3)
* schannel: encrypted data got 139
* schannel: encrypted data buffer: offset 139 length 4096
* schannel: next InitializeSecurityContext failed: SEC_E_INVALID_TOKEN (0x80090308) - The token supplied to the function is invalid
* Closing connection 0
* schannel: shutting down SSL/TLS connection with reqres.in port 443
* schannel: clear security context handle
Error in curl::curl_fetch_memory(url, handle = handle) :
schannel: next InitializeSecurityContext failed: SEC_E_INVALID_TOKEN (0x80090308) - The token supplied to the function is invalid
This same script works after running update.packages('openssl'), but only if update.packages('openssl') is run each time.
library(httr)
library(openssl)
library(tidyverse)
library(DBI)
update.packages('openssl')
#### SET QUERY/FORM VARIABLES ####
#### SET URL ENDPOINTS ####
api_base <- 'https://reqres.in'
sub_endpt <- '/api/unknown/2'
sub_url <- paste0(api_base, sub_endpt)
page_req <- GET(sub_url
, encode = 'json'
, verbose(info=TRUE)
)
After running the above, I instead receive:
* Hostname in DNS cache was stale, zapped
* Trying 104.21.59.93...
* TCP_NODELAY set
* Connected to reqres.in (104.21.59.93) port 443 (#1)
* schannel: SSL/TLS connection with reqres.in port 443 (step 1/3)
* schannel: disabled server certificate revocation checks
* schannel: sending initial handshake data: sending 157 bytes...
* schannel: sent initial handshake data: sent 157 bytes
* schannel: SSL/TLS connection with reqres.in port 443 (step 2/3)
* schannel: failed to receive handshake, need more data
* schannel: SSL/TLS connection with reqres.in port 443 (step 2/3)
* schannel: encrypted data got 2563
* schannel: encrypted data buffer: offset 2563 length 4096
* schannel: sending next handshake data: sending 126 bytes...
* schannel: SSL/TLS connection with reqres.in port 443 (step 2/3)
* schannel: encrypted data got 258
* schannel: encrypted data buffer: offset 258 length 4096
* schannel: SSL/TLS handshake complete
* schannel: SSL/TLS connection with reqres.in port 443 (step 3/3)
* schannel: stored credential handle in session cache
-> GET /api/unknown/2 HTTP/1.1
-> Host: reqres.in
-> User-Agent: libcurl/7.59.0 r-curl/3.3 httr/1.4.1
-> Accept-Encoding: gzip, deflate
-> Accept: application/json, text/xml, application/xml, */*
->
* schannel: client wants to read 16384 bytes
* schannel: encdata_buffer resized 17408
* schannel: encrypted data buffer: offset 0 length 17408
* schannel: encrypted data got 1168
* schannel: encrypted data buffer: offset 1168 length 17408
* schannel: decrypted data length: 1105
* schannel: decrypted data added: 1105
* schannel: decrypted data cached: offset 1105 length 16384
* schannel: encrypted data length: 34
* schannel: encrypted data cached: offset 34 length 17408
* schannel: decrypted data length: 5
* schannel: decrypted data added: 5
* schannel: decrypted data cached: offset 1110 length 16384
* schannel: encrypted data buffer: offset 0 length 17408
* schannel: decrypted data buffer: offset 1110 length 16384
* schannel: schannel_recv cleanup
* schannel: decrypted data returned 1110
* schannel: decrypted data buffer: offset 0 length 16384
<- HTTP/1.1 200 OK
<- Date: Sat, 29 Jan 2022 16:11:59 GMT
<- Content-Type: application/json; charset=utf-8
<- Transfer-Encoding: chunked
<- Connection: keep-alive
<- x-powered-by: Express
<- access-control-allow-origin: *
<- etag: W/"e8-ov4wWh4/OY1IMQMqWgskYtOFnVs"
<- via: 1.1 vegur
<- Cache-Control: max-age=14400
<- CF-Cache-Status: REVALIDATED
<- Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
<- Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=fyNvgiYFkSfx2bLgj4G0uybx5hN7f9uUIiFjrgijkYbZD7FkoBWhZfFl%2BniMEaaf%2B8Ur5nMmH1Pe3mCgnwudLptXgIaYDRi8WZvItt%2Fz69En%2B%2BwTEhWhRuBbMDk%3D"}],"group":"cf-nel","max_age":604800}
<- NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
<- Vary: Accept-Encoding
<- Server: cloudflare
<- CF-RAY: 6d53bd4fe9628c90-EWR
<- Content-Encoding: gzip
<- alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
<-
* Connection #1 to host reqres.in left intact
I've confirmed this fails with the same error if I restart my R session and run the script without update.packages('openssl').
I've also confirmed that this also succeeds when I run update.packages('openssl') before any of my library imports.
Original Post
I'm using the following libraries:
library(httr)
library(curl)
library(openssl)
library(tidyverse)
library(DBI)
After these imports, my sessionInfo() is:
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17763)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] DBI_1.1.0 forcats_0.5.0 stringr_1.4.0 dplyr_1.0.0 purrr_0.3.4 readr_1.3.1
[7] tidyr_1.1.0 tibble_3.0.2 ggplot2_3.3.2 tidyverse_1.3.0 openssl_1.4.2 curl_3.3
[13] httr_1.4.1 RevoUtils_11.0.2 RevoUtilsMath_11.0.0
loaded via a namespace (and not attached):
[1] Rcpp_1.0.5 cellranger_1.1.0 pillar_1.4.6 compiler_4.0.2 dbplyr_1.4.4 tools_4.0.2 lubridate_1.7.9
[8] jsonlite_1.7.0 lifecycle_0.2.0 gtable_0.3.0 pkgconfig_2.0.3 rlang_0.4.6 reprex_0.3.0 cli_2.0.2
[15] rstudioapi_0.11 haven_2.3.1 withr_2.2.0 xml2_1.3.2 fs_1.4.2 hms_0.5.3 generics_0.0.2
[22] vctrs_0.3.1 askpass_1.1 grid_4.0.2 tidyselect_1.1.0 glue_1.4.1 R6_2.3.0 fansi_0.4.1
[29] readxl_1.3.1 modelr_0.1.8 blob_1.2.1 magrittr_1.5 backports_1.1.7 scales_1.1.1 ellipsis_0.3.1
[36] rvest_0.3.5 assertthat_0.2.1 colorspace_1.4-1 stringi_1.4.6 munsell_0.5.0 broom_0.7.0 crayon_1.3.4
When I import those libraries and then run my script, I get the following:
* schannel: SSL/TLS connection with www.formstack.com port 443 (step 1/3)
* schannel: disabled server certificate revocation checks
* schannel: sending initial handshake data: sending 165 bytes...
* schannel: sent initial handshake data: sent 165 bytes
* schannel: SSL/TLS connection with www.formstack.com port 443 (step 2/3)
* schannel: failed to receive handshake, need more data
* schannel: SSL/TLS connection with www.formstack.com port 443 (step 2/3)
* schannel: encrypted data got 139
* schannel: encrypted data buffer: offset 139 length 4096
* schannel: next InitializeSecurityContext failed: SEC_E_INVALID_TOKEN (0x80090308) - The token supplied to the function is invalid
* Closing connection 0
* schannel: shutting down SSL/TLS connection with www.formstack.com port 443
* schannel: clear security context handle
Error in curl::curl_fetch_memory(url, handle = handle) :
schannel: next InitializeSecurityContext failed: SEC_E_INVALID_TOKEN (0x80090308) - The token supplied to the function is invalid
My script succeeds when I instead run:
library(httr)
library(curl)
library(openssl)
library(tidyverse)
library(DBI)
update.packages('openssl')
I suspected that this might have something to do with a conflict between curl and httr, because of this masking message upon import:
> library(curl)
Attaching package: ‘curl’
The following object is masked from ‘package:httr’:
handle_reset
But the script also succeeds when I run the update first and swap the order of httr and curl imports, such as:
update.packages('openssl')
library(curl)
library(httr)
library(openssl)
library(tidyverse)
library(DBI)
I hope that's enough to troubleshoot what's wrong in my environment, and I appreciate any and all lessons about library maintenance and dependencies.
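As a diagnostic sketch (assuming Rscript is on the PATH), it may help to print which libcurl build and TLS backend the curl package reports, both before and after the update, since the schannel errors point at the TLS layer rather than at openssl's R-level code:

```shell
# Show r-curl's libcurl version string and TLS backend (schannel on Windows
# builds); compare this output before and after update.packages('openssl').
Rscript -e 'print(curl::curl_version())'
```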

Content-Length limitation in HTTP requests to pod IP and port

Env:
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
km12-01 Ready master 26d v1.13.0 10.42.140.154 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.6.2
km12-02 Ready master 26d v1.13.0 10.42.104.113 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.6.2
km12-03 Ready master 26d v1.13.0 10.42.177.142 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.6.2
prod-k8s-node002 Ready node 25d v1.13.0 10.42.78.21 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.3.2
prod-k8s-tmpnode005 Ready node 24d v1.13.0 10.42.177.75 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.3.2
calico v3.3.1
What happened:
I have a deployment with 2 pods, scheduled on prod-k8s-node002 and prod-k8s-tmpnode005:
# kubectl -n gitlab-prod get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api-monkey-76489bd8c9-7gbkt 1/1 Running 0 35m 192.244.199.37 prod-k8s-node002 <none> <none>
api-monkey-76489bd8c9-w2zrs 1/1 Running 0 55m 192.244.124.240 prod-k8s-tmpnode005 <none> <none>
Now I curl each pod from a master node, say km12-01:
# # wait 0 ms, then respond with a JSON string whose 'data' property is the character '1' repeated 1000 times
# curl -v '192.244.124.240:3000/health/test?ms=0&content=1&repeat=1000'
* Trying 192.244.124.240...
* Connected to 192.244.124.240 (192.244.124.240) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=1000 HTTP/1.1
> Host: 192.244.124.240:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=utf-8
< Content-Length: 1011
< ETag: W/"3f3-oUOV2TWikka+Y8l16Cqo/Q"
< Date: Sat, 08 Dec 2018 20:12:26 GMT
< Connection: keep-alive
<
* Connection #0 to host 192.244.124.240 left intact
{"data":"1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"}
# curl -v '192.244.199.37:3000/health/test?ms=0&content=1&repeat=1000'
* Trying 192.244.199.37...
* Connected to 192.244.199.37 (192.244.199.37) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=1000 HTTP/1.1
> Host: 192.244.199.37:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=utf-8
< Content-Length: 1011
< ETag: W/"3f3-oUOV2TWikka+Y8l16Cqo/Q"
< Date: Sat, 08 Dec 2018 20:15:16 GMT
< Connection: keep-alive
<
* Connection #0 to host 192.244.199.37 left intact
{"data":"1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"}
Good, both work.
So what happens if I request a longer response body (2000 bytes)?
# curl -v '192.244.124.240:3000/health/test?ms=0&content=1&repeat=2000'
* Trying 192.244.124.240...
* Connected to 192.244.124.240 (192.244.124.240) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=2000 HTTP/1.1
> Host: 192.244.124.240:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Unfortunately, the connection hung and was then reset by the peer after a couple of minutes. It does work, however, from the host of the requested pod.
Curl a pod from its host:
# curl -v '192.244.124.240:3000/health/test?ms=0&content=1&repeat=2000'
* Trying 192.244.124.240...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 192.244.124.240 (192.244.124.240) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=2000 HTTP/1.1
> Host: 192.244.124.240:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
{"data":"11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
111111111"}< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=utf-8
< Content-Length: 2011
< ETag: W/"7db-ViIL+fXpsfh/YwmlqDHSsQ"
< Date: Sat, 08 Dec 2018 20:22:55 GMT
< Connection: keep-alive
<
{ [2011 bytes data]
100 2011 100 2011 0 0 440k 0 --:--:-- --:--:-- --:--:-- 490k
* Connection #0 to host 192.244.124.240 left intact
The other pod behaves the same way.
Summary:
I tried many times and found something interesting and strange: if I request a pod from any node other than the pod's host, the response body cannot be longer than 1140 bytes. Otherwise, the connection hangs.
My problem:
How does this happen? And how can I remove this limitation?
This is the kubeadm-1.12.0 initialization file:
# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
controlPlaneEndpoint: 10.42.79.210:6443
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
networking:
  podSubnet: 192.244.0.0/16
  serviceSubnet: 192.96.0.0/16
apiServerCertSANs:
  - 10.42.140.154
  - 10.42.104.113
  - 10.42.177.142
  - km12-01
  - km12-02
  - km12-03
  - 127.0.0.1
  - localhost
  - 10.42.79.210
etcd:
  external:
    endpoints:
      - https://10.42.140.154:2379
      - https://10.42.104.113:2379
      - https://10.42.177.142:2379
    caFile: /etc/kubernetes/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
clusterDNS:
  - 192.96.0.2
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
We use an external etcd cluster, and kube-proxy uses IPVS rather than iptables. We upgraded from 1.12.0 to 1.13.0:
# kubeadm upgrade apply 1.13.0
You can use this file to reproduce; the errors below came up after the upgrade. @Adam Otto
There are some IPVS errors in the kube-proxy logs.
It seems to be an MTU issue.
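A diagnostic sketch along that line (interface names like eth0/tunl0/cali* are assumptions about a typical Calico node): compare MTUs on the pod's host, since a pod veth MTU larger than the inter-node path MTU produces exactly this symptom of small responses passing while larger ones hang:

```shell
# List MTUs of the uplink, the Calico tunnel, and pod veth interfaces.
ip -o link | grep -E 'eth0|tunl0|cali'
# If the cali*/tunl0 MTU exceeds what the node network can carry, lower
# Calico's veth MTU (the key name varies by install method; this assumes
# the calico-config ConfigMap) and restart the calico pods:
kubectl -n kube-system edit configmap calico-config   # e.g. veth_mtu: "1440"
```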

RCurl getURL with FTP over proxy

In R I'm trying to get data from an FTP server using RCurl. The server has explicit encryption (FTPS) activated. Below is one of my many attempts to make this work, but I always get 530 Login or password incorrect! Yet the same username/password works in another client such as WinSCP. Any help is welcome!
library(RCurl)
# CURL_SSLVERSION_TLSv1_1 <- 5L
# CURL_SSLVERSION_TLSv1_2 <- 6L
opts <- curlOptions(
  proxy = "http://my.proxy/",
  proxyport = 8080,
  dirlistonly = TRUE,
  sslversion = 6L,
  ftp.use.epsv = FALSE,
  ssl.verifypeer = TRUE
)
dat <- getURL("ftp://myUser:myPass@ftps.myServer.com/", .opts = opts)
Here are some screenshots of the WinSCP setup (only a German OS was available).
Connection screen:
Protocol screen:
WinSCP connection string:
Here is the output with verbose = TRUE added:
* Trying 10.x.x.x...
* Connected to 10.x.x.x (10.x.x.x) port 8080 (#0)
> GET ftp://User:Pass#ftp.myserver.com/ HTTP/1.1
Host: ftp.myserver.com:21
Accept: */*
Proxy-Connection: Keep-Alive
< HTTP/1.1 403 Forbidden
< Server: squid/3.5.20
< Mime-Version: 1.0
< Date: Thu, 15 Nov 2018 11:12:24 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3898
< X-Squid-Error: ERR_FTP_FORBIDDEN 530
< Vary: Accept-Language
< Content-Language: en
< WWW-Authenticate: Basic realm="FTP ftp.myserver.com"
< X-Cache: MISS from Proxy-xxxxxx
< X-Cache-Lookup: MISS from Proxy-xxxxx:8080
< Via: 1.1 Proxy-xxxxxx (squid/3.5.20)
< Connection: keep-alive
<
* Connection #0 to host 10.x.x.x left intact
Here is the WinSCP log:
. 2018-11-15 17:09:15.999 Connecting to ftp.MyServer.com ...
. 2018-11-15 17:09:15.999 HTTP proxy command: CONNECT ftp.MyServer.com:21 HTTP/1.1
. 2018-11-15 17:09:15.999 Host: ftp.MyServer.com:21
. 2018-11-15 17:09:16.014 Connection to the proxy established, performing handshakes ...
. 2018-11-15 17:09:16.039 HTTP proxy response: HTTP/1.1 200 Connection established
. 2018-11-15 17:09:16.039 HTTP proxy headers:
. 2018-11-15 17:09:16.039
. 2018-11-15 17:09:16.039 Connected to ftp.MyServer.com, negotiating TLS connection...
< 2018-11-15 17:09:16.059 220 Welcome to FTP MyServer
> 2018-11-15 17:09:16.059 AUTH TLS
< 2018-11-15 17:09:16.093 234 Using authentication type TLS
. 2018-11-15 17:09:16.179 Verifying certificate for "MY Server" with fingerprint xx:yy:zz.............. and 18 failures
. 2018-11-15 17:09:16.179 Certificate common name "ftp.MyServer.com" matches hostname
. 2018-11-15 17:09:16.179 Certificate for "MY Server" matches cached fingerprint and failures
. 2018-11-15 17:09:16.179 Using TLSv1.2, cipher TLSv1/SSLv3: ECDHE-RSA-AES256-GCM-SHA384, 2048 bit RSA, ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
. 2018-11-15 17:09:16.216 TLS connection established. Waiting for the welcome message...
> 2018-11-15 17:09:16.216 USER MyUser
< 2018-11-15 17:09:16.216 331 Password required for MyUser
> 2018-11-15 17:09:16.216 PASS **********
< 2018-11-15 17:09:16.236 230 Logged on
Here is the R console log with more info:
* Trying 10.101.0.32...
* Connected to 10.101.0.32 (10.x.x.x) port 8080 (#0)
> GET ftp://MyUser:#ftp.MyServer.com/ HTTP/1.1
Host: ftp.MyServer.com:21
Accept: */*
Proxy-Connection: Keep-Alive
< HTTP/1.1 401 Unauthorized
< Server: squid/3.5.20
< Mime-Version: 1.0
< Date: Fri, 16 Nov 2018 08:25:46 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3898
< X-Squid-Error: ERR_FTP_FORBIDDEN 530
< Vary: Accept-Language
< Content-Language: en
< WWW-Authenticate: Basic realm="FTP MyServer ftp.MyServer.com"
< X-Cache: MISS from Proxy-xxxx
< X-Cache-Lookup: MISS from Proxy-xxxx:8080
< Via: 1.1 Proxy-xxxx (squid/3.5.20)
< Connection: keep-alive
<
* Ignoring the response-body
* Connection #0 to host 10.x.x.x left intact
* Issue another request to this URL: 'ftp://ftp.MyServer.com'
* Found bundle for host ftp.MyServer.com: 0xafea17a
* Re-using existing connection! (#0) with host 10.x.x.x
* Connected to 10.x.x.x (10.x.x.x) port 8080 (#0)
* Server auth using Basic with user 'Nordex'
> GET ftp://MyUser:#ftp.MyServer.com/ HTTP/1.1
Authorization: Basic Tm9yZGV50g==
Host: ftp.MyServer.com:21
Accept: */*
Proxy-Connection: Keep-Alive
< HTTP/1.1 401 Unauthorized
< Server: squid/3.5.20
< Mime-Version: 1.0
< Date: Fri, 16 Nov 2018 08:25:46 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3947
< X-Squid-Error: ERR_FTP_FORBIDDEN 530
< Vary: Accept-Language
< Content-Language: en
* Authentication problem. Ignoring this.
< WWW-Authenticate: Basic realm="FTP MyServer ftp.MyServer.com"
< X-Cache: MISS from Proxy-xxxx
< X-Cache-Lookup: MISS from Proxy-xxxx:8080
< Via: 1.1 Proxy-xxxx (squid/3.5.20)
< Connection: keep-alive
<
* Connection #0 to host 10.x.x.x left intact
WinSCP sends a CONNECT command to the proxy:
. 2018-11-15 17:09:15.999 HTTP proxy command: CONNECT ftp.MyServer.com:21 HTTP/1.1
while RCurl sends a GET command:
> GET ftp://User:Pass#ftp.myserver.com/ HTTP/1.1
In cURL, you use the CURLOPT_HTTPPROXYTUNNEL option to make it use CONNECT; from the curl documentation:
"make libcurl tunnel all operations through the HTTP proxy (set with CURLOPT_PROXY). There is a big difference between using a proxy and tunneling through it. Tunneling means that an HTTP CONNECT request is sent to the proxy, asking it to connect to a remote host on a specific port number, and then the traffic is just passed through the proxy."
In RCurl, that corresponds to httpproxytunnel. So this should do:
opts <- curlOptions(
proxy = "http://my.proxy/",
proxyport = 8080,
httpproxytunnel = 1,
...
)
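For comparison, the same option on the curl command line is -p/--proxytunnel; a sketch using the placeholder proxy and server names from the question:

```shell
# Ask the proxy for a CONNECT tunnel to the FTPS server instead of letting
# it rewrite the request as an HTTP GET; require TLS and list the directory.
curl -v --proxytunnel -x http://my.proxy:8080 \
     --ssl-reqd -l \
     'ftp://myUser:myPass@ftps.myServer.com/'
```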

Asterisk ARI / phpari - Bridge recording: "Recording not found"

I'm using phpari with Asterisk 13 and trying to record a bridge (mixing type).
In my code:
$this->phpariObject->bridges()->bridge_start_recording($bridgeID, "debug", "wav");
It returns:
array(4) {
["name"]=>
string(5) "debug"
["format"]=>
string(3) "wav"
["state"]=>
string(6) "queued"
["target_uri"]=>
string(15) "bridge:5:1:503"
}
When I stop and save with
$this->phpariObject->recordings()->recordings_live_stop_n_store("debug");
It returns FALSE.
I debugged it with
curl -v -u xxxx:xxxx -X POST "http://localhost:8088/ari/recordings/live/debug/stop"
Result:
* About to connect() to localhost port 8088 (#0)
* Trying ::1... Connection refused
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8088 (#0)
* Server auth using Basic with user 'xxxxx'
> POST /ari/recordings/live/debug/stop HTTP/1.1
> Authorization: Basic xxxxxxx
> User-Agent: curl/7.19.7 (xxxxx) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8088
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: Asterisk/13.2.0
< Date: Thu, 19 Feb 2015 11:58:18 GMT
< Cache-Control: no-cache, no-store
< Content-type: application/json
< Content-Length: 38
<
{
"message": "Recording not found"
* Connection #0 to host localhost left intact
* Closing connection #0
}
Asterisk CLI verbose 5 trace: http://pastebin.com/QZXnpXVA
So, I've solved the problem.
It was a simple write-permission problem.
The asterisk user couldn't write to /var/spool/asterisk/recording because it was owned by root.
Changing the ownership to the asterisk user solved it.
I detected this problem by looking at the Asterisk CLI trace again:
-- x=0, open writing: /var/spool/asterisk/recording/debug format: sln, (nil)
The (nil) indicates that the file could not be opened for writing, so I checked the folder and saw where the problem was.
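The fix described above, as commands (the spool path and the asterisk user are taken from the trace; adjust them if your install differs):

```shell
# Show current ownership of the recordings directory (root in the failing case)
ls -ld /var/spool/asterisk/recording
# Hand it to the user Asterisk runs as, so ARI can write recordings there
chown -R asterisk:asterisk /var/spool/asterisk/recording
```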

apigee geoqueries return 400 BAD REQUEST

I have my entity defined with the location property set, but when I use geoqueries it doesn't work. I tried both the JavaScript SDK and cURL; with cURL, it returned 400 BAD REQUEST.
Here's one of the entities:
{
address : 132 Canal St, Boston, MA 02114
created : 1405138421997
location :
latitude : -71.06065690000003
longitude : 42.36471450000001
metadata :
path : /bars/e889e9ee-097a-11e4-98ef-f94f76c7327e
modified : 1405390673164
name : Sports Grill Boston
region : Charlestown
type : bars
uuid : e889e9ee-097a-11e4-98ef-f94f76c7327e
website : http://www.sportsgrilleboston.com/
}
And here's the cURL command:
curl -v -X GET "https://api.usergrid.com/my-org/my-app/bars?access_token=token&ql=location within 16000 of 42.358431,-71.059773"
And here's what it returns:
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f8bab003c00) send_pipe: 1, recv_pipe: 0
* About to connect() to api.usergrid.com port 443 (#0)
* Trying 54.209.30.241...
* Connected to api.usergrid.com (54.209.30.241) port 443 (#0)
* TLS 1.0 connection using TLS_RSA_WITH_AES_128_CBC_SHA
* Server certificate: api.usergrid.com
* Server certificate: Thawte SSL CA
* Server certificate: thawte Primary Root CA
* Server certificate: Thawte Premium Server CA
> GET /my-org/my-app/bars?access_token=token&ql=location within 16000 of 42.358431,-71.059773 HTTP/1.1
> User-Agent: curl/7.30.0
> Host: api.usergrid.com
> Accept: */*
>
< HTTP/1.1 400 BAD_REQUEST
< Content-Length: 0
< Connection: Close
<
* Closing connection 0
Figured it out: you need to URL-encode the request:
curl -v -X GET "https://api.usergrid.com/my-org/my-app/bars?access_token=token&ql=location%20within%2016000%20of%2042.358431,%20-71.059773"
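Alternatively, the encoding can be left to curl itself; a sketch using -G with --data-urlencode (the org/app/token placeholders are the same as above). The echo on the last command keeps the sketch runnable without network access:

```shell
# The query value whose spaces broke the unencoded request:
ql='location within 16000 of 42.358431,-71.059773'
# What it must look like on the wire (percent-encode the spaces):
printf '%s' "$ql" | sed 's/ /%20/g'; echo
# Letting curl do the encoding: -G turns --data-urlencode pairs into the
# query string of a GET request.
echo curl -G 'https://api.usergrid.com/my-org/my-app/bars' \
     --data-urlencode 'access_token=token' \
     --data-urlencode "ql=$ql"
```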
