Sending a BitTorrent HTTP request to the Ubuntu tracker

I downloaded the torrent file of ubuntu 17.10 from here:
https://www.ubuntu.com/download/alternative-downloads
Here is what's inside:
TorrentInfo{Created By: null
Main tracker: http://torrent.ubuntu.com:6969/announce
Comment: Ubuntu CD releases.ubuntu.com
Info_hash: f07e0b0584745b7bcb35e98097488d34e68623d0
Name: ubuntu-17.10.1-desktop-amd64.iso
Piece Length: 524288
Pieces: 2866
Total Size: 1502576640
Is Single File Torrent: true
File List:
Tracker List:
http://torrent.ubuntu.com:6969/announce
http://ipv6.torrent.ubuntu.com:6969/announce
What I have tried:
I sent: (Only torrent info-hash)
http://torrent.ubuntu.com:6969/announce?info_hash=%f0%7e%0b%05%84%74%5b%7b%cb%35%e9%80%97%48%8d%34%e6%86%23%d0
and received:
you sent me garbage - id not of length 20
I sent: (torrent info-hash and my peer-id)
http://torrent.ubuntu.com:6969/announce?info_hash=%f0%7e%0b%05%84%74%5b%7b%cb%35%e9%80%97%48%8d%34%e6%86%23%d0&peer_id=%2D%41%5A%35%37%35%30%2D%54%70%6B%58%74%74%5A%4C%66%70%53%48
and received:
you sent me garbage - invalid literal for long() with base 10: ''
What am I missing? The spec doesn't give any examples.
Spec:
https://wiki.theory.org/index.php/BitTorrentSpecification#Tracker_HTTP.2FHTTPS_Protocol

The announce is missing the mandatory keys port, uploaded, downloaded and left.
These keys, plus info_hash and peer_id, MUST be in every announce.
Further, while the event key isn't mandatory in every announce,
the first announce to the tracker MUST include event=started.
Trying:
http://torrent.ubuntu.com:6969/announce?info_hash=%f0%7e%0b%05%84%74%5b%7b%cb%35%e9%80%97%48%8d%34%e6%86%23%d0&peer_id=%2D%41%5A%35%37%35%30%2D%54%70%6B%58%74%74%5A%4C%66%70%53%48&port=6881&uploaded=0&downloaded=0&left=1502576640&event=started
and the tracker responds with:
your client is outdated, please upgrade
oh well, more to fix...
From my answer here: Why does tracker server NOT understand my request? (Bittorrent protocol)
It is because the request string doesn't have compact=1 in it.
Most trackers require that nowadays; the legacy non-compact peer list is too inefficient.
So, adding compact=1 to the announce:
http://torrent.ubuntu.com:6969/announce?info_hash=%f0%7e%0b%05%84%74%5b%7b%cb%35%e9%80%97%48%8d%34%e6%86%23%d0&peer_id=%2D%41%5A%35%37%35%30%2D%54%70%6B%58%74%74%5A%4C%66%70%53%48&port=6881&uploaded=0&downloaded=0&left=1502576640&event=started&compact=1
and the tracker responds with:
d8:completei2134e10:incompletei100e8:intervali1800e5:peers300:[ binary data ... ]e
Success!
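For completeness, here is a minimal Python sketch of the same flow: it builds the announce URL from the raw 20-byte info_hash and peer_id used in the question and decodes the compact peers string from the reply. It is untested against the live tracker, and the quick-and-dirty peers extraction assumes a well-formed success response; a real client would use a proper bencode parser.
import urllib.parse
import urllib.request

ANNOUNCE = "http://torrent.ubuntu.com:6969/announce"
info_hash = bytes.fromhex("f07e0b0584745b7bcb35e98097488d34e68623d0")  # from the .torrent
peer_id = b"-AZ5750-TpkXttZLfpSH"  # the 20-byte peer id from the question

params = {
    "info_hash": info_hash,  # urlencode percent-escapes the raw bytes
    "peer_id": peer_id,
    "port": 6881,
    "uploaded": 0,
    "downloaded": 0,
    "left": 1502576640,
    "event": "started",
    "compact": 1,
}
url = ANNOUNCE + "?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url) as resp:
    body = resp.read()  # a bencoded dictionary

# With compact=1 the "peers" value is a byte string of 6-byte entries:
# 4 bytes IPv4 address + 2 bytes port, both big-endian.
# (A failure response has no "peers" key; error handling is omitted here.)
key = b"5:peers"
start = body.index(key) + len(key)
length, _, rest = body[start:].partition(b":")
peers_blob = rest[:int(length)]
for off in range(0, len(peers_blob), 6):
    ip = ".".join(str(b) for b in peers_blob[off:off + 4])
    port = int.from_bytes(peers_blob[off + 4:off + 6], "big")
    print(ip, port)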

Related

Wordpress Perfect Varnish VCL Random 503 Error

I am using WordPress and I deploy Varnish using Docker. This is my default.vcl. What's wrong with this config? Sometimes I get a random 503 error. I excluded the WordPress search page using a regex, but I also get random 503 errors on the search page.
varnishlog
https://www.dropbox.com/s/ruczg2i3h/log.txt
I am using an NGINX backend.
Help appreciated, thanks.
Your log output contains the following lines:
- Error out of workspace (bo)
- LostHeader Date:
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- LostHeader Content-Type:
- LostHeader Retry-After:
Apparently you ran out of workspace memory because of the size of the response.
The following parameters in your Docker config might cause that:
-p http_resp_hdr_len=65536 \
-p http_resp_size=98304 \
By increasing the size of individual headers and the maximum response size, the total memory consumption can exceed the workspace_backend value, which defaults to 64k.
Here's the documentation for http_resp_size:
$ varnishadm param.show http_resp_size
http_resp_size
Value is: 32k [bytes] (default)
Minimum is: 0.25k
Maximum number of bytes of HTTP backend response we will deal
with. This is a limit on all bytes up to the double blank line
which ends the HTTP response.
The memory for the response is allocated from the backend
workspace (param: workspace_backend) and this parameter limits
how much of that the response is allowed to take up.
As you can see, it affects workspace_backend. So here's the documentation for that:
$ varnishadm param.show workspace_backend
workspace_backend
Value is: 64k [bytes] (default)
Minimum is: 1k
Bytes of HTTP protocol workspace for backend HTTP req/resp. If
larger than 4k, use a multiple of 4k for VM efficiency.
NB: This parameter may take quite some time to take (full)
effect.
The solution is to increase the workspace_backend through a -p runtime parameter.
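For example, assuming the container starts varnishd with the -p options shown above, you could raise the backend workspace alongside them; the 131072 (128k) below is only an illustrative starting point, size it to your actual responses:
-p http_resp_hdr_len=65536 \
-p http_resp_size=98304 \
-p workspace_backend=131072 \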

Error while trying to send logs with rsyslog without local storage

I'm trying to send logs into datadog using rsyslog. Ideally, I'm trying to do this without having the logs stored on the server hosting rsyslog. I've run into an error in my config that I haven't been able to find out much about. The error occurs on startup of rsyslog.
omfwd: could not get addrinfo for hostname '(null)':'(null)': Name or service not known [v8.2001.0 try https://www.rsyslog.com/e/2007 ]
Here's the portion I've added to the default rsyslog.conf:
module(load="imudp")
input(type="imudp" port="514" ruleset="datadog")
ruleset(name="datadog"){
action(
type="omfwd"
action.resumeRetryCount="-1"
queue.type="linkedList"
queue.saveOnShutdown="on"
queue.maxDiskSpace="1g"
queue.fileName="fwdRule1"
)
$template DatadogFormat,"00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n "
$DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.logs.datadoghq.com
*.* ##intake.logs.datadoghq.com:10516;DatadogFormat
}
First things first.
The module imudp enables log reception over UDP.
The module omfwd enables log forwarding over TCP, UDP, etc.
So most probably - or at least as far as I can tell - what you want is for rsyslog to receive the messages and then send them on to Datadog.
I don't know much about the $ActionSendStreamDriver directives, so I can't help you there. But what jumps out is that in your action you haven't defined where the logs should be sent.
ruleset(name="datadog"){
action(
type="omfwd"
target="10.100.1.1"
port="514"
protocol="udp"
...
)
...
}
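For what it's worth, here is an untested sketch of the whole forwarding setup with the destination, the template and the TLS settings from your legacy $Action... lines expressed as omfwd action parameters (the host intake.logs.datadoghq.com:10516, the CA file and the format string are the ones from your question; adjust as needed):
template(name="DatadogFormat" type="string"
         string="00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n")
global(DefaultNetstreamDriverCAFile="/etc/ssl/certs/ca-certificates.crt")
ruleset(name="datadog"){
    action(
        type="omfwd"
        target="intake.logs.datadoghq.com"
        port="10516"
        protocol="tcp"
        template="DatadogFormat"
        StreamDriver="gtls"
        StreamDriverMode="1"
        StreamDriverAuthMode="x509/name"
        StreamDriverPermittedPeers="*.logs.datadoghq.com"
        action.resumeRetryCount="-1"
        queue.type="linkedList"
        queue.saveOnShutdown="on"
        queue.maxDiskSpace="1g"
        queue.fileName="fwdRule1"
    )
}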

Self-hosted gitlab server with with RPi and pitunnel showing http error 413 when trying to push

1. Problem
The git push command returns the following error if one file is larger than ~1MB:
Pushing to http://mygitlabserver.pitunnel.com/root/my_project.git
POST git-receive-pack (1163897 bytes)
error: RPC failed; HTTP 413 curl 22 The requested URL returned error: 413
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
The server is an RPi 4 with an SSD attached, accessed via pitunnel (standard subscription).
The push fails if one file is larger than 1MB
The push returns no error even if the commit is 150MB (a lot of small files)
The push returns no error if an mp3 file of multiple MBs gets pushed.
2. Problem
Not really a problem, but it may be related to the first one.
If a large project is imported that was exported from gitlab.com it returns the same error:
413 Request Entity Too Large
nginx/1.10.3 (Ubuntu)
But only when connected via pitunnel (link); it works if the project is uploaded over the local network.
nginx seems to be the problem.
In the gitlab.rb file the following parameters are set and the gitlab service was restarted according to the gitlab docs:
nginx['enable'] = true
nginx['client_max_body_size'] = '900m'
PS: The repo will use git LFS after this problem is solved.
For everyone with a similar problem:
Pitunnel was the problem.

How to control vhost_shared_traffic memory K8s nginx ingress?

Background
We run a Kubernetes cluster that handles several PHP/Lumen microservices. We started seeing the app's php-fpm/nginx reporting a 499 status code in its logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the applications log 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is nginx will return the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something before the nginx/application layer is terminating this connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my thoughts are it's either the ELB or the ingress, since the application is the one with no socket left to return to. So I started digging into the ingress logs...
Potential core problem?
While looking through the ingress logs I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to finding the next one, but I can't seem to find any configmap value or annotation that would allow me to control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link to some info with suggestions on how much memory you should give the zone.
First start by getting the ingress controller version by checking the image version on the deploy
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code or /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
This gives us the config variable to look up in the code. Our next grep for VtsStatusZoneSize leads us to the lines in internal/ingress/controller/config/config.go
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
When it comes to what size you may want to set the zone, there are the docs here that suggest setting it to 2*usedSize:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.

Dcm4che Error Unrecognized PDU

I have the following error in a data retrieve on a Dcm4che Server:
2015-04-27 14:55:16,463 ERROR -> (TCPServer-1-2) [org.dcm4cheri.server.ServerImpl] org.dcm4che.net.PDUException: Unrecognized PDU[type=71, length=1411395360]
org.dcm4che.net.PDUException: Unrecognized PDU[type=71, length=1411395360]
at org.dcm4cheri.net.FsmImpl$2.parse(FsmImpl.java:1051)
at org.dcm4cheri.net.FsmImpl.read(FsmImpl.java:512)
at org.dcm4cheri.net.AssociationImpl.accept(AssociationImpl.java:287)
at org.dcm4cheri.server.DcmHandlerImpl.handle(DcmHandlerImpl.java:248)
at org.dcm4cheri.server.ServerImpl.run(ServerImpl.java:288)
at org.dcm4cheri.util.LF_ThreadPool.join(LF_ThreadPool.java:174)
at org.dcm4cheri.util.LF_ThreadPool$1.run(LF_ThreadPool.java:221)
at java.lang.Thread.run(Thread.java:662)
Can somebody help me please?
Likely a mismatch in the AE title (AET) or settings.
Imagine your dcmrcv server as a house. It has a mailbox address that is just listening for stuff.
Now, when you use, say, dcmsnd from somewhere else, you must make sure it sends its information to the correct address. If you get the IP and port right but the AE title wrong, you may see a blurb come out on dcmrcv, but it won't accept it.
Simplify your test as much as possible. Here is a very simple dcmrcv:
dcmrcv TRANSFERTEST#10.1.50.75:104 -dest "C:\Temp"
call this from another cmd prompt:
dcmsnd CONI_STORAGE#dev.capsurecloud.com:11012 "C:\modalities\xa" -L WST
Note: that WST is not needed, but it is the AE title of my application. Read the Confluence documentation for more arguments.
Most of the time when I see your error, it is because the aetitle#ip is mismatched, or because one side is sending or listening with TLS and the other side doesn't match.
