How to send data over QUIC

I want to use the QUIC protocol to send my own data (let's say a video). I have already done the following setup:
1- Downloaded and compiled https://github.com/google/proto-quic
2- Set up the toy example: https://www.chromium.org/quic/playing-with-quic
Issue: I can only serve the www.example.com page. How can I send my own data over QUIC in this setup?

I would recommend using the Caddy project on the server side.
An alternative is GoQuic.
Both servers can be configured to serve your own web page (which might also contain video) over QUIC. QUIC only works over a secure connection, so a certificate for a valid domain has to be generated. In the example linked above, the certificate is generated for the domain www.example.org. If you want a valid certificate for https://localhost, the script that generates the certificate has to be updated accordingly.
On the client side use the last version of Google Chrome. Run Chrome from the command line as follows:
google-chrome \
  --user-data-dir=/tmp/chrome-profile \
  --no-proxy-server \
  --enable-quic \
  --origin-to-force-quic-on=localhost:443 \
  --host-resolver-rules='MAP localhost:443 127.0.0.1:443' \
  https://localhost


Local Artifactory golang proxy and checksum verification

When getting Go modules through a local Artifactory Go proxy, "go get" fails during module checksum verification.
At work we can't download directly from the internet but must go through a local proxy based on Artifactory. I have pointed GOPROXY (GOPROXY=https://repo.mycompany.se/artifactory/api/go/gocenter) at a proxy set up in our local Artifactory. When running "go get", the download succeeds as far as I can see, but the checksum verification fails because go tries to contact sum.golang.org directly instead of fetching the checksum through the proxy.
C:\Users\x\go\src\hello2>go get rsc.io/quote@v1.5.2
go: finding rsc.io v1.5.2
go: downloading rsc.io/quote v1.5.2
verifying rsc.io/quote@v1.5.2: rsc.io/quote@v1.5.2: Get https://sum.golang.org/lookup/rsc.io/quote@v1.5.2: dial tcp: lookup sum.golang.org: no such host
C:\Users\x\go\src\hello2>
Does Artifactory support getting the checksum through the local proxy, and if so, how do you set it up? I have read a blog post about support when using GoCenter directly, but I can't find any information about doing this with Artifactory.
I'm using Go 1.13 and Artifactory 6.12.2.
Artifactory 6.12.2, when used as a Go proxy, currently does not support checksum verification when there is no access to sum.golang.org. There is a feature request, RTFACT-20405 (Artifactory to support go client checksum verification when sum.golang.org is not accessible), to address this.
In the meantime, refer to 'go help module-private' and the documentation on the GONOSUMDB environment variable. An excerpt from that documentation:
"If GOSUMDB is set to "off", or if "go get" is invoked with the -insecure flag, the checksum database is not consulted, and all unrecognized modules are accepted, at the cost of giving up the security guarantee of verified repeatable downloads for all modules. A better way to bypass the checksum database for specific modules is to use the GOPRIVATE or GONOSUMDB environment variables. See 'go help module-private' for details"
Artifactory 6.16 added GOSUMDB support: https://www.jfrog.com/confluence/display/RTF/Release+Notes

SVN over HTTPS: how to hide or encrypt URLs?

I run Apache over HTTPS and can see in the log file that an HTTP/1.1 request is made for every single file of my repository, and for every single file the full URL is disclosed.
I need to access my repository from a location where I don't want sysadmins to look over my shoulder and see all these individual URLs. Of course I know they won't see file contents, since I am using HTTPS and not HTTP, but I am really annoyed that they can see the URLs and, as a consequence, the file names.
Is there a way I can hide or encrypt HTTPS URLs with SVN?
That would be great, as I would prefer not to resort to svn+ssh, which does not readily support path-based authorization, which I use heavily.
With HTTPS, the full URL is only visible to the client (your svn binary) and the server hosting the repository. In transit, only the hostname you are connecting to is visible.
You can further protect yourself by using a VPN connection between your client and the server, or by tunneling over SSH (not svn+ssh, but a direct SSH tunnel).
If you are concerned about the sysadmin of the box hosting your repository seeing your activity in its Apache logs, you have issues far beyond what can be solved with software. You could disable logging in Apache, but your sysadmin could switch it back on or use other means.
Last option: if you don't trust the systems and/or network you are on, don't engage in activities you consider sensitive on them. They can't see something that isn't happening in the first place.

How to respond to an HTTP POST from a unix (AIX, RHEL, or UB) server?

I am building a custom slash command for Slack. When the Slack user types a command, e.g. /uptime, an HTTP POST message is sent to the server URL.
The tutorials I've read all involve installing a tool such as ngrok, pagekite, or localtunnel to generate a URL for the local machine.
Since I am working with a server, can I not just open a port and have Slack connect directly to that hostname and port? How can I do this?
Doing some research, I came across opening a port with nc and listening with curl, but I don't understand how to put it all together.
Yes: if the script handling the POST requests from Slack runs on a server whose URL is reachable from the internet, you do not need a local tunnel like ngrok.
If you are starting from scratch, I can recommend a standard Apache + PHP [+ MySQL] stack with a PHP script to interpret and react to the POST request. Of course, other scripting languages (e.g. Python) work just as well.
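Since the answer notes that other scripting languages work just as well, here is a minimal sketch in Python using only the standard library. The port (8080), the path, and the reply text are assumptions, and Slack's request-signature verification is omitted for brevity.

```python
# Minimal receiver for a Slack slash command (e.g. /uptime), stdlib only.
# Assumptions: port 8080 is reachable from the internet and is what you
# entered as the command's Request URL; signature verification is omitted.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class SlashCommandHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Slack sends the command as an application/x-www-form-urlencoded body.
        length = int(self.headers.get("Content-Length", 0))
        params = parse_qs(self.rfile.read(length).decode())
        command = params.get("command", [""])[0]      # e.g. "/uptime"
        reply = f"Received {command}".encode()
        # Whatever comes back with status 200 is shown to the user in Slack.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

# To serve: HTTPServer(("", 8080), SlashCommandHandler).serve_forever()
```

The same handler could of course compute a real uptime (e.g. by shelling out to `uptime`) instead of echoing the command back.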

Stream content from pipe with Apache2

Is it possible to write an Apache2 service that pipes content to the client as it is being generated?
I would like to set up a simple HTTP service that triggers a build and immediately starts sending stdout (gcc output) to the client while the compile is running. The goal is that a client can use e.g. curl to test a build:
curl http://myserver.com/testbuild -F "file=@mypkg.tar.gz"
And immediately get to see stdout from the build process on the server.
I think it would be possible using a CGI script, but the trick is to get the stdout immediately, bypassing the buffering. If you do not really need HTTP as the transport protocol, why not use direct TCP streaming via netcat?
On the build server you run a script like:
#!/bin/bash
while true ; do
    nc -l -p 8080 -e /path/to/buildscript
done
and when any client connects via
nc <buildservername or ip> 8080
it gets the build stdout immediately.
My recommendation would be something different (using Jenkins as a CI server; I run this even on a Cubietruck), but for a quick and small solution it should be enough. If you need HTTP, you can even get that by adding the HTTP headers in your build script.
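The CGI route mentioned at the start of the answer can also work; as noted, the trick is flushing after every line so stdio buffering never holds output back. A sketch in Python (the build command at the bottom is a placeholder, and Apache must have CGI enabled with response buffering off):

```python
#!/usr/bin/env python3
# Sketch of a CGI script that streams a build's stdout as it is produced.
# Assumption: the commented-out "make" invocation below is a placeholder for
# your real build command.
import subprocess
import sys

def stream_command(argv, out=sys.stdout):
    """Run argv and forward each stdout line immediately, bypassing buffering."""
    out.write("Content-Type: text/plain\r\n\r\n")
    out.flush()
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True, bufsize=1)
    for line in proc.stdout:
        out.write(line)
        out.flush()          # flush per line so the client sees output now
    return proc.wait()

# stream_command(["make", "-C", "/path/to/unpacked/pkg"])  # placeholder build
```

On the client side, `curl -N` disables curl's own output buffering, which pairs well with this.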

IMAP Proxy that can connect to multiple IMAP servers

What I am trying to achieve is a central webmail client that I can use in an ISP environment and that has the capability to connect to multiple mail servers.
I have been looking at Perdition, NGINX, and Dovecot, but most of the articles have not been updated in a very long time.
The one I am really looking at is the NGINX IMAP proxy, as it can do almost everything I require:
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly, you can no longer compile NGINX from source with those flags; and secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl, does not give detailed information about the updated project.
So all I am trying to achieve is one webmail server that can connect to any of my mail servers, where the user's location resides in a database. But the location is a hostname, not an IP.
You can point Nginx's auth_http at any HTTP URL you set up; you don't need an embedded Perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header-based protocol Nginx uses.
You can implement that protocol in any language, even a CGI script under Apache if you like. In this script you do the authentication and the database query and return the appropriate backend server.
(Personally, I use a Python + WSGI server setup.)
Say you set up your script on Apache at http://localhost:9000/cgi-bin/nginx_auth.py. In your Nginx config, you then use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py;
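To make the header-based protocol concrete, here is a sketch of such an auth endpoint as a WSGI app, in the spirit of the Python + WSGI setup mentioned above. Nginx sends Auth-User / Auth-Pass / Auth-Protocol request headers and expects Auth-Status, Auth-Server, and Auth-Port response headers back. The lookup_backend() function and its return values are hypothetical stand-ins for your real database query; note that Auth-Server must be an IP address, so a hostname stored in the database has to be resolved first (e.g. with socket.gethostbyname).

```python
# Sketch of an auth_http endpoint for Nginx's mail proxy, as a WSGI app.
# Assumptions: lookup_backend() stands in for your real database query, and
# the backend address below is a hypothetical example.

def lookup_backend(user):
    """Placeholder: query the database for the user's mail server, resolve
    the stored hostname to an IP (Nginx requires an IP in Auth-Server)."""
    return "10.0.0.12", 143          # (ip, imap port), hypothetical values

def application(environ, start_response):
    user = environ.get("HTTP_AUTH_USER", "")
    password = environ.get("HTTP_AUTH_PASS", "")
    # ... verify (user, password) against your user store here ...
    ip, port = lookup_backend(user)
    start_response("200 OK", [
        ("Auth-Status", "OK"),       # an error message here means "rejected"
        ("Auth-Server", ip),
        ("Auth-Port", str(port)),
    ])
    return [b""]
```

Any WSGI server (or a CGI wrapper) can host this; Nginx only cares about the response headers, not the body.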
