Does my golang webserver need to be run as the root user in order to support HTTPS/TLS?

I want to use Go's built-in ListenAndServeTLS() function to serve my webserver (it's a very simple one), and I need to point it at my keys. The keys are stored in a location only the root user can access (Let's Encrypt did this by default), and I can't listen on port 80 or 443 unless I'm the root user.
Does this mean I have to run the program as root all the time? Isn't that insecure?

To blatantly quote the well-written Caddy FAQ:
No. On Linux, you can use setcap to give Caddy permission to bind to low ports. Something like setcap cap_net_bind_service=+ep ./caddy should work. Consult the man pages of your OS to be certain. You could also use iptables to forward to higher ports.
Privilege de-escalation is another option, but it is not yet a reliable solution. It will be implemented as soon as this becomes a robust possibility. Concerned readers are encouraged to get involved to help this become a reality.

To add to #ma_il's answer: you can use setcap, but you would still have to change the permissions of the cert.
Or build your app and then run it as root, for example: go build && sudo ./app

Related

SVN over HTTPS: how to hide or encrypt URLs?

I run Apache over HTTPS and can see in the log file that an HTTP/1.1 request is made for every single file of my repository. And for every single file, the full URL is disclosed.
I need to access my repository from a location where I don't want sysadmins looking over my shoulder and seeing all these individual URLs. Of course I know they won't see file contents, since I am using HTTPS, not HTTP, but I am really annoyed that they can see URLs and, as a consequence, file names.
Is there a way I can hide or encrypt HTTPS URLs with SVN?
This would be great, as I would prefer not to resort to svn+ssh, which does not easily/readily support path-based authorization, which I use heavily.
With HTTPS, the full URL is only visible to the client (your svn binary) and the server hosting the repository. In transit, only the hostname you're connecting to is visible.
You can further protect yourself by using a VPN connection between your client and the server, or by tunneling over SSH (not svn+ssh, but a direct SSH tunnel).
If you're concerned about the sysadmin of the box hosting your repository seeing your activity in the Apache logs there, you have issues far beyond what can be solved with software. You could disable logging in Apache, but your sysadmin can switch it back on or use other means.
Last option: if you don't trust the system(s) and/or network you're on, don't engage in activities that you consider sensitive on them. They can't see something that isn't happening in the first place.

Modifying nginx config directly in memory?

This might be a very silly question but I'll still ask it.
Nginx reads the nginx.conf file and keeps its contents in memory until you do an 'nginx -s reload'.
Is there a way I can modify the nginx configuration directly in memory? I need to reload multiple times per minute, and the config file can be huge.
Basically, the problem I'm trying to solve is that I have multiple Docker containers coming up and down dynamically on a set of host machines. Every time a container comes up, it has a different IP and open port (an application design constraint), and I'm thinking of using nginx as a reverse proxy. What should I do to solve this problem, considering that the final product might have 3,000-5,000 containers running on a cluster of hosts? Containers might be launched/destroyed at a rate of around 100 per second, and I need a fast way to make sure routing happens properly.
Probably not: nginx loads its config into multiple worker processes, so trying to change it on the fly does not look like a good idea.
What is your goal? You seem to need dynamic routing or some similar treatment. You should instead look at:
nginx directives and modules such as eval
Lua scripting
nginx module development (in C/C++)
This would allow you to do more or less whatever you want: you can read some config from a database like Redis and change the behavior of your code according to the value stored there.
For example, you could do a lot just by reading a value from Redis and then using the if directive in your nginx config file. See "How can I get the value from Redis and put it in a variable in NGiNX?" for reading a Redis value in nginx with the eval module.
UPDATE:
For dynamic IP in nginx, you should look at Dynamic proxy_pass to $var with nginx 1.0.
So I would suggest that you:
have a process that writes the IP addresses of your Docker containers to Redis
read them with the eval and redis modules in nginx
use the value in proxy_pass
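The steps above can be sketched as a config fragment. This is only an illustration, not a drop-in file: the $backend variable is hypothetical and is assumed to be populated per request by some mechanism such as the eval + redis modules or a Lua block that looks up the container's current ip:port in Redis.

```nginx
# hypothetical /etc/nginx/conf.d/dynamic.conf
server {
    listen 80;

    location / {
        # $backend is assumed to hold "ip:port" for the target container,
        # set per request (e.g. via the eval/redis modules or Lua).
        # A resolver is required whenever proxy_pass contains a variable.
        resolver 127.0.0.1;
        proxy_pass http://$backend;
    }
}
```

The key property here is that when proxy_pass contains a variable, nginx evaluates it at request time, so backends can change without reloading the configuration at all.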

How to use external IP internally and using it with ownCloud? And WAMP shutdown

I have set up a server on one of my PCs and I am running ownCloud on it. Everything is working fine, but I wanted to ask a few things just to make the whole process more convenient.
How can I use a dynamic IP address in ownCloud? I have a DDNS, but since I have a dynamic external IP address, I need to re-enter the DDNS and modify the account whenever the IP changes. Is there a way for ownCloud to work from the DDNS rather than the external IP address? (I hope you got what I meant.) IMAGE: http://s27.postimg.org/r0224wfsz/Untitled_2.jpg
Also, is there a way to use the DDNS (xyzz.co) inside the same home network my server is on, instead of the internal IP address (192.168.1.2)? Because again, I need to modify the account when I am in the home network versus when I am outside.
My WAMP server shuts down automatically, as if I had manually exited it. Is there a solution to that too? I have set it to auto-start on OS boot-up, but I don't think that is the solution.
Why not let the router attached to your server handle this? Most recent routers let you configure your (D)DNS settings.
You can set up port mapping in your router to an external address. Then, when you are at home, you don't need to edit the sync client's settings, since you can always use your external IP address.
I'd use cron for that. Look it up by entering crontab --help in the terminal. On most distributions, the crontab file comes with examples in it, so you can just edit it by entering crontab -e. There's plenty online, too.

UNIX server alias

Is there a way to check the list of aliases a particular UNIX host has?
When I use a tool like WinSCP to log in to a list of UNIX servers, the contents are one and the same.
How do I verify/obtain the list of servers that are aliases of each other using the UNIX command line?
The short answer is that there's no definitive way to do this. Literally anyone running a DNS server anywhere can provide an alias for your server without telling you about it (in fact, I do this at home for systems at work to make the VPN more convenient for me). The underlying problem is that there is a mapping from the aliases to the server, but no mapping in reverse.
Some sites document server aliases in /etc/hosts, but that's more convention than requirement, since it involves duplicating information that's already in DNS, anyway.
Now, for just the servers you care/know about, you can use the nslookup or host command. The first line of output from host will tell you if the name refers to a server directly, or if it refers to an alias. Just give 'host' each of the names you're interested in, and you'll eventually build up a useful map of the names on the network.

Will direct access to a webdav-mounted file cause problems?

I'm thinking about configuring the remind calendar program so that I can use the same .reminders file from my Ubuntu box at home and from my Windows box at work. What I'm going to try to do is make the directory on my home machine that contains the file externally visible through WebDAV on Apache. (Security doesn't really concern me, because my home firewall only forwards ssh; to hit port 80 on my home box, you need to use ssh tunneling.)
Now my understanding is that webdav was designed to arbitrate simultaneous access attempts. My question is whether this is compatible with direct file access from the host machine. That is, I understand that if I have two or more remote webdav clients trying to edit the same file, the webdav protocol is supposed to provide locking, so that only one client can have access, and hence the file will not be corrupted.
My question is whether these protections will also protect against local edits going through the filesystem, rather than through webdav. Should I mount the webdav directory, on the host machine, and direct all local edits through the webdav mount? Or is this unnecessary?
(In this case, with only me accessing the file, it's exceedingly unlikely that I'd get simultaneous edits, but I like to understand how systems are supposed to work ;)
If you're not accessing the files via the WebDAV protocol, you're not honoring locks set with the LOCK and UNLOCK methods, and you therefore open up the potential to overwrite changes made by another client. This situation is described in the WebDAV RFC: https://www.rfc-editor.org/rfc/rfc4918#section-7.2
