I am running my websites on an Ubuntu 14.04 server with nginx. I would like to set up a mail server (probably using SquirrelMail) which uses apache2 (presently not in use). I intend to keep the port numbers for the web server and mail server entirely different.
Can I do this? Do I have to do anything out of the ordinary (secret handshakes) to set this up, and if so, what exactly do I need to do?
Yes, it's entirely possible. Generally the complications you will have to sort out are both servers trying to listen on the same port, and directory permission issues.
In this case you will want to keep Nginx on ports 80 and 443 (which I'm assuming you are using for your website) and move Apache to different ports. Keep in mind that only root can bind to ports below 1024, so the master processes still start as root even though the workers run as www-data.
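For example, moving Apache off the privileged ports is a one-line change (the port number here is an arbitrary choice, and nginx keeps its existing listen 80; / listen 443 ssl; directives):

# /etc/apache2/ports.conf -- Apache listens somewhere other than 80/443
Listen 8080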
If you work out the kinks, though, it's entirely possible and has been done many times for situations like yours.
With that said, I think it would be a lot of extra work to set up an entire Apache web server just to run SquirrelMail. You can configure SquirrelMail to work with Nginx, and it's not that difficult.
Tutorial here.
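For orientation, a rough sketch of what such an Nginx server block can look like on Ubuntu 14.04 (the port, paths, and php5-fpm socket are assumptions; follow the tutorial for the real details):

# hypothetical nginx server block serving SquirrelMail through php5-fpm
server {
    listen 8081;
    server_name mail.example.com;
    root /usr/share/squirrelmail;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}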
IMHO that is the easiest way forward with the fewest possible ways to crash your entire server and be forced to start over. Which brings me to another point.
Before you do any work on a server, BACK UP EVERYTHING. Clone your hard drive if you can; if not, make sure you get all databases, static files, config files, etc.
It's also best practice not to work on a live production server if you can avoid it. Serve from a backup in the meantime while you play around with yours.
It will save you a lot of stress to work at a comfortable pace rather than work against the clock of hundreds of customers trying to view your website.
I know that doesn't apply in all cases, but it's a good practice to get into if you can to prepare you for any bigger jobs that come down the road.
I am working for a company that provides file-sharing software for all sorts of protocols such as FTP, SFTP, FTPS and so on. One of our customers is facing an issue with key authentication and intermittent login problems.
Going through the code, I am pretty certain that the server collapses under too many requests at the same time. What I need right now is a simple tool to test exactly that situation: a simple SFTP fuzzer or stresser that sends invalid or broken auth attempts to the SFTP server.
I am not a developer but a technician, and instead of writing something myself (which would take forever) I would love to have a simple script or toolset ready to go... if there is one.
Ok, found one faster than I thought.
Steps:
Download Kali Linux (or any distro that contains Metasploit)
Fire up Kali Linux and put it in the same subnet as your SFTP server
Start Metasploit and use the SSH version fuzzer auxiliary/fuzzers/ssh/ssh_version_2
Set RHOST and RPORT to the IP and port your server is listening on
Run the module and see what happens (a minimal console session is sketched below)
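The console session looks roughly like this (the address is a placeholder for your SFTP server; the module fuzzes the SSH version exchange that SFTP runs on top of):

msfconsole
use auxiliary/fuzzers/ssh/ssh_version_2
set RHOST 192.0.2.10
set RPORT 22
run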
I have a VM from Digital Ocean. It currently has two domains linked to the VM.
I do not use any web server other than Go's built-in net/http package. Performance-wise I like it, and I feel like I have full control over it.
Currently I am using a single Go program that has multiple websites built in.
http.HandleFunc("test.com/", serveTest)
http.HandleFunc("123.com/", serve123)
http.HandleFunc("/", serve123)
As they are websites, the Go program listens on port 80.
And the problem is that when I want to update only one website, I have to recompile and restart the whole thing, as they are written in the same code.
1) Is there a way to make it hot-swappable with only Go (without Nginx or Apache)?
2) What would be a standard best practice?
Thank you so much!
Well, you can do hot swapping in Go, but I really wouldn't want to do that unless really necessary, as the added complexity isn't negligible (and I'm not talking about code).
You can get something close with a kind of proxy that sits in front of the program and does a graceful swap whenever your binary changes: the principle is to have the binary on one port and the proxy on another. When a new binary is ready, you run it on yet another port, make the proxy redirect to the new port, then gracefully shut down the old one (a rough sketch follows below).
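Here is a minimal sketch of that proxy idea in Go. The ports and the /swap control endpoint are made up for illustration (and unauthenticated), so treat it as a starting point rather than a finished tool:

package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Address of the binary currently serving traffic (assumed port).
	var backend atomic.Value
	backend.Store("http://127.0.0.1:8081")

	// Reverse proxy that always forwards to whatever backend holds right now.
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			target, _ := url.Parse(backend.Load().(string))
			r.URL.Scheme = target.Scheme
			r.URL.Host = target.Host
		},
	}

	// Hypothetical, unauthenticated control endpoint used to switch backends.
	http.HandleFunc("/swap", func(w http.ResponseWriter, r *http.Request) {
		if to := r.URL.Query().Get("to"); to != "" {
			backend.Store(to)
		}
	})
	http.Handle("/", proxy)
	http.ListenAndServe(":80", nil)
}

You would start the new binary on, say, :8082, hit /swap?to=http://127.0.0.1:8082, and stop the old process once its in-flight requests have drained.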
There was a tool for that in Go that I can't remember the name of…
EDIT: not the one I had in mind, but a close match: https://github.com/rcrowley/goagain
Personal advice: use a reverse proxy for that, it's much simpler to do. My personal setup is to use h2o to terminate SSL, HTTP/2, etc., and send the requests to the various websites running in the background. Not only Go ones, though, but also PHP ones, a GitLab instance, etc. It's much more flexible, and the performance penalty of the proxy is small.
I'm looking for a way to set up a server so that the static caches created by the Boost module can easily be mirrored to several other servers.
You COULD use rsync to do this, but it is brittle and liable to break. You would be better off using either:
a single shared network filesystem
or, my recommended solution, a distributed cluster filesystem such as GlusterFS. This is what is generally used on web server clusters to distribute web apps across nodes automagically (a sketch of the commands follows below).
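A rough sketch of what that looks like with GlusterFS (hostnames, volume name, and paths are assumptions):

gluster peer probe web2
gluster volume create boostcache replica 2 web1:/srv/gluster/boostcache web2:/srv/gluster/boostcache
gluster volume start boostcache
mount -t glusterfs web1:/boostcache /var/www/cache   # on every web server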
Here are some ideas...
If you want to avoid getting stabbed in the back by your hosting provider, wouldn't it be better to use a solution that doesn't rely on the hosting provider?
My choice would be to use a third-party DNS provider that supports round robin [http://en.wikipedia.org/wiki/Round_robin_DNS], or your own DNS server configured for round robin (which you can also use for automatic load balancing).
Round robin lets you publish several A records for the same name, and resolvers rotate among them so visitors are spread across your servers. Note that plain round robin does not check whether a server is up or down; for that kind of failover you need a DNS provider that offers health checks on top of it (a sample set of records is sketched below).
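For example, round robin is just multiple A records for the same name (addresses here are placeholders):

www   300   IN   A   203.0.113.10
www   300   IN   A   203.0.113.20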
For the static caches I think you could use rsync, but that involves your hosting provider. Maybe a better way (though I think not resource-efficient) would be to have a clone of your Drupal installation on each server and sync the databases using MySQL replication (with cron to regenerate the Boost static cache)... then you would not depend on any one server, because all of them would have the whole site, and round robin would direct your domain to a working server.
I have a content management system running on a web server that, among other things, allows the user to upload assets like images, files, etc. to the server.
The problem I have is that there will be 2 servers running behind a load balancer, and I am trying to find an efficient way to handle asset management.
The questions I have are:
Will the assets be uploaded to one server every time? Or is there a chance that the images/files will end up on server1 or server2 depending on the load?
How do I serve the images if I don't know which server they ended up on? Will I have to keep the directories of these assets (images/files) synchronized between the two servers?
Thanks,
Synchronization is a tough problem to crack. You can do ad-hoc synchronization using CouchDB, but that requires good knowledge of the low-level issues, so in practice you need to choose a write master.
DRBD
You could look at DRBD :D Use one server as the write master and the other as the slave. Then you can serve content from both. This approach works great for database pairs.
Note: separating your code and URLs for the write master and the serve-only node will be annoying.
CouchDB
You could use CouchDB, but I think that might be overkill. It is meant for LARGE amounts of data and high levels of fault tolerance.
NFS
You could export the asset directory on the write master as an NFS share and mount it from the other machine. But then it wouldn't be load-balanced in all cases -- only when the files are cached by the slave. You could use a third machine as the NFS server -- this would let you scale to more web servers.
A central NFS server might just be your best solution, as you can then do without a write master: every front-end server can perform writes. This is the approach I would use unless I was thinking of going past the petabyte range :P (a minimal setup is sketched below).
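Something like this, assuming a dedicated NFS box and made-up paths/subnet:

# on the NFS server, in /etc/exports:
/var/www/assets  192.0.2.0/24(rw,sync,no_subtree_check)
# on each web server:
mount -t nfs nfsbox:/var/www/assets /var/www/assets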
I currently have a virtual dedicated server through Media Temple that I use to run several high-traffic WordPress blogs. They tend to receive sudden StumbleUpon traffic surges that (I'm assuming) cause the server CPU to run at 100% and slow everything down. I'm currently using WP Super Cache, S3, and CloudFront for most static files, but high traffic still maxes out the CPU.
From what I'm reading, it seems like I might want to use EC2 to help the existing server when traffic spikes occur. Since I'm currently using the top tier of virtual dedicated servers on Media Temple, I'd like to avoid jumping to a dedicated server if possible. I get the sense that AWS might help boost the existing server's power. How would I go about doing this?
I apologize if I'm using any of these terms incorrectly -- I'm relatively amateur when it comes to server administration. If this isn't the best way to improve performance, what is the recommended course of action?
The first thing I would do is move your database to a separate Media Temple VPS. After that, look to see which one is hitting 100% CPU. If it's the web server, you can create a second instance and use a proxy to balance the load (see the sketch below). If it's the database, you may be able to create some indexes.
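For instance, a minimal Nginx proxy balancing two web backends could look like this (the IPs are placeholders; the proxy would sit on its own VPS or the existing one):

upstream wordpress_backends {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://wordpress_backends;
        proxy_set_header Host $host;
    }
}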
Alternatively, setting up a Squid caching server in front of your web server can take a lot of load off for anonymous users. This is the approach Wikipedia takes, since pages don't need to be re-rendered for each anonymous visitor.
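A Squid reverse-proxy ("accelerator") setup is roughly this (domain and backend port are assumptions, with the web server moved behind Squid):

http_port 80 accel defaultsite=blog.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=wp
acl blog_site dstdomain blog.example.com
http_access allow blog_site
cache_peer_access wp allow blog_site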
In either case, there isn't an easy way to spin up extra capacity on EC2 unless your site is on EC2 to begin with.
There are just three instance types you can get; other than that they can't give you any more "server power". You will need to do some load balancing. There are software load balancers, such as HAProxy and Nginx, which are not bad; if you don't want to deal with that, you can do DNS round robin after setting up the high-load blogs on different machines (a minimal HAProxy sketch follows below).
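A minimal haproxy.cfg for that kind of setup might look like this (addresses are placeholders for your blog servers):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend blogs

backend blogs
    balance roundrobin
    server blog1 10.0.0.11:80 check
    server blog2 10.0.0.12:80 check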
You should be able to scale them; that's the beauty of AWS: scaling.