I am having trouble enabling an SSL/HTTPS certificate for my website.
Context:
I have installed Bitnami WordPress on AWS.
The web server (an EC2 instance) is running in the Frankfurt region.
When I purchased a domain name from AWS, I automatically ended up in the "Global" region, in which a Hosted Zone was automatically created.
I created an Elastic IP and added an A record for it to DNS.
The entries in DNS (Route 53), after several attempts to make this work, are: 2 A records (one for my-domain-name.com and one for www.my-domain-name.com), 1 MX, 1 NS, 1 SOA, 4 CNAME, and 3 TXT records.
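For reference, a quick way to sanity-check that both A records actually resolve to the Elastic IP (my-domain-name.com is a placeholder):

# Query a public resolver; both should print the Elastic IP once propagated
dig +short my-domain-name.com A @8.8.8.8
dig +short www.my-domain-name.com A @8.8.8.8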
I tried the following approaches:
With Certificate Manager (Frankfurt region) I requested a certificate and created a CloudFront distribution -> did not work.
I also tried to connect from my Windows PC using SSH and the .pem file (following this tutorial: https://www.youtube.com/watch?v=q2AdWoPNios) -> the connection did not work even after installing Cygwin and making permission changes. I got the error messages "Permissions for key-pair.pem are too open..." or "Permission denied (publickey)".
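For reference, these are the commands I was attempting from the Cygwin shell (the "bitnami" login user is what the Bitnami docs suggest; the Elastic IP is elided):

# Restrict the key so only my user can read it
chmod 400 key-pair.pem

# Then connect; Bitnami AMIs typically log in as "bitnami"
ssh -i key-pair.pem bitnami@<elastic-ip>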
Now, all this aside... is it really so damn hard to add simple SSL/HTTPS to a website?
Does a person doing this need three PhDs from MIT? If I registered everything through AWS, why can't I simply enable this option? Instead I am jumping from one tutorial to another: from wanting to add this SIMPLE functionality, I ended up going through tutorials on changing Windows permissions on a file, installing and using Cygwin, learning how to work with it, researching how to change the inheritance of a file, etc. etc.
I just want to add SSL/HTTPS to my site and work on the site, not learn everything there is to learn about the cloud, Windows, and Cygwin...
Can anyone please offer a simple, step-by-step solution? Tx!
I've created a site using ASP.NET MVC that is meant to be hosted on a local machine at my place of work. The intention is to have the site live on this machine but be accessible from all the other machines in the building.
I've followed Microsoft's tutorial as well as Code Project's tutorial, but I am not having much luck. The binding is just localhost, port 80, with * for the IP address. The URL is localhost/GrantTracker.
I've opened the ports in the firewall, checked the permissions on the directory (which is just inside wwwroot), tried having the site take the place of the default IIS site (as the Microsoft tutorial has you do), and tried having the site stand on its own with its own port (per the Code Project tutorial).
On the host machine I am receiving the standard "This site can't be reached, localhost refused to connect" error, which feels like either a port or permissions problem. I must be missing a step, but I can't seem to find what it would be. I am new to hosting sites through IIS, so forgive me if I am just missing something basic.
I find it a bit strange too, because my project uses Windows Authentication: when the site is first visited it performs that initial check with the user and authenticates, but then throws the error.
Anyone have any ideas? Thanks in advance.
Start simple:
Create a simple HTML page and an IIS application for it, on port 80.
Check and make sure you can see that page from another computer using the internal IP address of the host machine, with something like:
http://192.168.0.3/hostapp/test.html
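If it helps, here is a rough sketch of creating that test application from an elevated command prompt using appcmd (the folder and site names are just examples):

rem Create a folder with a test page under wwwroot
mkdir C:\inetpub\wwwroot\hostapp
echo hello > C:\inetpub\wwwroot\hostapp\test.html

rem Add it as an application under the default site (already bound to port 80)
%windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/hostapp /physicalPath:C:\inetpub\wwwroot\hostapp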
You can see the proper URL by running the site from IIS; that gives you the entire URL you need, with localhost in it. Then just replace localhost with the IP address of the host machine to reach it from other machines.
Do this in the original IIS folder so you don't encounter any folder permission issues. If you choose another folder, you'll have to give access to the Network Service user (I think; I can't remember now, but there is a specific user that needs access to the folder where the website is deployed).
If you can see the page, then deploy the proper website and do the same thing. Make sure the app pool is created correctly and is up and running, then access the site again from other computers and it should work.
Port 80 should be open by default so that should not be an issue.
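If it turns out it isn't, a quick check/fix from an elevated prompt might be (the rule name is arbitrary):

rem Allow inbound HTTP through Windows Firewall
netsh advfirewall firewall add rule name="Allow HTTP 80" dir=in action=allow protocol=TCP localport=80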
Good day!
Out of an intense desire to learn new things, I have tried setting up my very own server at home, running Linux, with hosting managed by CentOS Web Panel. After the installation I proceeded with the usual setup of the necessary configuration, including the web server. I chose NGINX because I've read that it is lighter and more scalable than Apache, and can be used as a web server or as a reverse proxy. I then created my new website with the domain I have. Only after creating the website did I read about Server Blocks in NGINX (Site I read).
My question is: how can I apply the server block method to my existing website? Or should I simply remove the site and create a new one using server blocks?
Thanks in advance
UPDATE
I created my website by creating an account under the User Accounts category in CWP, where I declared my domain name and IP address. I was then sent to the User Account dashboard (IP:2082).
Files are uploaded through FTP using my IP/username/password/port, and they end up in /home/user_account/public_html. But in the tutorials I've seen, everything is set to /var/www/domain/public_html.
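For what it's worth, a minimal server block for the existing site could look something like this, assuming CWP's /home/user_account/public_html document root (domain.com, user_account, and the PHP-FPM socket path are placeholders; CWP generates its own vhost files, so check what it already wrote before hand-editing):

server {
    listen 80;
    server_name domain.com www.domain.com;

    # CWP-style document root (adjust to your account)
    root /home/user_account/public_html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Hand PHP off to PHP-FPM; the socket path is an assumption
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
    }
}

After saving it wherever your NGINX picks up vhosts (often /etc/nginx/conf.d/), test and reload with nginx -t && systemctl reload nginx. There should be no need to delete and recreate the site; a server block is just the per-site configuration.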
I have a site, www.testsite.com, hosted by wordpress.com, and I'm trying to issue a certificate so I can have HTTPS.
Are there any good methods to use when I can't really install Let's Encrypt's ACME client on my wordpress.com account?
One way to issue a certificate for your WordPress site (and actually for any other type of website) is to install (clone) the Let's Encrypt client on your own machine.
The main and very basic idea is that Let's Encrypt (LE) wants you to prove that you have access control over your domain. This is achieved through a challenge: you have to serve a string they provide from inside a specific folder. For example, at yourwordpress.com/.well-known/acme-challenge/ the LE authority wants to read a long string it provides you.
So, inside your WordPress site folder, create those directories.
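As a sketch, with the current LE client (certbot) the manual HTTP challenge looks like this; certbot prints the exact token file name and contents it wants (example.com stands in for your domain):

# Run the challenge interactively
certbot certonly --manual --preferred-challenges http -d example.com -d www.example.com

# In the web root, create the folders and publish the file certbot asks for
mkdir -p .well-known/acme-challenge
echo "<the long string certbot gives you>" > .well-known/acme-challenge/<token>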
Once this check succeeds, LE saves your SSL certificates on your local machine, owned by root.
Now you have to upload those *.pem certificates to your server. This last step differs depending on the hosting company.
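For instance, if your host gives you plain SFTP/SSH access, the upload could be as simple as this (the paths and host are placeholders; the live/ directory is where certbot normally leaves the files):

# The certificates normally land here after a successful challenge
sudo ls /etc/letsencrypt/live/example.com/
# cert.pem  chain.pem  fullchain.pem  privkey.pem

# Copy the pair most hosts ask for to wherever they expect it
sudo scp /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem user@yourhost:/path/to/ssl/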
Here is a nice tutorial that worked for me to get the LE certificates.
I have XAMPP running fine on one machine and I have 2 WordPress installs running fine on that machine. I would like to be able to access and work on those WordPress installs on other machines on my network.
Right now, I have it set so that if I try to access those directories from another computer on the network, all I get is either the XAMPP splash screen, or a 404 error if I try to access specific folders.
I've researched and researched this, and I have found numerous posts about how to do it... but only in bits and pieces.
Does anyone know of a step by step, start to finish, guide of how to do this? In layman's terms?
Remote (from another network) would be great too. But I'll cross that bridge once I figure this out.
Any help would be appreciated.
Thanks!
Rather than shared folders, I would suggest you use an FTP client/server. I use FileZilla server and client for my local sandbox testing server. This will also give you additional info like file and folder groups and permissions.
Got to ask the obvious question: what OS? And have you opened port 80 on the server machine?
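On Linux, a quick way to check might be (ufw shown for the firewall; firewalld/iptables systems differ):

# Is anything listening on port 80?
ss -tlnp | grep ':80'

# If the firewall is blocking it, open the port
sudo ufw allow 80/tcp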
Things like this can also occur on Linux if the folders and files do not have the correct groups and/or permissions for access.
chown -R apache:apache /var/www/html
On Linux, the above will assign all the folders and files to be owned by the apache process. Not having the correct groups assigned can give you a 404 even when you know the files are actually there.
Also check the file and folder permissions.
Different files and directories have permissions that specify who and what can read, write, modify, and access them. This WordPress page gives a good overview of permissions:
http://codex.wordpress.org/Changing_File_Permissions
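As a sketch, the usual WordPress baseline from that page is (the path assumes the chown example above):

# Directories need execute to be traversable; files just need read
find /var/www/html -type d -exec chmod 755 {} \;
find /var/www/html -type f -exec chmod 644 {} \;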
Edit/P.S.: to access them from other networks, all you will have to do is forward ports from your router to your internal server; then your FTP and WWW services will be accessible from the outside world. I would suggest protecting your services with an .htpasswd/.htaccess password at that point.
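A minimal sketch of that protection (the user name and file locations are examples; htpasswd ships with Apache's utilities):

# Create the password file (you'll be prompted for the password)
htpasswd -c /etc/apache2/.htpasswd devuser

# Then, in an .htaccess file inside the protected folder:
#   AuthType Basic
#   AuthName "Restricted"
#   AuthUserFile /etc/apache2/.htpasswd
#   Require valid-user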
I am trying to introduce a staging step in my company's code-production process. We currently have ~10 eng clients who commit code individually, update their local codebase, and debug/check locally; then we deploy the code to the production environment and have other employees QA it. Obviously we would like a better pre-production test process to help catch bugs before they go live to the public.
My first attempt is to create a staging environment on an extra ubuntu box with the most recent committed code from the eng clients. I then could allow the Product Managers to check this site and find bugs, test features, expose bottlenecks, etc.
What I have: the Ubuntu machine (local server) is currently configured as a normal eng client. It has a local Drupal installation, a complete backup of the DB, and all of this is accessible locally. Let's go with mysite.com = the official site, and ms.com = the local staging domain I use on the Ubuntu box. This local ms.com works just fine, so in essence I need to let other people at the company navigate to some URL and have it act exactly the way ms.com currently behaves. I have DNS servers pointing to the Ubuntu box, and it is running some side projects out of the /www folder.
In an effort to keep the side projects running, I think my solution is to create a name-based virtual host that points to the directory of the local Drupal installation. Is this the right thing to do to achieve my goals? Is there an easier way to open up this local config to the employees?
In trying to set up the virtual host I did the following (the resulting vhost is sketched below):
I added the static IP address of the local server to /etc/hosts
I added a VirtualHost to /etc/apache2/sites-available with the DocumentRoot dir/DrupalInstallation
I enabled the site with a2ensite
Then I restarted Apache.
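For reference, the vhost file is essentially just this (domain and path substituted with the placeholders above):

<VirtualHost *:80>
    ServerName ms.com
    DocumentRoot dir/DrupalInstallation
</VirtualHost>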
Halfway success: I can get to the main page, but none of the modules load. I tried adding more hosts/variations and started changing all localhost references to the external name, but I don't really know what the underlying issue is or how to diagnose it. The one interesting bit is that if you click on some of the links, it kicks you back out to the index page of the /www folder; I don't think the site alias is 100% sticking for requests.
Let me know if there is any sort of log or report I can share to help diagnose/debug this. Any and all help greatly appreciated - thanks!
It sounds like your specific error accessing pages beyond the homepage is related to not having mod_rewrite enabled/configured.
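If that is the cause, a sketch of the fix on a Debian/Ubuntu-style Apache (consistent with the a2ensite step in the question) would be:

# Enable the rewrite module and restart
sudo a2enmod rewrite
sudo service apache2 restart

# And in the vhost, allow Drupal's .htaccess rewrite rules to apply:
#   <Directory dir/DrupalInstallation>
#       AllowOverride All
#   </Directory>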
A different approach:
On a bigger scale, it sounds to me like you might not have what it takes to administer the staging server when something goes wrong. If you're unskilled at Linux server admin, save yourself the headache and use a preconfigured virtual appliance (e.g. QuickStart, AegirDev, or Walid) instead of the dedicated box. If your staging box isn't beefy enough to handle hosting virtual machines, then just run the QuickStart install scripts over a base Ubuntu build.
Now that you know your staging server works and runs imported Drupal sites successfully, install git, create a shared repo, and make sure you and your developers are set up to use git as the source control in their IDEs.
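A minimal sketch of that shared-repo setup on the staging box (paths and the remote name are examples):

# On the staging server: a bare repo that everyone pushes to
git init --bare /srv/git/mysite.git

# Each developer adds it as a remote and pushes
git remote add staging ssh://user@staging-box/srv/git/mysite.git
git push staging master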