Recently, I configured a Nexus repo on a corporate server at https://mycorporateserver.corporation.com/nexus/.
The way "its always been done" is to put our "apps" on the server and use apache httpd to serve the pages and manage access using ldap.
Nexus is configured for anonymous access, https, localhost only (all works fine). Then, we used Apache httpd to serve that Nexus page/URI to others using proxypass and reverseproxypass (per instructions in sonatype's documentation).
The catch is that the httpd configuration requires ldap. So, if I hit the given Nexus URI from a web browser, the browser asks for my corporate login. I log in with my user name and password and can view the repository as an anonymous user just fine.
I did not configure Nexus for ldap, Nexus provided me read-only anonymous access combined with the ability to log in as an admin from the login menu.
Great. The problem (not surprising) is when Eclipse/M2E tries to contact the Nexus repository I get:
"could not transfer artifact 'myartifact' from/to nexus (https://mycorporateserver.corporation.com/nexus/): handshake alert."
In my settings.xml, I included
<servers>
  <server>
    <id>tried many different versions of the server name including full URI</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
but that doesn't seem to work - which I think makes sense, since I'm not trying to log in to Nexus but rather to supply my credentials to LDAP(?)
In M2E/Eclipse, is there a way to provide the needed LDAP information?
Is it better to not let httpd manage access but configure Nexus to handle everything LDAP? Is there a better/different way to configure Nexus/httpd/LDAP/Eclipse to solve the problem?
Thanks for all pointers and guidance!
"could not transfer artifact 'myartifact' from/to nexus
(https://mycorporateserver.corporation.com/nexus/): handshake alert."
That's an SSL handshake problem: the Java running Eclipse does not consider the certificate installed on Nexus to be valid. This is almost certainly because either:
The certificate is self-signed.
The certificate has been signed by a private certificate authority which is not in the truststore of the Java running Eclipse.
Either way, the workaround is to import the certificate installed on Nexus into the truststore of the Java running Eclipse.
See here for more information:
https://support.sonatype.com/hc/en-us/articles/213464948-How-to-trust-the-SSL-certificate-issued-by-the-HTTP-proxy-server-in-Nexus
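In case it helps, here is a minimal sketch of that procedure (the alias, the output file name, the cacerts path, and the default changeit password are assumptions; point keytool at the JRE that actually runs Eclipse):

$ openssl s_client -connect mycorporateserver.corporation.com:443 -servername mycorporateserver.corporation.com </dev/null | openssl x509 -out nexus.crt
$ keytool -importcert -alias nexus -file nexus.crt -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit

The first command grabs the certificate the server presents; the second imports it into the JRE's truststore (on JDK 8 the cacerts file lives under $JAVA_HOME/jre/lib/security instead).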
Ultimately, as I understand it, it was a mismatch between how the VirtualHost and ServerName were defined in the Apache httpd configuration.
The ServerName was https://mycorporateserver.corporation.com/nexus/, but the VirtualHost was defined with the IP and port (https://mycorporateserver.corporation.com:port).
Original
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com/nexus/
...ldap and proxy pass configs
</VirtualHost>
Since we have more than one virtual host containing this IP and port combination, the server looked further into the configuration to find the proper virtual host by reading the ServerName. Since no ServerName matched what the clients sent, the handshake error occurred.
https://httpd.apache.org/docs/current/vhosts/name-based.html
Changing ServerName in the httpd conf to include the port solved the handshake error.
Final
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com:port/nexus/
...ldap and proxy pass configs
</VirtualHost>
(I'm by no means an Apache httpd expert; I still want to find out if there is a way to do all this without showing the port in the URL.)
Then, when sending a request from Eclipse/M2E to the server, the response was "Unauthorized"
Adding the nexus server plus username and password to settings.xml solved the authorization problem and all worked great!
<servers>
  <server>
    <id>nexus</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
To ensure passwords were not stored in plain text, instructions at this Maven site were used to create encrypted passwords: https://maven.apache.org/guides/mini/guide-encryption.html
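In brief, the flow from that guide looks like this (the file locations are the Maven defaults; the encrypted output values are placeholders):

$ mvn --encrypt-master-password
(put the output in ~/.m2/settings-security.xml inside <settingsSecurity><master>...</master></settingsSecurity>)
$ mvn --encrypt-password
(use the output as the <password> value in settings.xml)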
In hindsight, the question probably could have been asked better/differently but I didn't yet know what I learned today.
"This site can't provide a secure connection. localhost sent an invalid response. Try running Windows Network Diagnostics. ERR_SSL_PROTOCOL_ERROR"
When running a service or site locally you can avoid this problem by doing the following:
In the project properties, enable SSL.
Make sure to put the https link as the start URL, or just make a direct request to the https version.
I'm working on a small Flask-based service behind Nginx that serves some content over HTTPS. It will do so using two-way certificate authentication - which I understand how to do with Nginx - but users must log in and upload their own certificate that will be used for the auth piece.
So the scenario is:
User has a server that generates a cert that is used for client authentication.
They log into the service to upload that cert for their server.
Systems that pull the cert from the user's server can now reach an endpoint on my service that serves the content and authenticates using the cert.
I can't find anything in the Nginx docs that says I can have a single keystore or directory that Nginx looks at to match the cert for an incoming request. I know I can configure this 'per-server' in Nginx.
The idea I currently have is that I allow the web app to trigger a script that reads the Nginx conf file, inserts a new server entry on a specified port with the path to the uploaded cert, and then sends the HUP signal to reload Nginx.
I'm wondering if anyone in the community has done something similar to this before with Nginx or if they have a better solution for the odd scenario I'm presenting.
After a lot more research and reading some of the documentation on nginx.com, I found that I was way overcomplicating this.
Instead of modifying my configuration in sites-available I should be adding and removing config files from /etc/nginx/conf.d/ and then telling Nginx to reload by calling sudo nginx -s reload.
I'll have the app call a script to run the needed commands and add the script into the sudoers file for the www-data user.
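For what it's worth, a minimal sketch of one such generated file (the file name, port, and paths are placeholders; ssl_client_certificate and ssl_verify_client are the stock Nginx directives for client-certificate auth):

# /etc/nginx/conf.d/client-acme.conf
server {
    listen 8443 ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /srv/uploads/acme/client.crt;  # the cert this user uploaded
    ssl_verify_client on;
    location / {
        proxy_pass http://127.0.0.1:5000;  # the Flask app
    }
}

Dropping that file in place and running sudo nginx -s reload activates it; deleting it and reloading deactivates it.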
This is something I have been struggling with for a while, but I'm not able to come up with a proper solution.
This is the situation:
Host 1 - IBM HTTP Server, Customization Toolbox and WAS Plugins
Host 2 - WAS + Application
These are the steps I executed to configure the plugin and propagate it from WebSphere:
1. I used the Customization Toolbox, selected the correct WAS Plugins directory and created a new Web Server Plugin
2. I copied the new configureSERVER.bat to my Application Server on Host 2, and configured the current profile.
3. On Host 1, I created an Administrator account.
When I open the WAS-console on Host 2, I can see the actual Web Server, so that went ok. When I select "Generate plugin" and "Propagate Plugin", I get no errors. I checked the HTTP Server, and indeed, my plugin-cfg.xml is neatly created and exists.
To make sure everything is all right, I opened http://HOST1/snoop on Host 1 and saw the correct diagnostics. So far, so good.
After that, I deployed my application, which runs on port 9044. However, this application runs on HTTPS, so we need to make sure that the IBM HTTP Server accepts SSL connections. I generated my own self-signed certificate, imported it into httpd.conf, and restarted the server. (If someone is interested, I'll put some more details on how to do this.)
Now, when I open https://HOST1/snoop I can see the diagnostics, which is good news. It means it accepts connections on HTTPS and reroutes them to Host 2. But the problem is, I have no idea how to access my application, which is running on port 9044.
Something that puzzles me are the details when I run the Snoop servlet.
When I run it via http (so without SSL), this is the output:
Local address XXX.XXX.XXX.XXX
Local host XXXXXXXXXX
Local port 9080
That is correct, because the port on WebSphere is 9080 for that particular servlet. However, when I open https://HOST1/snoop (so, via SSL), this is what is being generated:
Local address XXX.XXX.XXX.XXX
Local host XXXXXXXXXX
Local port 9044
So, apparently, 443 is being rerouted to 9044 on the second host, but the Snoop servlet runs on 9443, not 9044 (which is my application's port). But then I wonder: why can I access the servlet if it is running on another port?
So, if there's anyone who can give me some guidance, that would be nice.
This is the VHOST:
<VirtualHostGroup Name="default_host">
<VirtualHost Name="*:9080"/>
<VirtualHost Name="*:80"/>
<VirtualHost Name="*:9443"/>
<VirtualHost Name="*:5060"/>
<VirtualHost Name="*:5061"/>
<VirtualHost Name="*:443"/>
<VirtualHost Name="*:9044"/>
</VirtualHostGroup>
Even though you have two ports (I'm assuming you created a custom transport chain and assigned port 9044 to it), you added that port to default_host, which is visible in the VirtualHostGroup in the plugin. Your application is probably also mapped to default_host, so it is available on all of its ports - 9080, 9443, and 9044. The second transport should be visible in the plugin config for your server with port 9044. Since it is also an SSL transport, the plugin could choose that one to route requests to your server. There is no way to force the plugin to use a specific port for communication to WAS for a given application.
However, you didn't specify what you actually want to achieve, since your app should now be available via HTTPS.
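If the goal is to keep the application off the other ports, one option (sketched here; the virtual host name is an assumption) is to create a dedicated virtual host in the WAS console containing only the *:9044 alias, map the application's web module to it, and regenerate and propagate the plugin, so that plugin-cfg.xml ends up with something like:

<VirtualHostGroup Name="secure_host">
<VirtualHost Name="*:9044"/>
</VirtualHostGroup>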
I have a website on our Internal network that is also accessible to the public. I have purchased and installed an SSL certificate for that public site. The site is available using both https://site.domain.com (Public) and https://site.domain.local (Internal).
The problem I am having is creating and installing a self-signed certificate for the internal "site.domain.local" so that people on our internal network do not get the security warning. I have a keystore in the root folder and also created a self-signed certificate in that keystore, with no luck. The public certificate is working just fine. I am running Debian Linux with Tomcat 7 installed, and I am also using Active Directory on the network with Microsoft DNS. Any and all help would be greatly appreciated. If you need more details, please ask.
Not sure I fully understand your setup, but you could front your Tomcat with Apache, install the cert on the Apache instance, and then do a reverse proxy (plain HTTP) to your Tomcat instance. People would access the Apache instance, which would handle the SSL connection.
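A minimal sketch of that setup (the host name, cert paths, and Tomcat port are assumptions; requires mod_ssl, mod_proxy, and mod_proxy_http):

<VirtualHost *:443>
ServerName site.domain.local
SSLEngine on
SSLCertificateFile /etc/ssl/certs/site.domain.local.crt
SSLCertificateKeyFile /etc/ssl/private/site.domain.local.key
ProxyPreserveHost On
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
</VirtualHost>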
One way would be to add the CA certificate to every client's trusted store (which is not convenient): the client clicks on the certificate warning message and installs/trusts the self-signed X.509 CA certificate. If this doesn't work, there is a problem with the certificate (though most OpenSSL-generated files - .CER/.CRT/.P12/.PFX - will install with no problem under recent Windows).
If one client accepts the self-signed certificate with manual setup, you can try to install these certificates with Active Directory; basically, you add the trusted CA cert within your AD, and clients automagically synchronize (NB: mostly on login). See here for a hint about setting things up with AD: http://support.microsoft.com/kb/295663/en-us (you may try this or dig in that direction; with AD, you never know).
Another possibility would be to set up your internal DNS to point site.domain.com to the local web site address (the easy way). You can test this setup with your /etc/hosts file on Linux/Unix flavours (or system32/drivers/etc/hosts on Windows flavours).
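For example, a single test line in the hosts file (the address is a placeholder for your internal server's IP):

10.0.0.5    site.domain.com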
If your certificate is for site.domain.com and users are going to site.domain.local and getting that cert, then clearly there is a name mismatch and the browser will always warn you.
You either need to:
get the cert regenerated with BOTH names (see the sketch below),
get a cert for just the internal site, or
mangle DNS so that when your internal users go to site.domain.com they get the IP address of site.domain.local.
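For the first option, a hedged sketch with OpenSSL (needs OpenSSL 1.1.1+ for -addext; file names and validity period are placeholders):

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout site.key -out site.crt -subj "/CN=site.domain.com" -addext "subjectAltName=DNS:site.domain.com,DNS:site.domain.local"
$ openssl pkcs12 -export -in site.crt -inkey site.key -out site.p12 -name tomcat

The resulting .p12 can then be used as the Tomcat keystore (keystoreType="PKCS12" on the connector).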
I created an instance to host my WordPress blog. I made a keypair and converted it using PuTTYgen so that it would work with WinSCP.
My security group that is associated with my instance has:
ICMP Allow All
TCP 0-65535
TCP 22 (SSH)
TCP 80 (HTTP)
TCP 443 (HTTPS)
UDP 0-65535
I am running a Bitnami-Wordpress 3.2.1-0 Ubuntu AMI
My Question is: How do I host a simple file on my instance?
UPDATE: I was able to log in using SFTP by simply filling in my instance's Public DNS as the host and the PuTTYgen key as the private key; the username I had to use was bitnami. So now I have access to the server - how or where do I put a file so that it will come out at www.mywebsite.com/myfile.file?
I am assuming that I need to SSH into the server using PuTTY and add it into the www directory?
What I have tried:
I tried logging in using WinSCP with the host name being my instance's Public DNS and the private key file being the converted PuTTYgen file that was originally the key pair for the instance.
Using SFTP and pressing Login, it asks me for a username; entering "user" or "ec2-user" I get an error saying:
"Disconnected: no supported authentication methods available (server sent: publickey). Server refused our key. Authentication failed."
Using root for the username, it asks for the passphrase that I created for my keypair using PuTTYgen. It accepts it, but then I get this error:
"Received too large (1349281121 B) SFTP packet. Max supported packet size is 1024000 B. The error is typically caused by message printed from startup script (like .profile). The message may start with "Plea". Cannot initialize SFTP protocol. Is the host running a SFTP server?"
If in WinSCP I put the username as "user" and the password as "bitnami" before I press Login (the default WordPress password for the Bitnami AMI), it gives me this error:
"Disconnected: No supported authentication methods available (server sent: publickey). Authentication log (see session log for details): Using username: "user". Server refused our key. Authentication failed."
I get the same errors using SCP instead of SFTP in WinSCP, except that when I use SCP, press Login, and use the username "root", it asks me for my passphrase; after entering that I get this error:
"Connection has been unexpectedly closed. Server sent command exit status 0. Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended)."
Also, if you want to remove /wordpress from the URL, you can use the following instructions I posted on my blog (travisnelson.net):
$ sudo chmod 777 /opt/bitnami/apache2/conf/httpd.conf
$ vi /opt/bitnami/apache2/conf/httpd.conf
changed DocumentRoot to be: DocumentRoot "/opt/bitnami/apps/wordpress/htdocs"
$ sudo chmod 544 /opt/bitnami/apache2/conf/httpd.conf
$ sudo apachectl -k restart
Then in WordPress, change the Site address (URL) in General Settings to not have /wordpress.
Hope this helps
If you are already able to connect using SFTP, now you just need to copy the file. Where you need to copy it depends on what you are trying to do.
BitNami Wordpress AMI has the following directory structure (I only include the relevant directories for this question):
/opt/bitnami
|
|-- apache2/htdocs
|-- apps/wordpress/htdocs
You mentioned that you want www.mywebsite.com/myfile.file. If you didn't modify the default Apache configuration, you will need to copy the file into /opt/bitnami/apache2/htdocs (this is the DocumentRoot for the BitNami WordPress AMI).
If you want that file to be accessed from www.mywebsite.com/wordpress/myfile.file, then you need to copy it into /opt/bitnami/apps/wordpress/htdocs.
If what you are trying to do is to manually install a theme or plugin, you can follow the WordPress documentation, taking into account that the WordPress installation directory is /opt/bitnami/apps/wordpress/htdocs.
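For example, from a terminal (the key file name and public DNS are placeholders; in WinSCP you would simply drag the file into that directory):

$ scp -i bitnami-key.pem myfile.file bitnami@ec2-XX-XX-XX-XX.compute-1.amazonaws.com:/opt/bitnami/apache2/htdocs/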
Also, you can find below some links to the BitNami Wiki explaining how to connect to the AMIs. I just include them as a reference for other users who run into the same connection issues.
Further reading:
How to connect to your amazon instance
How to upload files from Windows
I had a similar problem recently. Having set up Bitnami WordPress on Amazon AWS, I was unable to modify, add, or remove themes from within the WordPress admin interface, even though all of my permissions were set up appropriately according to WordPress recommended settings. However, I did not want to resort to turning FTP access on.
I was able to resolve the issue by:
Setting the file access method for Bitnami WordPress to 'direct'.
Changing the ownership of all files to the Apache and Bitnami users.
Adding Bitnami to the Apache group and Apache to the Bitnami group.
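A hedged sketch of those steps (the daemon user and group names are assumptions based on common Bitnami stacks; check which user Apache actually runs as on your instance):

Add to wp-config.php:
define('FS_METHOD', 'direct');
Then:
$ sudo chown -R bitnami:daemon /opt/bitnami/apps/wordpress/htdocs
$ sudo usermod -a -G daemon bitnami
$ sudo usermod -a -G bitnami daemon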