I had my new Nexus 3 repository running okay and was able to configure some of the basic settings. Then I went through the process of enabling SSL, using the instructions here. I also watched the video on that page, which gives instructions that don't match the written ones.
My system info: Ubuntu 14.04 with Java 8.
Install directory: /opt/nexus-3.0.0-b2016011501/
To simplify the path, I created a symlink to this directory: nexus -> /opt/nexus-3.0.0-b2016011501/, so the path to Nexus is /opt/nexus.
I generated my keystore as follows:
Created the directory /opt/nexus/etc/ssl, changed into it, and ran:
keytool -keystore keystore -alias jetty -genkey -keyalg RSA -validity 3650
This generated a file called keystore, which I then copied to keystore.jks:
cp keystore keystore.jks
Updated the following files: in /opt/nexus/etc/org.sonatype.nexus.cfg I added application-port-ssl=443 and appended ${karaf.etc}/jetty-https.xml to the end of the nexus-args= line (this is different from the written instructions). Then (this is in the video, but not the written instructions) I edited /opt/nexus/etc/jetty-https.xml and replaced the password in three places with the password I specified when generating my keystore.
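For reference, a sketch of how the relevant lines in org.sonatype.nexus.cfg end up; the entries ahead of jetty-https.xml are whatever your install already had (the ones shown here are an assumption based on a stock install):

application-port-ssl=443
nexus-args=${karaf.etc}/jetty.xml,${karaf.etc}/jetty-http.xml,${karaf.etc}/jetty-https.xml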
After this, if I start Nexus with ./nexus run, I get the following error:
2016-01-27 02:20:41,013+0000 ERROR [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Failed to start
java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind0(Native Method) [na:1.8.0_72]
at sun.nio.ch.Net.bind(Net.java:433) [na:1.8.0_72]
at sun.nio.ch.Net.bind(Net.java:425) [na:1.8.0_72]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) [na:1.8.0_72]
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) [na:1.8.0_72]
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:326) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:244) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [org.eclipse.jetty.util:9.3.5.v20151012]
at org.eclipse.jetty.server.Server.doStart(Server.java:384) [org.eclipse.jetty.server:9.3.5.v20151012]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [org.eclipse.jetty.util:9.3.5.v20151012]
at org.sonatype.nexus.bootstrap.jetty.JettyServer$JettyMainThread.run(JettyServer.java:274) [org.sonatype.nexus.bootstrap:3.0.0.b2016011501]
If I start it with sudo ./nexus run, it works but shows the nag message saying I should not run it as root.
I have verified that my user is the owner of all the files and directories in /opt/nexus.
On Linux (and other Unix-type systems) you can't bind to port numbers below 1024 unless you are root. The best way to solve this is to run Nexus behind a reverse proxy. You can find instructions for this here:
http://books.sonatype.com/nexus-book/reference/install-sect-proxy.html
The above was written for Nexus 2.x, but the configuration needed will be the same in Nexus 3.
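As an illustration of the idea, a minimal nginx sketch, with nginx terminating TLS on 443 and Nexus left on its default port 8081 (the server name and certificate paths are placeholders, not from the linked guide):

server {
    listen 443 ssl;
    server_name nexus.example.com;

    # placeholder certificate/key paths; adjust to your setup
    ssl_certificate     /etc/ssl/certs/nexus.example.com.crt;
    ssl_certificate_key /etc/ssl/private/nexus.example.com.key;

    location / {
        # forward to Nexus running as an unprivileged user on 8081
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

nginx (started as root) owns the privileged port while Nexus only binds 8081, so the Permission denied on bind goes away and Nexus can run as a normal user.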
Regarding running as non-root as a service, there is a bug in 3.0m7 that makes this problematic:
https://issues.sonatype.org/browse/NEXUS-9437
The fix is to edit the bin/nexus startup script, replacing this line:
INSTALL4J_JAVA_PREFIX="su - $run_as_user -c"
With this:
exec su - $run_as_user "$prg_dir/$progname" $@
This fix will be in the next release.
Once that change is made, symlink $NEXUS_HOME/bin/nexus to /etc/init.d/nexus and install the service. Then edit $NEXUS_HOME/bin/nexus.rc and set run_as_user appropriately.
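On Ubuntu that sequence looks roughly like this; a sketch assuming the /opt/nexus symlink from the question and a dedicated nexus user:

sudo ln -s /opt/nexus/bin/nexus /etc/init.d/nexus
sudo update-rc.d nexus defaults
# then in /opt/nexus/bin/nexus.rc:
run_as_user="nexus"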
I created a Flask app that uses Beautiful Soup and Selenium to scrape and track Amazon product prices. The data is stored using CS50's version of SQLAlchemy.
I then created an account to use Oracle's Always Free VM, with Ubuntu. I followed this excellent guide to the letter, https://asdkazmi.medium.com/deploying-flask-app-with-wsgi-and-apache-server-on-ubuntu-20-04-396607e0e40f, and set up Apache's conf file and the WSGI file. I also added the network rules on Oracle's Virtual Cloud Network and to iptables, which I believe work fine.
Following this, the website still couldn't launch. Apache's error log showed "PermissionError: [Errno 13] Permission denied: '/flask_session'". Based on this post, Permission issue when writing file on webserver (flask, apache & wsgi), I changed the working directory to my app's directory with os.chdir('/home/ubuntu/flaskapp') and used chown/chmod to grant rights:
sudo chown -R ubuntu:www-data flaskapp
sudo chmod -R g+s flaskapp
Now, my front page is accessible at http://129.150.38.171/. However, upon any request to the server, Chrome displays "This page isn’t working: 129.150.38.171 didn’t send any data." Apache's log shows "segmentation fault (core dumped) python flask". Based on the sequence of my code, the error begins when I try to execute SQL, e.g. rows = usersdb.execute("SELECT * FROM users WHERE username = ?", request.form.get("username")).
I do not think it is my code's error, as it runs fine locally, and the production server also worked when I set it up on the Oracle VM using this guide: https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/flask-on-ubuntu/01oci-ubuntu-flask-summary.htm
I've found this guide on debugging segmentation faults from mod_wsgi using gdb: https://www.bustawin.com/debug-segmentation-faults-in-apache-from-mod_wsgi/. But with
source /etc/apache2/envvars
sudo -E gdb /usr/sbin/apache
it just tells me "No executable file specified".
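Noting an assumption that may explain it: gdb prints "No executable file specified" when the path it is given doesn't point at a real binary, and on Ubuntu the Apache executable is normally /usr/sbin/apache2 rather than /usr/sbin/apache, so presumably the guide's commands translate to:

source /etc/apache2/envvars
sudo -E gdb /usr/sbin/apache2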
Any ideas on what could be the error?
I am trying to set up SELinux and an encrypted additional partition that I mount at startup using a systemd service.
If I run SELinux in permissive mode, everything runs ok (partition is correctly mounted, data can be accessed and service runs properly).
If I run SELinux in enforcing mode (enforcing=1), I am not able to mount the partition, and get the error:
/dev/mapper/temporary-cryptsetup-1808: chown failed: Permission denied
sh[1777]: Failed to open temporary keystore device.
sh[1777]: Command failed with code 5: Input/output error
Any ideas how to fix that?
audit2allow does not return any additional rules to be added.
Solved by assigning the lvm_exec_t context to cryptsetup.
In the lvm.fc file, cryptsetup was defined as /bin/cryptsetup, but I had to change it to /usr/sbin/cryptsetup, where the binary actually was.
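If you would rather not edit lvm.fc and rebuild the policy module, the same labeling can usually be applied with the standard SELinux tools; a sketch, assuming policycoreutils (semanage, restorecon) is installed:

# add a persistent file-context rule mapping cryptsetup to lvm_exec_t
semanage fcontext -a -t lvm_exec_t '/usr/sbin/cryptsetup'
# relabel the file on disk according to the new rule
restorecon -v /usr/sbin/cryptsetup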
I have a VPS with a domain redirecting to it.
I have a LAMP stack for my main website, which uses WordPress as its CMS.
Plus, I am using Odoo as my back end, with Python and PostgreSQL, on a subdomain.
Everything was working fine until I installed Certbot (Let's Encrypt) to obtain an SSL certificate by following these tutorials.
For my WordPress site I installed this plugin:
WP Encryption – One Click single / wildcard Free SSL certificate & force HTTPS
This got me into a redirect loop because it forced HTTPS; I will explain that later on.
So when the plugin didn't work, I searched for another way to secure the whole VPS, using these tutorials:
How To Secure Apache with Let's Encrypt on Ubuntu 16.04
How To Secure Apache with Let's Encrypt on Ubuntu 18.04
After completing the second tutorial (Ubuntu 18.04), I noticed that all my domain traffic was going to HTTPS and getting stuck in a loop with the same error as above,
"ERR_TOO_MANY_REDIRECTS", which means the site redirected too many times,
and I couldn't access the WordPress front end on the domain.
Then, when I applied
"Step 3 — Allowing HTTPS Through the Firewall",
my internet connection was interrupted, and when I got back to the SSH session I found myself locked out of the server with no way to get back in.
And when I tried to use the subdomain that has Odoo on it, I got the same error:
"ERR_TOO_MANY_REDIRECTS", which means the site redirected too many times.
At this point I was hopeless and didn't know what to do.
I contacted my VPS provider and told him exactly what had happened. Somehow he managed to get me back into the server with a URL to a web terminal; I still couldn't access the server using SSH clients like PuTTY. When I entered the server through that URL, the first thing I noticed was that he had rebooted the VPS (I will get back to this in a second).
So the first thing I did was remove the WordPress plugin "WP Encryption" and update the WordPress site URL in the wp_options table in the MySQL database, because the plugin had changed it from http to https. Changing it back solved ERR_TOO_MANY_REDIRECTS for my WordPress website.
The second thing I did was disable the ufw firewall that I had enabled in Step 3 of the tutorial above.
I instantly got my connection to the server back using the SSH client PuTTY, but then I noticed that the postgres service was inactive; it had gone down with the reboot of the VPS. I tried to start the service, but it gave me this error:
Failed to start postgresql.service: Unit postgresql.service is masked.
I searched for a solution and found these commands to unmask the service:
sudo systemctl unmask postgresql
sudo systemctl enable postgresql
sudo systemctl restart postgresql
The service then started, and everything seemed OK when I ran the status command:
service postgresql status
The response is:
● postgresql.service - LSB: PostgreSQL RDBMS server
Loaded: loaded (/etc/init.d/postgresql; generated)
Active: active (exited) since Thu 2020-03-26 05:54:09 UTC; 2h 22min ago
Docs: man:systemd-sysv-generator(8)
Tasks: 0 (limit: 2286)
Memory: 0B
CGroup: /system.slice/postgresql.service
But when I try to connect to Postgres through the default port with Odoo, it says:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"
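On Debian/Ubuntu, the postgresql-common tools include pg_lsclusters, which lists each cluster and whether it is online; running it is the quickest way to confirm what I found next:

pg_lsclusters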
After many searches, I found that the Postgres main cluster was also inactive/down. I tried to start it with this command:
pg_ctlcluster 11 main start
but I get this error:
Job for postgresql@11-main.service failed because the service did not take the steps required by its unit configuration. See "systemctl status postgresql@11-main.service" and "journalctl -xe" for details.
and when I run the command as requested:
systemctl status postgresql@11-main.service
I get this error:
● postgresql@11-main.service - PostgreSQL Cluster 11-main
   Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
   Active: failed (Result: protocol) since Thu 2020-03-26 15:22:15 UTC; 14s ago
  Process: 18930 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 11-main start (code=exited, status=1/FAILURE)
along with:
systemd[1]: Starting PostgreSQL Cluster 11-main...
postgresql@11-main[18930]: Error: Could not find pg_ctl executable for version 11
systemd[1]: postgresql@11-main.service: Can't open PID file /run/postgresql/11-main.pid (yet?) after start: No such file or
systemd[1]: postgresql@11-main.service: Failed with result 'protocol'.
systemd[1]: Failed to start PostgreSQL Cluster 11-main.
I guessed Let's Encrypt had added SSL configuration to pg_hba.conf and postgresql.conf, like it did with Apache, so I searched for them, commented out the "ssl" lines, and restarted the postgres service along with the main cluster. Nothing happened; still the same error:
Error: Could not find pg_ctl executable for version 11
I know I shouldn't run pg_ctl directly under Ubuntu/Debian; I must use pg_ctlcluster instead, which is installed by postgresql-common. I read the man page documentation. But when I run sudo pg_ctlcluster 11 main reload, I always get the above error telling me that it could not find the pg_ctl executable.
I have searched a lot for this problem, but nothing worked. How can I solve the missing pg_ctl executable for version 11?
PS:
I am using Ubuntu 19.10 (GNU/Linux 5.3.0-24-generic x86_64)
Odoo 11 with Postgres 11 as the database; Odoo can't connect to Postgres, as I mentioned before.
Edit:
Unfortunately, I can't do a restore or recover the server to fix the Postgres package, because my last backup of the server was on 19/3 and today is 26/3; I have important data from the period in between.
Update 27/3/2020 4:06 AM
I compared my last server backup with the production server and found a lot of Postgres files missing, in paths like /usr/lib/postgresql/11/ and /etc/postgresql/11/. I think Postgres somehow got damaged and lost some files in the reboot of the server. But I found the data files of the database located in /var/lib/postgresql/11/. Can I read them on my backup server? I will try and let you know.
So finally, after hours of digging:
All the PostgreSQL files were damaged or missing, and I lost hope of repairing them. I don't know what caused that, but it is related to the accidental reboot of the server.
So I managed to find the main cluster data folder, with my important database information from the production server, at this path:
/var/lib/postgresql/11/
and I took a backup of it by zipping the whole folder with this command:
zip -r main.zip main/
Then I did a full purge and reinstall of Postgres, using these commands from here:
apt-get --purge remove postgresql\*
This removes everything PostgreSQL from your system. Just purging the postgres package isn't enough, since it's just an empty meta-package.
Once all PostgreSQL packages have been removed, run:
rm -r /etc/postgresql/
rm -r /etc/postgresql-common/
rm -r /var/lib/postgresql/
userdel -r postgres
groupdel postgres
Then I installed Postgres with this command, to match Odoo 11:
sudo apt-get install postgresql libpq-dev -y
Then I created the Odoo PostgreSQL user:
sudo su - postgres -c "createuser -s odoo" 2> /dev/null || true
Now everything is okay and Odoo should work fine, but you still don't have any database.
To bring back the backup of the cluster folder we took earlier, we need to move the zip file back to the same directory we took it from, which is:
/var/lib/postgresql/11/
But before that, you should stop the postgres service:
sudo systemctl stop postgresql
and make sure it has stopped:
sudo systemctl status postgresql
After that, rename the main cluster that Postgres is using right now; it's empty and we don't need it, because we are replacing it with our backed-up cluster:
mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main_old
Then move the zip file from where you backed it up to the Postgres cluster folder:
mv /backups/main.zip /var/lib/postgresql/11/
Unzip it in the same path:
cd /var/lib/postgresql/11/ && unzip -a main.zip
After unzipping, give ownership of the folder to the postgres user and group:
chown -R postgres:postgres /var/lib/postgresql/11/main
Then you are good to go. Start the Postgres service:
sudo systemctl start postgresql
sudo systemctl status postgresql
and make sure you also start the main cluster:
pg_ctlcluster 11 main start
If you stopped Odoo, make sure to start it as well:
service odoo-server start
PS: I solved ERR_TOO_MANY_REDIRECTS for the Odoo subdomain by commenting out the SSL configuration that Let's Encrypt had added to my odoo.conf Apache2 virtual host, and everything went back to where I had left it before installing Let's Encrypt.
I guess I will leave it here and won't use SSL in production again until I figure out how to set it up on a test server. Thanks for your time; I hope my question and answer help someone in the future.
Try adding pg_path in your Odoo configuration file, like:
pg_path = /path/to/postgresql/binaries
Generally, /usr/lib/postgresql/11/bin is the binaries directory.
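For example, a minimal odoo.conf sketch; the [options] section is Odoo's convention, and the path assumes Postgres 11 on Debian/Ubuntu:

[options]
; tell Odoo where the PostgreSQL client binaries live
pg_path = /usr/lib/postgresql/11/bin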
I have SLS files set up to copy things from a network folder to a local directory on a minion.
Looks a little like this:
cmd-test:
  cmd.run:
    - name: 'ROBOCOPY \\\CygwinSource C:\CygwinSource /E'
and get the following output:
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Tuesday, December 6, 2016 10:50:35 AM
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Getting File System Type of Source \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Source - \\<Server>\<program>\<folder>\
Dest : C:\<path>\<folder>\
Files : *.*
Options : *.* /S /E /DCOPY:DA /COPY:DATS /PURGE /MIR /NP /R:1 /W:1
------------------------------------------------------------------------------
NOTE : NTFS Security may not be copied - Source may not be NTFS.
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Accessing Source Directory \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Waiting 1 seconds... Retrying...
When I run the same thing locally from the command line, ROBOCOPY \\\CygwinSource C:\CygwinSource /E, it works perfectly. I have no idea how to fix the "use your global user account or local user account" error that Robocopy gives when run through Salt.
I also tried adding /MIR and /SEC, which didn't work.
Running Windows 10, Minion 2016.3.3
Master: Red Hat, 2016.3.3
Salt seems to be connecting to the network resource with a computer account. A few possible solutions:
Try changing the Salt service on the client (if that's how Salt is executing the commands) to run as a domain user.
Try using the Salt file server (see the sketch after this list).
Implement the hacky workaround where a scheduled task is created, discussed in the GitHub issue that seems related to your problem: https://github.com/saltstack/salt/issues/16340
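For the file-server option, a minimal sketch, assuming the source tree has been copied into the master's file_roots as a CygwinSource directory (the state ID and paths are illustrative):

copy-cygwin-source:
  file.recurse:
    - name: 'C:\CygwinSource'
    - source: salt://CygwinSource

The master then serves the files over Salt's own authenticated transport, so the minion never has to authenticate against the Windows share as a computer account.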
I am trying to install https://github.com/roots/bedrock-ansible to get a Bedrock deployment (http://roots.io/wordpress-stack/) running.
When I run "vagrant up", after some time I get the error:
TASK: [capistrano-setup | Setup deploy group] *********************************
skipping: [default]
TASK: [capistrano-setup | Setup deploy user] **********************************
skipping: [default]
TASK: [capistrano-setup | Adding public key to server] ************************
fatal: [default] => could not locate file in lookup: ~/.ssh/id_rsa.pub
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/johannes/site.retry
default : ok=46 changed=16 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I do not have a clue how I can fix this. Do you have an idea?
It seems the role is trying to find your local public key. It should be in the location from the error message, ~/.ssh/id_rsa.pub, but it's not. So either you don't have one, or you keep it in another location.
If you're not familiar with generating SSH keys, you probably don't have one. I personally like the GitHub help page for this: https://help.github.com/articles/generating-ssh-keys/
(you only have to perform steps 1 and 2).
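In short, the generation step is a single command (the -C comment string here is just a label of my choosing):

ssh-keygen -t rsa -b 4096 -C "you@example.com"

Accept the default file location, and the role should then find ~/.ssh/id_rsa.pub.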
If you do have SSH keys, but in a different location, the capistrano-install role in bedrock uses some variables:
deploy_user: deploy
deploy_keys:
  - "~/.ssh/id_rsa.pub"
So you can set (multiple) public key files in the deploy_keys list and they will be added to the deploy_user's authorized keys.
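For example, overriding the list with an extra key (the second path is illustrative):

deploy_keys:
  - "~/.ssh/id_rsa.pub"
  - "~/.ssh/my_other_key.pub"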
All this is needed because Capistrano will use the deploy user to connect to the remote server later: http://blakesmith.me/2010/02/08/understanding-public-key-private-key-concepts.html