How to use Vagrant for CentOS with GUI - Qt

I am trying to bring up a Vagrant machine running CentOS with a GUI. Here is my Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "puppetlabs/centos-7.0-64-nocm"
  config.vm.provider :virtualbox do |vb|
    vb.name = "DSW-Run-7"
  end
  config.vm.network "private_network", ip: "192.168.33.13"
  config.vm.synced_folder ".", "/home/vagrant/CartoDSW"
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    vb.gui = true
    vb.memory = "2048"
  end
  config.vm.provision "shell", inline: <<-SHELL
    sudo yum -y install epel-release
    sudo yum -y install qt5-qtbase
    sudo yum -y install qt5-qtbase-devel
    sudo yum -y install jbigkit.x86_64
    sudo yum -y install gcc-c++
    sudo yum -y install git
    sudo yum groupinstall basic-desktop desktop-platform x11 fonts
  SHELL
end
After this I ran vagrant up and the GUI came up. I tried to log in as the vagrant user, but got 'Login incorrect':
CentOS Linux 7 (Core)
Kernel 3.10.0-123.el7.x86_64
localhost login: vagrant
Password: password
Login incorrect
After vagrant up I did vagrant ssh and tried commands such as startx, but I was still not able to launch the GUI.
Please suggest how I can install CentOS with a GUI, as I need to work with Qt to debug my code.

To solve this on CentOS 7, I installed and launched GNOME from the Vagrantfile with the following shell provisioner:
config.vm.provision "shell", inline: <<-SHELL
  sudo yum -y groupinstall "GNOME Desktop"
  sudo systemctl set-default graphical.target
  sudo systemctl start graphical.target
SHELL
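Combined with the vb.gui flag from the question, a minimal complete Vagrantfile for this approach might look like the following (the box name is reused from the question; this is an untested sketch, not a verified configuration):

```ruby
Vagrant.configure(2) do |config|
  config.vm.box = "puppetlabs/centos-7.0-64-nocm"
  config.vm.provider "virtualbox" do |vb|
    vb.gui = true        # show the VirtualBox window on boot
    vb.memory = "2048"   # GNOME is unhappy with less RAM
  end
  config.vm.provision "shell", inline: <<-SHELL
    yum -y groupinstall "GNOME Desktop"
    systemctl set-default graphical.target
    systemctl start graphical.target
  SHELL
end
```

With this in place, vagrant up boots straight into the graphical login screen instead of requiring startx after the fact.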

For CentOS 6, I have a repo that gets it working here: https://github.com/hsiaoyi0504/vagrant_centos_6_gui.
In short, use the following to install and set it up:
config.vm.provision "shell", inline: <<-SHELL
  # install GUI desktop
  sudo yum -y update
  sudo yum groupinstall -y "X Window System" "Desktop"
  sudo yum install -y gnome-core xfce4 xorg-x11-fonts
  # `sudo echo ... > /etc/inittab` would not work: the redirection runs in the
  # unprivileged shell, so use tee instead
  echo "id:5:initdefault:" | sudo tee /etc/inittab
  # fix fonts problem in terminal
  # https://forums.anandtech.com/threads/fonts-screwed-up-in-centos-6-terminal.2186468/
  sudo yum -y install terminus-fonts terminus-fonts-console
  reboot # reboot to load GUI
SHELL

Put the following into your Vagrantfile to reset the passwords. It works on the CentOS 6.x images I typically use:
config.vm.provision :shell, :inline => "echo \"vagrant\"|passwd --stdin vagrant"
config.vm.provision :shell, :inline => "echo \"vagrant\"|passwd --stdin root"
For example:
https://github.com/lastnitescurry/documentum71/blob/master/Vagrantfile
Figured it out from:
https://github.com/puphpet/packer-templates/blob/master/centos-6-x86_64/http/ks.cfg

There are two potential solutions: the first is to boot the VM directly into a GUI; the second is to use X11 forwarding.
First option: boot in GUI mode
The vagrant user has no password in most cases (unless you specified otherwise when building a new box), since it connects with an SSH key.
If you want to log in via the GUI, you'll need to give the user a password:
run vagrant ssh to connect to the VM
run sudo passwd vagrant to set a new password for the vagrant user
From there you will be able to log in via the GUI. To make sure you have an X environment to work in, you can install
sudo yum install 'xorg*'
sudo yum install xterm
or to install a Gnome environment
yum -y groups install "GNOME Desktop"
Make sure to set your Vagrantfile with
config.vm.provider "virtualbox" do |vb|
vb.gui = true
end
and your GUI will boot when you run vagrant up
The alternative, your second option: use X11 forwarding
Given what you are trying to achieve, there might be a better way (though I am not familiar enough with Qt to really judge). Vagrant has an option to forward X11:
config.ssh.forward_x11 - If true, X11 forwarding over SSH
connections is enabled. Defaults to false.
You will need an X11 server on your host (if you are on a Mac you can download and use XQuartz, which does the job pretty well; if you're running another system, look for an equivalent).
So once you have an X server installed on your host and have turned on config.ssh.forward_x11, you can run X commands directly and their windows will be forwarded to your host:
sudo yum install xterm
xterm &
and the xterm window will appear on your host machine.
Note: you may need to install xauth on the VM, using e.g. sudo apt-get install xauth (Debian/Ubuntu/...) or sudo yum install xorg-x11-xauth (CentOS, Fedora, ...).
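Putting the second option together, a minimal Vagrantfile sketch (the box name is reused from the question; treat this as an untested illustration):

```ruby
Vagrant.configure(2) do |config|
  config.vm.box = "puppetlabs/centos-7.0-64-nocm"
  # Forward X11 over SSH; an X server (e.g. XQuartz) must run on the host.
  config.ssh.forward_x11 = true
  config.vm.provision "shell", inline: <<-SHELL
    yum -y install xorg-x11-xauth xterm
  SHELL
end
```

After vagrant up and vagrant ssh, running xterm & inside the VM should open the window on the host.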

Related

Installing OpenStack on CentOS Stream 9 using Packstack, networking problem

I have installed OpenStack using Packstack on CentOS Stream 8, but when I try to use CentOS Stream 9 for OpenStack (using Packstack), it throws an error:
OpenStack networking currently does not work on systems that have the
Network Manager service enabled.
But as you may know, network-scripts/ifcfg ... are no longer available on CentOS Stream 9!
On CentOS Stream 8, I manually disabled and stopped NetworkManager and instead used systemctl enable network to keep the network available after reboot and during the installation.
But this is not available on CentOS Stream 9!
Can anyone give me some insight into how to fix this issue?
Is there any replacement for NetworkManager on CentOS Stream 9, like what we do on CentOS Stream 8 (using network)?
dnf install -y centos-release-openstack-yoga &&
dnf install -y openstack-packstack
packstack --gen-answer-file /root/openstack-answer.txt
This worked for me:
Install centos-release-openstack-yoga and openstack-packstack:
dnf install -y centos-release-openstack-yoga
dnf install -y openstack-packstack
Install network-scripts:
dnf update -y
dnf install -y network-scripts
Disable NetworkManager and enable network service:
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
Install openstack:
packstack --allinone
Check that ALL of your network interfaces are configured correctly. For me they weren't, because packstack simply ignored my second NIC and its configuration didn't magically migrate from NetworkManager:
ls -la /etc/sysconfig/network-scripts/ifcfg-*
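If an interface is missing from that listing, a legacy ifcfg file can be written for it by hand. A minimal static-IP sketch (the device name eth1 and the addresses here are assumptions for illustration, not values from the question):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
```

After creating the file, systemctl restart network should bring the interface up under the legacy network service.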

How to force a shiny app ran in Docker to use https

I am trying to run a shiny application on an open port on my server. I usually run docker images with docker run -p 4000:3838 tag_name, assuming the docker container exposes the port and the shiny app is listening on it.
This all works completely fine for any shiny application served over http. But I need https.
So the Dockerfile I use consists of:
FROM rocker/r-ver:4.0.1
# System libs:
RUN apt-get update
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y libssl-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libxml2-dev
# R packages installed
RUN R -e "install.packages('remotes')"
RUN R -e "remotes::install_version('searchConsoleR')"
RUN R -e "remotes::install_version('googleAuthR')"
# [...] more R libraries are installed
# Copy application files to a dir
RUN mkdir /root/app
COPY . /root/app
# Expose and set run command from dir above
EXPOSE 3838
CMD ["R", "-e", "shiny::runApp('/root/app', port = 3838, host = '0.0.0.0')"]
Then I execute docker build -t tag_name . and docker run -p 4000:3838 tag_name.
The page is available at http://server.host:4000
However, since I am using Google Login, I need to use https. But when I visit the server's https://server.host:4000 I see a page-not-found error.
Can someone please help?
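This question has no answer in the thread; one common approach, offered here only as a hedged sketch, is to leave the shiny container on plain HTTP and terminate TLS in a reverse proxy in front of it. A minimal nginx server block (the server name and certificate paths are assumptions you would replace with your own):

```
server {
    listen 443 ssl;
    server_name server.host;                         # assumed host name
    ssl_certificate     /etc/ssl/certs/server.crt;   # assumed certificate paths
    ssl_certificate_key /etc/ssl/private/server.key;

    location / {
        proxy_pass http://127.0.0.1:4000;            # the published container port
        proxy_set_header Host $host;
        # shiny uses websockets, so forward the upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The app is then reached at https://server.host, and the container itself never needs to know about TLS.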

WordPress docker container with test e-mail environment

Is there any simple way to run WordPress with Docker together with an environment for testing e-mail?
I have a container with WordPress and MariaDB running, and I am trying to connect it to MailDev or a similar environment for mail testing.
I have installed sendmail in the WordPress container:
apt-get install -y sendmail sendmail-bin mailutils
I'm using the WP Mail SMTP plugin with the "Other SMTP" option. The plugin reports that the mail has been sent, but nothing appears in MailDev.
Is there any solution to test e-mails locally?
If you're on an Ubuntu environment, I'd highly recommend going with Mailcatcher to troubleshoot and catch all your emails. It basically provides a nice web GUI for you to see all the emails that get sent out of your server.
https://mailcatcher.me/
# Install dependencies
# older ubuntus
#apt-get install build-essential libsqlite3-dev ruby1.9.1-dev
# xenial
apt install build-essential libsqlite3-dev ruby-dev
# Install the gem
gem install mailcatcher --no-ri --no-rdoc
# Make it start on boot
echo "@reboot root $(which mailcatcher) --ip=0.0.0.0" >> /etc/crontab
update-rc.d cron defaults
# Make php use it to send mail
# older ubuntus
#echo "sendmail_path = /usr/bin/env $(which catchmail) -f 'www-data@localhost'" >> /etc/php5/mods-available/mailcatcher.ini
# xenial
echo "sendmail_path = /usr/bin/env $(which catchmail) -f 'www-data@localhost'" >> /etc/php/7.0/mods-available/mailcatcher.ini
# Notify php mod manager (5.5+)
# older ubuntus
#php5enmod mailcatcher
# xenial
phpenmod mailcatcher
# Start it now
/usr/bin/env $(which mailcatcher) --ip=0.0.0.0
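Since the question's stack already runs in Docker, another option is to run MailDev itself as a container next to WordPress and point WP Mail SMTP at it. A docker-compose sketch (the service names and the WordPress port mapping are assumptions; 1080 and 1025 are MailDev's default web and SMTP ports):

```
services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
  maildev:
    image: maildev/maildev
    ports:
      - "1080:1080"   # web UI: http://localhost:1080
```

In WP Mail SMTP's "Other SMTP" settings, set the host to "maildev" and the port to 1025 (no authentication or encryption); since both containers share the compose network, WordPress reaches MailDev by service name.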

How to Choose R Server's R as Default in Operationalization, Remote R Workspace and RStudio Server?

So I've set up an Azure Data Science Virtual Machine on Linux (Ubuntu) and I've executed the following on the terminal to enable Remote R workspace, RStudio Server, R Server Operationalization and hadoop:
sudo apt update
sudo apt -y upgrade
# Hadoop is installed but doesn't seem to appear on the PATH or have its environment variable set by default
sudo echo "" >> ~/.bashrc
sudo echo "export PATH="'$'"PATH:/opt/hadoop/hadoop-2.7.4/bin" >> ~/.bashrc
sudo echo "export HADOOP_HOME=/opt/hadoop/hadoop-2.7.4" >> ~/.bashrc
#
source ~/.bashrc
#Setting up a password as none exists to begin with because of private key selection in the installation
#RStudio Server requires a password though
printf "MyPassword\nMyPassword\n" | sudo passwd sshuser
#Unfortunately hadoop fails on Data Science Virtual Machine
#error: mkdir: Call From IM-DSonUbuntu/192.168.5.4 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
# hadoop fs -mkdir /user/RevoShare/rserve2
# hadoop fs -chmod uog+rwx /user/RevoShare/rserve2
sudo mkdir -p /var/RevoShare/rserve2
sudo chmod uog+rwx /var/RevoShare/rserve2
# hadoop fs -mkdir /user/RevoShare/sshuser
# hadoop fs -chmod uog+rwx /user/RevoShare/sshuser
sudo mkdir -p /var/RevoShare/sshuser
sudo chmod uog+rwx /var/RevoShare/sshuser
#Setting up R Server Operationalisation
cd /opt/microsoft/mlserver/9.2.1/o16n
sudo dotnet Microsoft.MLServer.Utils.AdminUtil/Microsoft.MLServer.Utils.AdminUtil.dll -silentoneboxinstall MyPassword
#They say this Data Science Virtual Machine already has RStudio Server, but even though port 8787 is open, it's nowhere to be found! So installing it now; after the installation it's accessible by refreshing the page that failed before.
#Perhaps it's not installed then? Or a service is not running like it should?
#https://www.rstudio.com/products/rstudio/download-server/
wget https://download2.rstudio.org/rstudio-server-1.1.414-amd64.deb
yes | sudo gdebi rstudio-server-1.1.414-amd64.deb
#They are small, leave them for debug reasons - let's have evidence the script ran this far.
#sudo rm rstudio-server-1.1.414-amd64.deb
# Remote R workspace Service needs dotnet sdk
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt update
sudo apt -y install dotnet-sdk-2.0.0
sudo apt install libxml2-dev
#Downloading and installing the Remote R service
wget -O rtvs-daemon.tar.gz https://aka.ms/r-remote-services-linux-binary-current
tar -xvzf rtvs-daemon.tar.gz
sudo ./rtvs-install -s
sudo systemctl enable rtvsd
sudo systemctl start rtvsd
#sudo rm rtvs-daemon.tar.gz
#sudo rm rtvs-install
#Fixing Remote R: For some reason, even though 'sudo systemctl enable rtvsd' runs, after every reboot the service won't become automatically active. So let's fix that.
wget https://sa0im0general.blob.core.windows.net/general-blob-container/StartRemoteRAfterReboot.sh
sudo mv StartRemoteRAfterReboot.sh /var/RevoShare/StartRemoteRAfterReboot.sh
sudo /sbin/shutdown -r 5
sudo chown root /etc/rc.local
sudo chmod 755 /etc/rc.local
sudo systemctl enable rc-local.service
sudo -s
sudo find /etc/ -name "rc.local" -exec sed -i 's/exit 0//g' {} \;
sudo echo "" >> /etc/rc.local
sudo echo "sh /var/RevoShare/StartRemoteRAfterReboot.sh" >> /etc/rc.local
sudo echo "exit 0" >> /etc/rc.local
exit
I've also tried these, one by one, to see if any made a difference to RStudio Server (they didn't; but even if they had, I want a global solution that works for the Remote R Workspace Service and R Server Operationalisation as well, not only RStudio Server):
#Configuring RStudio Server to see the R Server R
sudo echo "rsession-which-r=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> /etc/rstudio/rserver.conf
export RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.profile
source ~/.profile
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.bashrc
source ~/.bashrc
sudo echo "PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R" >> ~/.bashrc
export PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R
source ~/.bashrc
The problem is that even though which R points to R Server's R, i.e. typing "sudo R" shows the message "Loading Microsoft R Server packages, version 9.2.1." and loads packages like RevoScaleR, everything else fails to do so.
Accessing RStudio Server at http://THE-IP-GOES-HERE.westeurope.cloudapp.azure.com:8787 and logging in with the initial user ("sshuser") (or any other user, for that matter) will NOT load R Server, and the RevoScaleR rx functions are unavailable.
Using my local Visual Studio 2017 to access the remote workspace via "Add connection" on "Workspaces" tab loads MRO and says:
Installed R versions:
[0] Microsoft R Open '3.4.1.1347' (Default)
And finally, when I use R Server's Operationalisation and log in with the "mrsdeploy" package's remoteLogin(), R Server packages like RevoScaleR are again not loaded, so things like rxSummary(~., data=iris) fail with the error 'could not find function "rxSummary"'.
The exact same thing happened when I deployed from azure a "Machine Learning Server 9.2.1 on Linux (Ubuntu)".
I don't want to just use the regular open source R, I want to be able to use the R Server - that's why I deployed this VM. How can I make it so that everything loads R Server's R, not Microsoft R Open? (Like I'm able to do from terminal using "R")
As a result of my having tried all of this and the fact that R Server is loaded in the console, my mind now goes to permissions. Could it be that by default the Data Science VM doesn't have the correct permissions to allow these?
I'm at a loss
RStudio Server is installed on the Ubuntu DSVM, but the service is disabled by default as it does not support SSL. You can enable it with systemctl enable rstudio-server, then start it with systemctl start rstudio-server.
RStudio Server uses the same R as Microsoft R Server, but the .libPaths are different, which is why you cannot load the MRS packages. You will need to manually set the .libPaths so they match.
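One way to apply that advice is to append the MRS library path to Rprofile.site, so every session (RStudio's included) picks it up. The library path below is an assumption based on the install paths quoted in the question; check .libPaths() in a working sudo R session first and substitute whatever it reports:

```
# Sketch: add to Rprofile.site so RStudio sessions see the MRS libraries.
# Run .libPaths() under `sudo R` first and use its first entry here.
.libPaths(c("/opt/microsoft/mlserver/9.2.1/libraries/RServer", .libPaths()))
```

After restarting RStudio Server, library(RevoScaleR) should then resolve against the same library tree the terminal R uses.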

Vagrant - centos networking

I have set up a Vagrant machine with this configuration --
Vagrant.configure("2") do |config|
  config.vm.box = "intprog/centos7-ez6"
  config.ssh.insert_key = false
  config.vm.network "public_network", ip: "192.168.33.243"
  config.vm.provision "file", source: "/server/bin/nginx/conf/domains-enabled/cemcloudMigration.conf", destination: "~/cemcloud.conf"
  config.vm.provision "shell", path: "webroot/bootstrap/script.sh"
end
This is what my script looks like --
sudo su
#update the centos version
#yum update -y
yum -y erase httpd httpd-tools apr apr-util
#getting nginx from the right address
yum install -y http://nginx.org/packages/centos/7/x86_64/RPMS/nginx-1.10.0-1.el7.ngx.x86_64.rpm
yum install -y nginx
#installing composer
curl -sS https://getcomposer.org/installer | php
chmod +x composer.phar
mv composer.phar /usr/bin/composer
cd /srv/www/cemcloud2
composer install
#removal of old mariadb5.5 and installation of the new one
yum -y remove mariadb-server mariadb mariadb-libs
yum clean all
yum -y install MariaDB-server MariaDB-client
#clear unnecessary software
yum -y remove varnish
## restart the service
service mysql restart
service php-fpm restart
service nginx restart
/var/log/nginx/access.log is producing this --
10.0.2.2 - - [17/Oct/2016:11:42:10 +0000] "GET / HTTP/1.1" 301 185 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:51.0) Gecko/20100101
Firefox/51.0" "-"
Really strange behavior from nginx, because it sometimes produces a log entry and sometimes it doesn't. When I open up the Firefox developer tools it produces a log entry, and when I am on Google Chrome it doesn't.
Every time I put the URL into the browser it says
the connection has timed out.
Anyhow, I want to connect to this machine. What am I doing wrong?
Please check your network on the guest machine with:
nmap -sT -O localhost
Check whether the ports you are using in your nginx configuration are open.
If not, open them in your firewall and check again.
It was a firewall issue inside this machine, "intprog/centos7-ez6". It wasn't listening on the https port.
I followed these steps:
firewall-cmd --add-service=https
firewall-cmd --add-service=https --permanent
firewall-cmd --reload
and it all worked.
