MariaDB install on Ubuntu 16.04 [closed] - mariadb

I installed MariaDB 10.1 on Ubuntu 16.04 LTS using this command:
sudo apt-get install mariadb-server
The installation succeeded, but when I connect to MariaDB it doesn't ask for a password!
I tried the following, but none of it worked:
I ran mysql_secure_installation and finished the configuration successfully, but it still doesn't require a password.
SET PASSWORD FOR 'root'@'%' = PASSWORD('newpass'); followed by FLUSH PRIVILEGES;
update user set password=password('newpass') where user='root';
I want to use a password. What should I do?

My answer to "I want to use a password. What should I do?" is:
"no, you probably don't!" (depending on your app's deployment - see comments below)
With Ubuntu 15.10 and later MariaDB installs with the unix_socket user authentication plugin enabled by default: https://mariadb.com/kb/en/mariadb/unix_socket-authentication-plugin/
With a fresh install, log in with sudo mysql -uroot and execute:
SELECT User, Password, Host, plugin from mysql.user WHERE user = 'root';
You should see a single user. This also means you cannot remotely login as root with the default install (one of the tasks the mysql_secure_installation performs).
+------+----------+-----------+-------------+
| User | Password | Host      | plugin      |
+------+----------+-----------+-------------+
| root |          | localhost | unix_socket |
+------+----------+-----------+-------------+
I would recommend leaving this as-is and creating a non-root user with the appropriate permissions for your application to use.
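For example, creating such a user might look like this (a sketch only; the appuser name, appdb database, and password are placeholders, so narrow the grants to what your application actually needs):
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'localhost';
FLUSH PRIVILEGES;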
However, if you absolutely need a password-authenticated root user, you need to change the plugin to mysql_native_password.
UPDATE mysql.user SET plugin = 'mysql_native_password', Password = PASSWORD('secret') WHERE User = 'root';
FLUSH PRIVILEGES;
Now quit, then log in with mysql -uroot -p.
Note: if you need to restart the service after this change, you will need the three commands below instead of just sudo systemctl restart mysql. There is probably some addition you can make to a .cnf file so that a plain restart works as expected, but I couldn't easily figure it out. Basically, the lesson is: use unix_socket.
sudo systemctl stop mysql
sudo kill -9 $(pgrep mysql)
sudo systemctl start mysql

Related

[process exited with code 1], can't open WSL, zsh [closed]

I get [process exited with code 1] when I try to open a WSL distro. This happened after removing zsh with: sudo apt-get remove zsh.
I removed zsh and forgot to set bash as the default shell.
Here is what fixed it for me:
Log in as root: wsl -u root
Then run: chsh -s /bin/bash <username>
Restart the terminal and that's it.
Best regards!
This happened to me after I uninstalled zsh from WSL2.
You need to change the default shell back to bash, which you can do by first reinstalling zsh (so the recorded login shell exists again) and then setting bash as the default.
Step 1: from Windows PowerShell (C:\WINDOWS\system32>):
wsl.exe -e sudo apt-get install zsh
Step 2: restart Windows Terminal, then
Change /etc/pam.d/chsh: from:
`auth required pam_shells.so`
to
`auth sufficient pam_shells.so`
Step 3:
chsh -s /bin/bash root
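To confirm the change took effect, a quick optional check (not part of the original steps; run it inside the distro as your normal user):
echo $SHELL                             # should now print /bin/bash
getent passwd "$USER" | cut -d: -f7     # login shell recorded for your account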

Set password for RStudio Server with AWS EC2 instance

I managed to follow all the steps to create an EC2 instance and install RStudio Server on it.
When I go to RStudio Server page to connect (which looks something like "ec2-[Public IP]-.eu-west-3.compute.amazonaws.com:8787"), I am asked a username and a password.
I figured out how to create a user ("user1") this way:
$ sudo useradd user1
But then when I try this command to write the password:
echo user1:password | chpasswd
I receive this message:
chpasswd: cannot lock /etc/passwd; try again later.
I looked at different solutions suggested here:
https://superuser.com/questions/296373/cannot-lock-etc-passwd-try-again-later
but I do not see a resolution to my problem.
I did not find either any passwd.lock, shadow.lock, group.lock, gshadow.lock files to remove.
type in 'sudo passwd your_username' and you will be prompted to enter a new password
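A minimal sketch of the whole sequence, assuming the user1 name from the question (one common cause of the "cannot lock /etc/passwd" error is running chpasswd without root privileges, hence the sudo below):
sudo useradd -m user1                     # create the user with a home directory
sudo passwd user1                         # prompts twice for the new password
echo 'user1:newpassword' | sudo chpasswd  # non-interactive alternative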

Google Cloud: Compute VM Instances

How do I get root access to my Google VM instance, and also how can I log into my VM Instance from my PC with a SSH client such as putty?
I would also like to add that I have tried to use sudo for things that need root access, such as yum or wget, but it does not let me: it asks for the root password, and I do not know how or where I would be able to get it.
You can become root via sudo su. No password is required.
How do I use sudo to execute commands as root?
(splitting this off from the other answer since there are multiple questions within this post)
Once you connect to your GCE VM using PuTTY or gcloud compute instances ssh or even clicking on the "SSH" button on the Developers Console next to the instance, you should be able to use the sudo command. Note that you shouldn't be using the su command to become root, just run:
sudo [command]
and it should not prompt you for a password.
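For example, with one of the tools mentioned in the question (just an illustration; the package name is arbitrary):
sudo yum install -y wget    # runs as root, no password prompt on a default GCE image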
If you want to get a root shell to run several commands as root and you want to avoid prefixing all commands with sudo, run:
sudo su -
If you're still having issues, please post a new question with the exact command you're running and the output that you see.
sudo su root <enter key>
No password required :)
if you want to connect your gce (google-cloud) server with putty using root, here is the flow:
use puttygen to generate two ppk files:
for your gce-default-user
for root
Do the following in PuTTY (replace gce-default-user with your GCE username):
PuTTY -> Connection -> Data -> Auto-login username: gce-default-user
PuTTY -> Connection -> SSH -> Auth -> Private key file for authentication: gce-default-user.ppk
Then connect to the server as gce-default-user.
Make the following changes in sshd_config:
sudo su
nano /etc/ssh/sshd_config
PermitRootLogin yes
UsePAM no
Save+exit
service sshd restart
PuTTY -> Connection -> Data -> Auto-login username: root
PuTTY -> Connection -> SSH -> Auth -> Private key file for authentication: root-gce.ppk
Now you can log in as root via PuTTY.
If you need to use Eclipse's Remote System Explorer and log in as root:
Eclipse -> Window -> Preferences -> General -> Network Connections -> SSH2 -> Private keys:
root-gce.ppk
Please try sudo su - on GCE.
By default on GCE, no password is required to sudo (do as a substitute user). The - argument to su (substitute user) further simulates a full login, using the target user's configured login shell and its profile scripts to set up the new environment (the default target user for both commands is root). You'll at least notice the prompt change from ending in $ to # in any case.
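For illustration only (the host name below is a placeholder):
user@instance:~$ sudo su -
root@instance:~# whoami
root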
Just go to the instance's shell by clicking SSH and use the password-change command for the root user via sudo:
sudo passwd
and it will change the root password.
Then, to become root, use the command
su
and type that password to become root.
How do I connect to my GCE instance using PuTTY?
(splitting this off from the other answer since there are multiple questions within this post)
Take a look at setting up ssh keys in the GCE documentation which shows how to do it; here's the summary but read the doc for additional notes:
Generate your keys using ssh-keygen or PuTTYgen for Windows, if you haven't already.
Copy the contents of your public key. If you just generated this key, it can probably be found in a file named id_rsa.pub.
Log in to the Developers Console.
In the navigation, Compute->Compute Engine->Metadata.
Click the SSH Keys tab.
Click the Edit button.
In the empty input box at the bottom of the list, enter the corresponding public key, in the following format:
<protocol> <public-key> username@example.com
This makes your public key automatically available to all of your instances in that project. To add multiple keys, list each key on a new line. (There is an example entry after these steps.)
Click Done to save your changes.
It can take several minutes before the key is inserted into the instance. Try connecting with ssh to your instance. If it is successful, your key has been propagated to the instance.
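An example entry might look like the following (the key body is abbreviated and the username is a placeholder):
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ... alice@example.com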

Storing datasets and intermediary files in Amazon EC2 [closed]

I'd like to use Amazon EC2 to work with large datasets in R.
I have launched an instance, installed R, and created an EBS image of the root volume on a 300 GB drive, unchecking "Delete on Termination".
I then started this AMI in a new instance, uploaded some datasets to it, and terminated the instance.
When I later launched this AMI on a new instance, the hard drive was in the same state as when I first created the AMI, but I expected the uploaded datasets to be available. Is this expected behavior? If yes, what's the best way to store datasets and intermediate files between two sessions on Amazon EC2?
Perhaps you could use S3 as a filesystem.
Create an S3 bucket on AWS. In this example we're using the AWS command line utilities running locally:
aws s3 mb s3://bucketxyz
Then launch an EC2 instance. This example worked for Amazon Linux. SSH into the box and set up s3fs:
sudo yum install git gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel openssl-devel mailcap automake
git clone git://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr
make
sudo make install
... and then setup your AWS credentials:
echo '[AWS Access Key ID]:[AWS Secret Access Key]' | sudo tee /etc/passwd-s3fs
sudo chmod 400 /etc/passwd-s3fs
Then mount the drive as a folder:
sudo mkdir /bucketxyz
sudo s3fs bucketxyz /bucketxyz
This folder is now accessible like any other folder, but resides in S3 and is therefore persistent and could be accessed from other instances if necessary.
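If you want the bucket remounted automatically after a reboot, one option (a sketch, not from the original answer; it assumes the credentials file set up above) is an /etc/fstab entry:
s3fs#bucketxyz /bucketxyz fuse _netdev,allow_other 0 0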

How to prevent /etc/resolv.conf from getting overwritten after reboot in Ubuntu 11.10? [closed]

I'm using Ubuntu 11.10 and I manually configure DNS servers in /etc/resolv.conf but it gets somehow overwritten after I reboot. How can I prevent this?
Thanks.
As you can read in the header of resolv.conf:
Dynamic resolv.conf file for glibc resolver generated by resolvconf
So resolv.conf is generated. If you want your configuration to survive a reboot, edit /etc/resolvconf/resolv.conf.d/base and put your entries there just as you would in resolv.conf, for example:
nameserver 8.8.8.8
Then regenerate resolv.conf with resolvconf:
sudo resolvconf -u
After reading other answers, I still needed something different for the following reasons:
I'm not using resolvconf, just plain /etc/resolv.conf.
Using chattr +i to lock down resolv.conf seems too hacky. I need Puppet to be free to make changes when necessary.
The best solution I found overrides the default behavior of dhclient using its documented hooks.
Create a new file at /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate with the following contents:
#!/bin/sh
make_resolv_conf() {
:
}
Then make the file executable:
chmod +x /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate
Now when dhclient runs, either on reboot or when you manually run sudo ifdown -a ; sudo ifup -a, it loads this nodnsupdate script. The script overrides an internal function called make_resolv_conf() that would normally overwrite resolv.conf, and instead does nothing.
This worked for me on Ubuntu 12.04.
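A quick way to confirm the hook is working (an optional check, not part of the original answer):
sudo ifdown -a ; sudo ifup -a
cat /etc/resolv.conf    # your manual entries should survive the DHCP renewal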
I figure that NetworkManager is overwriting the /etc/resolv.conf file. In my case, what I wanted to change was the order in which my DNS servers were listed. You can do that through NetworkManager by editing your connection's IPv4 settings.
You have DHCP client doing this. Follow these instructions to override that.
I use the following line :
chattr +i /etc/resolv.conf
to undo use :
chattr -i /etc/resolv.conf
Let me know if it worked...
NetworkManager can be configured to use a manually entered IPv4 configuration, or to take only the IP/netmask/router from DHCP; in such a case it should not change /etc/resolv.conf.
However, one may want to have one's own settings in /etc/resolv.conf, such as a nameserver or a search domain; I just needed a search domain, and I did it by adding a file /etc/NetworkManager/dispatcher.d/99my_fix containing:
#!/bin/bash
rc=/etc/resolv.conf; le="search my.domain"
grep -q domain $rc && ! grep -q "$le" $rc && echo "$le" >> $rc
Of course I made it executable with chmod +x. NetworkManager invokes it after setting /etc/resolv.conf, and my script fixes the file if necessary; the first grep detects that the network is up, and the second detects that the fix has not yet been applied. Both conditions are needed for the fix to be applied.
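For completeness, making the dispatcher script executable looks like this (path taken from above):
sudo chmod +x /etc/NetworkManager/dispatcher.d/99my_fix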
I had the same problem and I edited my /etc/dhcp/dhclient.conf file by adding domain-name and domain-name-servers:
supersede domain-name "local.com";
supersede domain-name-servers 192.168.56.103;
192.168.56.103 is my VM running bind9, and my domain name is local.com.
I also removed the same entries from the request section.
If the network interfaces for your server instance is controlled by DHCP, the dhclient program will overwrite your /etc/resolv.conf file whenever the networking service is restarted.
You can fix the issue by editing the "/etc/dhcp/dhclient.conf" file and adding supersede statements for domain-name, domain-search and domain-name-servers as follows:
supersede domain-name "mydomain.com";
supersede domain-search "mydomain.com";
supersede domain-name-servers 8.8.8.8;
In this particular case the name server is located at "8.8.8.8" and the domain name is "mydomain.com". Substitute your particular information.
Note that each line is terminated by a semi-colon and the domain name is enclosed in double quotes.
