Reset Cygwin environment as if it were the first run (re-initialization)

To be clear, I am not talking about resetting Cygwin's terminal characters.
What I would like to achieve is re-running Cygwin as if it were its "first run".
Background: I accidentally deleted all my home directory files, so all my profile files are gone.
I would like Cygwin to re-generate/re-initialize them for me, like a "first run / fresh start".
Thanks.

Thanks @matzeri for the trick; that definitely works!
So, to re-initialize my Cygwin like a first run, this is what I did:
username@computername ~
$ pwd
/home/username
username@computername ~
$ cd /home
username@computername ~
$ mv username username-old
username@computername ~
$ exit
Exit all Cygwin terminals and start Cygwin again from the Start menu. You will see the following output, which means your profile files have been re-created:
These files are for the users to personalise their cygwin experience.
They will never be overwritten nor automatically updated.
'./.bashrc' -> '/home/username//.bashrc'
'./.bash_profile' -> '/home/username//.bash_profile'
'./.inputrc' -> '/home/username//.inputrc'
'./.profile' -> '/home/username//.profile'
Then restore my working data:
username@computername ~
$ cd /home
username@computername ~
$ ls
username  username-old
username@computername ~
$ mv username-old/mydata username/
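The steps above boil down to parking the old home directory so that Cygwin's first-run logic recreates the default dotfiles. A minimal sketch; the base directory and user name are parameters here only so the idea can be tried safely on a scratch directory:

```shell
# Sketch of the reset: move the damaged home aside, then create an
# empty one. On the next Cygwin start, /etc/profile notices the missing
# dotfiles and copies fresh copies of .bashrc, .bash_profile, .inputrc
# and .profile into the empty home.
reset_home() {
    base=$1; user=$2
    mv "$base/$user" "$base/$user-old"   # keep the old files for a later restore
    mkdir "$base/$user"                  # fresh, empty home directory
}

# Real usage would be: reset_home /home username  (then restart Cygwin).
```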

Related

ls and dir don't show anything on Debian 10

I've installed Debian 10 32-bit on a laptop and tried to use ls, but it prints nothing.
When I use ls -a I get this:
root@acer-aspire-one:~# ls      # this prints nothing
root@acer-aspire-one:~# ls -a
.  ..  .bashrc  .profile
And dir doesn't show anything either:
root@acer-aspire-one:~# dir     # this prints nothing
root@acer-aspire-one:~# dir -a
.  ..  .bashrc  .profile
Your current path is ~, which is the current user's (root's) home folder.
On a fresh installation that folder is empty apart from a few hidden files, which ls -a is showing.
So ls is doing its job.
To make it list a specific path, for example /, you can run ls / or first cd into that path like this:
cd /
ls
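The behaviour is easy to reproduce anywhere; the scratch directory and file names below are invented for the demonstration:

```shell
# Plain ls hides dotfiles, while ls -a also lists the hidden entries
# (plus the . and .. directory entries).
dir=$(mktemp -d)
touch "$dir/.bashrc" "$dir/.profile"
ls "$dir"        # prints nothing: the directory holds only hidden files
ls -a "$dir"     # also lists .bashrc and .profile
```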

How can I auto-create the .docker folder in the home directory when spinning up a VM cluster (gce_vm_cluster) on gcloud through R?

I create VMs using the following command in R:
vms <- gce_vm_cluster(vm_prefix = vm_base_name,
                      cluster_size = cluster_size,
                      docker_image = my_docker,
                      ssh_args = list(username = "test_user",
                                      key.pub = "/home/test_user/.ssh/google_compute_engine.pub",
                                      key.private = "/home/test_user/.ssh/google_compute_engine"),
                      predefined_type = "n1-highmem-2")
Now when I SSH into the VMs, I do not find the .docker folder in the home directory:
test_user@test_server_name:~$ gcloud beta compute --project "my_test_project" ssh --zone "us-central1-a" "r-vm3"
test_user@r-vm3 ~ $ ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .ssh
Now the below command gives an error (obviously):
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
Unable to find image 'gcr.io/my_test_project/myimage:version1' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I need to run the docker-credential-gcr configure-docker command to get the .docker/config.json folder/file:
test_user@r-vm3 ~ $ docker-credential-gcr configure-docker
/home/test_user/.docker/config.json configured to use this credential helper for GCR registries
test_user@r-vm3 ~ $ ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .docker  .ssh
Now,
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
version1: Pulling from my_test_project/myimage
Digest: sha256:98abc76543d2e10987f6ghi5j4321098k7654321l0987m65no4321p09qrs87654t
Status: Image is up to date for gcr.io/my_test_project/myimage:version1
gcr.io/my_test_project/myimage:version1
What I am trying to resolve:
I need .docker/config.json to appear on the VMs without SSHing in and running the docker-credential-gcr configure-docker command by hand.
How about creating a bash script, uploading it to a Cloud Storage bucket, and calling it while creating the cluster? Also, you mentioned "R". Are you talking about an R script?
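Following up on that comment: GCE runs whatever is supplied in the startup-script metadata key as root on every boot, so one sketch (assuming docker-credential-gcr is already on the image, as it evidently is in the question) would be:

```shell
#!/bin/bash
# Hypothetical startup script, attached via the startup-script metadata
# key at instance creation. It runs as root on boot and creates
# /home/test_user/.docker/config.json before anyone SSHes in.
sudo -u test_user docker-credential-gcr configure-docker
```

If your googleComputeEngineR version does not let you pass metadata at creation time, the script can also be attached afterwards with gcloud compute instances add-metadata --metadata-from-file startup-script=configure-docker.sh.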

Oh my zsh: No message if command not installed

I don't know if it's a problem with zsh or Oh My Zsh.
If I try to run a command that is not installed, no error message is shown.
Example in Oh My Zsh:
➜ ~ svn -version
➜ ~
Example in bash:
[mrclrchtr@fedora ~] $ svn -version
bash: svn: command not found
[mrclrchtr@fedora ~] $
Does anybody have an idea why there is no error message in zsh?
Edit:
Here is my abridged ~/.zshrc
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:/usr/local/bin:$PATH
# Path to your oh-my-zsh installation.
export ZSH=/home/mrclrchtr/.oh-my-zsh
# Set name of the theme to load. Optionally, if you set this to "random"
# it'll load a random theme each time that oh-my-zsh is loaded.
# See https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
ZSH_THEME="robbyrussell"
# Which plugins would you like to load? (plugins can be found in ~/.oh-my-zsh/plugins/*)
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(
git
)
source $ZSH/oh-my-zsh.sh
Edit 2
➜ ~ trap
➜ ~ whence -c command_not_found_handler
command_not_found_handler () {
    runcnf=1
    retval=127
    [ ! -S /var/run/dbus/system_bus_socket ] && runcnf=0
    [ ! -x /usr/libexec/packagekitd ] && runcnf=0
    if [ $runcnf -eq 1 ]
    then
        /usr/libexec/pk-command-not-found "$@"
        retval=$?
    fi
    return $retval
}
➜ ~
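That handler delegates to PackageKit's pk-command-not-found, which can exit silently when its lookup fails. For comparison, a minimal replacement that simply restores the standard error message (a hypothetical handler you could put in ~/.zshrc; it drops the PackageKit lookup entirely) would be:

```shell
# Minimal handler: print the usual "command not found" message to
# stderr and return 127, instead of silently swallowing the command.
command_not_found_handler() {
    printf 'zsh: command not found: %s\n' "$1" >&2
    return 127
}
```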

Can't get faketime to work with nginx

I'm trying to fake the server time using libfaketime, running nginx+PHP on Ubuntu, but with no luck.
Here is what I've done:
1) Installed faketime:
$ wget http://www.code-wizards.com/projects/libfaketime/libfaketime-0.9.6.tar.gz
$ tar -xvzf libfaketime-0.9.6.tar.gz
$ cd libfaketime-0.9.6
$ make
$ sudo make install
$ echo "#2012-12-21 12:12:12" > /etc/faketimerc
2) added the following to my nginx.conf:
env LD_PRELOAD="/usr/local/lib/faketime/libfaketime.so.1";
3) Restarted nginx and php.
When I export LD_PRELOAD manually and then run date, it works; but when I do curl localhost or go to the website, it returns the actual server date, not the one from /etc/faketimerc.
I've also tried setting LD_PRELOAD in :
/etc/environment
/etc/profile
/etc/profile.d/LD_PRELOAD.sh
/etc/default/nginx
Any ideas would be much appreciated.
Try setting LD_PRELOAD for the nginx process itself (as root), not in a user's shell:
LD_PRELOAD=/usr/local/lib/faketime/libfaketime.so.1 /path/to/nginx
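On a systemd-based setup, an alternative is a drop-in for the nginx unit, so the daemon itself (rather than a login shell) gets the variables. A sketch only: the file path is hypothetical, and it uses libfaketime's FAKETIME variable directly instead of /etc/faketimerc:

```ini
# /etc/systemd/system/nginx.service.d/faketime.conf (hypothetical path)
[Service]
Environment="LD_PRELOAD=/usr/local/lib/faketime/libfaketime.so.1"
Environment="FAKETIME=@2012-12-21 12:12:12"
```

Then run systemctl daemon-reload followed by systemctl restart nginx so the drop-in takes effect.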
Try creating a text file (e.g. faketime.txt) and put the time you want in it,
e.g.: 2015-06-27 18:30:00
Then put the following commands in the .config file:
set.default.LD_PRELOAD=/usr/local/lib/faketime/libfaketime.so.1
set.default.FAKETIME_TIMESTAMP_FILE=/home/Documents/faketime.txt

Creating symbolic links in AIX 6.1 server

I'm trying to create symlinks using the following commands:
root:d2stud -> $ ln -s /usr/lib/libssl.a /opt/freeware/lib/libssl.a
ln: 0653-421 /opt/freeware/lib/libssl.a exists.
Specify -f to remove /opt/freeware/lib/libssl.a before linking.
(/stud/config/git_install)
root:d2stud -> $
root:d2stud -> $ ln -s /usr/lib/libcrypto.a /opt/freeware/lib/libcrypto.a
ln: 0653-421 /opt/freeware/lib/libcrypto.a exists.
Specify -f to remove /opt/freeware/lib/libcrypto.a before linking.
(/stud/config/git_install)
root:d2stud -> $
I did not understand what I need to remove, as instructed by the error message.
Can anyone explain how I can resolve the error?
You can move the existing file aside:
mv <orig-path> <new-path>
i.e.
mv /opt/freeware/lib/libssl.a /opt/freeware/lib/libssl.a-orig
and then create the symlink. If or when you want to go back to how it was, remove what you created at /opt/freeware/lib/libssl.a (such as the symlink you are trying to create) and move the original back:
mv /opt/freeware/lib/libssl.a-orig /opt/freeware/lib/libssl.a
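Alternatively, the error message itself points at the shortcut: ln -f removes the existing destination before linking. A demonstration with scratch paths (not the real /opt files):

```shell
# ln -sf replaces an existing file with the symlink in one step,
# which is exactly what the 0653-421 message is asking for.
dir=$(mktemp -d)
echo "old library" > "$dir/libssl.a"        # stands in for the file already present
echo "real library" > "$dir/real-libssl.a"  # stands in for /usr/lib/libssl.a
ln -sf "$dir/real-libssl.a" "$dir/libssl.a"
# "$dir/libssl.a" is now a symlink pointing at real-libssl.a
```

Note that moving the original aside (as above) is the safer option when you may need to restore it later.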
