Git subtree & remote information not available to other users - git-subtree

Git subtree & remote information is missing from the .git/config file in every workspace other than the one where the subtree was originally committed.
Other users who pulled the git repo cannot see the remote repo information in their .git/config file,
so they are not able to update or modify the subtrees.
I used the following commands to add the subtree:
$ git remote add -f github.com/google/cadvisor https://github.com/google/cadvisor.git
$ git merge -s ours --no-commit github.com/google/cadvisor/master
$ git read-tree --prefix=github.com/google/cadvisor -u github.com/google/cadvisor/master
$ git commit -m ""
What is the best way to get it working?
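For context: remotes live in each clone's .git/config, which git push/pull never transmits, so every collaborator has to recreate the remote locally. Below is a minimal sketch of what each user could run, using the same remote name and URL as above; the subtree merge strategy is standard Git, but this is a suggested workflow rather than something from the question:
$ git remote add -f github.com/google/cadvisor https://github.com/google/cadvisor.git
$ git pull -s subtree github.com/google/cadvisor master
Alternatively, commit a small bootstrap script containing those git remote add commands to the repository itself, so fresh clones can set themselves up.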

Related

Your environment may not have any index with Wazuh's alerts

I'm getting this error when I try to reinstall ELK with Wazuh.
We need more precise information about your use case in order to help you troubleshoot this problem.
I recommend you follow the uninstallation guide from the official documentation (https://documentation.wazuh.com/current/user-manual/uninstall/elastic-stack.html) and, after that, install it again (https://documentation.wazuh.com/current/installation-guide/more-installation-alternatives/elastic-stack/all-in-one-deployment/unattended-installation.html).
If you want to preserve your configuration, make sure to back up the following files:
cp -p /var/ossec/etc/client.keys /var/ossec_backup/etc/client.keys
cp -p /var/ossec/etc/ossec.conf /var/ossec_backup/etc/ossec.conf
cp -p /var/ossec/queue/rids/sender_counter /var/ossec_backup/queue/rids/sender_counter
If you have made local changes to any of the following, back them up as well:
cp -p /var/ossec/etc/local_internal_options.conf /var/ossec_backup/etc/local_internal_options.conf
cp -p /var/ossec/etc/rules/local_rules.xml /var/ossec_backup/etc/rules/local_rules.xml
cp -p /var/ossec/etc/decoders/local_decoder.xml /var/ossec_backup/etc/decoders/local_decoder.xml
If you use the centralized configuration, you must also preserve:
cp -p /var/ossec/etc/shared/default/agent.conf /var/ossec_backup/etc/shared/agent.conf
Optionally, the following can also be backed up to preserve alert log files and the syscheck/rootcheck databases:
cp -rp /var/ossec/logs/archives/* /var/ossec_backup/logs/archives/
cp -rp /var/ossec/logs/alerts/* /var/ossec_backup/logs/alerts/
cp -rp /var/ossec/queue/rootcheck/* /var/ossec_backup/queue/rootcheck/
cp -rp /var/ossec/queue/syscheck/* /var/ossec_backup/queue/syscheck/
After reinstalling, you need to place those files back in their original paths.
Also, in case you want to preserve your indexes across the reinstall, consider making a backup of them following this blog: https://wazuh.com/blog/index-backup-management/.
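If it helps, the restore step after reinstalling can look like this (a minimal sketch that mirrors the backup commands above; the wazuh-manager service name assumes a systemd-based installation):
# Restore the backed-up configuration into the fresh install
cp -p /var/ossec_backup/etc/client.keys /var/ossec/etc/client.keys
cp -p /var/ossec_backup/etc/ossec.conf /var/ossec/etc/ossec.conf
# Restart the manager so the restored configuration is picked up
systemctl restart wazuh-manager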

How to delete all contents of local Artifactory repository via REST API?

I'm using a local repository as a staging repo and would like to be able to clear the whole staging repo via REST. How can I delete the contents of the repo without deleting the repo itself?
Since I have a similar requirement in one of my environments, I'd like to offer a possible solution approach.
It is assumed the JFrog Artifactory instance has a local repository called JFROG-ARTIFACTORY, which holds the latest JFrog Artifactory Pro installation RPM(s). For listing and deleting, I've created the following script:
#!/bin/bash
# The logged-in user is also used as the admin account for the Artifactory REST API
A_ACCOUNT=$(who am i | cut -d " " -f 1)
LOCAL_REPO=$1
PASSWORD=$2
STAGE=$3
URL="example.com"
# Check whether a stage was provided; if not, default to PROD
if [ -z "$STAGE" ]; then
    STAGE="repository-prod"
fi
# Going to list all files within the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-FileList
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X GET "https://${STAGE}.${URL}/artifactory/api/storage/${LOCAL_REPO}/?list&deep=1" \
-w "\n\n%{http_code}\n"
echo
# Going to delete all files in the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteItem
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X DELETE "https://${STAGE}.${URL}/artifactory/${LOCAL_REPO}/" \
-w "\n\n%{http_code}\n"
echo
So after calling
./Scripts/deleteRepository.sh JFROG-ARTIFACTORY Pa\$\$w0rd! repository-dev
against the development instance, it listed all files in the local repository JFROG-ARTIFACTORY (the JFrog Artifactory Pro installation RPMs), deleted them, but left the local repository itself in place.
You may change and enhance the script for your needs; also have a look at How can I completely remove artifacts from Artifactory?
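If you need finer-grained control than deleting at the repository root, one possible variation (a sketch reusing the variables from the script above and assuming jq is installed; the uri field comes from the File List response documented at the link above) is to iterate over the listed files and delete them one by one:
# Sketch: delete listed files individually instead of the whole root
FILES=$(curl --silent -u"${A_ACCOUNT}:${PASSWORD}" \
    "https://${STAGE}.${URL}/artifactory/api/storage/${LOCAL_REPO}/?list&deep=1" \
    | jq -r '.files[].uri')
for f in $FILES; do
    curl --silent -u"${A_ACCOUNT}:${PASSWORD}" \
        -X DELETE "https://${STAGE}.${URL}/artifactory/${LOCAL_REPO}${f}"
done
This makes it easy to add a filter (for example on file age or name pattern) before the DELETE call.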

How can I auto-create .docker folder in the home directory when spinning up VM Cluster (gce_vm_cluster) on gcloud through R?

I create VMs using the following command in R:
vms <- gce_vm_cluster(vm_prefix=vm_base_name,
cluster_size=cluster_size,
docker_image = my_docker,
ssh_args = list(username="test_user",
key.pub="/home/test_user/.ssh/google_compute_engine.pub",
key.private="/home/test_user/.ssh/google_compute_engine"),
predefined_type = "n1-highmem-2")
Now when I SSH into the VMs, I do not find the .docker folder in the home directory:
test_user@test_server_name:~$ gcloud beta compute --project "my_test_project" ssh --zone "us-central1-a" "r-vm3"
test_user@r-vm3 ~ $ ls -a
. .. .bash_history .bash_logout .bash_profile .bashrc .ssh
Now the command below gives an error (obviously):
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
Unable to find image 'gcr.io/my_test_project/myimage:version1' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I need to run the docker-credential-gcr configure-docker command to get the folder/file .docker/config.json:
test_user@r-vm3 ~ $ docker-credential-gcr configure-docker
/home/test_user/.docker/config.json configured to use this credential helper for GCR registries
test_user@r-vm3 ~ $ ls -a
. .. .bash_history .bash_logout .bash_profile .bashrc .docker .ssh
Now,
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
version1: Pulling from my_test_project/myimage
Digest: sha256:98abc76543d2e10987f6ghi5j4321098k7654321l0987m65no4321p09qrs87654t
Status: Image is up to date for gcr.io/my_test_project/myimage:version1
gcr.io/my_test_project/myimage:version1
What I am trying to resolve:
I need .docker/config.json to be present on the VMs without SSHing in and running the docker-credential-gcr configure-docker command manually.
How about creating a bash script, uploading it to a Cloud Storage bucket, and calling it while creating the cluster? Also, you mentioned "R": are you talking about an R script?
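Building on that comment, one possible direction is a startup script that pre-creates the credential-helper config so docker-credential-gcr configure-docker never has to be run interactively. This is a sketch: the username and home path are taken from the question, the JSON mirrors what the configure-docker command reports writing, and whether gce_vm_cluster can attach it as startup-script metadata is something to verify in the googleComputeEngineR documentation:
#!/bin/bash
# Hypothetical GCE startup script: pre-create the Docker credential
# helper config that docker-credential-gcr configure-docker would write.
# The username/home path come from the question; adjust as needed.
USER_HOME=/home/test_user
mkdir -p "${USER_HOME}/.docker"
cat > "${USER_HOME}/.docker/config.json" <<'EOF'
{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "eu.gcr.io": "gcr",
    "asia.gcr.io": "gcr"
  }
}
EOF
chown -R test_user:test_user "${USER_HOME}/.docker"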

Git post receive hook with sudo -u not working for some users

We have a website we're developing with WP-Engine. To simplify the processes I've set up the git repository for the site to automatically push code changes up to the WP-Engine staging area, using a Post Receive hook that looks like this:
#!/bin/bash -x
PUSH_AS_USER="admin"
sudo -u $PUSH_AS_USER git push wp-engine-staging master
I've also made it so that any user in the admin group can sudo -u admin git without typing the password (for reference, I added the following to /etc/sudoers):
%admin ALL=(admin) NOPASSWD: /usr/bin/git
When I push to the repository, it fires the post-receive hook no problem; however, for some reason it only works for me. Two other users get the following error:
[username@server repository]$ sudo -u admin git push wp-engine-staging master
fatal: unable to access '/home/username/.config/git/config': Permission denied
The weird thing is, I don't have this file for my user, and mine works OK. Additionally, if I create the file for those users, I can't seem to give admin permission to read it, even if I add admin to the user's own group and give the group read permission on every directory down to the file (i.e. /home/username/.config/git/ and the config file itself).
As another example of this weird issue:
[daniel@server repository]$ sudo su username
[username@server repository]$ sudo -u admin git config --global --list
fatal: unable to access '/home/username/.config/git/config': Permission denied
[username@server repository]$ exit
[daniel@server repository]$ sudo -u admin git config --global --list
user.email=daniel@example.com
user.name=Daniel
[daniel@server repository]$ cat /home/daniel/.config/git/config
cat: /home/daniel/.config/git/config: No such file or directory
It's rather messing with our workflow. Any ideas?
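One plausible explanation (an educated guess, not something confirmed in the thread): depending on the sudoers environment policy, sudo can leave the invoking user's HOME in place, so git running as admin still tries to read /home/username/.config/git/config and is denied. A minimal sketch of the hook using sudo's -H flag, which sets HOME to the target user's home directory:
#!/bin/bash -x
# -H makes sudo point HOME at admin's home directory, so git reads
# admin's own global config instead of the pushing user's files.
PUSH_AS_USER="admin"
sudo -H -u $PUSH_AS_USER git push wp-engine-staging master
Test this with one of the affected users before relying on it, since the exact behaviour depends on your sudoers env settings.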

Git Push from local repo to remote repo on a server for wordpress deployment

Despite finding minimal consistent documentation on this topic, my understanding is that when you push a git commit from a local repository to a remote repository on a server (my remote repository is on a Bluehost server), the files are stored compressed and therefore are not immediately visible on the server in their normal form.
One method I have successfully used to see these files "uncompressed" is to clone the repository. However, I am trying to use the local repo and remote repo in a workflow to maintain a WordPress website. When I push a commit from the local repo to the remote repo, I am not able to access the site through the browser. Is there an additional step I am missing in order to access the uncompressed version of the files through a browser?
Do I have to clone the repository to the same folder after each push?
I have a website on Bluehost and this is very simple to set up. First you need to go into your cPanel and request SSH access.
Then follow this guide for setting up your private key (stays on your computer) and public key (goes in .ssh/authorized_keys on the Bluehost server):
http://git-scm.com/book/en/Git-on-the-Server-Setting-Up-the-Server
I set up a directory under my home directory called git and created a test.git project. Note that I'm using ~/test as my working tree, as I don't want to push files into my www. You'll use ~/www.
*****@******.info [~]#
*****@******.info [~/git]# mkdir test.git
*****@******.info [~/git]# cd test.git
*****@******.info [~/git/test.git]# pwd
/home1/******/git/test.git
*****@******.info [~/git/test.git]# git init --bare
Initialized empty Git repository in /home1/*******/git/test.git/
*****@******.info [~/git/test.git]# cd hooks
*****@******.info [~/git/test.git/hooks]# vi post-receive
The post-receive file:
#!/bin/sh
GIT_WORK_TREE=/home1/*******/test git checkout -f
Save the file with :x
*****@******.info [~/git/test.git/hooks]# chmod +x post-receive
*****@******.info [~/git/test.git/hooks]# cd ~
*****@******.info [~]# git init test
Initialized empty Git repository in /home1/*******/test/.git/
*****@******.info [~]# exit
Back on my local machine:
[nedwidek@yule ~]$ git init test.git
Initialized empty Git repository in /home/nedwidek/test.git/.git/
[nedwidek@yule ~]$ cd test.git
[nedwidek@yule test.git]$ touch testfile.txt
[nedwidek@yule test.git]$ git add .
[nedwidek@yule test.git]$ git commit -m "testing" .
[master (root-commit) 1d6697c] testing
0 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 testfile.txt
[nedwidek@yule test.git]$ git remote add origin *****@******.info:/home1/*****/git/test.git
[nedwidek@yule test.git]$ git push -u origin master
Counting objects: 5, done.
Writing objects: 100% (3/3), 270 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To *****@******.info:/home1/*******/git/test.git
f144186..0fd10f8 master -> master
Branch master set up to track remote branch master from origin.
I checked and testfile.txt was placed in ~/test/.
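For the WordPress case, the only change should be the work tree: point it at the public web root instead of ~/test. A sketch; the real home path is masked in the output above, so the path below is a hypothetical placeholder:
#!/bin/sh
# Hypothetical post-receive hook for the WordPress site: check pushed
# files out into the public web root instead of the ~/test working tree.
GIT_WORK_TREE=/home1/USERNAME/www git checkout -f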
