Your environment may not have any index with Wazuh's alerts - Kibana

I'm getting this error when trying to reinstall ELK with Wazuh.

We need more precise information about your use case in order to help you troubleshoot this problem.
I recommend you follow the uninstallation guide from the official documentation https://documentation.wazuh.com/current/user-manual/uninstall/elastic-stack.html, and after that, install it again https://documentation.wazuh.com/current/installation-guide/more-installation-alternatives/elastic-stack/all-in-one-deployment/unattended-installation.html.
If you want to preserve your configuration, make sure to back up the following files:
cp -p /var/ossec/etc/client.keys /var/ossec_backup/etc/client.keys
cp -p /var/ossec/etc/ossec.conf /var/ossec_backup/etc/ossec.conf
cp -p /var/ossec/queue/rids/sender_counter /var/ossec_backup/queue/rids/sender_counter
If you have made local changes to any of the following, back them up as well:
cp -p /var/ossec/etc/local_internal_options.conf /var/ossec_backup/etc/local_internal_options.conf
cp -p /var/ossec/etc/rules/local_rules.xml /var/ossec_backup/etc/rules/local_rules.xml
cp -p /var/ossec/etc/decoders/local_decoder.xml /var/ossec_backup/etc/decoders/local_decoder.xml
If you use centralized configuration, you must also preserve:
cp -p /var/ossec/etc/shared/default/agent.conf /var/ossec_backup/etc/shared/agent.conf
Optionally, the following files can be backed up to preserve alert log files and the syscheck/rootcheck databases:
cp -rp /var/ossec/logs/archives/* /var/ossec_backup/logs/archives/
cp -rp /var/ossec/logs/alerts/* /var/ossec_backup/logs/alerts/
cp -rp /var/ossec/queue/rootcheck/* /var/ossec_backup/queue/rootcheck/
cp -rp /var/ossec/queue/syscheck/* /var/ossec_backup/queue/syscheck/
After reinstalling, you need to place those files back in their original paths.
Also, in case you want to preserve your indices after reinstalling, consider backing them up following this blog post: https://wazuh.com/blog/index-backup-management/.
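Note that the cp commands above will fail if the backup directories do not exist yet, so create the tree first. A minimal sketch for the three essential files (same paths as above; adjust as needed):
mkdir -p /var/ossec_backup/etc /var/ossec_backup/queue/rids
cp -p /var/ossec/etc/client.keys /var/ossec_backup/etc/client.keys
cp -p /var/ossec/etc/ossec.conf /var/ossec_backup/etc/ossec.conf
cp -p /var/ossec/queue/rids/sender_counter /var/ossec_backup/queue/rids/sender_counter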

Related

sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set after chmod 755

What I tried is this:
https://stackoverflow.com/a/29903645/4983983
I executed this:
n=$(which node); \
n=${n%/bin/node}; \
chmod -R 755 $n/bin/*; \
sudo cp -r $n/{bin,lib,share} /usr/local
but now I cannot execute, for example, the sudo su command; I get the following error:
sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
I am not sure how I can undo it.
EDIT:
Regarding @Bodo's answer:
sudo rpm --setperms mkdir
sudo rpm --setugids mkdir
cd /opt
mkdir test13121
mkdir: cannot create directory ‘test13121’: Permission denied
BUT:
sudo chown root:root /usr/bin/mkdir && sudo chmod 4755 /usr/bin/mkdir
mkdir test912121
The difficulty is to find out the normal permissions of the files you have changed.
You can try to reset the file permissions based on the information in the package management.
See e.g. https://www.cyberciti.biz/tips/reset-rhel-centos-fedora-package-file-permission.html
Citation from this page:
Reset the permissions of all installed RPM packages
You need to use a combination of rpm and a shell for loop as follows:
for p in $(rpm -qa); do rpm --setperms $p; done
for p in $(rpm -qa); do rpm --setugids $p; done
I suggest reading the linked page completely and trying this for a single package first.
I guess you can ask rpm for the name of the package that contains e.g. /usr/bin/sudo, and check whether the commands work for that single package.
Edit: If the setuid or setgid bits are not correct, you can try to change the order of the commands and use --setugids before --setperms. (In some cases chown resets setuid or setgid bits; don't know if this applies to the rpm commands.)
There are sources on the internet that propose to combine --setugids and --setperms in one command, or to use the option -a instead of a loop, like
rpm -a --setperms
Read the documentation. (I don't have an RPM based system where I could test the commands.)
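For example, a minimal sketch of that suggestion on an RPM-based system (untested here, as noted above; rpm -qf reports which package owns a file):
# Run as root, e.g. from a root login or recovery shell, since sudo itself is broken:
pkg=$(rpm -qf /usr/bin/sudo)   # name of the package that owns /usr/bin/sudo
rpm --setugids "$pkg"          # reset owner and group first
rpm --setperms "$pkg"          # then reset permissions, restoring the setuid bit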

Mount EFS to wp-content on Elastic Beanstalk

So I'm having a problem setting up a WordPress site on EB. I got the EFS to mount correctly on wp-content/uploads/wpfiles (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html); however, this only allows the pages to be stored and not the plugins. Is it possible to mount the entire wp-content folder onto EFS? I've tried and so far failed.
I'm not sure if this issue was resolved and it passed silently. I'm having the same issue as you, but with a different error. My knowledge is fairly limited, so take what I say with a grain of salt: according to what I saw in your log, the problem is that your instance can't see the server. I think your EB application could be getting deployed in a different Availability Zone than your EFS. What I mean is that maybe you have mount targets for AZs a, b, and d, and your EB is getting deployed in AZ c. I hope this helps.
I tried a different approach (it basically does the same thing, but I'm manually linking each of the subfolders instead of the wp-content folder). For it to work I deleted the original folders inside /var/app/ondeck (which will eventually get copied to /var/app/current, the folder that gets served). Of course, once this is done your WordPress won't work since it doesn't have any themes; the solution here is to quickly log in to the EC2 instance on which your Elastic Beanstalk app is running and manually copy the contents to the mounted EFS (in my case the /wpfiles folder). To connect to the EC2 instance (you can find the instance ID under your EB health configuration) you can follow this link, and to mount your EFS you can follow this link. Of course, if the config works you won't have to mount it, since it would already be mounted, though empty. Here is the content of my config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    EFS_NAME: '`{"Ref" : "FileSystem"}`'
    MOUNT_DIRECTORY: '/wpfiles'
    REGION: '`{"Ref": "AWS::Region"}`'
packages:
  yum:
    nfs-utils: []
    jq: []
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      # Read the settings defined in option_settings above
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.REGION')
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      MOUNT_DIRECTORY=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MOUNT_DIRECTORY')
      # Create the mount point only after MOUNT_DIRECTORY is set, then mount the EFS
      mkdir -p $MOUNT_DIRECTORY
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME.efs.${EFS_REGION}.amazonaws.com:/ $MOUNT_DIRECTORY || true
      # Make sure the shared folders exist and are writable by the web app user
      mkdir -p $MOUNT_DIRECTORY/uploads
      mkdir -p $MOUNT_DIRECTORY/plugins
      mkdir -p $MOUNT_DIRECTORY/themes
      chown webapp:webapp -R $MOUNT_DIRECTORY/uploads
      chown webapp:webapp -R $MOUNT_DIRECTORY/plugins
      chown webapp:webapp -R $MOUNT_DIRECTORY/themes
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
container_commands:
  01-rm-wp-content-uploads:
    command: rm -rf /var/app/ondeck/wp-content/uploads && rm -rf /var/app/ondeck/wp-content/plugins && rm -rf /var/app/ondeck/wp-content/themes
  02-symlink-uploads:
    command: ln -snf $MOUNT_DIRECTORY/uploads /var/app/ondeck/wp-content/uploads && ln -snf $MOUNT_DIRECTORY/plugins /var/app/ondeck/wp-content/plugins && ln -snf $MOUNT_DIRECTORY/themes /var/app/ondeck/wp-content/themes
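For reference, Elastic Beanstalk picks such files up from the .ebextensions directory of your application source bundle; the filename (e.g. .ebextensions/efs-mount.config) is arbitrary as long as it ends in .config.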
I'm using another config file to create my EFS as in here. In case you have already created your EFS, you must change EFS_NAME: '`{"Ref" : "FileSystem"}`' to EFS_NAME: id_of_your_EFS.
I hope this helps user3738338.
You can do it following this link - https://github.com/aws-samples/eb-php-wordpress/blob/master/.ebextensions/efs-mount.config
Just note that it uses uploads; you can change it to wp-content.

What is the difference between cp -p and cp -a in UNIX?

Hi everyone, from the subject you'll have an idea of what I want to know. But before that: if you know any emulator or site where I can put in my commands to test, please let me know, so I can try it on my own first. I would like a UNIX test environment, given that I don't have one set up on my laptop. Thank you!
With the -p option, the copy has the same modification time, the same access time, and the same permissions as the original. It also has the same owner and group as the original, if the user doing the copy has the permission to create such files.
The -a option means -R and -p, plus a few other preservation options. It attempts to make a copy that's as close to the original as possible: same directory tree, same file types, same contents, same metadata (times, permissions, extended attributes, etc.).
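A quick illustration with GNU coreutils, where -a is documented as equivalent to -dR --preserve=all:
# copy one file, keeping timestamps, permissions and (if permitted) owner/group
cp -p notes.txt /backup/notes.txt
# copy a whole tree, recursing, keeping symlinks as symlinks and preserving all metadata
cp -a project /backup/project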

What is the difference between mkdir and mkdir -p?

I tried to create a folder in my local git repo using mkdir. It didn't work, but
mkdir -p works.
Why?
I'm using macOS by the way. I checked the definition of mkdir -p, but I still don't quite understand it.
Say you're in the directory:
/home/Users/john
And you want to make three new subdirectories to end up with:
/home/Users/john/long/dir/path
While staying in "/home/Users/john", this will fail:
mkdir long/dir/path
You would have to make three separate calls:
mkdir long
mkdir long/dir
mkdir long/dir/path
The reason is that by default mkdir only creates a directory whose parent already exists. By adding the "-p" flag, mkdir will make the entire chain in one pass. That is, while this won't work:
mkdir long/dir/path
this will work:
mkdir -p long/dir/path
and create all three directories.
From the help of mkdir:
-p, --parents no error if existing, make parent directories as needed
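Note the first half of that description: -p also means mkdir won't complain if the directory is already there, so it is safe to repeat:
mkdir -p long/dir/path   # creates all three levels
mkdir -p long/dir/path   # second call succeeds silently; without -p it would fail with "File exists"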
So you may have failed just because you tried to create both the parent and the child folder in one shot without the -p option.
That flag will create parent directories when necessary. You were probably trying to create something with subdirectories and failing due to the missing -p flag.

rsync - create all missing parent directories?

I'm looking for an rsync-like program which will create any missing parent directories on the remote side.
For example, if I have /top/a/b/c/d on one server and only /top/a exists on the remote server, I want to copy d to the remote server and have the b and c directories created as well.
The command:
rsync /top/a/b/c/d remote:/top/a/b/c
won't work because /top/a/b doesn't exist on the remote server. And if it did exist, the file d would get copied to the path /top/a/b/c.
This is possible to do with rsync using --include and --exclude switches, but it is very involved, e.g.:
rsync -v -r a dest:dir \
--include 'a/b' \
--include 'a/b/c' \
--include 'a/b/c/d' \
--include 'a/b/c/d/e' \
--exclude 'a/*' \
--exclude 'a/b/*' \
--exclude 'a/b/c/*' \
--exclude 'a/b/c/d/*'
will only copy a/b/c/d/e to dest:dir/a/b/c/d/e even if the intermediate directories have files. (Note - the includes must precede the excludes.)
Are there any other options?
You may be looking for
rsync -aR
for example:
rsync -a --relative /top/a/b/c/d remote:/
See also this trick in another question.
rsync -aq --rsync-path='mkdir -p /tmp/imaginary/ && rsync' file user@remote:/tmp/imaginary/
From http://www.schwertly.com/2013/07/forcing-rsync-to-create-a-remote-path-using-rsync-path/, but don't copy and paste from there, his syntax is butchered.
It lets you execute an arbitrary command to set up the path before the remote rsync executable runs.
As of version 3.2.3 (6 Aug 2020), rsync has a flag for this purpose.
From the rsync manual page (man rsync):
--mkpath create the destination's path component
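With it, the original example reduces to something like this (assuming the rsync on both ends is new enough to support the option):
rsync --mkpath /top/a/b/c/d remote:/top/a/b/c/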
I suggest that you enforce the existence manually:
ssh user@remote mkdir -p /top/a/b/c
rsync /top/a/b/c/d remote:/top/a/b/c
This creates the target folder if it does not exist already.
According to https://unix.stackexchange.com/a/496181/5783, since rsync 2.6.7, --relative works if you use . to anchor the starting parent directory to create at the destination:
derek@DESKTOP-2F2F59O:~/projects/rsync$ mkdir --parents top1/a/b/c/d
derek@DESKTOP-2F2F59O:~/projects/rsync$ mkdir --parents top2/a
derek@DESKTOP-2F2F59O:~/projects/rsync$ rsync --recursive --relative --verbose top1/a/./b/c/d top2/a/
sending incremental file list
b/
b/c/
b/c/d/
sent 99 bytes received 28 bytes 254.00 bytes/sec
total size is 0 speedup is 0.00
--relative did not work for me since I had a different setup.
Maybe I just didn't understand how --relative works, but I found that the
ssh remote mkdir -p /top/a/b/c
rsync /top/a/b/c/d remote:/top/a/b/c
is easy to understand and does the job.
I was looking for a better solution, but this one seems better suited when you have too many subdirectories to create manually.
Simply use cp as an intermediate step with the --parents option:
cp --parents /your/path/sub/dir/ /tmp/localcopy
rsync [options] /tmp/localcopy/* remote:/destination/path/
cp --parents will create the structure for you.
You can call it from any subfolder if you want only one subset of the parent folders to be copied.
A shorter way in Linux to create rsync destination paths is to use the '$_' special variable. (I think, but cannot confirm, that it is the same in OSX.)
'$_' holds the value of the last argument of the previous command executed. So the question could be answered with:
ssh remote mkdir -p /top/a/b/c/ && rsync -avz /top/a/b/c/d remote:$_
