I'm troubleshooting a disk usage issue on a CentOS system (one of the partitions was growing too fast), and I notice that one of my directories takes up 3.1 GB:
$ du -hs /var/log/mongodb/
3.1G /var/log/mongodb/
$ df -h /var/log/mongodb/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-log 4.0G 3.7G 324M 93% /var/log
However, when I analyse the directory contents, I realize it holds only one file, and that file is not that large (2.1 GB):
$ ls -larth /var/log/mongodb/
total 3.1G
drwxr-xr-x 2 mongod mongod 24 Jul 2 2019 .
drwxr-xr-x. 22 root root 4.0K May 1 03:50 ..
-rw-r----- 1 mongod mongod 2.1G May 1 08:41 mongod.log
How can this happen?
Output of the stat command:
$ stat /var/log/mongodb/mongod.log
File: ‘/var/log/mongodb/mongod.log’
Size: 2448779949 Blocks: 4880912 IO Block: 4096 regular file
Device: fd08h/64776d Inode: 6291527 Links: 1
Access: (0640/-rw-r-----) Uid: ( 996/ mongod) Gid: ( 994/ mongod)
Access: 2020-05-01 10:02:57.136265481 +0000
Modify: 2020-05-04 10:05:37.409626901 +0000
Change: 2020-05-04 10:05:37.409626901 +0000
Birth: -
Another example on another host:
$ df -kh | grep var
/dev/dm-3 54G 52G 2.1G 97% /var
$ du -khs /var/
25G /var/
Is this somehow related to the difference between the file size and the actual space occupied on disk (due to disk blocks)? If so, how can I perform a defragmentation/optimization?
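For what it's worth, a quick way to compare the apparent file size with the blocks actually allocated (a sketch, assuming GNU coreutils):
du -sh --apparent-size /var/log/mongodb/   # sum of file lengths, as reported by ls/stat
du -sh /var/log/mongodb/                   # sum of allocated blocks, which is what df accounts against
stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' /var/log/mongodb/mongod.log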
I am running Suricata in IDS (af-packet) mode on Ubuntu 20.04.5 LTS (Focal Fossa), deployed as the root user:
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
Following the Suricata "Adding Your Own Rules" doc, I have added a very basic alerting rule (kept simple to ease troubleshooting) with the first available sid, 1000000, from the custom rules range:
########### Test Rules #############
alert ssh any any -> xxx.xxx.60.6 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000000;)
The .rules file for the local rules has sufficient permissions (matching those of suricata.rules) and is owned by root:
ls -halt /var/lib/suricata/rules/
total 22M
-rw-r--r-- 1 root root 3.2K Oct 17 00:00 classification.config
drwxr-x--- 2 root root 4.0K Oct 17 00:00 .
-rw-r--r-- 1 root root 22M Oct 17 00:00 suricata.rules
-rw-r--r-- 1 root root 210 Oct 13 21:45 local.rules
I ensured that the rules are added to the suricata.yaml config and that the process is restarted:
cat /etc/suricata/suricata.yaml | grep "rule-files" -A 5 -B 5
##
#default-rule-path: /var/lib/suricata/rules
default-rule-path: /etc/suricata/rules
rule-files:
- suricata.rules
- /var/lib/suricata/rules/local.rules
- /etc/suricata/rules/*.rules
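One quick way to confirm that the YAML parses and that these rule files actually load is Suricata's configuration test mode (a minimal sketch; -T validates the config and rules, then exits):
suricata -T -c /etc/suricata/suricata.yaml -v    # verbose config/rule test, exits non-zero on errors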
AFAIK, the custom ruleset should be loaded into the suricata.rules file? Therefore, I am running the following check to verify what I am reporting:
cat /var/lib/suricata/rules/suricata.rules | grep sid:1000000
I can generate test traffic and verify with tcpdump that it matches the rule, but I never see a signature match in fast.log (which is logging other signature-matching traffic):
cat /var/log/suricata/fast.log | grep 1000000
I see no errors following startup of the service that would indicate a problem:
systemctl status suricata.service
● suricata.service - LSB: Next Generation IDS/IPS
Loaded: loaded (/etc/init.d/suricata; generated)
Active: active (running) since Mon 2022-10-17 13:11:39 UTC; 8h ago
Docs: man:systemd-sysv-generator(8)
Process: 2184275 ExecStart=/etc/init.d/suricata start (code=exited, status=0/SUCCESS)
Tasks: 78 (limit: 618963)
Memory: 25.2G
CGroup: /system.slice/suricata.service
└─2184295 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -v>
Oct 17 13:11:39 sec3 systemd[1]: Starting LSB: Next Generation IDS/IPS...
Oct 17 13:11:39 sec3 suricata[2184275]: Starting suricata in IDS (af-packet) mode... done.
Oct 17 13:11:39 sec3 systemd[1]: Started LSB: Next Generation IDS/IPS.
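As a side note on my restart step: a live rule reload should also pick up changes to the rule files without a full service restart (a sketch, assuming the default unix socket is enabled):
suricatasc -c reload-rules                 # ask the running engine to reload its ruleset
kill -USR2 "$(cat /var/run/suricata.pid)"  # equivalent: SIGUSR2 triggers a ruleset reload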
Can somebody point out where I may be going wrong here?
TYIA!
I want to use a Gluster replication volume for SQLite DB storage.
However, when the '.db' file is updated, Linux does not detect the change, so synchronization between the bricks does not happen.
Is there a way to force sync?
It is not synchronized even if I use the gluster volume heal command.
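For reference, the heal invocations I tried look roughly like this (a sketch of the standard commands, using the sync_test volume created below):
gluster volume heal sync_test         # trigger healing of entries that need it
gluster volume heal sync_test full    # force a full heal crawl of the volume
gluster volume heal sync_test info    # list files still pending heal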
< My Gluster volume status >
[root@be-k8s-worker-1 common]# gluster volume create sync_test replica 2 transport tcp 10.XX.XX.X1:/home/common/sync_test 10.XX.XX.X2:/home/common/sync_test
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: sync_test: success: please start the volume to access data
[root@be-k8s-worker-1 common]# gluster volume start sync_test
volume start: sync_test: success
[root@be-k8s-worker-1 sync_test]# gluster volume status sync_test
Status of volume: sync_test
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.XX.XX.X1:/home/common/sync_test 49155 0 Y 1142
Brick 10.XX.XX.X2:/home/common/sync_test 49155 0 Y 2134
Self-heal Daemon on localhost N/A N/A Y 2612
Self-heal Daemon on 10.XX.XX.X1 N/A N/A Y 4257
Task Status of Volume sync_test
------------------------------------------------------------------------------
There are no active volume tasks
< Problem Case >
[root@be-k8s-worker-1 sync_test]# ls -al ## client 1
total 20
drwxrwxrwx. 4 root root 122 Oct 17 10:51 .
drwx------. 8 sbyun domain users 4096 Oct 17 10:50 ..
-rw-r--r--. 1 root root 0 Oct 17 10:35 test
-rwxr--r--. 1 sbyun domain users 16384 Oct 17 10:52 test.d
[root@be-k8s-worker-1 sync_test2]# ls -al ## client2
total 20
drwxrwxrwx. 4 root root 122 Oct 17 10:51 .
drwx------. 8 sbyun domain users 4096 Oct 17 10:50 ..
-rw-r--r--. 1 root root 0 Oct 17 10:35 test
-rwxr--r--. 1 sbyun domain users 16384 Oct 17 10:52 test.db
## diff -> No result
[root@be-k8s-worker-1 user]# diff sync_test/test.db sync_test2/test.db
But if I compare the same file on Windows:
(screenshot: comparing the file on Windows)
My SQLite database was set to WAL mode, so the -wal file was being updated and the .db file was not immediately synced.
I turned off WAL Mode with this command:
PRAGMA journal_mode=DELETE;
I confirmed that it was synced immediately.
According to the SQLite documentation, WAL mode doesn't work over a network filesystem:
All processes using a database must be on the same host computer; WAL does not work over a network filesystem.
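For reference, checking and switching the journal mode from the shell looks like this (a sketch, assuming the sqlite3 CLI and the path to the .db file on the mounted volume):
sqlite3 /path/to/test.db 'PRAGMA journal_mode;'          # prints "wal" while WAL is active
sqlite3 /path/to/test.db 'PRAGMA journal_mode=DELETE;'   # switch back to a rollback journal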
For some reason, in our CI, we need to run Node tests inside a Docker container (including fetching dependencies, etc.). So I am trying to have the UI tests run as part of the Docker build.
This is what my Dockerfile looks like:
FROM testcafe/testcafe:1.3.3
USER root
#some packages needed for some dependencies
RUN apk add --no-cache yarn python make build-base vim curl
RUN ln -s /opt/testcafe/docker/testcafe-docker.sh /usr/local/bin/testcafe-docker
WORKDIR /usr/src/app
RUN yarn config set registry https://private-npm-registry --global
COPY package*.json ./
RUN yarn
COPY . .
RUN yarn test:ui:ci
# "test:ui:clean": "rm -rf uitests/reports"
# "test:ui:ci-debug": "yarn test:ui:clean; testcafe-docker 'chromium --no-sandbox' uitests/tests -S -s uitests/reports/screenshots --video uitests/reports/videos -r spec,json:uitests/reports/report.json,html:uitests/reports/report.html",
# "test:ui:ci": "start-server-and-test serve http://127.0.0.1:8080 test:ui:ci-debug"
I get ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
Also, I tried using the user user, but it gives a permission error when creating the reports folder inside the uitests folder before running the tests.
I tried it with and without the --no-sandbox option and got the same issue. Also tried chromium:headless --no-sandbox, with the same error.
Any suggestions, please? Thanks.
UPDATE:
Also tried with the user user (avoiding the permission issue by using a /tmp folder for the reports) and got the same issue:
20-Jul-2019 23:53:33 > yarn test:ui:clean; whoami; ls -sail; testcafe-docker 'chromium --no-sandbox' uitests/tests -S -s /tmp/uitests/reports/screenshots --video /tmp/uitests/reports/videos -r spec,json:/tmp/uitests/reports/report.json,html:/tmp/uitests/reports/report.html
20-Jul-2019 23:53:33
20-Jul-2019 23:53:34 $ rm -rf /tmp/uitests/reports
20-Jul-2019 23:53:34 user
20-Jul-2019 23:53:34 total 528
20-Jul-2019 23:53:34 13 4 drwxr-xr-x 10 user user 4096 Jul 20 13:52 .
20-Jul-2019 23:53:34 12 4 drwxr-xr-x 8 root root 4096 Jul 20 13:52 ..
20-Jul-2019 23:53:34 79 4 -rw-r--r-- 1 root root 20 Jul 19 07:06 .dockerignore
20-Jul-2019 23:53:34 75 4 -rw-r--r-- 1 root root 45 Jul 19 07:06 .eslintignore
20-Jul-2019 23:53:34 83 4 -rw-r--r-- 1 root root 790 Jul 19 07:06 .eslintrc
20-Jul-2019 23:53:34 78 4 drwxr-xr-x 8 root root 4096 Jul 20 13:48 .git
20-Jul-2019 23:53:34 82 4 -rw-r--r-- 1 root root 326 Jul 20 13:48 .gitignore
20-Jul-2019 23:53:34 87 4 -rw-r--r-- 1 root root 189 Jul 19 07:06 Dockerfile
20-Jul-2019 23:53:34 81 4 -rw-r--r-- 1 root root 592 Jul 20 13:48 DockerfileUITest
20-Jul-2019 23:53:34 89 4 -rw-r--r-- 1 root root 451 Jul 19 07:06 README.md
20-Jul-2019 23:53:34 90 4 drwxr-xr-x 3 root root 4096 Jul 19 07:06 backend
20-Jul-2019 23:53:34 4096 4 drwxr-xr-x 3 user user 4096 Jul 20 13:53 build
20-Jul-2019 23:53:34 85 4 -rwxr-xr-x 1 root root 959 Jul 19 07:06 build.sh
20-Jul-2019 23:53:34 84 4 -rw-r--r-- 1 root root 1124 Jul 19 07:06 deploy.yaml
20-Jul-2019 23:53:34 91 4 drwxr-xr-x 1348 user user 4096 Jul 20 13:52 node_modules
20-Jul-2019 23:53:34 74 4 -rw-r--r-- 1 root root 2959 Jul 20 13:48 package.json
20-Jul-2019 23:53:34 76 4 drwxr-xr-x 2 root root 4096 Jul 19 07:06 public
20-Jul-2019 23:53:34 88 4 -rwxr-xr-x 1 root root 742 Jul 19 07:06 runUITestsInCI.sh
20-Jul-2019 23:53:34 77 4 drwxr-xr-x 8 root root 4096 Jul 19 07:06 src
20-Jul-2019 23:53:34 80 4 drwxr-xr-x 7 root root 4096 Jul 19 07:06 uitests
20-Jul-2019 23:53:34 86 448 -rw-r--r-- 1 root root 454847 Jul 20 13:48 yarn.lock
20-Jul-2019 23:53:34 Using locally installed version of TestCafe.
20-Jul-2019 23:55:36 ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
20-Jul-2019 23:55:36
20-Jul-2019 23:55:36 Type "testcafe -h" for help.
Update-2:
Tried with Firefox as well and got the same issue: ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
Update-3:
Sorry for the delay; I got distracted by something else. I tried both in CI and on my local machine, and the behaviour was the same. I also tried the suggestion from the comments, echo -e '#!/bin/sh\n/usr/bin/chromium-browser --no-sandbox --remote-debugging-port=9222 --headless' > /usr/local/bin/testcafe-docker, and got the following output in both the CI and local environments.
02-Dec-2019 18:45:17 DevTools listening on ws://127.0.0.1:9222/devtools/browser/711c6409-be9a-4e08-959e-0c994c8c5742
02-Dec-2019 18:45:17 [1202/074517.060637:ERROR:gl_implementation.cc(282)] Failed to load /usr/lib/chromium/swiftshader/libGLESv2.so: Error loading shared library /usr/lib/chromium/swiftshader/libGLESv2.so: No such file or directory
02-Dec-2019 18:45:17 [1202/074517.064824:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
02-Dec-2019 18:45:17 [1202/074517.072317:WARNING:dns_config_service_posix.cc(341)] Failed to read DnsConfig.
02-Dec-2019 18:45:17 [1202/074517.072782:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 1 time(s)
02-Dec-2019 18:45:17 [1202/074517.135149:ERROR:gl_implementation.cc(282)] Failed to load /usr/lib/chromium/swiftshader/libGLESv2.so: Error loading shared library /usr/lib/chromium/swiftshader/libGLESv2.so: No such file or directory
02-Dec-2019 18:45:17 [1202/074517.139203:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
02-Dec-2019 18:45:17 [1202/074517.143251:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 2 time(s)
02-Dec-2019 18:45:17 [1202/074517.208950:WARNING:gpu_process_host.cc(990)] Reinitialized the GPU process after a crash. The reported initialization time was 0 ms
02-Dec-2019 18:45:17 [1202/074517.209227:ERROR:gpu_channel_manager.cc(397)] ContextResult::kFatalFailure: Failed to create shared context for virtualization.
Recent Chromium versions do not allow you to run them under the root user. Change the user from 'root' to user before running the TestCafe tests.
Also, you need to set up permissions for creating new folders.
See the detailed explanation in "write in shared volumes docker".
...
USER user
RUN yarn test:ui:ci
It is because of the proxy. We use a proxy to reach the internet, and I was adding http_proxy, https_proxy and no_proxy (127.0.0.1) in the Docker image. The browser in the Docker container was trying to reach the TestCafe server (running in the same container) through the proxy, because it does not use 127.0.0.1/localhost but 172.17.0.2 as the TestCafe server host inside the container. So adding 172.17.0.2 to no_proxy works.
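In practice the fix boils down to excluding the container's own address from the proxy settings. A sketch using Docker's predefined proxy build args (the proxy URL and the ui-tests tag are placeholders; 172.17.0.2 is the container IP mentioned above):
docker build \
  --build-arg http_proxy=http://proxy.example.com:3128 \
  --build-arg https_proxy=http://proxy.example.com:3128 \
  --build-arg no_proxy=127.0.0.1,localhost,172.17.0.2 \
  -f DockerfileUITest -t ui-tests .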
My username (let's call it my_name) belongs to the apache group, which owns the /var/www/html directory and the sub-directories and files contained within.
In that directory I installed WordPress. Directory and file permissions are set to 0775 (yeah, I know the files should have 644, but that is not a factor for now).
Well, my username does have write permissions, because I am able to create new files or directories, as well as delete them, using an SSH terminal or WinSCP.
The problem comes up when I run the post-receive hook of a bare Git repository, whether I run the script directly or push changes from the local working repository.
In either scenario, the post-receive hook fails with "permission denied". It is really strange and I cannot understand why.
Could you help me please?
Edit:
This is the output of ls -alrth on the ~/git/devsite.git/hooks directory:
-rwxrwxr-x 1 name apache 896 Apr 2 22:41 commit-msg.sample
-rwxrwxr-x 1 name apache 727 Apr 7 09:09 post-receive
-rwxrwxr-x 1 name apache 189 Apr 2 22:41 post-update.sample
-rwxrwxr-x 1 name apache 398 Apr 2 22:41 pre-applypatch.sample
-rwxrwxr-x 1 name apache 1704 Apr 2 22:41 pre-commit.sample
-rwxrwxr-x 1 name apache 1239 Apr 2 22:41 prepare-commit-msg.sample
-rw-rw-r-- 1 name apache 1348 Apr 2 22:41 pre-push.sample
-rwxrwxr-x 1 name apache 4951 Apr 2 22:41 pre-rebase.sample
-rwxrwxr-x 1 name apache 3611 Apr 2 22:41 update.sample
This is the post-receive script:
#!/bin/sh
TARGET=/var/www/html/wp-content
GIT_DIR=/home/name/git/devsite.git
#run 'post-receive' hook
git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f
Try again after a
cd /path/to/bare/repo
git config core.sharedRepository true
I mentioned it before in "Permissions with Git Post-Receive".
In the OP's instance, the post-receive script is not placed properly: it should be in ~/git/devsite.git/hooks, not ~/git/devsite.git.
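In shell terms, the relocation described above would look roughly like this (a sketch using the paths from the question; skip the mv if the script is already under hooks/):
mv ~/git/devsite.git/post-receive ~/git/devsite.git/hooks/post-receive
chmod +x ~/git/devsite.git/hooks/post-receive    # the hook must be executable to run on push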
I'm trying to apply a Salt state to my non-prod environment at /srv/salt/non-prod.
I'm getting this result:
[root@salt non-prod]# salt '*' state.apply
salt.localdomain:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or external nodes data matches found.
Changes:
Summary for salt.localdomain
------------
Succeeded: 0
Failed: 1
I have these locations defined in my master config:
non-prod:
- /srv/non-prod
- /srv/salt/non-prod/services
- /srv/salt/non-prod/states
I have a top file located here:
[root@salt ~]# cat /srv/salt/non-prod/top.sls
base:
'*':
- apache
- python
- ssh
- users
These are the contents of the non-prod directory
[root@salt ~]# ls -lh /srv/salt/non-prod/
total 16K
drwxr-xr-x. 2 root root 4.0K Oct 3 21:02 apache
drwxr-xr-x. 2 root root 45 Oct 3 20:57 python
drwxr-xr-x. 2 salt salt 6 Oct 3 14:10 services
drwxr-xr-x. 2 root root 54 Oct 3 18:23 ssh
drwxr-xr-x. 2 salt salt 6 Oct 3 14:10 states
-rw-r--r--. 1 root root 80 Oct 3 15:29 state.template
-rw-r--r--. 1 root root 174 Oct 3 15:30 test.sls
-rw-r--r--. 1 root root 61 Oct 3 21:14 top.sls
drwxr-xr-x. 2 root root 22 Oct 3 21:03 users
drwxr-xr-x. 2 salt salt 99 Oct 3 18:28 webserver
It contains a few Salt modules.
How can I apply salt states to just the non-prod environment?
If you first check the syntax using a YAML validation tool, then we can go to the next step.
Read the SaltStack top file documentation thoroughly; you will notice that to set up a different environment, you first explicitly define the alternate environment name in /etc/salt/master and then also specify it in top.sls.
I.e., your file_roots should specify the non-prod environment:
file_roots:
#non-prod environment
non-prod:
- /srv/non-prod
- /srv/salt/non-prod/services
- /srv/salt/non-prod/states
Thus your top.sls should use the environment name non-prod, not base:
non-prod:
'*':
- apache
- python
- ssh
- users
Since SaltStack always uses the "base" environment by default, you should apply the state for this environment explicitly:
salt '*' state.highstate saltenv=non-prod
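Individual states can be applied against that environment the same way (a sketch):
salt '*' state.apply apache saltenv=non-prod   # apply a single state from the non-prod file_roots
salt '*' state.apply saltenv=non-prod          # same as state.highstate for the non-prod environment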