How do I synchronize my gluster replicated volumes? - sqlite

I want to use a Gluster replicated volume for SQLite DB storage.
However, when the '.db' file is updated, Linux does not detect the change, so the bricks do not synchronize.
Is there a way to force a sync?
It does not synchronize even when I run the gluster volume heal command.
< My Gluster volume status >
[root@be-k8s-worker-1 common]# gluster volume create sync_test replica 2 transport tcp 10.XX.XX.X1:/home/common/sync_test 10.XX.XX.X2:/home/common/sync_test
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: sync_test: success: please start the volume to access data
[root@be-k8s-worker-1 common]# gluster volume start sync_test
volume start: sync_test: success
[root@be-k8s-worker-1 sync_test]# gluster volume status sync_test
Status of volume: sync_test
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.XX.XX.X1:/home/common/sync_test 49155 0 Y 1142
Brick 10.XX.XX.X2:/home/common/sync_test 49155 0 Y 2134
Self-heal Daemon on localhost N/A N/A Y 2612
Self-heal Daemon on 10.XX.XX.X1 N/A N/A Y 4257
Task Status of Volume sync_test
------------------------------------------------------------------------------
There are no active volume tasks
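For reference, these are the heal commands of the kind referred to above (using the sync_test volume name); in my case they did not force a sync:
# Show files pending heal on the replica
gluster volume heal sync_test info
# Heal files known to need healing
gluster volume heal sync_test
# Force a full heal crawl of the volume
gluster volume heal sync_test full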
< Problem Case >
[root@be-k8s-worker-1 sync_test]# ls -al ## client 1
total 20
drwxrwxrwx. 4 root root 122 Oct 17 10:51 .
drwx------. 8 sbyun domain users 4096 Oct 17 10:50 ..
-rw-r--r--. 1 root root 0 Oct 17 10:35 test
-rwxr--r--. 1 sbyun domain users 16384 Oct 17 10:52 test.db
[root@be-k8s-worker-1 sync_test2]# ls -al ## client2
total 20
drwxrwxrwx. 4 root root 122 Oct 17 10:51 .
drwx------. 8 sbyun domain users 4096 Oct 17 10:50 ..
-rw-r--r--. 1 root root 0 Oct 17 10:35 test
-rwxr--r--. 1 sbyun domain users 16384 Oct 17 10:52 test.db
## diff -> no output, i.e. the files are identical
[root@be-k8s-worker-1 user]# diff sync_test/test.db sync_test2/test.db
But if I compare the same file on Windows, the copies differ:
[screenshot: Windows file comparison of the two test.db copies]

My SQLite database was set to WAL mode, so the -wal file was being updated and the .db file itself was not immediately synced.
I turned off WAL mode with this command:
PRAGMA journal_mode=DELETE;
I confirmed that it was synced immediately.
According to the SQLite documentation, it doesn't work over a network filesystem:
All processes using a database must be on the same host computer; WAL does not work over a network filesystem.
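A minimal sketch of checking and switching the journal mode from the shell, assuming the sqlite3 CLI is installed and the database lives at a hypothetical client mount path /mnt/sync_test:
# Print the current journal mode ("wal" or "delete")
sqlite3 /mnt/sync_test/test.db "PRAGMA journal_mode;"
# Switch to the rollback journal so every write lands in the .db file itself
sqlite3 /mnt/sync_test/test.db "PRAGMA journal_mode=DELETE;"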

Related

Discrepancy in disk usage

I'm troubleshooting an issue regarding disk space usage on a CentOS system (one of the partitions was growing too fast), and I noticed that one of my directories takes up 3.1GB:
$ du -hs /var/log/mongodb/
3.1G /var/log/mongodb/
$ df -h /var/log/mongodb/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-log 4.0G 3.7G 324M 93% /var/log
However, when I analyse the directory contents, I realize it only has 1 file, and that file is not that large (2.1GB):
$ ls -larth /var/log/mongodb/
total 3.1G
drwxr-xr-x 2 mongod mongod 24 Jul 2 2019 .
drwxr-xr-x. 22 root root 4.0K May 1 03:50 ..
-rw-r----- 1 mongod mongod 2.1G May 1 08:41 mongod.log
How can this happen?
Stat command:
$ stat /var/log/mongodb/mongod.log
File: ‘/var/log/mongodb/mongod.log’
Size: 2448779949 Blocks: 4880912 IO Block: 4096 regular file
Device: fd08h/64776d Inode: 6291527 Links: 1
Access: (0640/-rw-r-----) Uid: ( 996/ mongod) Gid: ( 994/ mongod)
Access: 2020-05-01 10:02:57.136265481 +0000
Modify: 2020-05-04 10:05:37.409626901 +0000
Change: 2020-05-04 10:05:37.409626901 +0000
Birth: -
Another example in another host:
$ df -kh | grep var
/dev/dm-3 54G 52G 2.1G 97% /var
$ du -khs /var/
25G /var/
Is this somehow related to the difference between file size and actual space on disk occupied (due to disk blocks)? If so, how can I perform a defragmentation/optimization?
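For what it's worth, two standard checks narrow this kind of discrepancy down: comparing apparent size against allocated blocks, and looking for deleted-but-still-open files, which make df report more usage than du (a sketch using GNU coreutils and lsof):
# Apparent size vs blocks actually allocated on disk
du -h --apparent-size /var/log/mongodb/mongod.log
du -h /var/log/mongodb/mongod.log
# List open-but-unlinked files (link count 0) still holding space on the filesystem
lsof +L1 /var/log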

How to set Airflow scheduler log file mode/permissions

I'm running airflow 1.10.3, on Red Hat Linux. I'm using a LocalExecutor, and the webserver and scheduler are both started via systemd.
The log files being generated by the scheduler are world-readable (i.e. mode "-rw-rw-rw-"). The log directories being created are "drwxrwxrwx".
This fails the security scans my organisation has in place. I need to be able to restrict the permissions on these files.
The umask in /etc/profile is 077. I've also added UMask=0007 to both the systemd unit files for the services. However, although this seems to be working for the logs in the dags/logs/scheduler/ directory, it is not affecting the DAG run logs.
[root@server logs]# ls -la s3_dag_test/
total 4
drwxrwxrwx. 4 airflow airflow 54 Aug 7 17:35 .
drwxrwx---. 46 airflow airflow 4096 Aug 7 20:00 ..
drwxrwxrwx. 5 airflow airflow 126 Aug 7 17:37 bash_test
drwxrwxrwx. 5 airflow airflow 126 Aug 7 17:29 check_s3_for_file_in_s3
[root@server logs]# ls -la s3_dag_test/bash_test/2019-08-07T17\:29\:27.988953+00\:00/
total 12
drwxrwxrwx. 2 airflow airflow 19 Aug 7 17:35 .
drwxrwxrwx. 5 airflow airflow 126 Aug 7 17:37 ..
-rw-rw-rw-. 1 airflow airflow 8241 Aug 7 17:35 1.log
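For reference, a minimal sketch of the UMask override described above, applied as a systemd drop-in rather than by editing the unit files in place (the unit name airflow-scheduler.service is an assumption):
# Create a drop-in that sets the umask for the scheduler unit
sudo mkdir -p /etc/systemd/system/airflow-scheduler.service.d
printf '[Service]\nUMask=0007\n' | sudo tee /etc/systemd/system/airflow-scheduler.service.d/umask.conf
sudo systemctl daemon-reload && sudo systemctl restart airflow-scheduler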
This is probably too late to be a helpful answer for you, but I had the exact same issue. My organization raised the permissions of the Airflow log directories as a security finding. I likewise checked the umask, to no avail.
I did manage to find this:
https://anoopkm.wordpress.com/2020/03/26/world-readable-airflow-dag-logs-issue/
In a nutshell, it looks like Airflow hard-codes the permissions used for creating files and folders.
I edited this Python file: venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py, changing lines 242 and 247 to use 0o770 and 0o660 instead of 0o777 and 0o666 for creating folders and files, respectively. Then I manually triggered a DAG and checked the folder permissions. The newest log folder no longer had global rwx permissions.
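A sketch of the same edit done non-interactively (the site-packages path and octal constants are from above; the exact line numbers vary across Airflow versions, so check the file before and after):
# Back up the handler, then change the hard-coded modes (0o777 -> 0o770, 0o666 -> 0o660)
cp venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py{,.bak}
sed -i 's/0o777/0o770/g; s/0o666/0o660/g' venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py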
Can you let us know how Airflow is installed: as a normal user or as root?

how to apply salt states to just one environment

I'm trying to apply a salt state to my non-prod environment at /srv/salt/non-prod.
I'm getting this result:
[root@salt non-prod]# salt '*' state.apply
salt.localdomain:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or external nodes data matches found.
Changes:
Summary for salt.localdomain
------------
Succeeded: 0
Failed: 1
I have these locations defined in my master config:
non-prod:
- /srv/non-prod
- /srv/salt/non-prod/services
- /srv/salt/non-prod/states
I have a top file located here:
[root@salt ~]# cat /srv/salt/non-prod/top.sls
base:
'*':
- apache
- python
- ssh
- users
These are the contents of the non-prod directory
[root@salt ~]# ls -lh /srv/salt/non-prod/
total 16K
drwxr-xr-x. 2 root root 4.0K Oct 3 21:02 apache
drwxr-xr-x. 2 root root 45 Oct 3 20:57 python
drwxr-xr-x. 2 salt salt 6 Oct 3 14:10 services
drwxr-xr-x. 2 root root 54 Oct 3 18:23 ssh
drwxr-xr-x. 2 salt salt 6 Oct 3 14:10 states
-rw-r--r--. 1 root root 80 Oct 3 15:29 state.template
-rw-r--r--. 1 root root 174 Oct 3 15:30 test.sls
-rw-r--r--. 1 root root 61 Oct 3 21:14 top.sls
drwxr-xr-x. 2 root root 22 Oct 3 21:03 users
drwxr-xr-x. 2 salt salt 99 Oct 3 18:28 webserver
It contains a few Salt modules.
How can I apply salt states to just the non-prod environment?
First, check the syntax with a YAML validation tool; once that passes, we can go to the next step.
If you read the SaltStack top-file documentation thoroughly, you will notice that to set up a different environment you must first explicitly define the alternate environment name in /etc/salt/master and also specify it in top.sls.
That is, your file_roots should specify the non-prod environment:
file_roots:
#non-prod environment
non-prod:
- /srv/non-prod
- /srv/salt/non-prod/services
- /srv/salt/non-prod/states
Thus your top.sls should use the environment name non-prod, not base:
non-prod:
'*':
- apache
- python
- ssh
- users
Since SaltStack always uses the base environment by default, you have to apply the state explicitly:
salt '*' state.highstate saltenv=non-prod
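As a quick check before applying for real, a dry run reports what would change without changing anything:
# Dry-run the highstate against the non-prod environment
salt '*' state.highstate saltenv=non-prod test=True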

Make Redis unixsocket owned by redis user

I have installed Redis 3.0.6 on Debian. There's a /etc/init.d/redis file which starts the Redis server when the system starts or I can invoke it manually to start/stop the server. Problem is that this script is run as root user.
I have a redis user and group that I want to make Redis run under. But I can't figure out how (I have not found an option to make Redis switch user ID after startup). In my config file I use
unixsocket /home/redis/redis.sock
unixsocketperm 770
But, of course, the redis.sock is owned by root.
drwxr-xr-x 2 redis redis 4096 Jan 18 03:34 bin
drwxr-xr-x 2 redis redis 4096 Jan 18 03:55 data
-rw-r--r-- 1 redis redis 41638 Jan 18 03:52 redis.conf
-rw-r--r-- 1 redis redis 16348 Jan 18 03:55 redis.log
-rw-r--r-- 1 root root 5 Jan 18 03:55 redis.pid
srwxrwx--- 1 root root 0 Jan 18 03:55 redis.sock
And the process is, too.
root 7913 0.1 0.1 38016 1976 ? Ssl 03:55 0:00 /home/redis/bin/redis-server *:6379
Ultimately, I have a git user that is also in the redis group and thus should in the end have access to redis.sock. (This is for a manual deployment of GitLab CE).
How can I configure the Redis server that way?
Update your /etc/init.d/redis script to use sudo when starting the service (line 33):
sudo -u redis $EXEC $CONF
You may need to clean up old files (in /var/lib) or reset their ownership to redis.
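After that change, a quick way to verify (a sketch, using the paths from the config above):
# Restart Redis and confirm the socket is now owned by the redis user
/etc/init.d/redis restart
ls -l /home/redis/redis.sock   # expect: srwxrwx--- 1 redis redis ...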

Determine Which Hard Drive the Database-Files/Transaction-Files are Stored

I have an old Sybase server whose database is acting up. I have tried rebuilding the file-system and the database file, but the problem returns. I want to replace the hard drive that the database-files and transaction-files are stored on, and I want to determine exactly which hard drive that is, because I am not familiar with Unix. Moreover, I also want to see whether those files are stored on the same hard drive as the operating system; if they are, I will need to re-install the operating system as well as restore the database to the new hard drive. Obviously it will be better if the database-files and the transaction-files are not on the same hard drive as the operating system. Please help me to determine these two things.
So far, I have found these:
(1) I use the sp_helpdb command and find that the database files and the transaction files are stored on these logical devices:
sybdbs
syblogs
master
sybdbs2
(2) I use the sp_helpdevice command to look into those 4 logical devices shown above, and find that they are on these physical devices:
/dev/rdsk/c0t0d0s1
/dev/rdsk/c0t3d0s4
d_master
/dev/rdsk/sybdbs2
(3) When I use sp_helpdevice to show all the physical devices, I see this:
device_name physical_name description status cntrltype device_number low high
------------------ ------------------------------------------- ------------------------------------------------ ------ --------- ------------- -------- --------
historydump /export/home/syb11.dump/history.dump disk, dump device 16 2 0 0 0
isproddump /export/home/syb11.dump/isprod.dump disk, dump device 16 2 0 0 0
istestdump /export/home/syb11.dump/istest.dump disk, dump device 16 2 0 0 0
master d_master special, physical disk, 100.00 MB 2 0 0 0 51199
masterdump /export/home/syb11.dump/master.dump disk, dump device 16 2 0 0 0
modeldump /export/home/syb11.dump/model.dump disk, dump device 16 2 0 0 0
prodtestdump /export/home/syb11.dump/prodtest.dump disk, dump device 16 2 0 0 0
sybdbs /dev/rdsk/c0t0d0s1 special, default disk, physical disk, 2000.00 MB 3 0 3 50331648 51355647
sybdbs2 /dev/rdsk/sybdbs2 special, physical disk, 1.00 MB 2 0 5 83886080 83886591
syblogs /dev/rdsk/c0t3d0s4 special, physical disk, 850.00 MB 2 0 4 67108864 67544063
sybscurty /dev/rdsk/c0t3d0s5 special, physical disk, 100.00 MB 2 0 2 33554432 33605631
sybsecuritydump /export/home/syb11.dump/sybsecurity.dump disk, dump device 16 2 0 0 0
sybsystemprocsdump /export/home/syb11.dump/sybsystemprocs.dump disk, dump device 16 2 0 0 0
sysprocsdev /dev/rdsk/c0t0d0s4 special, physical disk, 100.00 MB 2 0 1 16777216 16828415
tapedump1 /dev/rmt4 tape, 625 MB, dump device 16 3 0 0 20000
tapedump2 /dev/rst0 disk, dump device 16 2 0 0 20000
uniface724dump /export/home/syb11.dump/uniface724.dump disk, dump device 16 2 0 0 0
uniface7dump /export/home/syb11.dump/uniface7.dump disk, dump device 16 2 0 0 0
(4) I want to know more about those physical devices. I use the df command to examine them:
df -k /dev/rdsk/c0t0d0s1
df -k /dev/rdsk/c0t3d0s4
df -k d_master
df -k /dev/rdsk/sybdbs2
The df command complains that the first three devices are “not a block device, directory or mounted resource”.
On the other hand, the df command shows the following info for the last device:
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t3d0s0 576558 371019 147889 71% /
In any case, this doesn’t tell me which drive(s) those devices are on.
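(df only reports mounted filesystems, so it cannot describe a raw slice. On Solaris, prtvtoc reads a disk's slice table directly; a sketch, assuming the conventional whole-disk slice 2:)
# Print the slice layout of each disk the Sybase devices point at
prtvtoc /dev/rdsk/c0t0d0s2
prtvtoc /dev/rdsk/c0t3d0s2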
(5) When I use the mount command, I see this:
/ on /dev/dsk/c0t3d0s0 read/write/setuid on Mon Jul 6 11:10:46 2015
/usr on /dev/dsk/c0t3d0s6 read/write/setuid on Mon Jul 6 11:10:46 2015
/proc on /proc read/write/setuid on Mon Jul 6 11:10:46 2015
/dev/fd on fd read/write/setuid on Mon Jul 6 11:10:46 2015
/tmp on swap read/write on Mon Jul 6 11:10:49 2015
/export on /dev/dsk/c0t3d0s7 setuid/read/write on Mon Jul 6 11:10:49 2015
/freespace on /dev/dsk/c0t0d0s5 setuid/read/write on Mon Jul 6 11:10:49 2015
/sybase on /dev/dsk/c0t0d0s0 setuid/read/write on Mon Jul 6 11:10:49 2015
/usr/openwin on /dev/dsk/c0t3d0s3 setuid/read/write on Mon Jul 6 11:10:49 2015
I cannot figure out the connection between the mounted devices above to the physical devices for the database-files and the transaction-files. I also cannot link the mounted devices above to the hard drives shown in the next section.
(6) When I use the cat /etc/vfstab command, I see these:
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
/proc - /proc proc - no -
fd - /dev/fd fd - no -
swap - /tmp tmpfs - yes -
/dev/dsk/c0t3d0s0 /dev/rdsk/c0t3d0s0 / ufs 1 no -
/dev/dsk/c0t3d0s6 /dev/rdsk/c0t3d0s6 /usr ufs 1 no -
/dev/dsk/c0t3d0s7 /dev/rdsk/c0t3d0s7 /export ufs 2 yes -
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /freespace ufs 2 yes -
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 /sybase ufs 2 yes -
/dev/dsk/c0t3d0s3 /dev/rdsk/c0t3d0s3 /usr/openwin ufs 2 yes -
/dev/dsk/c0t3d0s1 - - swap - no -
# The following lines have been commented-out to allow Sybase to access these
# partitions and Raw Partitions. Nov-24-1999
# /dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /master ufs 2 yes -
# /dev/dsk/c0t0d0s1 /dev/rdsk/c0t0d0s1 /sybdbs ufs 2 yes -
# /dev/dsk/c0t3d0s4 /dev/rdsk/c0t3d0s4 /syblogs ufs 2 yes -
# /dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /sybscurty ufs 2 yes -
# /dev/dsk/c0t0d0s4 /dev/rdsk/c0t0d0s4 /sybtemproc ufs 2 yes -
(7) When I use the format command, I see these two hard drives:
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <IBM-DNES-309170-SA30 cyl 11195 alt 2 hd 5 sec 320>
/iommu#f,e0000000/sbus#f,e0001000/espdma#f,400000/esp#f,800000/sd#0,0
1. c0t3d0 <SEAGATE-ST34520N-1206 cyl 9004 alt 2 hd 4 sec 246>
/iommu#f,e0000000/sbus#f,e0001000/espdma#f,400000/esp#f,800000/sd#3,0
(8) I don’t see any external device attached to the Sybase server. Having said this, there is a backup Sybase server, and the backup Sybase server has an external device attached to it (through a SCSI cable). At this point, I assume the database-files and transaction-files are all stored inside the Sybase server.
By the way, the Sybase server uses this Unix operating system:
SunOS <my-server-name> 5.4 Generic_101945-62 sun4m sparc
And the Sybase version is:
SQL Server/11.0.3.2/P/Sun_svr4/OS 5.4/SWR 7578 Rollup/OPT/Mon Nov 3 22:19:21 PST 1997
By the way, here is what I have tried so far to repair the database:
• Tried dbcc checkalloc(, fix). Unfortunately, this command could not fix and could not complete.
• Tried drop-db/add-new-db/restore-db-from-backup. Unfortunately the restore failed to complete.
• Tried fsck-to-fix-the-devices. It could not complete and complained about “MAGIC NUMBER WRONG”.
• Tried Analyze-option-in-format-command-to-repair-Disk-0, and then add-new-db and restore-db-from-backup. This method seemed to work. But after one week or so, I found a table has I/O error. Honestly, I don’t even know if the databases are really in Disk-0 or not.
Please help me to determine which hard drive those database-files and transaction-files are stored on, and whether they are on the same hard drive as the Unix operating system.
Thanks in advance.
Jay Chan
