Zimbra not releasing space - postfix-mta

We are using Zimbra 8.0 Collaboration Server Open Source Edition (Release 8.0.0_GA_5434.RHEL6_64_20120907144639 CentOS6_64 FOSS edition) on CentOS 6.
When we delete an account or empty a folder, the free space on the disk does not increase.
For example, if we delete an account of about 10 GB, only 1 GB is added to the total free space of the disk.
In detail, I emptied an account of 10.1 GB.
The space occupied by the store folder before deleting was 10.1 GB:
/opt/zimbra/store/0/443 - 10.1 GB
After emptying the Inbox folder, the size of the store folder dropped to 100 MB:
/opt/zimbra/store/0/443 - 100 MB
(Emptied using the command zmmailbox -z -m xxx@xxx.com emptyFolder /Inbox)
But there is no change in the "df" output; the total free space remains the same.
Likewise, space is not released when deleting an account.
Can anyone please help with this issue? What needs to be done so the freed space shows up on the disk?

I don't know what your dumpster settings are, but they can be checked with a command like this:
zmprov ga xxx@xxx.com | grep Dumpster
zimbraDumpsterEnabled: TRUE
zimbraDumpsterPurgeEnabled: TRUE
zimbraDumpsterUserVisibleAge: 10d
zimbraMailDumpsterLifetime: 10d
I suspect that the messages are sticking around in the dumpster though. You can clean them out with a command like this:
zmmailbox -z -m xxx@xxx.com -A emptyDumpster
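If the dumpster turns out to be the culprit on many mailboxes, a loop over all accounts saves some typing. A minimal sketch, assuming zmprov -l gaa lists every account name on the server (standard in Zimbra 8):
# sketch: empty the dumpster for every account on this server
for acct in $(zmprov -l gaa); do
  zmmailbox -z -m "$acct" emptyDumpster
done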

Related

Cstack_info() output different between RStudio Server and RStudio Desktop on Ubuntu 20.04 LTS

I am having trouble getting rid of the CStack limit when running my code.
I managed to get rid of the error by appending
* hard stack unlimited
* soft stack unlimited
* soft memlock unlimited
* hard memlock unlimited
root soft stack unlimited
root hard stack unlimited
root soft memlock unlimited
root hard memlock unlimited
to /etc/security/limits.conf which fixes the problem on RStudio Desktop.
I get the following output from running Cstack_info()
> Cstack_info()
      size    current  direction eval_depth
        NA         NA          1          2
This is the output from ulimit -s on the desktop terminal
coolshades@coolshades-ws:~$ ulimit -s
unlimited
Code runs perfectly on RStudio Desktop.
On the same machine, I am also running RStudio Server (free) to run code remotely. It seems these settings do not stick when running RStudio Server.
This is the output from Cstack_info() on the RStudio Server
> Cstack_info()
      size    current  direction eval_depth
   7969177      26336          1          2
This is the ulimit output from terminal on the RStudio Server
coolshades@coolshades-ws:~$ ulimit -s
8192
I am able to change the limit back to unlimited with ulimit -s unlimited, but that only takes effect after the R session is restarted, and when I restart the R session the output of ulimit -s reverts to 8192.
I am out of ideas as to how best to tackle this problem and hope a more experienced RStudio Server user will be able to advise on this matter.
I have solved this problem.
I had to add DefaultLimitSTACK=134217728 to two files:
sudo nano /etc/systemd/user.conf   (add DefaultLimitSTACK=134217728)
sudo nano /etc/systemd/system.conf (add DefaultLimitSTACK=134217728)
Make sure the number you define is a power of 2, else Ubuntu fails to log in for some reason.
I have 128 GB of RAM, so I have set my limit to 2^27 (134217728 bytes, i.e. a 128 MB stack).
Hope this helps someone with the same problem.
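For what it's worth, a quick way to confirm the new limit took hold after a reboot, assuming the value above (134217728 bytes = 2^27), is to check ulimit in a terminal inside RStudio Server:
ulimit -s    # systemd limits are given in bytes, ulimit -s reports KiB: 134217728 / 1024 = 131072
Inside an R session on the server, Cstack_info() should then report a size close to 134217728 rather than 7969177.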

Mysterious mariadb 10.4.1 RAM usage

I have upgraded from MariaDB 10.1.36 to 10.4.8 and I can see mysteriously increasing RAM usage on the new version. I also edited innodb_buffer_pool_size, and it seems to make no difference whether it is set to 15M or 4G; RAM just keeps slowly increasing. After a while it eats all the RAM, the OOM killer kills MariaDB, and the cycle repeats.
My server has 8 GB of RAM and usage grows by about 60-150 MB per day. That is not terrible by itself, but I have around 150 database servers, so it is a huge problem.
I can temporarily fix the problem by restarting MariaDB, but then it starts over again.
Info about database server:
databases: 200+
tables: 28200 (141 per database)
average active connections: 100-200
size of stored data: 100-350GB
cpu: 4
ram: 8GB
Here is my config:
server-id=101
datadir=/opt/mysql/
socket=/var/lib/mysql/mysql.sock
tmpdir=/tmp/
gtid-ignore-duplicates=True
log_bin=mysql-bin
expire_logs_days=4
wait_timeout=360
thread_cache_size=16
sql_mode="ALLOW_INVALID_DATES"
long_query_time=0.8
slow_query_log=1
slow_query_log_file=/opt/log/slow.log
log_output=TABLE
userstat = 1
user=mysql
symbolic-links=0
binlog_format=STATEMENT
default_storage_engine=InnoDB
slave_skip_errors=1062,1396,1690
innodb_autoinc_lock_mode=2
innodb_buffer_pool_size=4G
innodb_buffer_pool_instances=5
innodb_log_file_size=1G
innodb_log_buffer_size=196M
innodb_flush_log_at_trx_commit=1
innodb_thread_concurrency=24
innodb_file_per_table
innodb_write_io_threads=24
innodb_read_io_threads=24
innodb_adaptive_flushing=1
innodb_purge_threads=5
innodb_adaptive_hash_index=64
innodb_flush_neighbors=0
innodb_flush_method=O_DIRECT
innodb_io_capacity=10000
innodb_io_capacity_max=16000
innodb_lru_scan_depth=1024
innodb_sort_buffer_size=32M
innodb_ft_cache_size=70M
innodb_ft_total_cache_size=1G
innodb_lock_wait_timeout=300
slave_parallel_threads=5
slave_parallel_mode=optimistic
slave_parallel_max_queued=10000000
log_slave_updates=on
performance_schema=on
skip-name-resolve
max_allowed_packet = 512M
query_cache_type=0
query_cache_size = 0
query_cache_limit = 1M
query_cache_min_res_unit=1K
max_connections = 1500
table_open_cache=64K
innodb_open_files=64K
table_definition_cache=64K
open_files_limit=1020000
collation-server = utf8_general_ci
character-set-server = utf8
log-error=/opt/log/error.log
pid-file=/var/run/mysqld/mysqld.pid
malloc-lib=/usr/lib64/libjemalloc.so.1
I solved it! The problem was the memory allocation library.
If you run this SQL query:
SHOW VARIABLES LIKE 'version_malloc_library';
you should get the value "jemalloc". If you only get "system", you may have problems.
To change that, edit (or create) a .conf file in this directory:
/etc/systemd/system/mariadb.service.d/
There, add this line:
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"
(this library file may be in another folder)
Then restart mysqld:
service mysqld stop && systemctl daemon-reload && service mysqld start
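For completeness, a systemd drop-in normally needs a [Service] section header around that Environment line. A minimal sketch (the file name is arbitrary and the library path depends on your distribution):
# /etc/systemd/system/mariadb.service.d/jemalloc.conf
[Service]
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"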
You got carried away in increasing values in my.cnf.
Many of the caches grow until hitting their limit, hence the memory growth you experienced.
What is the value from SHOW GLOBAL STATUS LIKE 'Max_used_connections';? Having a large max_connections accentuates several of the other values; lower it.
But perhaps the really bad one(s) involve table caches -- which have units of tables, not bytes. Crank these down a lot:
table_open_cache=64K
innodb_open_files=64K
table_definition_cache=64K
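As an illustration only (these numbers are not from the answer, just the ballpark of the server defaults), something on this order is far more typical for an 8 GB machine:
table_open_cache=2000
innodb_open_files=2000
table_definition_cache=400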
I have exactly the same problem. Is it due to a bad configuration, or is it a bug in the new version?
MariaDB 10.1 was updated to 10.3 just when I upgraded Debian 9 to Debian 10. I tried to solve the problem with MariaDB 10.4, but nothing changed.
I want to downgrade, but I think that requires dumping all databases and restoring them, and that means hours without service.
I don't think Debian 10 has anything to do with the issue.
Please read my previous comments about alternative memory allocators...
[Memory-usage graphs were attached here, one for the run with jemalloc and one for the run with the default allocator.]
Try with tcmalloc:
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4.5.3.1"

502 Gitlab is taking too much time to respond

After taking the GitLab backup every day, GitLab throws a 502 error.
I checked the nginx logs but did not find much information there.
After gitlab-ctl restart it starts working again.
System configuration:
OS: Ubuntu 16.04 LTS
4 GB RAM
200 GB disk space
Can anyone give a permanent solution for this?
There is a high possibility that it ran out of shared memory, since you get the 502 error each time after the backup.
Check it with gitlab-ctl tail to see the details.
It will show something like:
2019-04-12_12:37:17.27154 FATAL: could not map anonymous shared memory: Cannot allocate memory
2019-04-12_12:37:17.27157 HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 4345470976 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
2019-04-12_12:37:17.27171 LOG: database system is shut down
Then check it with free -m, which shows there is no available shared memory.
total used free shared buffers cached
Mem: 16081 13715 2365 0 104 753
-/+ buffers/cache: 12857 3223
Then you need to check whether some process is taking too much shared memory, or whether there are too many zombie processes, and kill them with a command like ps -aef | grep ffmpeg | awk '{print $2}' | xargs kill -9
Check again with free -h; there is about 112M of shared memory now.
total used free shared buffers cached
Mem: 15G 4.4G 11G 112M 46M 416M
-/+ buffers/cache: 3.9G 11G
Swap: 0B 0B 0B
Finally, restart your GitLab with gitlab-ctl restart; after some time, once GitLab has booted, the 502 is gone.
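A hedged aside, since the PostgreSQL hint above suggests reducing shared_buffers: on an Omnibus install that setting lives in /etc/gitlab/gitlab.rb and takes effect after gitlab-ctl reconfigure (the value below is only an illustration for a small host):
# /etc/gitlab/gitlab.rb
postgresql['shared_buffers'] = "256MB"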
After a long search I found something about it. After taking the backup, my gitlab-workhorse goes idle and gitlab.socket refuses the connection. As a temporary solution I have installed a new cron job that restarts the GitLab service after the completion of the GitLab backup cron job.
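A sketch of what that pair of cron jobs could look like, with placeholder times and assuming the backup finishes within an hour (the backup command differs on older GitLab releases, which use gitlab-rake gitlab:backup:create):
# /etc/crontab (sketch)
0 2 * * * root /opt/gitlab/bin/gitlab-backup create CRON=1
0 3 * * * root /opt/gitlab/bin/gitlab-ctl restart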
If GitLab is installed in VirtualBox on Ubuntu Server 18.04 or 20.04,
please increase the RAM to 4 GB and provide at least 3 processors.

Spark - No Space Left on device error

I am getting the error below. SPARK_LOCAL_DIRS has been set and has enough space and inodes left.
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.spark.storage.TimeTrackingOutputStream.write(TimeTrackingOutputStream.java:58)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
at org.xerial.snappy.SnappyOutputStream.dumpOutput(SnappyOutputStream.java:294)
at org.xerial.snappy.SnappyOutputStream.compressInput(SnappyOutputStream.java:306)
at org.xerial.snappy.SnappyOutputStream.rawWrite(SnappyOutputStream.java:245)
at org.xerial.snappy.SnappyOutputStream.write(SnappyOutputStream.java:107)
at org.apache.spark.io.SnappyOutputStreamWrapper.write(CompressionCodec.scala:190)
at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:218)
at org.apache.spark.util.collection.ChainedBuffer.read(ChainedBuffer.scala:56)
at org.apache.spark.util.collection.PartitionedSerializedPairBuffer$$anon$2.writeNext(PartitionedSerializedPairBuffer.scala:137)
at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:757)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
cat spark-env.sh |grep -i local
export SPARK_LOCAL_DIRS=/var/log/hadoop/spark
disk usage
df -h /var/log/hadoop/spark
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/meta 200G 1.1G 199G 1% /var/log/hadoop
inodes
df -i /var/log/hadoop/spark
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/meta 209711104 185 209710919 1% /var/log/hadoop
I also encountered the same issue. To resolve it, I first checked my HDFS disk usage by running hdfs dfsadmin -report.
The Non DFS Used column was above 250 GB. This implied that my logs, tmp, or intermediate data were consuming too much space.
After running du -lh | grep G from the root folder, I figured out that spark/work was consuming over 200 GB.
After looking at the folders inside spark/work, I realized that I had forgotten to comment out a System.out.println statement, and hence the logs were consuming a lot of space.
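A related aside for standalone deployments (not part of the original answer): the Spark worker can purge old application directories under spark/work on its own. The interval and TTL below are just the documented defaults:
# spark-env.sh
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=1800 -Dspark.worker.cleanup.appDataTtl=604800"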
If you're running YARN in yarn-cluster mode, then the local dirs used by both the Spark executors and the driver are taken from the YARN config (yarn.nodemanager.local-dirs); spark.local.dir and your env variable will be ignored.
If you're running YARN in yarn-client mode, then the executors will again use the local dirs from the YARN config, but the driver will use the one specified in your env variable, because in that mode the driver is not run on the YARN cluster.
So try setting that config.
You can find a bit more information in the documentation
And there's even a whole section on running spark on yarn
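For illustration, that YARN setting lives in yarn-site.xml on each NodeManager; the paths below are placeholders for volumes with enough free space:
<!-- yarn-site.xml (sketch) -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data1/yarn/local,/data2/yarn/local</value>
</property>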
Please check how many inodes are used by Hadoop. If they are all used up, the generic error is the same, "no space left on device", even though there is still free space.

How can I ensure that a "bind" filesystem is mounted after its parent filesystem?

I have recently added a couple of entries to my fstab to allow me to rebind some directories to elsewhere in my filesystem tree, like this
/mnt/smb/foo/bar /home/mishagale/sourcecode/bar bind defaults,bind 0 0
However, /mnt/smb/foo happens to be an SMB filesystem (on a Samba server), with a line earlier in fstab that looks like
//192.168.1.7/foo/ /mnt/smb/foo smbfs uid=1000,gid=1000,rw,auto,user,user=myuser,pass=mypass 0 0
(obviously, these lines have been anonymised)
The problem is, now I get an error at boot time: "The disc drive for /home/mishagale/sourcecode/bar is not ready yet or not present." If I skip mounting by hitting S, the system boots fine, but I then have to manually mount the offending mountpoint.
Is there a way I can instruct Ubuntu not to attempt to mount bar until foo has been successfully mounted? I believe this should be possible with Upstart, but I'm not certain how to go about that.
I could (and will for now) just put the noauto option on bar and set a script that mounts them to run later, but this seems like a kludge to me, and I'm interested in learning the "proper" way to do it with Upstart.
$ cat /etc/lsb-release ; uname -a
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=11.10
DISTRIB_CODENAME=oneiric
DISTRIB_DESCRIPTION="Ubuntu 11.10"
Linux myhostname 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 20:28:43 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
Ideally you would implement this with 'automount' so that if the directory isn't in use it would automatically be unmounted, but as soon as you 'cd' to it, it'd mount and stay active as long as it's being used.
Incidentally, if you can use NFS instead of SMB, I would strongly suggest it. SMB is really unfriendly and doesn't handle disconnects very well.
For more info on automount, peep the Automount FAQ.
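A minimal autofs sketch of that idea, assuming the autofs package is installed (the map file names below are illustrative, not from the answer):
# /etc/auto.master
/home/mishagale/sourcecode  /etc/auto.sourcecode  --timeout=60
# /etc/auto.sourcecode
bar  -fstype=bind  :/mnt/smb/foo/bar
With this in place, the bind mount is only attempted when something accesses the directory, by which point the SMB share should already be mounted.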
