Change maximum number of sessions for tmux

Is there a limit on the number of concurrent sessions created by tmux?
Is this limit configurable or hard coded?
I already looked at this but could not find what I am looking for:
http://manpages.ubuntu.com/manpages/eoan/en/man1/tmux.1.html#options
Here is the error I get when I try to create many tmux sessions:
create session failed: : Too many open files
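For what it's worth, that error usually points at the per-process open-file limit of the user who starts the tmux server (each pane holds a pty plus a few descriptors) rather than a tmux-specific session cap. A quick way to inspect and raise it, with an illustrative value:
ulimit -n        # current soft limit on open files
ulimit -Hn       # hard limit
ulimit -n 4096   # raise the soft limit, then start the tmux server from this shell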

Is it ever possible to reduce pg_num for specific pool

Unfortunately, I found that the Ceph CLI does not allow decreasing the value of pg_num for a specific pool.
ceph osd pool set .rgw.root pg_num 32
The error shown is:
Error EEXIST: specified pg_num 32 <= current 128
The placement-groups tutorial explains what pg_num is and how to set the best value for it, but there is hardly any documentation on how to reduce pg_num without re-installing Ceph or deleting the pool first, apart from ceph-reduce-the-pg-number-on-a-pool.
The existing SO thread ceph-too-many-pgs-per-osd shows how to decide on the best value. But if I've already run into the issue, how can I recover from the mess?
If reducing pg_num is not easy, what is the story behind that? Why doesn't Ceph expose an interface to reduce it?
Nautilus allows pg_num changes without this restriction (and adds the pg_autoscaler).
If you want to increase or reduce pg_num/pgp_num values without having to create, copy, and rename pools (as suggested in your link), the best option is to upgrade to Nautilus.
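A minimal sketch of what that looks like on Nautilus or later, using the pool name from the question (the autoscaler commands are standard Ceph CLI, but double-check against your release's docs):
ceph osd pool set .rgw.root pg_num 32              # decreasing pg_num is accepted on Nautilus+
ceph mgr module enable pg_autoscaler               # optional: let Ceph manage pg_num itself
ceph osd pool set .rgw.root pg_autoscale_mode on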

Why isn't Carbon writing Whisper data points as per updated storage-schema retention?

My original carbon storage-schema config was set to 10s:1w, 60s:1y and was working fine for months. I've recently updated it to 1s:7d, 10s:30d, 60s:1y. I've resized all my whisper files to reflect the new retention schema using the following bit of bash:
collectd_dir="/opt/graphite/storage/whisper/collectd/"
retention="1s:7d 1m:30d 15m:1y"
# resize every whisper file in place; $retention is left unquoted so each
# retention definition is passed as its own argument to whisper-resize.py
find "$collectd_dir" -type f -name '*.wsp' | parallel whisper-resize.py --nobackup {} $retention
I've confirmed that they've been updated using whisper-info.py with the correct retention and data points. I've also confirmed that the storage-schema is valid using a storage-schema validation script.
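For anyone spot-checking a single file after the resize (the path below is only an example), whisper-info.py prints each archive's retention and point count:
whisper-info.py /opt/graphite/storage/whisper/collectd/somehost/load/load.wsp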
The carbon-cache{1..8}, carbon-relay, carbon-aggregator, and collectd services were stopped before the whisper resizing, then started again once the resizing was complete.
However, when checking a Grafana dashboard, I'm seeing empty graphs on the collectd plugin charts (the data points are correct, per second, but there is no data), while the graphs that do provide data show points every 10s (the old retention) instead of every 1s.
The /var/log/carbon/console.log is looking good, and the collectd whisper files all have carbon user access, so no permission denied issues when writing.
When running an ngrep on port 2003 on the graphite host, I'm seeing connections to the relay, along with metrics being sent. Those metrics are then getting relayed to a pool of 8 caches to their pickle port.
Has anyone else experienced similar issues, or can possibly help me diagnose the issue further? Have I missed something here?
So it took me a little while to figure this out. It had nothing to do with the local_settings.py file, as some of the older answers suggested; it came down to the Interval setting in collectd.conf.
A lot of the older answers mentioned that you needed to include 'Interval 1' inside each Plugin block. That would have been nice for per-metric control, but it produced config errors in my logs and broke those metrics. Setting 'Interval 1' at the top level of the config resolved my issues.
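For reference, a minimal sketch of that top-level setting in collectd.conf; the plugin list and Graphite host below are illustrative, not taken from my actual config:
# collectd.conf
# global interval in seconds; applies to every plugin unless overridden
Interval 1

LoadPlugin cpu
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "graphite">
    # illustrative Graphite host and line-protocol port
    Host "graphite.example.com"
    Port "2003"
    Protocol "tcp"
  </Node>
</Plugin>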

Symfony2 - Random Failed to start the session

We have a Symfony2 site with some traffic.
Every day the site starts failing with this error for 1 or 2 minutes (15-20 errors). It occurs at random hours and we could not find a pattern; it does not even line up with peak hours.
2015-10-09 02:23:57.635 [2015-10-09 06:23:38] request.CRITICAL: Uncaught PHP Exception RuntimeException: "Failed to start the session" at /var/www/thing.com/httpdocs/app/cache/prod/classes.php line 121 {"exception":"[object] (RuntimeException(code: 0): Failed to start the session at /var/www/thing.com/httpdocs/app/cache/prod/classes.php:121)"} []
It doesn't seem to be a double-header or double session-start problem.
The site does not interact with any legacy PHP code that could be messing with the sessions.
Sessions are stored in the database, so a file-related problem is ruled out.
We lowered the session duration so the session table does not get too big, but the problem persists.
We think it could be a problem with HWIOAuthBundle and its Facebook login, but cannot find where the conflict is.
The site also uses a lot of render_esi for caching with Symfony2's internal cache system.
Update -------------------------------------------------
Emptied the /var/lib/php/sessions folder of old session files that were not being used.
Lowered the session lifespan. SQL entries in the sessions table went from ~3 million to ~1.3 million.
The problem seems to be gone, but this is not a real solution.
My guess is that the PDO session handler in Symfony2 has a performance problem.
Maybe someone with more knowledge in this matter (pdo_handler, table optimization) can point to a real solution for high traffic.
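For reference, pruning expired rows by hand looks roughly like this, assuming MySQL and the default Symfony2 PdoSessionHandler schema (a sessions table with sess_time holding the Unix timestamp of the last write) and a 24-hour lifetime; these names and the cutoff are assumptions, so adjust them to your schema:
-- assumes MySQL and the default Symfony2 PdoSessionHandler table/columns
DELETE FROM sessions WHERE sess_time < (UNIX_TIMESTAMP() - 86400);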
Where does your PHP installation save sessions to?
[You can find this in your php.ini file in the session.save_path setting, assuming you have CLI access]
It is very likely PHP uses your server's /tmp folder. If this folder is full at any point, then PHP can't create new sessions.
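If you have shell access, a quick way to check the active value (assuming the CLI reads the same php.ini as the web SAPI, which is not always the case):
php -i | grep -i "session.save_path"   # an empty value means PHP falls back to the system temp dir, usually /tmp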
You can see the current size of your /tmp folder with:
du -ch /tmp/ |grep total
If, as is common, the /tmp folder is on its own partition, you can see its maximum size with:
df -h
Some programs can suddenly consume gigabytes of this folder for their own purposes.

Session error in Teradata Fastload Script

My FastLoad script is scheduled to run every week, and every week it fails on startup because of an insufficient number of sessions. But when I restart the script manually, it executes with no session error.
I don't know what causes it to fail every week for the same reason of insufficient sessions. Can anyone let me know what the possible causes might be?
Check for:
1. The scheduled job's connection string, if it points to a single Teradata node (IP) address. Depending on the number of concurrent sessions, you can exceed the PE session limit (120 sessions). Try using DNS/VIP to achieve better load balancing.
2. The number of utilities running on the system at the scheduled time. If you exceed the threshold limit, use SLEEP and TENACITY to place your job in a queue instead of letting it fail (see the sketch after this list).
3. Limit the number of FastLoad sessions using SESSIONS.
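A minimal sketch of how those commands sit at the top of a FastLoad script; the TDP name, credentials, and values are illustrative:
SESSIONS 4;     /* cap the number of FastLoad sessions */
TENACITY 4;     /* keep retrying the logon for up to 4 hours */
SLEEP 6;        /* wait 6 minutes between logon attempts */
LOGON mytdp/myuser,mypassword;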
Thanks!!

Deleted/Empty Graphite Whisper Files Automatically Re-Generating

I am trying to delete some old Graphite test whisper metrics, without any success. I can delete the metrics by removing the files (see: How to cleanup the graphite whisper's data?). But within a few seconds of blowing away the files, they regenerate (they are empty of metrics and stay that way, since nothing is creating new metrics in those files). I've tried stopping carbon (carbon-cache.py stop) before deleting the files, but when I restart carbon (carbon-cache.py --debug start &) they just come back.
How do I permanently delete these files/metrics so they never come back?
By default, Statsd will continue to send 0 for counters it hasn't received in the previous flush period. This causes carbon to recreate the file.
Let's say we want to delete a counter called 'bad_metrics.sent' from Statsd. You can use the Statsd admin interface, which runs on port 8126 by default:
$ telnet <server-ip> 8126
Trying <server-ip>...
Connected to <server-name>.
Escape character is '^]'.
Use 'help' to get a list of commands:
help
Commands: stats, counters, timers, gauges, delcounters, deltimers, delgauges, quit
You can use 'counters' to see a list of all counters:
counters
{ 'statsd.bad_lines_seen': 0,
'statsd.packets_received': 0,
'bad_metrics.sent': 0 }
END
It's the 'delcounters', 'deltimers', and 'delgauges' commands that remove metrics from Statsd:
delcounters bad_metrics.sent
deleted: bad_metrics.sent
END
After removing the metric from Statsd, you can remove the whisper file associated with it. In this example case, that would be:
/opt/graphite/storage/whisper/bad_metrics/sent.wsp
or (in Ubuntu):
/var/lib/graphite/whisper/bad_metrics/sent.wsp
Are you running statsd or something similar?
I had the same issue and it was because statsd was flushing the counters it had in memory after I deleted the whisper files. I recycled statsd and the files stay deleted now.
Hope this helps
The newest StatsD version has an option to stop sending zeroes after each flush and only send what it actually receives. If you turn that on, the whisper files shouldn't get recreated: https://github.com/etsy/statsd/blob/master/exampleConfig.js#L39
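From memory, the relevant settings in the StatsD config file look something like this (flag names taken from the etsy/statsd example config linked above; verify against your version):
{
  deleteIdleStats: true,   // don't flush zeroes for stats that received no data
  deleteCounters: true     // same behaviour, specifically for counters
}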
We aren't running statsd, but we do run carbon-aggregator which serves a similar purpose. Restarting it solved a similar problem.
