Increase behat performance with drush driver - drupal

I am running Behat inside Vagrant on a Drupal installation.
When I use the drush driver, for example to authenticate an admin, the test runs extremely slowly (about 2 minutes).
My behat.yml is:
default:
  suites:
    default:
      contexts:
        - FeatureMinkContext
        - FeatureContext:
            - "/vagrant/images/behat"
            - 813
            - 1855
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MinkContext
        - Drupal\DrupalExtension\Context\MessageContext
        - Drupal\DrupalExtension\Context\DrushContext
  extensions:
    Behat\MinkExtension:
      selenium2: ~
      javascript_session: 'selenium2'
      browser_name: firefox
      base_url: http://solar.com/ # Replace with your site's URL
    Drupal\DrupalExtension:
      blackbox: ~
      region_map:
        search: ".form-search"
      api_driver: 'drush'
      drush:
        root: /vagrant/drupal
      selectors:
        message_selector: '.messages'
        error_message_selector: '.messages.messages-error'
        success_message_selector: '.messages.messages-status'
      drupal:
        # Replace with your real Drupal root.
        drupal_root: "/vagrant/drupal"
Test structure:
  #javascript #api
  Feature: Tests google maps and pois

    #maps
    Scenario: My tests
      Given I am logged in as a user with the "administrator" role
      ...

I haven't used drush, but the first thing you need to do is identify the bottleneck: maybe the server is slow, maybe you are using a method that is slow, or maybe you have some waits that are not right.
Something is definitely wrong if an admin authentication takes 2 minutes.
Run the scenario and debug until you narrow it down to the method with the issue.
Some other things you could do are:
never use blind waits, only conditional waits
if you have to fill large forms, try filling them via JavaScript; it will be very fast
try different browsers; in my case Chrome is slightly faster
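The "conditional waits" point means polling for a condition instead of sleeping for a fixed time. A minimal generic sketch (a hypothetical helper shown in Python; Mink/Behat contexts use the equivalent spin/wait pattern in PHP):

```python
import time

def wait_for(condition, timeout=10.0, interval=0.1):
    # Poll `condition` until it returns truthy or `timeout` seconds elapse.
    # Returns as soon as the condition holds instead of always sleeping
    # the full duration, which is what makes it faster than a blind wait.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f s" % timeout)

# Example: wait until a (simulated) page element appears.
appeared_at = time.monotonic() + 0.3
assert wait_for(lambda: time.monotonic() >= appeared_at, timeout=2.0)
```

In a Behat context the condition would be a check against the page (element present, AJAX counter zero, and so on) rather than a timer.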

Is it a local site setup? If yes, then Behat scripts for Drupal do run a bit slower locally than against the actual site. I have faced this in almost every project. For me, the first step, admin authentication, takes up to 5 minutes at times. Increasing the RAM size reduces this problem to some extent.


Symfony logging with Monolog, confused about STDERR

I am trying to align my logging with the best practice of using STDERR.
So, I would like to understand what happens with the logs sent to STDERR.
Symfony official docs (https://symfony.com/doc/current/logging.html):
In the prod environment, logs are written to STDERR PHP stream, which
works best in modern containerized applications deployed to servers
without disk write permissions.
If you prefer to store production logs in a file, set the path of your
log handler(s) to the path of the file to use (e.g. var/log/prod.log).
This time I want to follow the STDERR stream option.
When I was writing to a specific file, I knew exactly where to look for that file, open it and check the logged messages.
But with STDERR, I don't know where to look for my logs.
So, using monolog, I have the configuration:
monolog:
  handlers:
    main:
      type: fingers_crossed
      action_level: error
      handler: nested
      excluded_http_codes: [404, 405]
    nested:
      type: stream
      path: "php://stderr"
      level: debug
Suppose next morning I want to check the logs. Where would I look?
Several hours of reading docs later, my understanding is as follows:
First, STDERR is preferred over STDOUT for errors because it is unbuffered (STDOUT gathers output and may hold it until the script ends), so errors reach the STDERR stream immediately. It also keeps the normal output from getting mixed with errors.
Secondly, the immediate intuitive usage is when running a shell script, because in the Terminal one will directly see the STDOUT and STDERR messages (by default, both streams output to the screen).
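That separation can be seen with a tiny experiment (a sketch in Python rather than PHP, but the stream semantics are identical):

```python
import subprocess
import sys

# Run a child process that writes to both streams, capturing each separately.
proc = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('normal output'); print('error output', file=sys.stderr)"],
    capture_output=True, text=True,
)
print("stdout:", proc.stdout.strip())  # only the regular output
print("stderr:", proc.stderr.strip())  # only the error output
```

Redirecting just one of the streams (e.g. `2>err.log` in a shell) is exactly what lets a web server or container runtime collect error logs separately from normal output.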
But then, the non-intuitive usage of STDERR is when logging a website/API. We want to log errors, and we want to be able to come back later and check the errors that have occurred. Traditional practice stores errors in custom-defined log files; more modern practice recommends sending them to STDERR. Regarding Symfony, Fabien Potencier (the creator of Symfony) says:
in production, stderr is a better option, especially when using
Docker, SymfonyCloud, lambdas, ... So, I now recommend to use
php://stderr
(https://symfony.com/blog/logging-in-symfony-and-the-cloud).
And he further recommends using STDERR even for development.
Now, what I believe is missing from the picture (at least for me, as a non-expert) is guidance on HOW to access and check the error logs. Okay, we send the errors to STDERR; then what? Where am I going to check the errors next morning? I get that containerized platforms (clouds, Docker etc.) have specific tools to easily and nicely monitor logs (tools that intercept STDERR and parse the messages in order to organize them in specific files/DBs), but that's not the case on a simple server, be it a local server or a hosting.
Therefore, my understanding is that sending errors to STDERR is a good standardization when:
resorting to a third-party tool for log monitoring (like ELK, Grail, Sentry, Rollbar etc.)
knowing exactly where your web server stores the STDERR logs. For instance, if you try (I defined a new STD_ERR constant to avoid any pre-configs):
define('STD_ERR', fopen('php://stderr', 'wb'));
fputs(STD_ERR, "ABC error message.");
you can find the "ABC error message" at:
XAMPP Apache default (Windows):
..\xampp\apache\logs\error.log
Symfony5 server (Windows):
C:\Users\your_user\.symfony5\log\ [in the most recent folder, as the logs rotate]
Symfony server (Linux):
/home/your_user/.symfony/log/ [in the most recent folder, as the logs rotate]
For the Symfony server, you can actually see the log paths when starting the server, or via the command "symfony server:log".
One immediate advantage is that these STDERR logs are stored outside the app folders, so you do not need to maintain extra writable folders or deal with permissions. Of course, when developing/hosting multiple sites/apps, you need to configure the error log (the STDERR storage) location per app (in Apache that would be inside each <VirtualHost> conf; with the Symfony server, I am not sure). Personally, without a third-party tool for monitoring logs, I would stick with custom-defined log files (no STDERR), but feel free to contradict me.
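For the Apache case, the per-app error-log location is set inside each vhost. A sketch (server names and paths are placeholders):

```
<VirtualHost *:80>
    ServerName solar.example
    DocumentRoot "/var/www/solar/public"
    # With mod_php, writes to php://stderr end up in this file:
    ErrorLog "/var/log/apache2/solar-error.log"
</VirtualHost>
```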

Is there a limit for "max number of OUTPUT plugin to be used in fluentbit 1.6.10"

I have been using fluentbit 1.6.10 for quite some time now and kept adding splunk OUTPUT plugins one after another.
Now it seems I have crossed a limit on the number of OUTPUT plugins, because adding any new splunk plugin causes inconsistent execution of Fluent Bit.
Current:
Total number of OUTPUT plugins used: 37,
Name: kafka (4),
Name: null (1),
Name: splunk (32)
But now, whenever a new splunk plugin is added, the Fluent Bit execution behaviour is inconsistent:
Total number of OUTPUT plugins used: 38,
Name: kafka (4),
Name: null (1),
Name: splunk (33) (this seems to be creating the problem)
After adding the 33rd splunk OUTPUT plugin, one of the existing splunk output plugins stops working.
But after manually editing the config at runtime (say, adding a simple stdout output) and restarting the pod, all splunk plugins start working again (even though this time the stdout plugin doesn't work).
I am working on pinning down the exact behaviour and will post steps to reproduce with the config, but the same configuration works fine in version 1.7.1.
(I am aware that using these many plugins is not a good design and it will be addressed.)
Is there a known limit for the number of plugins that can be used in fluentbit 1.6.10?
There is no defined upper limit in Fluent Bit for the number of OUTPUT plugins, but every plugin has its own resource-utilization configuration, so the practical limit depends on resource consumption.
Resources can be managed through proper tagging and by maintaining caches.
In short, you need to test your configs against the resource limits of your environment to find the limit that applies there.
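For context, the per-plugin resource knobs live in each config section. A hedged sketch in Fluent Bit's classic config syntax (host and token are placeholders; check option availability against the 1.6 docs before relying on any of them):

```
[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    Tag           app.logs
    Mem_Buf_Limit 5MB

[OUTPUT]
    Name         splunk
    Match        app.logs
    Host         splunk.example.com
    Port         8088
    Splunk_Token ${SPLUNK_TOKEN}
    Retry_Limit  3
```

Narrow `Match` patterns (tagging) keep each output from processing records it will never deliver, which is one of the main levers when running many outputs.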

salt stack if then else for service

I have a windows service which I deploy from salt stack. The state looks like this
create_executable:
  module.run:
    - name: service.create
    - m_name: SVC1
    - bin_path: d:\svc1.exe
    - start_type: auto
Now, when I redeploy the service, it says "Exception: Service Already Exists: SVC1". Is there a way to test if the service is running? If the service is running, I would like to stop it, disable it, delete it and then recreate it.
Maybe module.wait together with a cmd.run that uses the unless feature might help in this case.
You need a way to check that the service is not installed and use that in the unless test. The command itself might be something like echo; it does not really matter. You can then use watch or watch_in to make module.wait depend on the cmd.run.
Untested draft (note that I don't know the Windows CLI at all):
check_if_service_installed:
  cmd.run:
    - name: echo 'not installed'
    - unless: 'if exists c:\svc1.exe'

create_executable:
  module.wait:
    - name: service.create
    - m_name: SVC1
    - bin_path: d:\svc1.exe
    - start_type: auto
    - watch:
      - cmd: check_if_service_installed
After writing this, I think it looks a little complicated for what you are trying to achieve. Maybe a small batch or PowerShell script that takes care of being safely re-executable is more straightforward. But this is a matter of taste, IMO.
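One way to keep the "stop, delete, recreate" flow inside Salt is to guard the create with the Windows sc tool. An untested sketch (service name and path are from the question; note that sc stop is asynchronous, so a real run may need a wait or retry before the delete succeeds):

```yaml
remove_existing_service:
  cmd.run:
    - name: 'sc stop SVC1 & sc delete SVC1'
    - onlyif: 'sc query SVC1'

create_executable:
  module.run:
    - name: service.create
    - m_name: SVC1
    - bin_path: d:\svc1.exe
    - start_type: auto
    - require:
      - cmd: remove_existing_service
```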

Monitoring SaltStack

Is there anything out there to monitor SaltStack installations besides halite? I have it installed but it's not really what we are looking for.
It would be nice if we could have a web gui or even a daily email that showed the status of all the minions. I'm pretty handy with scripting but I don't know what to script.
Anybody have any ideas?
In case by monitoring you mean operating salt, you can try one of the following:
SaltStack Enterprise GUI
Foreman
SaltPad
Molten
Halite (DEPRECATED by SaltStack)
These GUIs will let you do more than just know whether or not minions are alive: they allow you to operate on them in the same manner you would with the salt client.
And in case by monitoring you mean just checking whether the salt master and salt minions are up and running, you can use a general-purpose monitoring solution like:
Icinga
Naemon
Nagios
Shinken
Sensu
In fact, these tools can monitor any services on the hosts they know about. A host can be any machine that has an IP address (a server, a router, a printer, ...), and a service can be any resource that can be queried via the underlying OS (memory, disk, a process, ...).
Not an absolute answer, but we're developing SaltPad, a replacement for and improvement on Halite. One of its features is displaying the status of all your minions. You can give it a try: SaltPad project page on GitHub.
You might look into Consul. While it isn't specifically for SaltStack, I use it to monitor that salt-master and salt-minion are running on the hosts they should be.
Another simple test would be to run something like:
salt --output=json '*' test.ping
Then compare between different runs. It's not amazing monitoring, but it at least shows that your minions are up and communicating with your master.
Another option might be to use the salt.runners.manage functions, which comes with a status function.
In order to print the status of all known salt minions you can run this on your salt master:
salt-run manage.status
salt-run manage.status tgt="webservers" expr_form="nodegroup"
I had to write my own. To my knowledge, there is nothing out there which will do this, and halite didn't work for what I needed.
If you know Python, it's fairly easy to write an application to monitor salt. For example, my app had a thread that refreshed the list of hosts from the salt keys from time to time, and a few threads that ran various commands against that list to verify the hosts were up. The monitor threads updated a dictionary with a timestamp and success/fail for each host after each run. It had a hacked-together HTML display, color-coded to reflect the status of each node. Took me about half a day to write.
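As a rough illustration of that approach, here is a minimal Python skeleton (hypothetical, not the author's app; it assumes `salt-key --out=txt` prints a line like `minions: web1, web2`, and the ping check requires a working salt master):

```python
import subprocess
import time

def parse_minion_list(salt_key_txt):
    # Extract accepted minion ids from `salt-key --out=txt` output
    # (assumed format: "minions: web1, web2, db1").
    for line in salt_key_txt.splitlines():
        if line.startswith("minions:"):
            return [m.strip() for m in line.split(":", 1)[1].split(",") if m.strip()]
    return []

def minion_is_up(minion):
    # Treat a non-zero exit status as "down"; requires a salt master.
    return subprocess.call(["salt", minion, "test.ping"]) == 0

def monitor_once():
    # One refresh pass: read the key list, ping each minion,
    # and record a timestamped up/down entry per host.
    keys = subprocess.run(["salt-key", "--out=txt"],
                          capture_output=True, text=True).stdout
    return {m: {"up": minion_is_up(m), "checked": time.time()}
            for m in parse_minion_list(keys)}
```

The real app would run `monitor_once` in background threads and render the resulting dictionary as the color-coded status page.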
If you don't want to use Python, you could, painfully, do something similar with this inefficient, quick, untested hack using command-line tools in bash:
minion_list=$(salt-key --out=txt | grep '^minions_pre:.*' | tr ',' ' ')
for minion in ${minion_list}; do
  salt "${minion}" test.ping
  if [ $? -ne 0 ]; then
    echo "${minion} is down."
  fi
done
It would be easy enough to modify it to write to a file or send an alert.
Halite was deprecated in favour of the paid UI version. Sad, but true; SaltStack still does the job. I'd guess your best monitoring will be the one you write yourself. Happily, there's the salt-api project (which I believe was part of Halite, though I'm not sure about this). I'd recommend using it with Tornado, as it's better than the CherryPy version.
So if you want a nice interface, you might want to work with the API once you set it up. When setting up Tornado, make sure you're OK with authentication (I had some trouble here); here's how you can check it:
Using Postman/Curl/whatever:
check if api is alive:
- no post data (just see if api is alive)
- get request http://masterip:8000/
login (you'll need to take token returned from here to do most operations):
- post to http://masterip:8000/login
- (x-www-form-urlencoded data in postman), raw:
username:yourUsername
password:yourPassword
eauth:pam
I'm using pam, so I have a user with yourUsername and yourPassword added on my master server (as a regular user; that's how PAM works)
get minions: http://masterip:8000/minions (you'll need to post the token from the login operation)
get all jobs: http://masterip:8000/jobs (you'll need to post the token from the login operation)
So basically, if you want to do anything with SaltStack monitoring, just play with that salt-api and get what you want. SaltStack has output formatters, so you can get all the data even as JSON (if your frontend is JavaScript-based). It lets you run commands or whatever you want, and the monitoring is left to you (unless you switch from the community to the pro version), or unless you want to use the mentioned SaltPad (which, sorry guys, was last updated a year ago according to the repo).
btw, you might need to change that 8000 port to something else depending on your version of saltstack/tornado/your config.
Basically, if you want an output where you can check the status of all the minions, you can run a command like:
salt '*' test.ping
salt --output=json '*' test.ping # To get the output in JSON format
salt-run manage.up # Lists all minions that are up
Or, if you want to visualize the same with a dashboard, look at some of the available options like Foreman, SaltPad, etc.

Optimize APC Caching

here is a link to how my APC is running : [removed]
As you can see, it fills up pretty quickly and my Cache Full Count sometimes goes over 1000.
My website uses WordPress.
I notice that every time I make a new post or edit a post, two things happen:
1) APC Memory "USED" resets
2) I get a whole lot of fragments
I've tried giving more memory to APC (512 MB), but then it sometimes crashes; 384 MB seems best. I also have a cron job that restarts Apache every 4 hours, clearing APC of fragments and used memory. Again, my Apache crashes if APC runs for a long period of time, I think due to the fragment buildup.
Should I use apc.filters to filter out some stuff that should not be cached?
I am a real beginner at this sort of stuff, so if someone can explain with full instructions, thank you very much!
I work as a Linux systems admin; my WordPress server runs 5 different WordPress installs. If you are running just one, see the comments for the configurations to consider.
APC / PHP Versions, 3.1.9 / 5.3.7
Here is my complete apc.conf,
apc.enabled=1
apc.shm_segments=1
; I would try 32M per WP install, go from there
apc.shm_size=128M
; Relative to approx cached PHP files,
apc.num_files_hint=512
; Relative to approx WP size W/ APC Object Cache Backend,
apc.user_entries_hint=4096
apc.ttl=7200
apc.use_request_time=1
apc.user_ttl=7200
apc.gc_ttl=3600
apc.cache_by_default=1
apc.filters=
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.file_update_protection=2
apc.enable_cli=0
apc.max_file_size=2M
;This should be used when you are finished with PHP file changes.
;As you must clear the APC cache to recompile already cached files.
;If you are still developing, set this to 1.
apc.stat=0
apc.stat_ctime=0
apc.canonicalize=1
apc.write_lock=1
apc.report_autofilter=0
apc.rfc1867=0
apc.rfc1867_prefix =upload_
apc.rfc1867_name=APC_UPLOAD_PROGRESS
apc.rfc1867_freq=0
apc.rfc1867_ttl=3600
;This MUST be 0, WP can have errors otherwise!
apc.include_once_override=0
apc.lazy_classes=0
apc.lazy_functions=0
apc.coredump_unmap=0
apc.file_md5=0
apc.preload_path=
#Chris_O, your configuration is not optimal in a few aspects.
1. apc.shm_segments=3
If you run a modern Linux distro, your SHM should be sufficiently large.
If it is too small, look up how to set the sysctl.conf entries. You can check it like this:
#Check Max Segment size
cat /proc/sys/kernel/shmmax
Exceptions are certain BSDs, other Unixes, or managed hosts you don't control. There are disadvantages to not having a contiguous segment; read the APC details for that info.
2. apc.enable_cli=1
BAD BAD BAD, this is for debugging only! Every time you run php-cli, it clears the APC cache.
3. apc.max_file_size=10M
Unnecessary and excessive! If you had a file that big, it would eat a third of that small 32M SHM. Even though you specify 3 segments, they don't act like one big segment in three pieces. Regardless, WP doesn't have any single PHP file even close to that size.
'hope I helped people with their apc.conf.
The APC TTL should take care of fragment build-up. I usually set it to 7200. I am running it on a small VPS with WordPress, and my settings are:
apc.enabled=1
apc.shm_segments=3
apc.shm_size=32
apc.ttl=7200
apc.user_ttl=7200
apc.num_files_hint=2048
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.enable_cli=1
apc.max_file_size=10M
You will also get a lot more benefit from it by using WordPress's built-in object cache; Mark Jaquith wrote a really good drop-in plugin that should also help with some of your fragmentation issues when saving or editing a post.
You really should set apc.stat=0 on your production server; it will prevent APC from hitting the IO to check whether a file has changed.
Check out documentation first: http://php.net/manual/en/apc.configuration.php
