Attempts to use an SQLite file database in the app/cache/tests directory fail. This was determined by clearing the dev MySQL database and populating the test-environment database via the console; an external SQLite manager confirmed that the test database was populated. (Tests run without the SQLite configuration pass.)
codeception.yml:
actor: Tester
paths:
    tests: tests
    log: tests/_output
    data: tests/_data
    support: tests/_support
    envs: tests/_envs
settings:
    bootstrap: _bootstrap.php
    colors: false
    memory_limit: 1024M
extensions:
    enabled:
        - Codeception\Extension\RunFailed
modules:
    config:
        - Db:
            dsn: 'sqlite:./app/cache/test/test.sqlite'
            user: ''
            password: ''
            dump: tests/_data/test.sql
            populate: true
            cleanup: false
            reconnect: true
Edit:
In an Ubuntu VM, adding the Symfony2 module configuration to acceptance.yml gives partial success: the test uses the SQLite database, but it is not repopulated from the specified dump (test.sql). On Windows, adding the Symfony2 configuration makes no difference.
acceptance.yml:
class_name: AcceptanceTester
modules:
    enabled:
        - PhpBrowser:
            url: http://vol
        - \Helper\Acceptance
        # adding these lines enables Ubuntu to use the sqlite db
        - Symfony2:
            app_path: ./app
            environment: test
You haven't enabled the Db module in acceptance.yml:
modules:
    enabled:
        - Db
        - PhpBrowser:
            url: http://vol
        - \Helper\Acceptance
Also, don't enable PhpBrowser and Symfony2 at the same time; only one of them can be used.
I am getting an "Invalid directory for database creation." error when I start serverless-dynamodb-local. Below is my local configuration:
dynamodb:
  stages:
    - test
  start:
    port: 8000
    sharedDb: true
    dbPath: dynamodb-local
Is there anything I am missing?
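One likely cause, an assumption based on the wording of the error, is that dynamodb-local does not create the dbPath directory itself, so dynamodb-local must already exist relative to the service root. A minimal pre-flight sketch:

```python
import os

# dynamodb-local appears to require the dbPath directory to already
# exist (an assumption based on the error message); create it first.
os.makedirs("dynamodb-local", exist_ok=True)
print(os.path.isdir("dynamodb-local"))  # → True
```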
I'm trying to clear the Redis cache via the command line: bin/console cache:pool:clear cache.my_redis
1/ I created a custom provider as described in the docs: https://symfony.com/doc/4.4/cache.html#custom-provider-options
app.my_custom_redis_provider:
    class: \Redis
    factory: ['Symfony\Component\Cache\Adapter\RedisAdapter', 'createConnection']
    arguments:
        - 'redis://redis:6379'
2/ I configured my framework cache key:
framework:
    cache:
        app: cache.adapter.redis
        default_redis_provider: app.my_custom_redis_provider
        pools:
            cache.my_redis:
                adapter: cache.adapter.redis
                provider: app.my_custom_redis_provider
After executing bin/console cache:pool:clear cache.my_redis, the cache is not really cleared.
I get: Clearing cache pool: cache.my_redis => Cache was successfully cleared. But when I check my Redis storage, nothing is cleared, and it still shows these keys:
PHPREDIS_SESSION_o5lpfttunv4r6sfsiktr046s1m
sf_sfmagu8rpbsmm3oplmhnuhl9slp
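One thing the key names hint at: PHPREDIS_SESSION_* and sf_* look like session-handler keys, not cache-pool items, and cache:pool:clear only removes keys inside the pool's own namespace. A toy Python model of that scoping (the prefixes are illustrative, not Symfony's actual key layout):

```python
# Toy model: a cache pool owns a key prefix, and clearing the pool
# deletes only keys under that prefix. Session keys live outside it.
store = {
    "my_redis:item.a": "cached value",
    "my_redis:item.b": "cached value",
    "PHPREDIS_SESSION_o5lpf": "session payload",  # session handler key
}

def clear_pool(store, prefix):
    """Delete every key that starts with the pool's namespace prefix."""
    for key in [k for k in store if k.startswith(prefix)]:
        del store[key]

clear_pool(store, "my_redis:")
print(sorted(store))  # → ['PHPREDIS_SESSION_o5lpf']
```

If the goal is to wipe sessions too, that has to go through the session handler (or redis-cli directly), not through the cache pool.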
I am trying to build an .sls file which will always restart a service:
systemd-resolved:
  service.running:
    - restart: True
When deployed, this gives
ID: systemd-resolved
Function: service.running
Result: True
Comment: The service systemd-resolved is already running
Started: 23:46:49.999789
Duration: 53.068 ms
Changes:
This is correct; the service is already running. But what I was trying to express with this state is a restart. How do I do that?
Note: I would like to avoid, if possible, running an explicit command (it does not feel very Salt-like; this should rather be handled by the appropriate module):
'systemctl restart systemd-resolved':
  cmd.run
If you want your service to reload, you need to set reload: True instead.
Besides, if you only want to restart the service when some other state changes, use watch instead.
For instance:
systemd-resolved:
  service.running:
    - enable: True
    - reload: True
    - watch:
      - pkg: <abc>
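If the intent really is an unconditional restart on every highstate run, there is no service.running flag for that; one option (a sketch using the legacy module.run argument style, so behavior may vary by Salt version) is to call the execution module directly:

```yaml
restart-systemd-resolved:
  module.run:
    - name: service.restart
    - m_name: systemd-resolved  # m_name because 'name' is taken by module.run itself
```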
I'm trying to understand what's wrong with my config such that I must specify saltenv=base when running sudo salt '*' state.highstate saltenv=base. If I run the highstate without specifying the saltenv, I get the error message:
No Top file or master_tops data matches found.
Running salt-call cp.get_file_str salt://top.sls on the minion or master pulls back the right top.sls file. Here's a snippet of my top.sls:
base:
  # All computers including clients and servers
  '*':
    - states.schedule_highstate
  # any windows machine server or client
  'os:Windows':
    - match: grain
    - states.chocolatey
Also, I can run any state that lives in the same directory or a subdirectory as top.sls without specifying saltenv=, with sudo salt '*' state.apply states.(somestate).
I do have base specified in /etc/salt/master like this:
file_roots:
  base:
    - /srv/saltstack/salt/base
There is nothing in the filesystem on the Salt master; all of the salt and pillar files come from GitFS. Specifying saltenv= does pull from the corresponding git branch, with the master branch responding to saltenv=base or to no saltenv at all when doing state.apply (that works).
gitfs_remotes:
  - https://git.asminternational.org/SaltStack/salt.git:
    - user: someuser
    - password: somepassword
    - ssl_verify: False
...
ext_pillar:
  - git:
    - master https://git.asminternational.org/SaltStack/pillar.git:
      - name: base
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: base
    - dev https://git.asminternational.org/SaltStack/pillar.git:
      - name: dev
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: dev
    - test https://git.asminternational.org/SaltStack/pillar.git:
      - name: test
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: test
    - prod https://git.asminternational.org/SaltStack/pillar.git:
      - name: prod
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: prod
    - experimental https://git.asminternational.org/SaltStack/pillar.git:
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: experimental
The behavior is so inconsistent: the top.sls can't be found unless saltenv is specified, yet running individual states works fine without saltenv=. Any ideas?
After more debugging I found the answer: one of the other environments' top.sls files was malformed and caused an error. When saltenv=base is specified, none of the other top files are evaluated, which is why that worked. After I verified ALL of the top.sls files from all the environments, things behaved as expected.
Note to self: verify all the top files, not just the one you are working on.
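Since one bad top file can poison evaluation across environments, it can pay to lint every top.sls up front. A small sketch using PyYAML (the paths are assumptions; Salt's default renderer is YAML-based, so a plain YAML parse catches this class of error):

```python
import glob

import yaml  # PyYAML

def check_top(text):
    """Return None if the YAML parses cleanly, else the parser error."""
    try:
        yaml.safe_load(text)
        return None
    except yaml.YAMLError as exc:
        return str(exc)

# Paths are an assumption; point this at each environment's top.sls.
for path in glob.glob("/srv/saltstack/salt/*/top.sls"):
    with open(path) as fh:
        error = check_top(fh.read())
        if error:
            print(f"{path}: {error}")
```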
I have the following states:
copy_over_systemd_service_files:
  file.managed:
    - name: /etc/systemd/system/consul-template.service
    - source: salt://mesos/files/consul-template.service
    - owner: consul

start_up_consul-template_service:
  service.running:
    - name: consul-template
    - enable: True
    - restart: True
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - /etc/systemd/system/consul-template.service
When I run my state file I get the following error:
ID: start_up_consul-template_service
Function: service.running
Name: consul-template
Result: False
Comment: Service consul-template is already enabled, and is dead
Started: 17:27:38.346659
Duration: 2835.888 ms
Changes:
I'm not sure what this means. All I want to do is restart the service once it has been copied over, and I've done this before without issue. Looking back through the stack trace just shows that Salt ran systemctl is-enabled consul-template.
I think I was overcomplicating things. Instead I'm doing this (note that the watch requisite needs the state-type prefix, here file:):
consul-template:
  service.running:
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - file: /etc/systemd/system/consul-template.service