My gcloud firebase test android run command is stuck uploading the app-debug-androidTest.apk. What does the output of this command look like once it gets past the point where it hangs for me?
FirebaseTestLabPlayground[master]15:40:36 gcloud firebase test android run \
> --project locuslabs-android-sdk \
> --app app/build/outputs/apk/debug/app-debug.apk \
> --test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
> --device model=Pixel2,version=27,locale=en_US,orientation=portrait \
> --verbosity debug
INFO: Test Service endpoint: [None]
INFO: Tool Results endpoint: [None]
DEBUG: Running [gcloud.firebase.test.android.run] with arguments: [--app: "app/build/outputs/apk/debug/app-debug.apk", --device: "[OrderedDict([(u'model', u'Pixel2'), (u'version', u'27'), (u'locale', u'en_US'), (u'orientation', u'portrait')])]", --project: "locuslabs-android-sdk", --test: "app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk", --verbosity: "debug"]
Have questions, feedback, or issues? Get support by visiting:
https://firebase.google.com/support/
DEBUG: Applying default auto_google_login: True
DEBUG: Applying default performance_metrics: True
DEBUG: Applying default num_flaky_test_attempts: 0
DEBUG: Applying default record_video: True
DEBUG: Applying default timeout: 900
DEBUG: Applying default async: False
INFO: Raw results root path is: [gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/]
Uploading [app/build/outputs/apk/debug/app-debug.apk] to Firebase Test Lab...
Uploading [app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk] to Firebase Test Lab...
What will likely come next?
Here is the full transcript (including the portion above), in case it helps anyone else who's stuck trying gcloud firebase test android run for the first time:
FirebaseTestLabPlayground[master]15:40:36 gcloud firebase test android run \
> --project locuslabs-android-sdk \
> --app app/build/outputs/apk/debug/app-debug.apk \
> --test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
> --device model=Pixel2,version=27,locale=en_US,orientation=portrait \
> --verbosity debug
INFO: Test Service endpoint: [None]
INFO: Tool Results endpoint: [None]
DEBUG: Running [gcloud.firebase.test.android.run] with arguments: [--app: "app/build/outputs/apk/debug/app-debug.apk", --device: "[OrderedDict([(u'model', u'Pixel2'), (u'version', u'27'), (u'locale', u'en_US'), (u'orientation', u'portrait')])]", --project: "locuslabs-android-sdk", --test: "app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk", --verbosity: "debug"]
Have questions, feedback, or issues? Get support by visiting:
https://firebase.google.com/support/
DEBUG: Applying default auto_google_login: True
DEBUG: Applying default performance_metrics: True
DEBUG: Applying default num_flaky_test_attempts: 0
DEBUG: Applying default record_video: True
DEBUG: Applying default timeout: 900
DEBUG: Applying default async: False
INFO: Raw results root path is: [gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/]
Uploading [app/build/outputs/apk/debug/app-debug.apk] to Firebase Test Lab...
Uploading [app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk] to Firebase Test Lab...
Raw results will be stored in your GCS bucket at [https://console.developers.google.com/storage/browser/test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/]
DEBUG: TestMatrices.Create request:
<TestingProjectsTestMatricesCreateRequest
projectId: u'locuslabs-android-sdk'
requestId: '3c76ca4e247d4b38bf102ffcdbaa637b'
testMatrix: <TestMatrix
clientInfo: <ClientInfo
clientInfoDetails: [<ClientInfoDetail
key: u'Cloud SDK Version'
value: '242.0.0'>, <ClientInfoDetail
key: u'Release Track'
value: 'GA'>]
name: u'gcloud'>
environmentMatrix: <EnvironmentMatrix
androidDeviceList: <AndroidDeviceList
androidDevices: [<AndroidDevice
androidModelId: u'Pixel2'
androidVersionId: u'27'
locale: u'en_US'
orientation: u'portrait'>]>>
flakyTestAttempts: 0
resultStorage: <ResultStorage
googleCloudStorage: <GoogleCloudStorage
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/'>
toolResultsHistory: <ToolResultsHistory
projectId: u'locuslabs-android-sdk'>>
testExecutions: []
testSpecification: <TestSpecification
androidInstrumentationTest: <AndroidInstrumentationTest
appApk: <FileReference
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/app-debug.apk'>
orchestratorOption: OrchestratorOptionValueValuesEnum(ORCHESTRATOR_OPTION_UNSPECIFIED, 0)
testApk: <FileReference
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/app-debug-androidTest.apk'>
testTargets: []>
disablePerformanceMetrics: False
disableVideoRecording: False
testSetup: <TestSetup
account: <Account
googleAuto: <GoogleAuto>>
additionalApks: []
directoriesToPull: []
environmentVariables: []
filesToPush: []>
testTimeout: u'900s'>>>
DEBUG: TestMatrices.Create response:
<TestMatrix
clientInfo: <ClientInfo
clientInfoDetails: [<ClientInfoDetail
key: u'Cloud SDK Version'
value: u'242.0.0'>, <ClientInfoDetail
key: u'Release Track'
value: u'GA'>]
name: u'gcloud'>
environmentMatrix: <EnvironmentMatrix
androidDeviceList: <AndroidDeviceList
androidDevices: [<AndroidDevice
androidModelId: u'Pixel2'
androidVersionId: u'27'
locale: u'en_US'
orientation: u'portrait'>]>>
projectId: u'locuslabs-android-sdk'
resultStorage: <ResultStorage
googleCloudStorage: <GoogleCloudStorage
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/'>
toolResultsHistory: <ToolResultsHistory
projectId: u'locuslabs-android-sdk'>>
state: StateValueValuesEnum(VALIDATING, 1)
testExecutions: [<TestExecution
environment: <Environment
androidDevice: <AndroidDevice
androidModelId: u'Pixel2'
androidVersionId: u'27'
locale: u'en_US'
orientation: u'portrait'>>
id: u'matrix-fq9ojlzvta35a_execution-2kcgdj0bkm22a'
matrixId: u'matrix-fq9ojlzvta35a'
projectId: u'locuslabs-android-sdk'
state: StateValueValuesEnum(VALIDATING, 1)
testSpecification: <TestSpecification
androidInstrumentationTest: <AndroidInstrumentationTest
appApk: <FileReference
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/app-debug.apk'>
testApk: <FileReference
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/app-debug-androidTest.apk'>
testTargets: []>
testSetup: <TestSetup
account: <Account
googleAuto: <GoogleAuto>>
additionalApks: []
directoriesToPull: []
environmentVariables: []
filesToPush: []>
testTimeout: u'900s'>
timestamp: u'2019-04-19T08:42:36.638Z'>]
testMatrixId: u'matrix-fq9ojlzvta35a'
testSpecification: <TestSpecification
androidInstrumentationTest: <AndroidInstrumentationTest
appApk: <FileReference
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/app-debug.apk'>
testApk: <FileReference
gcsPath: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/app-debug-androidTest.apk'>
testTargets: []>
testSetup: <TestSetup
account: <Account
googleAuto: <GoogleAuto>>
additionalApks: []
directoriesToPull: []
environmentVariables: []
filesToPush: []>
testTimeout: u'900s'>
timestamp: u'2019-04-19T08:42:36.638Z'>
Test [matrix-fq9ojlzvta35a] has been created in the Google Cloud.
Firebase Test Lab will execute your instrumentation test on 1 device(s).
Creating individual test executions...done.
Test results will be streamed to [https://console.firebase.google.com/project/locuslabs-android-sdk/testlab/histories/bh.f0b3cb84d82b84d2/matrices/7272098978475034799].
15:42:48 Test is Pending
15:43:11 Starting attempt 1.
15:43:11 Test is Running
15:44:07 Started logcat recording.
15:44:07 Preparing device.
15:44:38 Logging in to Google account on device.
15:44:38 Installing apps.
15:44:53 Retrieving Pre-Test Package Stats information from the device.
15:44:53 Retrieving Performance Environment information from the device.
15:44:53 Started crash detection.
15:44:53 Started crash monitoring.
15:44:53 Started performance monitoring.
15:44:53 Started video recording.
15:44:53 Starting instrumentation test.
15:45:00 Completed instrumentation test.
15:45:14 Stopped performance monitoring.
15:45:29 Stopped crash monitoring.
15:45:29 Stopped logcat recording.
15:45:29 Retrieving Post-test Package Stats information from the device.
15:45:29 Logging out of Google account on device.
15:45:29 Done. Test time = 4 (secs)
15:45:29 Starting results processing. Attempt: 1
15:45:37 Completed results processing. Time taken = 4 (secs)
15:45:37 Test is Finished
INFO: Test matrix completed in state: FINISHED
Instrumentation testing complete.
More details are available at [https://console.firebase.google.com/project/locuslabs-android-sdk/testlab/histories/bh.f0b3cb84d82b84d2/matrices/7272098978475034799].
DEBUG:
TRHistoriesExecutions.Get response:
<Execution
completionTime: <Timestamp
nanos: 674000000
seconds: 1555663532>
creationTime: <Timestamp
nanos: 31000000
seconds: 1555663361>
executionId: u'7272098978475034799'
outcome: <Outcome
summary: SummaryValueValuesEnum(success, 4)>
specification: <Specification
androidTest: <AndroidTest
androidAppInfo: <AndroidAppInfo
name: u'FirebaseTestLabPlayground'
packageName: u'com.example.firebasetestlabplayground'
versionCode: u'1'
versionName: u'1.0'>
androidInstrumentationTest: <AndroidInstrumentationTest
testPackageId: u'com.example.firebasetestlabplayground.test'
testRunnerClass: u'android.support.test.runner.AndroidJUnitRunner'
testTargets: []>
testTimeout: <Duration
seconds: 900>>>
state: StateValueValuesEnum(complete, 0)
testExecutionMatrixId: u'matrix-fq9ojlzvta35a'>
DEBUG:
ToolResultsSteps.List response:
<ListStepsResponse
steps: [<Step
completionTime: <Timestamp
nanos: 849000000
seconds: 1555663531>
creationTime: <Timestamp
nanos: 232000000
seconds: 1555663361>
description: u'all targets'
dimensionValue: [<StepDimensionValueEntry
key: u'Model'
value: u'Pixel2'>, <StepDimensionValueEntry
key: u'Version'
value: u'27'>, <StepDimensionValueEntry
key: u'Locale'
value: u'en_US'>, <StepDimensionValueEntry
key: u'Orientation'
value: u'portrait'>]
labels: []
name: u'Instrumentation test'
outcome: <Outcome
summary: SummaryValueValuesEnum(success, 4)>
runDuration: <Duration
nanos: 617000000
seconds: 170>
state: StateValueValuesEnum(complete, 0)
stepId: u'bs.b2c854c31dd1dcd1'
testExecutionStep: <TestExecutionStep
testIssues: [<TestIssue
category: CategoryValueValuesEnum(common, 0)
errorMessage: u'Test is compatible with Android Test Orchestrator.'
severity: SeverityValueValuesEnum(suggestion, 2)
type: TypeValueValuesEnum(compatibleWithOrchestrator, 2)>]
testSuiteOverviews: [<TestSuiteOverview
totalCount: 1
xmlSource: <FileReference
fileUri: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/Pixel2-27-en_US-portrait/test_result_1.xml'>>]
testTiming: <TestTiming
testProcessDuration: <Duration
seconds: 4>>
toolExecution: <ToolExecution
commandLineArguments: []
toolLogs: [<FileReference
fileUri: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/Pixel2-27-en_US-portrait/logcat'>]
toolOutputs: [<ToolOutputReference
output: <FileReference
fileUri: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/Pixel2-27-en_US-portrait/test_cases/0000_logcat'>
testCase: <TestCaseReference
className: u'com.example.firebasetestlabplayground.ExampleInstrumentedTest'
name: u'useAppContext'>>, <ToolOutputReference
output: <FileReference
fileUri: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/Pixel2-27-en_US-portrait/test_result_1.xml'>>, <ToolOutputReference
output: <FileReference
fileUri: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/Pixel2-27-en_US-portrait/video.mp4'>>, <ToolOutputReference
output: <FileReference
fileUri: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/Pixel2-27-en_US-portrait/bugreport.txt'>>, <ToolOutputReference
output: <FileReference
fileUri: u'gs://test-lab-bcr7j9th055js-i215tdq3ht0hw/2019-04-19_15:41:26.364106_bmag/Pixel2-27-en_US-portrait/instrumentation.results'>>]>>>]>
INFO: Display format: "
table[box](
outcome.color(red=Fail, green=Pass, yellow=Inconclusive),
axis_value:label=TEST_AXIS_VALUE,
test_details:label=TEST_DETAILS
)
"
┌─────────┬──────────────────────────┬─────────────────────┐
│ OUTCOME │ TEST_AXIS_VALUE │ TEST_DETAILS │
├─────────┼──────────────────────────┼─────────────────────┤
│ Passed │ Pixel2-27-en_US-portrait │ 1 test cases passed │
└─────────┴──────────────────────────┴─────────────────────┘
FirebaseTestLabPlayground[master]15:45:45 gcloud firebase test android run --project locuslabs-android-sdk --app app/build/outputs/apk/debug/app-debug.apk --test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk --device model=Pixel2,version=27,locale=en_US,orientation=portrait --verbosity debug
My ansible-playbook runs a long-running task with the async tag and also uses the "creates:" condition, so it is run only once on the server. When I was writing the playbook yesterday, I am pretty sure the task was skipped when the log set in the "creates:" tag existed.
It shows changed now though, every time I run it.
I am confused, as I do not think I changed anything, and I'd like my registered variable to be set correctly as unchanged when the condition is true.
Output of ansible-playbook (the debug section shows the task has changed: true):
TASK [singleserver : Install Assure1 SingleServer role] *********************************************************************************************************************************
changed: [crassure1]
TASK [singleserver : Debug] *************************************************************************************************************************************************************
ok: [crassure1] => {
"msg": {
"ansible_job_id": "637594935242.28556",
"changed": true,
"failed": false,
"finished": 0,
"results_file": "/root/.ansible_async/637594935242.28556",
"started": 1
}
}
But if I check the actual results file on the target machine, it correctly resolved the condition and did not actually execute the shell script, so the task should be unchanged (it shows a message that the task was skipped because the log exists):
[root@crassure1 assure1]# cat "/root/.ansible_async/637594935242.28556"
{"invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "strip_empty_ends": true, "_raw_params": "/opt/install/install_command.sh", "removes": null, "argv": null, "creates": "/opt/assure1/logs/SetupWizard.log", "chdir": null, "stdin_add_newline": true, "stdin": null}}, "cmd": "/opt/install/install_command.sh", "changed": false, "rc": 0, "stdout": "skipped, since /opt/assure1/logs/SetupWizard.log exists"}[root#crassure1 assure1]# Connection reset by 172.24.36.123 port 22
My playbook section looks like this:
- name: Install Assure1 SingleServer role
  shell:
    #cmd: "/opt/assure1/bin/SetupWizard -a --Depot /opt/install/:a1-local --First --WebFQDN crassure1.tspdata.local --Roles All"
    cmd: "/opt/install/install_command.sh"
  async: 7200
  poll: 0
  register: Assure1InstallWait
  args:
    creates: /opt/assure1/logs/SetupWizard.log

- name: Debug
  debug:
    msg: "{{ Assure1InstallWait }}"

- name: Check on Installation status every 15 minutes
  async_status:
    jid: "{{ Assure1InstallWait.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 900
  when: Assure1InstallWait is changed
Is there something I am missing, or is this some kind of bug?
I am limited by the Ansible version available in the configured trusted repo, so I am using Ansible 2.9.25.
Q: "The module shell shows changed every time I run it"
A: In async mode the task can't be skipped immediately. First, the shell module must find out whether the file /opt/assure1/logs/SetupWizard.log exists on the remote host or not. Then, if the file exists, the module will decide to skip the execution of the command. But you run the task asynchronously. In this case, Ansible starts the module and returns without waiting for the module to complete. That's what the registered variable Assure1InstallWait says: the task started but hasn't finished yet.
"msg": {
"ansible_job_id": "637594935242.28556",
"changed": true,
"failed": false,
"finished": 0,
"results_file": "/root/.ansible_async/637594935242.28556",
"started": 1
}
Marking such a task as changed is correct, I think, because execution on the remote host is still in progress.
Print the registered result of the async_status module instead. You'll see that the command was skipped because the file exists (you printed the async results file on the remote host instead). Here the attribute changed is set to false, because now we know the command didn't execute:
job_result:
  ...
  attempts: 1
  changed: false
  failed: false
  finished: 1
  msg: Did not run command since '/tmp/SetupWizard.log' exists
  rc: 0
  ...
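Putting the pieces together, the playbook from the question could be rearranged along these lines. This is only a sketch: it keeps the question's paths and timing values, drops the `when:` guard (the job handle is always reported as changed), and debugs the async_status result rather than the job handle:

```yaml
- name: Install Assure1 SingleServer role
  shell:
    cmd: "/opt/install/install_command.sh"
  args:
    creates: /opt/assure1/logs/SetupWizard.log
  async: 7200
  poll: 0
  register: Assure1InstallWait

- name: Check on Installation status every 15 minutes
  async_status:
    jid: "{{ Assure1InstallWait.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 900

# Debug the completed result; job_result.changed will be false
# when the creates: condition skipped the command.
- name: Debug
  debug:
    msg: "{{ job_result }}"
```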
I'd like to disable the built-in windows local administrator account if it is enabled.
As salt.states.user.present doesn't support disabling accounts, I'm using salt.modules.win_useradd.update. However, it disables the account even if it is already disabled.
I can't use unless or onlyif because they only use results parsed from shell commands.
Is there a way to use the boolean value of account_disabled under user.info in salt.modules.win_useradd.info's returned 'changes' dictionary as a requirement?
I'd like to do something like the following:
builtin_administrator:
  module.run:
    - user.info:
      - name: Administrator

disable_builtin_administrator:
  module.run:
    - user.update:
      - name: Administrator
      - account_disabled: true
    - require:
      - module: builtin_administrator
    - require:
      - module: builtin_administrator['changes']['user.info']['account_disabled']['false']
You can see the changes dictionary returned by win_useradd.info in the output:
local:
----------
ID: builtin_administrator
Function: module.run
Result: True
Comment: user.info: Built-in account for administering the computer/domain
Started: 15:59:56.440000
Duration: 15.0 ms
Changes:
----------
user.info:
----------
account_disabled:
True
account_locked:
False
active:
False
comment:
Built-in account for administering the computer/domain
description:
Built-in account for administering the computer/domain
disallow_change_password:
False
expiration_date:
2106-02-07 01:28:15
expired:
False
failed_logon_attempts:
0L
fullname:
gid:
groups:
- Administrators
home:
None
homedrive:
last_logon:
Never
logonscript:
name:
Administrator
passwd:
None
password_changed:
2019-10-09 09:22:00
password_never_expires:
True
profile:
None
successful_logon_attempts:
0L
uid:
S-1-5-21-3258603230-662395079-3947342588-500
----------
ID: disable_builtin_administrator
Function: module.run
Result: False
Comment: The following requisites were not found:
require:
module: builtin_administrator['changes']['user.info']['account_disabled']['false']
Started: 15:59:56.455000
Duration: 0.0 ms
Changes:
Summary for local
------------
Succeeded: 1 (changed=1)
Failed: 1
------------
Total states run: 2
Total run time: 15.000 ms
I'm testing with a Windows 10 1903 masterless salt-minion 2019.2.1 (Fluorine) where I set use_superseded for module.run in the minion config file.
Thanks in advance!
I settled on this:
localuser.disable.administrator:
  cmd.run:
    - name: "Get-LocalUser Administrator | Disable-LocalUser"
    - shell: powershell
    - onlyif: powershell -command "if ((Get-LocalUser | Where-Object {($_.Name -eq 'Administrator') -and ($_.Enabled -eq $true)}) -eq $null) {exit 1}"
When I try to run a task asynchronously as another user using become in an Ansible playbook, I get a "Job not found" error. Can someone suggest how I can successfully check the async job status?
I am using Ansible version 2.7.
I read some articles suggesting using the async_status task with the same become user as the async task to read the job status.
I tried that solution, but I am still getting the same "job not found" error.
- hosts: localhost
  tasks:
    - shell: startInstance.sh
      register: start_task
      async: 180
      poll: 0
      become: yes
      become_user: venu

    - async_status:
        jid: "{{ start_task.ansible_job_id }}"
      register: start_status
      until: start_status.finished
      retries: 30
      become: yes
      become_user: venu
Expected Result:
I should be able to fire and forget the job
Actual Result:
{"ansible_job_id": "386361757265.15925428", "changed": false, "finished": 1, "msg": "could not find job", "started": 1}
I followed the steps in this blog post to get WordPress going:
https://blog.pivotal.io/pivotal-cloud-foundry/products/getting-started-with-wordpress-on-cloud-foundry
When I do a cf push, it keeps crashing with the following lines in the error output:
2016-05-14T15:41:44.22-0700 [App/0] OUT total size is 2,574,495 speedup is 0.99
2016-05-14T15:41:44.24-0700 [App/0] ERR fusermount: entry for /home/vcap/app/htdocs/wp-content not found in /etc/mtab
2016-05-14T15:41:44.46-0700 [App/0] OUT 22:41:44 sshfs | fuse: mountpoint is not empty
2016-05-14T15:41:44.46-0700 [App/0] OUT 22:41:44 sshfs | fuse: if you are sure this is safe, use the 'nonempty' mount option
2016-05-14T15:41:44.64-0700 [DEA/86] ERR Instance (index 0) failed to start accepting connections
2016-05-14T15:41:44.68-0700 [API/1] OUT App instance exited with guid cf2ea899-3599-429d-a39d-97d0e99280e4 payload: {"cc_partition"=>"default", "droplet"=>"cf2ea899-3599-429d-a39d-97d0e99280e4", "version"=>"c94b7baf-4da4-44b5-9565-dc6945d4b3ce", "instance"=>"c4f512149613477baeb2988b50f472f2", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1463265704}
2016-05-14T15:41:44.68-0700 [API/1] OUT App instance exited with guid cf2ea899-3599-429d-a39d-97d0e99280e4 payload: {"cc_partition"=>"default", "droplet"=>"cf2ea899-3599-429d-a39d-97d0e99280e4", "version"=>"c94b7baf-4da4-44b5-9565-dc6945d4b3ce", "instance"=>"c4f512149613477baeb2988b50f472f2", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1463265704}
my manifest file:
cf-ex-wordpress$ cat manifest.yml
---
applications:
- name: myapp
  memory: 128M
  path: .
  buildpack: https://github.com/cloudfoundry/php-buildpack
  host: near
  services:
  - mysql-db
  env:
    SSH_HOST: user@abc.com
    SSH_PATH: /home/user
    SSH_KEY_NAME: sshfs_rsa
    SSH_OPTS: '["cache=yes", "kernel_cache", "compression=no", "large_read"]'
vagrant@vagrant:~/Documents/shared/cf-ex-wordpress$
Please check your SSH mount; more details at https://github.com/dmikusa-pivotal/cf-ex-wordpress/issues
I am trying to invoke a command on provisioning via SaltStack. If the command fails, the state fails, and I don't want that (the command's return code doesn't matter).
Currently I have the following workaround:
Run something:
  cmd.run:
    - name: command_which_can_fail || true
Is there any way to make such a state ignore the return code using Salt features? Or maybe I can exclude this state from the logs?
Use check_cmd:
fails:
  cmd.run:
    - name: /bin/false

succeeds:
  cmd.run:
    - name: /bin/false
    - check_cmd:
      - /bin/true
Output:
local:
----------
ID: fails
Function: cmd.run
Name: /bin/false
Result: False
Comment: Command "/bin/false" run
Started: 16:04:40.189840
Duration: 7.347 ms
Changes:
----------
pid:
4021
retcode:
1
stderr:
stdout:
----------
ID: succeeds
Function: cmd.run
Name: /bin/false
Result: True
Comment: check_cmd determined the state succeeded
Started: 16:04:40.197672
Duration: 13.293 ms
Changes:
----------
pid:
4022
retcode:
1
stderr:
stdout:
Summary
------------
Succeeded: 1 (changed=2)
Failed: 1
------------
Total states run: 2
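Applied to the state from the question, a minimal sketch (command_which_can_fail stands in for the real command, as in the question):

```yaml
Run something:
  cmd.run:
    - name: command_which_can_fail
    - check_cmd:
      - /bin/true
```

With check_cmd set to /bin/true, the state is reported as succeeded regardless of the command's own return code.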
If you don't care what the result of the command is, you can use:
Run something:
  cmd.run:
    - name: command_which_can_fail; exit 0
This was tested in Salt 2017.7.0 but would probably work in earlier versions.
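The trick works because the exit status of a shell command list is the status of its last command, so appending exit 0 (or || true, as in the question's workaround) masks the failure, and the return code is all cmd.run looks at. A quick demonstration in a plain shell:

```shell
# A failing command normally propagates its non-zero status...
sh -c 'false'; echo "without guard: $?"       # prints: without guard: 1

# ...but appending "exit 0" forces the overall status to zero.
sh -c 'false; exit 0'; echo "with guard: $?"  # prints: with guard: 0
```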