I have a test suite of over 10,000 tests and sometimes want to rerun only the tests that failed on the previous run, using the dotnet vstest CLI.
I ended up using the following PowerShell command to run only the previously failed tests again, based on the newest .trx file in .\TestResults\:
dotnet vstest '.\bin\Debug\netcoreapp3.0\MyTests.dll' /Logger:trx /Tests:"$((Select-Xml -Path (gci '.\TestResults\' | sort LastWriteTime | select -last 1).FullName -XPath "//ns:UnitTestResult[@outcome='Failed']/@testName" -Namespace @{"ns"="http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}).Node.Value | % {$_ -replace '^My\.Long\.And\.Tedious\.Namespace\.', ''} | % {$_ -replace '^(.*?)\(.*$','$1'} | Join-String -Separator ','))"
Beware that there is a limit on the maximum command line length, which can easily be hit when many tests have previously failed.
Use the % {$_ -replace '^My\.Long\.And\.Tedious\.Namespace\.', ''} part to get rid of namespace prefixes, if you can.
I'm trying to integrate a SageMaker pipeline with Jenkins. I'm using the AWS CLI (version 2.1.24).
Since this version doesn't support --pipeline-definition-s3-location, I'm trying to do something like the following:
aws s3 cp s3://some_bucket/folder1/pipeine_definition.json - | \
  jq -c . | \
  tee /dev/stderr | \
  xargs -0 -I{} aws sagemaker update-pipeline --pipeline-name "pipelinename" --role-arn "arn:aws:iam::<account_id>:role/sagemaker-role" --pipeline-definition '{}'
And I got this error:
An error occurred (ValidationException) when calling the UpdatePipeline operation: Pipeline definition: At least 1 step must be provided
When I recheck the definition.json, I can see the steps defined inside the JSON.
Can someone help me?
I tried adding quotes around the --pipeline-definition value, which isn't working.
Since Jenkins has AWS CLI version 2.1.24, I want to somehow copy the contents of the JSON file from S3 and pass it to the --pipeline-definition argument of the aws sagemaker update-pipeline command.
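One alternative I am considering (not verified yet) is to avoid splicing the JSON into the command line at all: download the definition to a local file and pass it with the CLI's file:// syntax, which the AWS CLI supports for string parameters. Roughly:
aws s3 cp s3://some_bucket/folder1/pipeine_definition.json ./pipeline_definition.json
aws sagemaker update-pipeline \
  --pipeline-name "pipelinename" \
  --role-arn "arn:aws:iam::<account_id>:role/sagemaker-role" \
  --pipeline-definition file://pipeline_definition.json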
We use TFS on our project. I have set Parallelism -> Multi Agent in the phase settings. The command itself to run (.NET Core) is:
dotnet test --filter TestCategory="Mobile" --logger trx -m:1.
Do I understand correctly that these settings will not split the tests between the two agents, but run the command above on the two agents?
The Visual Studio Test task (- task: VSTest@2) has built-in magic to distribute the tests based on configurable criteria.
You could switch to the VSTest task to get this "magic".
The dotnet core task, or invoking dotnet straight from the command line, doesn't have this magic.
There is a GitHub repo that shows how to take advantage of the hidden variables that are set by the agent when running in parallel:
#!/bin/bash
filterProperty="Name"
tests=("$@")
testCount=${#tests[@]}
totalAgents=$SYSTEM_TOTALJOBSINPHASE
agentNumber=$SYSTEM_JOBPOSITIONINPHASE
if [ $totalAgents -eq 0 ]; then totalAgents=1; fi
if [ -z "$agentNumber" ]; then agentNumber=1; fi
echo "Total agents: $totalAgents"
echo "Agent number: $agentNumber"
echo "Total tests: $testCount"
echo "Target tests:"
for ((i=$agentNumber; i <= $testCount; i=$((i+$totalAgents)))); do
  targetTestName=${tests[$i-1]}
  echo "$targetTestName"
  filter+="|${filterProperty}=${targetTestName}"
done
filter=${filter#"|"}
echo "##vso[task.setvariable variable=targetTestsFilter]$filter"
This way you can slice the tasks in your pipeline:
- bash: |
    tests=($(dotnet test . --no-build --list-tests | grep Test_))
    . 'create_slicing_filter_condition.sh' "${tests[@]}"
  displayName: 'Create slicing filter condition'
- bash: |
    echo "Slicing filter condition: $(targetTestsFilter)"
  displayName: 'Echo slicing filter condition'
- task: DotNetCoreCLI@2
  displayName: Test
  inputs:
    command: test
    projects: '**/*Tests/*Tests.csproj'
    arguments: '--no-build --filter "$(targetTestsFilter)"'
I'm not sure whether this will scale to 100,000s of tests. In that case you may have to break the list into batches and call dotnet test multiple times in a row, as sketched below. I couldn't find support for vstest playlists.
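A rough sketch of that batching idea, reusing the test discovery from the snippet above (the batch size of 50 and the Name filter property are arbitrary choices on my part):
#!/bin/bash
# Sketch: build a short --filter per batch and call dotnet test once per batch,
# so a single filter string never grows past command-line length limits.
batchSize=50
tests=($(dotnet test . --no-build --list-tests | grep Test_))

for ((start=0; start<${#tests[@]}; start+=batchSize)); do
  filter=""
  for name in "${tests[@]:start:batchSize}"; do
    filter+="|Name=${name}"
  done
  filter=${filter#"|"}
  dotnet test . --no-build --filter "$filter"
done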
I am using oh-my-zsh and I have been trying to develop a custom completion script for sdkman.
However, I have encountered a small problem when trying to factor out some of the commands.
Below is the beginning of the completion script. There are three functions that use _describe to output completion help.
#compdef sdk
zstyle ':completion:*:descriptions' format '%B%d%b'
# Gets the candidate list and removes all unnecessary things just to get candidate names
__get_candidate_list() {
  echo `sdk list | grep --color=never "$ sdk install" | sed 's/\$ sdk install //g' | sed -e 's/[\t ]//g;/^$/d'`
}

__get_current_installed_list() {
  echo `sdk current | sed "s/Using://g" | sed "s/\:.*//g" | sed -e "s/[\t ]//g;/^$/d"`
}
__describe_commands() {
  local -a commands
  commands=(
    'install: install a program'
    'uninstall: uninstall an existing program'
  )
  _describe -t commands "Commands" commands && ret=0
}
__describe_install() {
  local -a candidate_list
  candidate_list=( $( __get_candidate_list ) )
  _describe -t candidate_list "Candidates available" candidate_list && ret=0
}

__describe_uninstall() { # FIXME This is not working, it only displays the candidate list
  local -a candidates_to_uninstall
  candidates_to_uninstall=( $( __get_current_installed_list ) )
  _describe -t candidates_to_uninstall "Uninstallable candidates" candidates_to_uninstall && ret=0
}
The __get_candidate_list echoes the following values:
ant asciidoctorj bpipe ceylon crash cuba cxf gaiden glide gradle grails groovy groovyserv infrastructor java jbake kotlin kscript lazybones leiningen maven micronaut sbt scala spark springboot sshoogr vertx visualvm
The __get_current_installed_list echoes the following values:
gradle java kotlin maven sbt scala
The second part of the script below is where we call everything so that the completion script is used correctly by zsh:
function _sdk() {
  local ret=1
  local target=$words[2]

  _arguments -C \
    '1: :->first_arg' \
    '2: :->second_arg' \
    && ret=0

  case $state in
    first_arg)
      __describe_commands
      ;;
    second_arg)
      case $target in
        install)
          __describe_install
          ;;
        uninstall)
          __describe_uninstall
          ;;
        *)
          ;;
      esac
      ;;
  esac

  return $ret
}
_sdk "$@"
The problem is the following: when I type sdk install I get the right output, the one from the __get_candidate_list function, BUT when I use sdk uninstall it still gives me the output from __get_candidate_list, although I am expecting the __get_current_installed_list output.
EDIT: After a bit of debugging, it seems that zsh is not at fault here. I can't figure out why, but sdkman gives me the same output for both sdk list and sdk current (after the sed commands) from inside the completion script. In my shell, both commands work properly.
Is there something wrong with the way I use the _describe function?
Is there anything else I am not seeing?
Thanks for your help.
So I finally found a workaround to fix this but it is not ideal.
I chose to launch the commands in the background when the plugin is loaded, and fill text files with the results, so that the completion script can use them afterwards.
Below is the code I used in the zsh-sdkman.plugin.zsh file, in case my GitHub repository disappears:
# --------------------------
# -------- Executed on shell launch for completion help
# --------------------------
# NOTE: Sdkman seems to always output the candidate list rather than the currently installed list when we do this through the completion script
# There are two goals in the code below:
# - Optimization: the _sdkman_get_candidate_list command can take time, so it is done once and in background
# - Bug correction: correct the problem with sdkman command output explained above
# WARNING: We are setting this as a local variable because we don't have it yet at the time of initialization
# A better approach would be welcome
SDKMAN_DIR_LOCAL=~/.sdkman
# Custom variables for later
export ZSH_SDKMAN_CANDIDATE_LIST_HOME=~/.zsh-sdkman.candidate-list
export ZSH_SDKMAN_INSTALLED_LIST_HOME=~/.zsh-sdkman.current-installed-list
_sdkman_get_candidate_list() {
  (sdk list | grep --color=never "$ sdk install" | sed 's/\$ sdk install //g' | sed -e 's/[\t ]//g;/^$/d' > $ZSH_SDKMAN_CANDIDATE_LIST_HOME &)
}

_sdkman_get_current_installed_list() {
  (sdk current | sed "s/Using://g" | sed "s/\:.*//g" | sed -e "s/[\t ]//g;/^$/d" > $ZSH_SDKMAN_INSTALLED_LIST_HOME &)
}
# "sdk" command is not found if we don't do this
source "$SDKMAN_DIR_LOCAL/bin/sdkman-init.sh"
# Initialize files with the available candidate list and the currently installed candidates
_sdkman_get_candidate_list "$@"
_sdkman_get_current_installed_list "$@"
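With the lists cached in those files, the completion helpers from the question can read the files instead of calling sdk; a minimal sketch of that change (this is the idea, not the exact code from the repository):
# Sketch: the completion helpers read the pre-generated files
# instead of invoking sdk from inside the completion context.
__get_candidate_list() {
  cat "$ZSH_SDKMAN_CANDIDATE_LIST_HOME"
}

__get_current_installed_list() {
  cat "$ZSH_SDKMAN_INSTALLED_LIST_HOME"
}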
For more information, you can see the complete repository of my plugin: https://github.com/matthieusb/zsh-sdkman
If you have a cleaner solution, I'm willing to make the necessary modifications, or don't hesitate to make a pull request on the project.
In my Dockerfile I'm trying to download the latest WordPress version without any content inside it, but I'm having trouble automating the latest version number so that I don't have to manually change it when the new version of WordPress comes out.
In my Dockerfile I have
ARG LATESTWPVER="$(curl -s https://api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1)"
ADD $(https://downloads.wordpress.org/release/wordpress-$LATESTWPVER-no-content.zip) /var/www/latest.zip
But the problem is that my LATESTWPVER is not 4.9.8, and I get the error
ADD failed: stat /var/lib/docker/tmp/docker-builder962069305/$(https:/downloads.wordpress.org/release/wordpress-$(curl -s https:/api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1)-no-content.zip): no such file or directory
It passes the entire command and I'd like to have the output of that command.
In my shell file, the following
#!/bin/bash
WP_LATEST="$(curl -s https://api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1)"
echo $WP_LATEST
will return the number 4.9.8.
From the error, I'm guessing that you can only assign something to the variable, but not execute it. Is there a way to execute a command and assign it to a variable and pass it as an argument?
A Dockerfile is not a shell or a build script, so it will not execute what you pass in ARG. There is a workaround: define the version as an ARG and pass its value in at build time.
Dockerfile:
FROM ubuntu:latest
ARG LATESTWPVER
RUN echo $LATESTWPVER
ADD https://downloads.wordpress.org/release/wordpress-$LATESTWPVER-no-content.zip /var/www/latest.zip
docker build --build-arg LATESTWPVER=`curl -s https://api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1` .
Sending build context to Docker daemon 6.656kB
Step 1/4 : FROM ubuntu:latest
---> 113a43faa138
Step 2/4 : ARG LATESTWPVER
---> Using cache
---> 64f47dcfe7fa
Step 3/4 : RUN echo $LATESTWPVER
---> Running in eb5fdd005d77
4.9.8
Removing intermediate container eb5fdd005d77
---> 1015629b927e
Step 4/4 : ADD https://downloads.wordpress.org/release/wordpress-$LATESTWPVER-no-content.zip /var/www/latest.zip
Downloading [==================================================>] 7.118MB/7.118MB
---> 72f0d3790e51
Successfully built 72f0d3790e51
I am planning to write a log processing application using RabbitMQ, Symfony2 and the RabbitMqBundle.
The tool I am working on has to be highly available and must process millions of entries per day, so it's important that the consumers are always up and running (short breaks are fine), otherwise my queue might overflow after a while.
Are there best practices on how to manage the consumers (written in PHP), start/restart them in case of an error etc?
Thanks
I use this bash script to make sure that all the required consumers are running on imagepush.to:
#!/bin/bash
NB_TASKS=1
SYMFONY_ENV="prod"
TEXT[0]="app/console rabbitmq:consumer primary"
TEXT[1]="app/console rabbitmq:consumer secondary"
for text in "${TEXT[@]}"
do
  NB_LAUNCHED=$(ps ax | grep "$text" | grep -v grep | wc -l)
  TASK="/usr/bin/env php ${text} --env=${SYMFONY_ENV}"

  for (( i=${NB_LAUNCHED}; i<${NB_TASKS}; i++ ))
  do
    echo "$(date +%c) - Launching a new consumer"
    nohup $TASK &
  done
done
If I remember correctly, I took the base for it from the KnpBundles code.
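To keep the consumers supervised, one option (not part of the original setup; the path and schedule here are only examples) is to run that check script periodically from cron:
# Example crontab entry: re-run the consumer check every 5 minutes
*/5 * * * * /path/to/consumer_check.sh >> /var/log/consumer-check.log 2>&1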
Stop consumer
To stop your consumer with name my_consumer use
kill `ps aux | less | grep 'rabbitmq:consumer my_consumer' | grep -v grep | awk '{print $2}'`
ps aux | less | grep 'rabbitmq:consumer my_consumer' - will find all running processes of the consumer
grep -v grep - will exclude your own search process
awk '{print $2}' - get only process id from the row
kill - will terminate all found processes
Start consumer
To start your consumer with name my_consumer use
nohup /usr/bin/env php app/console rabbitmq:consumer my_consumer --env=prod &
I have a lot of consumers in the project and it became hard to restart them after a deploy, so I started using Capistrano + the Symfony plugin to deploy my project. I wrote a few custom tasks to start/stop/restart the consumers based on a YAML config. The tasks are based on the commands above.