Running xUnit from the console

I have what is probably a stupid question.
I'm trying to run an xUnit test DLL from the command prompt.
I find that I need the following DLLs to be in the folder the command prompt is in:
xunit.console.exe, xunit.console.exe.config, xunit.dll, xunit.runner.utility.dll
which is fine, I guess, but then I can't get it to run my tests.
At first I tried using a relative path to my test DLL, and it was not having that.
So then I put the test DLL in a folder with the above DLLs and ran it; now it says I'm missing a dependency for my test DLL.
So then I put the xUnit files in the bin folder with my test project DLLs, and it tells me it can't even find the test DLL it's sitting next to.
This all seems very difficult. What I want is to be able to do this, given the following structure:
--src
----tools
------xUnit
--------all my xunit dlls
----projects
------MyTestProject
--------bin
----------MyTestProject.dll
Let's say:
C:\Src\Tools\xUnit>xunit.console ..\..\Projects\MyTestProject\bin\MyTestProject.dll

Two solutions:
1) Add C:\src\Tools\xUnit to your PATH environment variable and run the xunit console app from a command prompt where the current directory is C:\src\projects\MyTestProject\bin.
2) As per the first suggestion, but rather than putting it in your PATH environment variable, specify the whole path (relative or absolute) to xunit.console.exe on the command line as the executable to run.
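For example, given the folder structure above (a sketch; adjust the relative path to your own layout):
C:\src\projects\MyTestProject\bin>..\..\..\tools\xUnit\xunit.console.exe MyTestProject.dll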

Open a Command Prompt.
Run the following command, replacing path_to_xunit_console_exe with the folder containing the xUnit console runner:
set PATH=%PATH%;path_to_xunit_console_exe
Move to the location of your test binaries:
cd my_test_binary_folder
Run the following command to execute the tests and save the results to an XML file:
xunit.console your_test_dll_file -xml testlog.xml
To see the available options, run:
xunit.console -?
You can automate these steps by creating a batch file, test.cmd, in the binary test folder:
set PATH=%PATH%;path_to_xunit_console_exe
xunit.console your_test_dll_file -xml testlog.xml

How to find xunit.console.exe
Search for the package under:
%userprofile%\.nuget\packages
then navigate to xunit.runner.console -> your installed version -> tools -> xunit.console.exe.
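For example (the version and framework folder here are hypothetical; they vary with the package version):
%userprofile%\.nuget\packages\xunit.runner.console\2.4.1\tools\net472\xunit.console.exe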

I wound up building an xUnit console test runner in C# to cycle through and run all the xUnit test assemblies in a given folder.
The structure I wound up with was the xUnit test runner, named RunXUnitTests, in a folder one level above the test assemblies, together with the executable and the assorted support DLLs it needed; for example, NLog logging support and some email support DLLs for sending out a results email were in this RunXUnitTests folder.
Immediately under the RunXUnitTests folder there is a "TestAssemblies" folder; all the xUnit test assembly DLLs went in that folder, along with whatever supporting DLLs were required by the tests themselves. All the xunit.console.exe runtime files were in the TestAssemblies folder as well. It was least confusing to have all the tests and their dependencies in the same TestAssemblies folder, separate from the test runner.
To run the tests from the console, the C# test runner app would submit command lines to the System.Diagnostics process execution API, with xunit.console.exe as the process to run and with the test assembly and the (XML) results file as command-line parameters.
A typical command line for Operational Readiness Tests (ORT), as formatted by the test runner and submitted to a Process.Start() call (where the process object is of type System.Diagnostics.Process), is shown below:
"C:\RunXUnitTests\TestAssemblies\xunit.console.exe" "C:\RunXUnitTests\TestAssemblies\SharePointBasicFeaturesORT.dll" -xml "C:\Users\Public\Documents\TestResults\SharePointBasicFeaturesORT.xml"
After the tests ran, the test runner program had some routines to spin through the XML results files, extract the results, format a results summary email (in HTML), and send the email to a distribution list.
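Assuming the xUnit v2 XML report format, where each assembly element carries name/total/passed/failed/skipped attributes, that extraction step might look roughly like this:

using System;
using System.Xml.Linq;

class ResultsSummary
{
    static void Main()
    {
        // Hypothetical results file; in practice, loop over each XML file in the results folder.
        var doc = XDocument.Load(@"C:\Users\Public\Documents\TestResults\SharePointBasicFeaturesORT.xml");
        foreach (var assembly in doc.Descendants("assembly"))
        {
            Console.WriteLine("{0}: {1} passed, {2} failed, {3} skipped",
                (string)assembly.Attribute("name"),
                (string)assembly.Attribute("passed"),
                (string)assembly.Attribute("failed"),
                (string)assembly.Attribute("skipped"));
        }
    }
}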
I should mention that all of this is packaged into installable MSI files that can be deployed to Windows 7/10 PCs or VMs for test runs. We are using it to run SpecFlow + xUnit "Operational Readiness Tests" against our web applications on a scheduled daily basis. We used Wix# ("WixSharp") to build our installers in C#, with the WiX Toolset then producing standard MSI installer files. See https://github.com/oleg-shilo/wixsharp for more info, source, and binaries. It works very well once you get the hang of it.
Yes, it's a fair amount of work to do it this way. I wouldn't recommend it if your organization already has other DevOps-type tools that can do this work, e.g., Jenkins, TeamCity, Bamboo, Azure DevOps, etc. My organization is still "in process" on bringing in such tools, and it was more doable in the near term to evolve the test runner than to get organizational decisions, financial commitment, and installation and configuration support for DevOps/CI tooling.
If you don't have ready access to DevOps/CI tools, this type of approach uses freely available open-source tools (aside from paid versions of MS Visual Studio; I don't know what the Community editions will and won't work for) and is workable as a bridge to a more sophisticated environment.

Related

How to source control Elasticsearch

I'm just starting work on a website which I want to integrate Elasticsearch into. In my development environment, I will need to install ES so that I and other devs can quickly get started with minimum effort.
We're using ASP.NET for the website (so I know all the devs will be running the website on Windows) and Git for source control.
Previously, on another project, I followed the installation guide and simply source-controlled the following folders in ES:
/bin/
/config/
/lib/
/modules/
Please note the above folders were for ES 2.x, so they may differ slightly in 5.x.
I then created a simple .bat file which devs run when they start working on the project:
cd %~dp0\elastic\bin\
start elasticsearch
cd %~dp0
All the script does is run ES.
However, I wonder if I should even be source controlling these files. Perhaps it would be better to have a .bat file which downloads a fresh copy of ES when a developer starts work?
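A minimal sketch of that download-on-first-run idea (the version, URL, and folder names are placeholders; Expand-Archive requires PowerShell 5+):

@echo off
rem Download and unpack ES only if it isn't already present next to this script.
if not exist "%~dp0elastic" (
  powershell -Command "Invoke-WebRequest 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.16.zip' -OutFile '%~dp0es.zip'"
  powershell -Command "Expand-Archive '%~dp0es.zip' -DestinationPath '%~dp0'"
  move "%~dp0elasticsearch-5.6.16" "%~dp0elastic"
  del "%~dp0es.zip"
)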

Running a post deploy ps script or executable

I am in the process of converting our legacy custom database deployment process, with its custom-built tools, into a full-fledged SSDT project. So far everything has gone very well. I have composite projects that can deploy a base database as well as projects that deploy sample and test data.
The problem I am having now is finding a solution for running some sort of code that can call a web service to get an activation code and add it to the database as the final step of the process. Can anyone point me to a hook that I might be able to use?
UPDATE: To be clearer I am doing this to make it easier to maintain and deploy our sample and test data to a local machine. We can easily use Jenkins to activate the sites when they are deployed nightly to our official testing environments. I'm just hoping to be able to do this in a single step to replace the homegrown database deploy tool that we use now.
In my deployment scenario I wrapped the database deployment process in some PowerShell scripts which handle the necessary prerequisites. For example:
the PowerShell script is started and first stops some services,
next it runs sqlpackage.exe or pre-produced SQL deployment scripts,
finally the PowerShell script starts the services again.
You can pass parameters from PowerShell to SQL scripts or sqlpackage.exe as sqlcmd variables. So you can call the web service first, then pass the activation code as a sqlcmd variable and use the variable in the post-deployment script, as sketched below.
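A sketch of that flow (the endpoint, file names, and variable name are all hypothetical; /v: is SqlPackage's shorthand for sqlcmd variables):

# Call the web service first to obtain the activation code.
$activationCode = Invoke-RestMethod -Uri "https://example.com/api/activation-code"

# Hand it to sqlpackage.exe as a sqlcmd variable.
& sqlpackage.exe /Action:Publish `
    /SourceFile:"MyDatabase.dacpac" `
    /TargetServerName:"localhost" `
    /TargetDatabaseName:"MyDatabase" `
    /v:ActivationCode=$activationCode

# The post-deployment script can then reference $(ActivationCode).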
Particularly if it's the final step, I'd be tempted to do this separately, using whatever tool you're using to do the deployment: Powershell scripts, msbuild, TFS, Jenkins, whatever. Presumably there's also a front-end of some description that gets provisioned this way?
SSDT isn't an "eierlegende Wollmilchsau" (the German idiom for an egg-laying wool-milk-sow, i.e., a do-everything tool); it's a set of tools for managing database changes.
I suspect if the final step were "provision a Google App Engine Instance and deploy a Python script", for example, it wouldn't appear to be a natural candidate for inclusion in an SSDT post-deploy script, and I reckon this falls into the same category.

Using Jenkins to Deploy to Production Server

I have 3 stages (dev / staging / production). I've successfully set up publishing for each, so that the code will be deployed, using msbuild, to the correct location, with the correct web configs transformed - all within Jenkins.
The problem I'm having is that I don't know how to deploy to staging the code that was built on dev (and from staging to production). I'm currently using SVN as the source control, so I think I would need to somehow save the latest revision number dev has built and somehow tell Jenkins to build/deploy staging based on that number?
Is there a way to do this, or a better alternative?
Any help would be appreciated.
Edit: I decided to use the save-the-revision-number method, which passes a file containing the revision number to the next job. To do this, I followed this answer:
How to promote a specific build number from another job in Jenkins?
It explains how to copy an artifact from one job to another using the promotion plugin. For the artifact itself, I added an "Execute Windows batch command" build step after the main build with:
echo DEV_ENVIRONMENT_CORE_REVISION:%SVN_REVISION%>env.properties
Then in the staging job, following the guide above, I copied that file and used the EnvInject plugin to read it and set an environment variable, which can then be used as a parameter in the SVN repository URL.
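For example, the Subversion plugin accepts a pinned revision on the URL with @, so the repository URL can reference the injected variable (hypothetical URL; confirm your plugin versions expand variables here):
https://svn.example.com/project/trunk@${DEV_ENVIRONMENT_CORE_REVISION}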
You should be able to identify the changeset number that was built in dev and manually pass that changeset to the Jenkins build to pull that same changeset from SVN. Obviously that makes your deployment more manual. Maybe you can set up Jenkins to publish the changeset number to a file and then have the later environment's build read that file for the changeset number.
We used to use this model as well and it was always complex. Eventually we moved to a build once and deploy many times model using WebDeploy. This has made the process much more simple. Check it out - http://www.dotnetcatch.com/2016/04/16/msbuild-once-msdeploy-many-times/
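The gist of that model, sketched with hypothetical project and server names: package once with MSBuild, then push the same package to each environment with MSDeploy.

msbuild MyApp.csproj /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageLocation=MyApp.zip
msdeploy.exe -verb:sync -source:package=MyApp.zip -dest:auto,computerName=https://staging-server:8172/msdeploy.axd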

TFS The process cannot access the file because it is being used by another process

I have created an automated build process using TFS which builds a web application.
As part of this process a batch file is used to call aspnet_merge to merge my web pages into one DLL. I'm using the TFS activity Invoke Process to do this.
[A screenshot of the TFS build window output, showing the "process cannot access the file" error, was attached here.]
Does anyone have any idea how to troubleshoot this issue?
I solved this issue by removing the "Start /high /wait" command that I had in place to start the aspnet_merge tool in a separate window. This was being done because, in a local script, we were compiling the code files with aspnet_compiler before running aspnet_merge.
I also had to split the rest of the file out into a different command file as it was deleting config files I needed.

When using Team City snapshot dependencies, are you using the post build files of the snapshot or simply the SVN revision number?

I have 2 build configurations in one project:
Build & Test Code
Deploy Code
I want Deploy Code to run only if Build & Test Code built successfully, so I set up a snapshot dependency.
Does a snapshot dependency mean that Deploy Code will check out the same SVN revision as Build & Test Code and then run the NAnt script against that checkout, which will not contain the compiler-generated post-build files? Or will a snapshot dependency on Build & Test Code from Deploy Code mean that NAnt will run against the post-build, working-directory files of Build & Test Code on the build agent?
UPDATE:
It seems if I put a snapshot dependency on Build & Test Code for Deploy Code and I have a build of the latest revision for Build & Test Code, my NAnt script will deploy the post-build files for that build of Build & Test Code.
I would still like to confirm that I understand the concept, as I don't really understand the TeamCity documentation. I think I should probably make sure Deploy Code runs on the same build agent as Build & Test Code; otherwise I might run into a case where Deploy Code checks out the SVN revision and then just deploys the pre-build code files. Is this correct?
My confusion is mainly because it seems you have to have a VCS root set up for Deploy Code. Is that because it needs it to compare revision numbers for the snapshot dependency?
From the Snapshot Dependency section of the Dependent Builds doco page:
A snapshot dependency from build configuration A to build configuration B enforces that each build of A has a "suitable" build of B, so that both builds use the same sources snapshot (the used sources revisions correspond to the same moment).
So the idea of a snapshot dependency is that you can run a build against exactly the same codebase as another build which has successfully run against it.
If you want the "deploy code" build to run only after "build and test code" has successfully run, create a snapshot dependency in the second build and make sure it's set to "Only use successful builds from the suitable ones".
Keep in mind that this has nothing to do with artefacts; the second build will simply pull the same codebase and recompile it all over again. If you want to deploy the artefacts created from the first build, then you want to be looking at artefact dependencies instead. This is just what Paul has written about in his answer and is the correct approach.
Regarding your update, it sounds like those post-build files are only available because they're still on the build agent after the first build. Try running the first build then "cleaning sources" on the agent and running the second build. You'll find the original compilation output is no longer there and it will fail. This is important because if you have multiple build agents or go some time between the two builds you simply can't rely on the output that isn't saved as artefacts still being there.
And yes, the TeamCity documentation is confusing :)
I have a very similar setup in TeamCity, except that I use MSBuild rather than NAnt, with the same two-step build process; if I explain how I've configured it, hopefully that will allow you to understand what you need to do.
So in my setup, Build 1 pulls the code from source control, compiles it and runs the unit tests. It then publishes all the files required for deployment as artifacts.
Build 2 has a snapshot and an artifact dependency on Build 1 and this means that it pulls no code, it just simply takes the artifacts from Build 1 and deploys them.
In practice this means I can trigger Build 2 and one of two things happen. If Build 1 is up to date then it simply deploys the artifacts from the last successful build of Build 1. However if Build 1 is not up to date then TeamCity will automatically trigger Build 1 and then run Build 2 straight afterwards using the artifacts from that build.
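For reference, the publishing side of this is just the artifact paths setting on Build 1; TeamCity's "=>" syntax maps build output into an archive (paths hypothetical):
MyApp\bin\Release\** => deploy.zip
Build 2's artifact dependency can then pull deploy.zip (or extract from it with a deploy.zip!** pattern) before its deployment step runs.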
