A "simple" question: how can I automatically run PHPunit tests with Arcanist?
According to the documentation, I should first load a custom library: as stated here, I should create a .arcconfig file and load the appropriate library.
So: I've created a dir "arc_libs" in my project, and in its "src" dir I used arc liberate to generate the needed files. My config is now:
{
"project.name" : "arc_libs",
"phabricator.uri" : "https://phabricator.xxx.xxx.net/",
"unit.engine" : "PhpunitTestEngine",
"load" : ["arc_libs/src"]
}
The library DOES get loaded, because I can run arc unit:
[matthijs#xx xxx]$ arc unit
No tests to run.
But as you can see, there are no tests to run. We keep our tests in "project_root/tests", and as far as I understand the documentation, I should create a __tests__ dir in "the module" (probably my arc_libs dir?).
However, I want to run my existing PHPUnit test files, not new tests I need to create. I tried using a symlink etc. but I cannot get it to work: Arcanist doesn't detect my tests.
So my question: how can I automatically run my EXISTING PHPUnit tests with Arcanist?
(Note: we use arc diff, which should run arc unit automatically.)
The documentation you linked won't be very useful - it's aimed at Phabricator developers who want to test their libraries. There is some user-facing documentation for customising unit test tasks, but it's not great. Fortunately, it's quite easy to get Arcanist to run your project's unit tests using the included PhpunitTestEngine:
Prepare a phpunit.xml file in your project root. This should be a standard PHPUnit configuration file (a minimal sketch is shown after these steps). You can test this by running phpunit -c phpunit.xml.
Add a phpunit_config option to your .arcconfig:
{
"phabricator.uri": "https://phabricator.xxx.xxx.net/",
"unit.engine": "PhpunitTestEngine",
"phpunit_config": "phpunit.xml"
}
Run arc unit to test it out.
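For reference, here is a minimal phpunit.xml sketch, assuming your tests live in project_root/tests and that a Composer autoloader exists at vendor/autoload.php (both paths are assumptions, so adjust them to your layout):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch: the bootstrap and testsuite paths are assumptions -->
<phpunit bootstrap="vendor/autoload.php" colors="true">
    <testsuites>
        <testsuite name="Project Test Suite">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
</phpunit>

With this in place, running phpunit -c phpunit.xml from the project root should pick up everything under tests/.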
Although user documentation is thin on the ground, the source code for PhpunitTestEngine has some comments and is fairly concise. If you run into problems, reading through the test engine code can help you track them down.
$ arc unit --help
unit [options] [paths]
unit [options] --rev [rev]
Supports: git, svn, hg
Run unit tests that cover specified paths. If no paths are specified,
unit tests covering all modified files will be run.
arc lint and arc unit are meant to be used as part of the process of making changes, so by default they only act on changed files. Odds are, you don't have any changed files. You probably want to specify some paths, or run arc unit --everything to run all tests.
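For example, either of the following should give you test runs; the file path in the second command is just a placeholder for one of your own test files:

arc unit --everything
arc unit tests/SomeServiceTest.php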
When building R packages, we use testthat to write tests. We have two files: a test file for the specific package (specific.R), and one that we use to make sure all packages continue to work together and the overall result is fine (overall.R). Both tests are currently run, through Travis, when we push to GitHub or create a PR; Travis implicitly runs this line of code: R CMD check *tar.gz. check runs all the tests in the tests folder, and thus both files are run.
Now, I'm a bit new to testing... but it seems that we have more or less recreated the difference between a unit test and an integration test.
Considering that the tests for overall.R take a lot longer to run, we would like to restrict things so that they run only when we open a pull request for the package (when we have introduced new functionality on a different dev branch), while the package-specific tests keep running every time we commit/push to the repo.
Is this possible in GitHub or Travis?
You could put your overall.R script into a different directory and then specify this folder as the new tests directory for pull-request hooks, but this will then run only your integration tests and not the unit tests. See R CMD check --help:
R CMD check --test-dir integration_tests package.tar.gz
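One way to wire this up (a sketch, assuming overall.R has been moved into an integration_tests directory and relying on Travis CI's standard TRAVIS_PULL_REQUEST variable, which is "false" for ordinary pushes) is to override the script section of .travis.yml:

# .travis.yml (sketch)
script:
  # unit tests run on every push and on PRs
  - R CMD check *tar.gz
  # integration tests run only for pull requests
  - if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then R CMD check --test-dir integration_tests *tar.gz; fi

Note that overriding script replaces whatever check step Travis was running implicitly before, so the first line re-adds the normal check.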
I'm currently an undergraduate researcher and I've been tasked with researching knowledge-defined networking. The research in particular deals with very advanced code that's way beyond my minimal knowledge of OMNeT++. The first instruction to build the network is to run the makefile (found here: https://bpaste.net/show/d26a592a563a) to generate the "networkRL" needed by the Python script.
I've imported all of the files needed for the simulation, but whenever I try to run the makefile I get an error:
"Error starting process.
Cannot run program "C:\Users\Sierra\DRL\omnet\router\makefile": Launching failed"
Or, when I try to run the entire simulation, it asks:
"Enter parameter 'NetworkAll.node0.tcontroller.folderName':"
I'm not sure if these are simple problems to solve and I'm just inexperienced, but any help would be greatly appreciated. I can post all of the source, NED, and header files if necessary; I didn't want to pack this entire post with 15+ code links if the makefile was the only one needed to solve this issue.
I'm using OMNeT++ version 4.6 on Windows 10, if that information is relevant.
The term "run the makefile" means: run make in the directory where makefile is located. In OMNeT++ one can do this in two ways.
First way:
Open mingwenv.cmd from OMNeT++ main directory.
In the mingw console, go to the main directory of the project, for example:
cd /C/Users/Sierra/DRL/
In the mingw console type:
make
Second way:
In OMNeT++ choose File | Import... | Existing Projects into Workspace and select the project.
Build the project choosing Project | Build Project.
Regarding the second error: open omnetpp.ini and set a value for the folderName parameter, for example:
**.folderName = "/c/some/directory"
or
**.node0.tcontroller.folderName = "/c/some/directory"
I currently have a command line sbt -Dsome.configuration.option test doing what I want, but I would like it to apply that configuration option automatically for sbt test (and no other sbt phase). If my terminology is correct, then I want to set a Java Option for the Test Configuration. How do I do this?
Searching on these terms has led me to http://www.scala-sbt.org/release/docs/Testing.html but I have not yet been able to understand it.
This question looks similar to mine: Define custom test configurations in sbt
Try this:
testOptions in Test +=
Tests.Setup(() => sys.props += "some.configuration.option" -> "true")
Caveat:
Because you're not forking, this mutates the system properties of the JVM running sbt itself.
This means that after running test the first time, that system property will also be set if you, for instance, run your main from within sbt (run/runMain).
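If you want to avoid that side effect, a common alternative (not part of the original answer, just a sketch) is to fork a separate JVM for the tests and pass the property only to it:

// build.sbt sketch: only the forked test JVM sees the property,
// so the JVM running sbt itself stays untouched
fork in Test := true
javaOptions in Test += "-Dsome.configuration.option=true"

The trade-off is that forked tests start a new JVM, which makes each test run a little slower.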
On Windows, after running the grunt build command for creating brackets-shell, the build reports "Done, without errors", but I don't see any .exe file generated.
What might be the problem?
Here are some possible solutions:
Are you following the full brackets-shell build instructions, including all prerequisites?
Make sure Brackets isn't running at the same time. The build will fail silently if the .exe file is currently in use (see bug).
Try with a fresh git clone of the repo. If your brackets-shell local copy has been around for a while, sometimes the build & deps folders can get in a bad state. (I'm assuming you haven't modified the source at all. If you have, try with an unmodified copy of the source first to make sure it builds correctly without any of your changes).
Check that python --version shows 2.7.x
Verbose build output would also be helpful in diagnosing issues like this, but unfortunately there's not yet an easy way to get that...
If you follow the instructions on brackets-shell's wiki page, the Windows executable should be created in the Release directory.
I'm running a Capifony deployment. However, I notice that Capifony's built-in commands are running against the previous release, whereas my custom commands are correctly targeting the current release.
For example, if I run cap -d staging deploy, I see some command output like this (line breaks added):
--> Updating Composer.......................................
Preparing to execute command: "sh -c 'cd /home/myproj/releases/20130924144349 &&
php composer.phar self-update'"
Execute ([Yes], No, Abort) ? |y|
You'll see that this is referring to my previous release - from 2013.
I also see commands referring to this new release's folder (from 2014):
--> Running migrations......................................
Preparing to execute command: "/home/myproj/releases/20140219150009/
app/console doctrine:migrations:migrate --no-interaction"
Execute ([Yes], No, Abort) ? |y|
In my commands, I use the #{release_path} variable, whereas looking at Capifony's code, it's using #{latest_release}. But obviously I can't change Capifony's code.
This issue against Capistrano talks about something similar, but I don't think it really helps, as again I can't change Capifony's code.
If I delete my releases folder on the server, I have a similar problem - #{latest_release} doesn't have any value, so it attempts to do things like create a folder /app/cache (since the code is something like mkdir -p #{latest_release}/app/cache).
(Assuming I don't delete the current symlink and the release folder, the specific error I see is when it fails to copy vendors: cp: cannot copy a directory, /home/myproj/current/vendor, into itself. However, this is just the symptom of the bigger problem - if it thinks the new release is actually the previous one, that explains why current also points there!)
Any ideas? I'm happy to provide extracts from my deploy.rb or staging.rb (I'm using the multistage extension) but didn't just want to dump in the whole thing, so let me know what you're interested in! Thanks
I finally got to the bottom of this one!
I had a step set to run before deployment:
before "deploy", "maintenance:enable"
This maintenance step (correctly) sets up maintenance mode on the existing site (in the example above, my 2013 one).
However, the maintenance task was referring to the previous release by using the latest_release variable. Since the step was running before deployment, latest_release did indeed refer to the 2013 release. But once latest_release has been used, its value is fixed for the rest of the deployment run, so it remained set to the 2013 release!
I therefore resolved this by changing the maintenance code so that it didn't use the latest_release variable. I used current_release instead (which doesn't seem to have this side-effect). However, another approach would be to define your own variable which gets its value in the same way as latest_release would:
set :prev_release, exists?(:deploy_timestamped) ? release_path : current_release
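For illustration, a hypothetical maintenance:enable task built on current_release could look like the sketch below; the maintenance.html file names are made up, so substitute whatever your own task actually touches:

# Sketch only: uses current_release so that evaluating the variable before
# deploy doesn't pin the rest of the run to the previous release.
namespace :maintenance do
  task :enable, :roles => :web do
    run "cp #{current_release}/web/maintenance.html.dist #{current_release}/web/maintenance.html"
  end
end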
I worked out how latest_release was being set by looking in the Capistrano code. In my environment, I could find this by doing bundle show capistrano (since it was installed with bundler), but the approach will differ for other setups.
Although the reason for my problem was quite specific, my approach may help others: I created an entirely vanilla deployment following the Capifony instructions and gradually added in features from my old deployment until it broke!