I have followed everything needed to make automatic unit test discovery work.
My project structure is like this:
projectroot
|-- src
|   |-- app.py
|   |-- __init__.py
|-- __init__.py
|-- test
|   |-- test_app.py
|   |-- __init__.py
I run the following command from projectroot:
projectroot>> python -m unittest discover -s test
It works fine on Windows: it is able to discover all the tests under the test folder and runs them successfully.
However, when I try the same on an Ubuntu machine, it says "Ran 0 tests" and never discovers any unit tests under the test folder.
Does anyone know if there is anything operating-system specific going on here?
It turned out to be about using the -p switch so that discovery scans for the right pattern of files. The default pattern is test*.py, and unlike Windows, filename matching on Linux is case-sensitive, so files starting with a capital "T" are never picked up.
The following worked. I used the -p (pattern) option to discover/run the unit tests on Ubuntu:
python -m unittest discover -s test -p "T*.py"
Note: 1. All my test cases start with "T", e.g. Test_check.py. 2. "test" is the package where all my test cases live.
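The same discovery can also be driven from a short Python script, which makes the pattern explicit; a minimal sketch using the directory and pattern from the command above:
import unittest

# Discover tests under the "test" package, matching files that start with "T"
# (the default pattern "test*.py" would miss Test_check.py on Linux).
suite = unittest.defaultTestLoader.discover(start_dir="test", pattern="T*.py")
unittest.TextTestRunner(verbosity=2).run(suite)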
I was exploring this CorDapp example:
https://github.com/corda/corda-training-template.git
There are a total of 4 nodes (Notary, A, B, and C) in this example. I am trying to open all the nodes in a single run using the runnodes script from the terminal.
But not all the nodes open at once. They open alternately: one time only the Notary and C nodes open, and another time the A and B nodes open.
Is there any specific reason for this?
I am also getting this message in the webserver terminal. Please explain it:
"The Corda specific webserver is deprecated and will be removed in future".
The runnodes script is not a very reliable way of starting up the nodes. It is mostly used during development to make development faster.
It might sometimes not work as expected. The script works by opening up a terminal window and running the command to start the node in that particular terminal. Depending on the speed of the system, the command sometimes gets executed before the new terminal has opened.
The reliable way to start a Corda node, however, is to use the java -jar corda.jar command. So just go into each individual node's folder and run that command to start the node.
Here's the script:
#!/usr/bin/env bash
set -eo pipefail
# Allow the script to be run from outside the nodes directory.
basedir=$( dirname "$0" )
cd "$basedir"
if [ -z "$JAVA_HOME" ] && which osascript >/dev/null; then
    # use the default version of java installed on macOS
    /usr/libexec/java_home --exec java -jar runnodes.jar "$@"
else
    "${JAVA_HOME:+$JAVA_HOME/bin/}java" -jar runnodes.jar "$@"
fi
It's the same script that you use to start the nodes, found in the build/nodes folder.
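If you do want to start each node yourself rather than rely on runnodes, the loop below is a rough convenience sketch, not part of the Corda tooling. It assumes the standard build/nodes layout produced by deployNodes, where each node folder contains its own corda.jar and node.conf:
import os
import subprocess

# Assumed layout: build/nodes/<NodeName>/corda.jar
nodes_dir = "build/nodes"
for name in sorted(os.listdir(nodes_dir)):
    node_dir = os.path.join(nodes_dir, name)
    if os.path.isfile(os.path.join(node_dir, "corda.jar")):
        # Launch each node from its own folder so it picks up its own node.conf.
        subprocess.Popen(["java", "-jar", "corda.jar"], cwd=node_dir)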
I have been working on an ASP.NET application using Docker, and when I launch it through Visual Studio it works great! However, if I try to run anything from the command line (or PowerShell, or VS's CLI/PowerShell) it will run, but the container it generates refuses all connections.
I am on Windows 10 with Docker Desktop installed, trying to run an ubuntu:18.04 image (I've tried Alpine and ubuntu:16.04 as well).
Steps to reproduce:
- Create a default ASP.NET application in Visual Studio
- Add Docker support
- Run with 'Docker' selected
- Open a browser, navigate to localhost:[YourPort]
- Success! Works as intended.
Then, either using the same image or a downloaded one (I tried dockersamples/static-site to confirm it wasn't a problem with the specific project):
- Open CMD
- Run docker run -p [HostPort]:[ContainerPort] [SameImageVSUses:tag] on a different port
- See that docker ps shows both containers running next to each other
- Open a browser (Firefox), get an error:
The connection was reset
Update
I changed the ASP.NET app's Program class to use 0.0.0.0 instead of localhost. I believe this was necessary, but now I see
Secure Connection Failed
PR_END_OF_FILE_ERROR
If I curl localhost:[MyPort], I get (52) Empty reply from server.
/Update
Well, maybe Visual Studio does more that I'm not aware of.
A little bit of digging shows that yes, it throws in a ton of extra arguments! But using the copy/pasted command that Visual Studio runs gives me... the exact same error.
To clarify, the containers still run from the command line; I can ssh in or docker inspect them (in fact, the docker inspect output of the VS-started and CMD-started containers is identical other than the network addresses they're bound to). I get no error messages at all from the process of building and starting the container, so if some part of it is failing, it is doing so silently.
I'm relatively new to Docker, but I can't seem to find a fix for this, or even a reason behind it. What is Visual Studio doing that I'm not? I've tried everything I'm aware of; I even had to wipe my machine (unrelated) and the exact same thing happened when I got everything reinstalled. My gut tells me it's something on my machine, but then the VS-launched container should fail too, right?
I can't find anything that tells me to flip a magic switch when running from the CLI, and nothing I do to the Dockerfile or command arguments seems to work. I've never used VirtualBox or Docker Toolbox, so this shouldn't be a wonky configuration screwed up by an old program, because it works fine when launched from Visual Studio! Agh!
I hope that this is indeed a magic switch I haven't flipped; otherwise there is something very basic that I don't understand about what I'm working with.
If you are trying to run a recent VS template, you just need to follow these instructions:
Go to the API project directory:
cd ./src/YourApiDirectory
Build command:
docker build -f ./Dockerfile --force-rm -t yourapiimage:dev ..
Run command:
docker run -it --rm -e "ASPNETCORE_ENVIRONMENT=Development" -p 58817:80 --name yourapiname yourapiimage:dev
Please note that the "-it" flag in the last command will run your image in "interactive" mode. Also note that I am using only an http connection via port 58817.
Thank you for the suggestions; it ended up being something rather frustrating. I think it was a combination of two problems:
(The following could be causing problems for others, but I was mistaken: it did not work for me.)
First and foremost, no amount of Docker configuration tells your website to listen for anything inside the container. I believe the website wasn't listening for anything when I initially tried most fixes.
The real problem was that the launchSettings.json in the .csproj Properties folder apparently overrides arguments from the command line!
Remember how I said '...run it alongside the first...'? That means I was never running the website on the correct set of ports. Apparently, -p 8001:443 -e ASPNETCORE_HTTPS_PORT=443 is not enough to make the site listen on 443. You must also set the sslPort in launchSettings.json. Such is life, I suppose.
This is what finally worked:
I ran docker-compose up in the solution directory. That's it. I didn't see a docker-compose.yml when I was looking in VS, so I didn't think about it, but that's only because VS doesn't show solution-level items. I guess the thing that VS was doing that I wasn't was running docker-compose instead of individual commands.
When you launch directly with the Docker profile in Visual Studio (which is done via the docker-compose file), Visual Studio behind the scenes merges different override files and performs various tasks, one of which is attaching the remote debugger to the container, etc.
To help you, I've created a sample ASP.NET Core API via Visual Studio 2019, selecting .NET Core 3.0.
The following is the docker-compose that VS2019 generated on my machine when I launched my API via VS2019.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" -f "C:\Users\myuser\source\repos\testwebcore\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose14364360289538262671 --no-ansi up -d --build --force-recreate --remove-orphans
I can get it to work directly in PowerShell by running the following command; here I am using the same settings from the override file created by default by VS2019. You have to run this command from the parent folder, outside the project folder.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" up
If you want to build and run directly with the Dockerfile instead of docker-compose:
You can build with the following command; as before, it should be run from the folder outside the project folder.
docker build -f testwebcore/Dockerfile -t testcore .
After building the image, you can run it with the command below, but before that you need to create a certificate and pass a couple of environment variables to the run command. The details are mentioned in the following page, especially the section on Windows Subsystem for Linux. I am running Linux containers on my Windows 10 laptop.
So you have to run the following command to generate the certificate:
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p testpassword
The complete run command, with the environment variables and the certificate generated above, is as follows:
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="testpassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v c:\users\myuser\.aspnet\https:/https/ testcore:latest
Consider the following:
$ cd /home/mydir
$ jupyter notebook --port=8888
In plain English, I am running the Jupyter server from the /home/mydir directory.
Is there a simple way to get this directory from within a notebook, regardless of whether it's an R notebook or a Python notebook or whatever? Maybe there is some magic command or variable?
NOTE: getwd() is not an answer, as it returns the directory of the current notebook, not the Jupyter server root.
I had a similar problem and found your post, although I didn't see a viable solution here. Eventually I did find a solution, although it works only because I only care about Linux and I only care about Python. The solution is this magic line (plus the os import):
import os

J_ROOT = os.readlink('/proc/%s/cwd' % os.environ['JPY_PARENT_PID'])
(I put it in a module on my PYTHONPATH so that I can easily use it in any Python notebook.) See if it works for you.
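For example, if that line lives in a module on your PYTHONPATH, any notebook can pick it up with a one-line import. The module name jroot below is just a placeholder, not something from the answer:
# jroot is a hypothetical module name; the answer only says "a module on my PYTHONPATH"
from jroot import J_ROOT

print(J_ROOT)  # the directory the Jupyter server was started from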
Remember that IPython is just a Python module, so you can execute any valid Python code in a cell. So, if you started your notebook and haven't executed any directory changes in your code, you should be able to retrieve your cwd with the following in a cell:
import os
os.getcwd()
But furthermore, you can execute shell commands in cells, so you can retrieve other information in the cell. For example:
!which jupyter
should give you the path to your jupyter executable.
Which then leads you to running something like:
!jupyter --paths
which should give you something similar to:
config:
    /Users/yourdir/.jupyter
    /usr/local/etc/jupyter
    /etc/jupyter
data:
    /Users/yourdir/Library/Jupyter
    /usr/local/share/jupyter
    /usr/share/jupyter
runtime:
    /Users/yourdir/Library/Jupyter/runtime
Frankly, I'm surprised that all this time later there is still no built-in way to do this. I have used Isaac To's solution on Linux, but recently I had to make a Jupyter notebook portable to Windows as well. Simply using os.getcwd() is fragile, because a cell that uses it to set your JUPYTER_ROOT_DIRECTORY can potentially be called again after you have changed your working directory.
Here is what I came up with:
import os

try:
    JUPYTER_ROOT_DIRECTORY
except NameError:
    JUPYTER_ROOT_DIRECTORY = os.getcwd()
I put it in one of the first couple of cells, with the initial import statements. If the cell gets called again, it will not reset the variable value, because the name already exists and the lookup no longer raises an exception.
It should be noted that, unlike Isaac To's solution, this sets the value to the directory the current .ipynb was run from, which is not necessarily the same as the top-level directory the user can access in the left-hand file pane.
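A portable sketch that combines the two approaches is below: it uses the /proc trick where it exists and falls back to a one-time os.getcwd() capture elsewhere. The function name and the fallback behaviour are mine, not from either answer:
import os

def jupyter_root_directory():
    """Best-effort guess at the directory the Jupyter server was started from."""
    pid = os.environ.get("JPY_PARENT_PID")
    proc_cwd = "/proc/%s/cwd" % pid if pid else None
    if proc_cwd and os.path.exists(proc_cwd):
        # Linux: resolve the server process's working directory directly.
        return os.readlink(proc_cwd)
    # Windows/macOS fallback: the notebook's own cwd (fragile after os.chdir()).
    return os.getcwd()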
My suggestion is to use an intuitive approach.
Create a new folder within the Jupyter environment with a very unique name, for example T246813579.
You can now locate the Jupyter working path by searching for that folder in your file explorer. For example, you can use Windows Explorer to locate your new folder.
The expected result should look something like this:
C:\Users\my_user_name\JupyterHome\T246813579
The answer from Isaac works well for Linux, but not all systems have /proc. For a solution that works on macOS and Linux, we can use shell commands, taking advantage of the ! shell assignment syntax in Jupyter:
import os
JPY_ROOT = ! lsof -a -p {os.environ['JPY_PARENT_PID']} -d cwd -F n | tail -1 | cut -c 2-
JPY_ROOT = JPY_ROOT[0]
print(JPY_ROOT) # prints Jupyter's dir
Explanation:
Get the process ID (pid) of the current jupyter instance with os.environ['JPY_PARENT_PID']
Call lsof to list the process's open files, keeping only the current working dir (cwd)
Parse the output of lsof using tail and cut to keep just the directory name we want
The ! command returns a list, here having only one element
Alternate Version
To save the os import, we could also use shell commands to get the PID. We could also do the subsequent string wrangling in python, rather than calling tail and cut from the shell, as:
JPY_ROOT = ! lsof -a -p $(printenv | grep JPY_PARENT_PID | cut -d '=' -f 2) -d cwd -F n
JPY_ROOT = JPY_ROOT[2][1:]
I am new to Robot Framework and wanted to see if I can run test cases without RIDE. I want to create a test suite and run the test cases sequentially without using RIDE. I went through the documentation but could not understand it.
Ex: Test Suite
Test Case 1
Test Case 2
Test Case 3
I would like to put references to all my resource files in the test suite and run all the test cases. I can do this using RIDE, but wanted to know if I can do it without using it. Do I need to create a batch file to do this, or is there another way to run them? Any example will help me. Thank you in advance.
When you installed Robot Framework, you also installed a program called robot (or pybot in older versions), which is the official robot test runner.
If you have a test suite named "my_tests.robot", you can open up a command prompt (bash on *nix, PowerShell or cmd.exe on Windows) and type the following command (assuming that robot is in your PATH environment variable, which it probably is):
$ robot my_tests.robot
If you have a collection of suites in a folder, you can give robot the name of the folder rather than the name of a test file.
To see a list of all robot command line options, use the --help option:
$ robot --help
For more information see Starting test execution in the robot framework users guide.
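If you would rather start the run from a Python script than from the shell, Robot Framework also exposes a programmatic entry point, robot.run, whose keyword arguments mirror the command-line options; a minimal sketch, where the suite file name is a placeholder:
from robot import run

# Equivalent to: robot --outputdir results my_tests.robot
run("my_tests.robot", outputdir="results")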
If you use Sublime Text/Brackets/Atom/Emacs/Vim..., there are also plug-ins:
http://robotframework.org/#tools
Show the path of the test suite.
Example: the location of my 'catiav6' test script:
cd C:\Users....\Desktop\catiav6
Run the following commands from cmd.
For a single test case:
pybot --test 'Name_of_single_test_case' 'Name_of_test_suite'
For all test cases:
pybot --test * 'Name_of_test_suite'
I'm making a JavaFX app that does remote EJB, and it turns out I need the java executable in the bundle. I'm building for Mac, Windows, deb, and rpm.
Three questions:
1) Is there a way to get the java executable to end up in the bundle without using the post-image scripts?
2) If not, are there post-image scripts for Linux (both deb and rpm)? These don't show up in the verbose notes like they do for Mac and Windows.
3) I'm still having a problem with my Mac script today. I've verified that the java executable is copied to the right place in the dmg-image tree, but it doesn't end up in the final image. Yesterday it worked, and I cannot for the life of me figure out what I did to make it work, or what I did to make it stop working today.
Yes, I'm using verbose mode and have tried all sorts of bash tricks to expose the inner workings.
More detail here:
https://blogs.oracle.com/talkingjavadeployment/entry/native_packaging_cookbook_using_drop
Thanks for any help,
Tim
True, there is no post-image script for linux (that I could find).
However, you can modify the rpm spec file to do what you want by adding a custom script to the %install section.
For example, a default spec file %install section might look like this:
%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/opt
cp -r %{_sourcedir}/YOURAPPNAME %{buildroot}/opt
To add the java executable to the build, just add these 2 lines:
mkdir -p %{buildroot}/opt/YOURAPPNAME/runtime/jre/bin
cp -p ${JAVA_HOME}/bin/java %{buildroot}/opt/YOURAPPNAME/runtime/jre/bin/java