Corda integration test never finishes

After the test reaches the following line, it just churns endlessly and never completes.
partyBHandle.rpc.startFlow(::MyIssueFlow, state).returnValue.getOrThrow()
I'm seeing a lot of console output to the effect of:
13:28:41.930 [InProcessNode-3-2] DEBUG net.corda.node.services.network.NodeInfoWatcher - Number of removed NodeInfo files 0
13:28:43.862 [InProcessNode-1-2] DEBUG net.corda.node.services.network.NodeInfoWatcher - Read 0 NodeInfo files from NotaryService\additional-node-infos
13:28:43.862 [InProcessNode-1-2] DEBUG net.corda.node.services.network.NodeInfoWatcher - Number of removed NodeInfo files 0
13:28:45.828 [InProcessNode-2-2] DEBUG net.corda.node.services.network.NodeInfoWatcher - Read 0 NodeInfo files from BankA\additional-node-infos
13:28:45.828 [InProcessNode-2-2] DEBUG net.corda.node.services.network.NodeInfoWatcher - Number of removed NodeInfo files 0
I know the flow itself works, because a regular flow test using MockNetwork completes successfully. Here, though, I want an integration test so that I can exercise some client API methods.

I found the issue when I tried to start the nodes manually. Starting them that way surfaced an error that never appeared when the nodes were launched through the integration test: in my case, a jar was missing from my CorDapp.
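For anyone hitting the same hang, a minimal sketch of a driver-based test follows, assuming Corda 4.x; MyIssueFlow and state are the names from the question, and the CorDapp packages are hypothetical. The point is that cordappsForAllNodes must list every jar the flow depends on; if one is missing, the flow can hang and returnValue never completes.
import net.corda.core.identity.CordaX500Name
import net.corda.core.messaging.startFlow
import net.corda.core.utilities.getOrThrow
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.driver
import net.corda.testing.node.TestCordapp

fun myIssueFlowIntegrationTest() = driver(DriverParameters(
    startNodesInProcess = true,
    // Every CorDapp jar the flow depends on must be listed here;
    // a missing one can reproduce the endless NodeInfoWatcher churn.
    cordappsForAllNodes = listOf(
        TestCordapp.findCordapp("com.example.contracts"),  // hypothetical package
        TestCordapp.findCordapp("com.example.workflows")   // hypothetical package
    )
)) {
    val partyBHandle = startNode(providedName = CordaX500Name("BankB", "London", "GB")).getOrThrow()
    val state = TODO("build the state under test, as in the question")
    partyBHandle.rpc.startFlow(::MyIssueFlow, state).returnValue.getOrThrow()
}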

How to extract the maps via AC Dashboard?

Compilation and the database setup both went fine, but when I start the worldserver I get an error:
Loading world information...
> RealmID: 1
> Version DB world: ACDB 335.6-dev
Will clear `logs` table of entries older than 1209600 seconds every 10 minutes.
Using DataDir /azerothcore-wotlk/data/
WORLD: VMap support included. LineOfSight:true, getHeight:true, indoorCheck:true PetLOS:true
Map file '/azerothcore-wotlk/data/maps/0004331.map': does not exist!
exit code: 1
worldserver terminated, restarting...
worldserver Terminated after 1 seconds, termination count: : 6
worldserver Restarter exited. Infinite crash loop prevented. Please check your system
What could be the problem? I rechecked the directory permissions, including the owner, and everything is fine. I tried different DataDir paths; it is now set to DataDir = "/home/azcore/azerothcore-wotlk/data", but I still get the error. How can I fix this?
First of all, if you only need the latest maps compatible with AzerothCore, you can download them from here.
Otherwise, set CTOOLS_BUILD='all' in your config.sh file, then run the build again using
./acore.sh compiler build
This will generate the binaries for extracting the data inside azerothcore-wotlk/env/dist/bin/.
With the binaries in place you can follow the guide here to extract the data manually; you only need to copy the binaries into the WoW client directory and run them in the right order.
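For reference, the usual order is roughly the following (run from inside your WoW 3.3.5a client directory after copying the binaries there; exact binary names may differ slightly between versions):
./mapextractor                     # writes dbc/ and maps/
./vmap4extractor                   # writes Buildings/
./vmap4assembler Buildings vmaps   # assembles vmaps/ from Buildings/
./mmaps_generator                  # optional; needs maps/ and vmaps/, takes hours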

Pintos - UserProg: all tests fail in is_kernel_vaddr()

I am doing the Pintos project on the side to learn more about operating systems. I had tons of devops trouble at first with it not running well on an 18.04 Ubuntu droplet. I am now running it on the VirtualBox image that UCCS tells students to download for pintos.
I finished project 1 and started to map out my solution to project 2. Following the instructions to create a file I ran
pintos-mkdisk filesys.dsk --filesys-size=2
pintos -- -f -q
but I get the error
Kernel PANIC at ../../threads/vaddr.h:87 in vtop(): assertion
`is_kernel_vaddr (vaddr)' failed.
I then tried running make check (all the tests); they all fail for the same reason.
Am I missing something? Is there something I need to implement to fix this? I reread the instructions and didn't see anything.
Would appreciate help!
Thanks
I had a similar problem. My code for Project 1 ran fine, but I could not format the filesystem for Project 2.
The failure for me came from the following call chain:
thread_init() -> ... -> thread_schedule_tail() -> process_activate() -> pagedir_activate() -> vtop()
The problem is that init_page_dir is still NULL when pagedir_activate() is called. init_page_dir should have been initialized in paging_init(), but that runs after thread_init().
The root cause was that my scheduler was being invoked too early, i.e. before the call to thread_start(). I had added a call to thread_yield() at the end of every lock_release(), which makes sense from a priority-donation standpoint. Unfortunately, locks are used before the scheduler is ready! To fix this, I added a flag called threading_started that makes thread_block() and thread_yield() bail out in their first line if thread_start() has not yet been called.
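A minimal sketch of that guard, assuming the flag is flipped at the top of thread_start() (the flag name is mine, not part of the Pintos sources):
/* threads/thread.c -- hypothetical guard against scheduling
   before thread_start() has run. */
static bool threading_started = false;

void thread_start (void) {
  threading_started = true;
  /* ... existing code: create idle thread, enable interrupts ... */
}

void thread_yield (void) {
  if (!threading_started)
    return;   /* scheduler not ready yet; same guard in thread_block() */
  /* ... existing yield logic ... */
}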
Good luck!

How to get sbt to pick up tests written in groovy?

I've found the sbt-groovy plugin, and it compiles both the test and the main sources just fine. However, the definedTests key is always empty; sbt never discovers any Groovy tests. I've verified this with a very simple, single src/test/groovy/Test.groovy containing a single method annotated @Test, which should be picked up by the junit-interface.
I think the root of the issue is that the sbt-groovy plugin needs to define the definedTests task in its own plugin source code. This task provides a Seq[TestDefinition].
Looking at how sbt itself populates that sequence reveals that it uses additional output from the Scala compiler (which also happens to compile Java files, so it works out of the box for Java), collected in an Analysis class that is populated by output from the IncrementalCompiler.
I've fiddled around with the task definition, but I'm not sure I'm even on the right path. Documentation on this stuff is pretty sparse, or heavily tied to the IncrementalCompiler.
What code do I need in sbt-groovy to produce a Seq[TestDefinition] that satisfies SBT so that I can run tests (picked up by the junit-interface) that are written in Groovy?
The test detection code is in Tests.discover, which you might be interested in. It seems like all you need is the list of annotated methods and the list of subclasses. If you have some way of finding those out, you could probably mimic what's going on in that code, as in the sketch below.
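For instance, if you can enumerate the annotated Groovy test classes yourself, a hand-wired sketch might look like this (sbt 0.13 syntax; the fingerprint mirrors what junit-interface advertises, and the class name is illustrative):
// build.sbt -- hypothetical manual wiring of definedTests
definedTests in Test ++= {
  val groovyTestClasses = Seq("Foo") // discovered Groovy test classes, however you find them
  groovyTestClasses.map { name =>
    new TestDefinition(
      name,
      new sbt.testing.AnnotatedFingerprint {
        def isModule = false
        def annotationName = "org.junit.Test"
      },
      false,       // not explicitly specified by the user
      Array.empty  // no selectors
    )
  }
}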
The discovery code, as you mentioned, relies on the Analysis datatype, which is the inner gut of the incremental compiler. You might be able to take advantage of the fact that it is sbt (not the Scala or Java compiler) that is responsible for incremental compilation. For Java compilation, AnalyzingJavaCompiler.compile calls the compiler and then does the analysis. In theory, you could define an AnalyzingGroovyCompiler that uses the same mechanism as the one used for Java compilation. This is not exactly a walk in the park, since some of the parts are hidden behind private[sbt].
Long story short, I put together a hacky proof-of-concept that wrangles the incremental compiler into generating Analysis for Groovy code, and was able to detect a test.
https://github.com/eed3si9n/sbt-groovy-test
I've only tested with one simple use case
import org.junit.Test
import org.junit.Assert

class Foo {
  @Test
  public void foo() {
    Assert.assertEquals(1, 2)
  }
}
Running test from sbt yields the following output:
> test
[info] Start Compiling Test Groovy sources : /Users/xxx/sbt-2167-groovy/src/test/groovy
[error] Test Foo.foo failed: expected:<1> but was:<2>, took 0.062 sec
[error] Failed: Total 1, Failed 1, Errors 0, Passed 0
[error] Failed tests:
[error] Foo
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 1 s, completed Aug 23, 2015 5:05:01 AM
It might not work with future versions of sbt. Caveat emptor.

Lucene index gets broken segments after every restart of liferay-tomcat

I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay on Tomcat. I fix the index on one server and restart it while the other keeps running; this is a production environment, so I cannot bring them both down.
Any suggestions, please?
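(For reference, a CheckIndex run looks roughly like this; the exact jar name depends on the Lucene version bundled with Liferay, and the index path is taken from the output below:)
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix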
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
If you access your search index from more than one application server, I would suggest integrating a Solr server, so you don't have the problem of two app servers trying to write to the same files. This is error-prone, as you have already found out.
To get Solr up and running, follow these steps:
Install a Solr server on any machine you like; a machine running only Solr would be preferable.
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr search portlet, as in the illustrative snippet below.
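As an illustration of the kind of adjustment involved: in the 6.x-era solr-web plugin, the Solr URL lives in a Spring bean definition roughly like the one below (file, bean, and class names are from memory and may differ in your version; treat this as a sketch, not the plugin's exact config):
<bean id="solrServer" class="com.liferay.portal.search.solr.server.BasicAuthSolrServer">
    <!-- point this at the machine actually running Solr -->
    <constructor-arg type="java.lang.String" value="http://your-solr-host:8080/solr" />
</bean>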
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr

All the tests passed, but the Bamboo build fails with the message "No failed tests found, a possible compilation error occurred."

I'm supposed to run some JBehave (automated) tests in Bamboo. Once the tests run, I generate some JUnit-compatible XML files so that Bamboo can understand them. All the JBehave tests are run as part of a script, because I need to run them on a separate display (remember, these are automated browser tests). An example script is as follows:
export DISPLAY=:0 && xvfb-run --server-args="-screen 0, 1024x768x24" \
  mvn clean integration-test -DskipTests -P integration-test -Dtest=*
I also have a JUnit parser task that points to the generated JUnit-compatible XML files. Yet when the Bamboo build runs, even if all the tests pass, I get a red build with the message "No failed tests found, a possible compilation error occurred."
Can someone please help me with this?
Your build script might be producing successful test reports, but one (or both, possibly) of your tasks is failing. That means that the failure is probably* occurring after your tests complete. Check your build logs for errors. You might also try logging in to your Bamboo server (as the bamboo user) and running the commands by hand.
I've seen this message in the past when our test task was crashing halfway through the test run, resulting in one malformed report that Bamboo ignored and a bunch of successful reports.
*Check the build log to make sure that your tests are indeed running. If mvn clean doesn't clean out the test report directory, Bamboo might just be parsing stale test reports.
EDIT: (in response to Kishore's links)
It looks like your task to kill Xvfb is what is causing the build to fail.
18-Jul-2012 09:50:18 Starting task 'Kill Xvfb' of type 'com.atlassian.bamboo.plugins.scripttask:task.builder.script'
18-Jul-2012 09:50:18
Beginning to execute external process for build 'Functional Tests - Application Release Test - Default Job'
... running command line:
/bin/sh
/tmp/FUNC-APPTEST-JOB1-91-ScriptBuildTask-4153769009554485085.sh
... in: /opt/bamboo-home/xml-data/build-dir/FUNC-APPTEST-JOB1
... using extra environment variables:
<..snip (no meaningful output)..>
18-Jul-2012 09:50:18 Failing task since return code was 1 while expected 0
18-Jul-2012 09:50:18 Finished task 'Kill Xvfb'
What does your "Kill Xvfb" script do? Are you trying something like pkill -f "[x]vfb"? pkill -f silently returns non-zero if it can't match the expression to any processes.
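If so, one way to make the cleanup step non-fatal is to swallow that status (a sketch; adjust the pattern to whatever your script actually matches):
pkill -f "[x]vfb" || true   # exit 0 even when no process matched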
My solution was to make a 'script' task:
#!/bin/bash
/usr/local/bin/phpcs --report=checkstyle --report-file=build/logs/checkstyle.xml --standard=PSR2 ./lib | exit 0
Which always exits with status 0.
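(Piping into exit 0 works because a pipeline's status is that of its last command; appending || true to the phpcs call would achieve the same thing.)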
This is because PHP CodeSniffer returns exit status 1 when even a single coding violation (warning or error) is found, which causes the build to fail.
It turned out to be a simple fix.
Bamboo's general behavior is to scan the entire log for any failure codes (1). In this specific configuration I had some six scripts, one of which killed Xvfb (the frame buffer). For some reason the server was not able to kill Xvfb, and that task was returning a failure code. Because of this, even though all the tests passed, Bamboo picked up the error code from the earlier task and the build failed.
The current fix was to remove the task that kills Xvfb, and the build went green! \o/
