Getting a "no package exists" error while running a unit test - Corda

I get this error while running my unit test, when initializing the network parameters. The package com.example.contract does exist in my CorDapp.
network = MockNetwork(MockNetworkParameters(cordappsForAllNodes = listOf(
        TestCordapp.findCordapp("com.example.contract"),
        TestCordapp.findCordapp("com.example.schema"))))
java.lang.IllegalArgumentException: There are no CorDapps containing the package com.example.contract on the classpath. Make sure the package name is correct and that the CorDapp is added as a gradle dependency.

I had the same issue with .findCordapp(). In my case the problem was the classpath. I was running the tests in IntelliJ with the "JAR manifest" option selected for the "Shorten command line" setting in my run configurations, and that was apparently causing the issue. Selecting the "none" option instead made it work fine. I'm still investigating this, but for the moment I hope it sheds some light on your problem so you can carry on with your testing.

Please check in the IntelliJ Run/Debug Configurations whether the tests are being run as a Gradle task rather than a JUnit one, since the Gradle tasks are the ones that can scan the packages for CorDapps.
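For reference, here is a minimal sketch of how such a MockNetwork test is usually wired up (the class name ExampleFlowTest and the JUnit 4 setup/teardown structure are placeholders; the two package names come from the question, and both must belong to CorDapp modules that the test module depends on in Gradle):

import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.StartedMockNode
import net.corda.testing.node.TestCordapp
import org.junit.After
import org.junit.Before
import org.junit.Test

class ExampleFlowTest {
    private lateinit var network: MockNetwork
    private lateinit var nodeA: StartedMockNode

    @Before
    fun setup() {
        // findCordapp() scans the test classpath; if the package does not belong to a
        // CorDapp on that classpath, it throws the IllegalArgumentException shown above.
        network = MockNetwork(MockNetworkParameters(cordappsForAllNodes = listOf(
                TestCordapp.findCordapp("com.example.contract"),
                TestCordapp.findCordapp("com.example.schema"))))
        nodeA = network.createNode()
        network.runNetwork()
    }

    @After
    fun tearDown() = network.stopNodes()

    @Test
    fun `mock network starts`() {
        // Intentionally empty: the point is that setup() no longer throws.
    }
}

If this still fails, it usually means the contract/schema modules are not on the test classpath at all, which is exactly what the IntelliJ and Gradle configuration advice above addresses.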

Related

Crashlytics NDK: How to enable native symbol uploading

I am trying to enable Crashlytics for my NDK Android app. I've followed the guide here. I got stuck on Step 2.
Step 2: Enable native symbol uploading
To produce readable stack traces from NDK crashes, Crashlytics needs to
know about the symbols in your native binaries. Our Gradle plugin
includes the uploadCrashlyticsSymbolFileBUILD_VARIANT task to automate
this process (to access this task, make sure nativeSymbolUploadEnabled
is set to true).
For method names to appear in your stack traces, you must explicitly
invoke the uploadCrashlyticsSymbolFileBUILD_VARIANT task after each
build of your NDK library. For example:
>./gradlew app:assembleBUILD_VARIANT\
app:uploadCrashlyticsSymbolFileBUILD_VARIANT
What does "For method names to appear in your stack traces, you must explicitly invoke the uploadCrashlyticsSymbolFileBUILD_VARIANT task after each build of your NDK library." mean? I also saw that they left a line with gradlew. Is this a command on a command line? I am very lost. Can anyone help me achieve Step 2?
I was also at a loss, but I finally understand.
The command should look like this.
First, move to the directory:
cd /YourProjectRootPath/proj.android/
You can find the gradlew file in this directory.
Then execute gradlew to run two tasks:
Task1: assembleDebug or assembleRelease
Task2: uploadCrashlyticsSymbolFileDebug or uploadCrashlyticsSymbolFileRelease
The command is (example for debug):
./gradlew XXXXXX:assembleDebug XXXXXX:uploadCrashlyticsSymbolFileDebug
Please replace "XXXXXX" with your app name.
If you don't know what your app name is, run the command below.
./gradlew tasks --all
You will see all the task names and can find these two tasks:
XXXXXX:assembleDebug
XXXXXX:uploadCrashlyticsSymbolFileDebug
This "XXXXXX" is your "app name".
I don't know why Google describes such a complicated command using ">" and "\", but it's just a simple command,
./gradlew <TASK1> <TASK2>
When you add "nativeSymbolUploadEnabled true" to your Gradle file, as mentioned in Step 1, this instructs the Gradle plugin to generate a new task in the format "uploadCrashlyticsSymbolFileBUILD_VARIANT" for each build type and architecture. Check this screenshot, where I only have one build type, "release", but three architectures. The generated tasks are:
uploadCrashlyticsSymbolFileArm8Release
uploadCrashlyticsSymbolFileUniversalRelease
uploadCrashlyticsSymbolFileX86_64Release
To run these tasks, you will need to either execute the command in a terminal, adjusted for the desired build variant, e.g.
>./gradlew app:assembleX86_64\
app:uploadCrashlyticsSymbolFileX86_64Release
or manually call those tasks from the Gradle tab. They need to be executed in this order (first the assemble, then the uploadCrashlyticsSymbolFile...) to make sure the binaries have been created before Crashlytics generates and uploads the symbol files.
To answer your question about what "For method names to appear in your stack traces, you must explicitly invoke the uploadCrashlyticsSymbolFileBUILD_VARIANT task after each build of your NDK library." means: Crashlytics needs the symbol files in order to convert the crash report into a readable stack trace with method names and line numbers.
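For context, this is roughly what the Step 1 configuration looks like in a Gradle Kotlin DSL build script (a sketch based on the Firebase Crashlytics documentation; the "release" build type is just the common default, so adjust it to your project):

import com.google.firebase.crashlytics.buildtools.gradle.CrashlyticsExtension

android {
    buildTypes {
        getByName("release") {
            // Asks the Crashlytics Gradle plugin to create the
            // uploadCrashlyticsSymbolFile<Variant> tasks for this build type.
            configure<CrashlyticsExtension> {
                nativeSymbolUploadEnabled = true
            }
        }
    }
}

With that in place, the tasks listed above appear, and running the assemble task followed by the matching uploadCrashlyticsSymbolFile task (as in the ./gradlew commands shown earlier) uploads the symbols.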

OutOfMemoryError after updating Corda version to 4.1

I had a Corda 3.3 test installation and recently updated it to version 4.1. After that, when I run my nodes with the deployNodes script and runnodes, I always receive the following exception in the node's console as soon as it starts. What can this mean? I don't have a clue what it could be caused by.
I tried to build and run the nodes without CorDapps and they work, so somehow my CorDapps cause this error to happen. What other information should I provide to help figure out this issue?
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:191)
at kotlin.io.ByteStreamsKt.readBytes(IOStreams.kt:123)
at kotlin.io.ByteStreamsKt.readBytes$default(IOStreams.kt:120)
at net.corda.core.internal.InternalUtils.readFully(InternalUtils.kt:123)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.getJarHash(JarScanningCordappLoader.kt:228)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.toCordapp(JarScanningCordappLoader.kt:153)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.loadCordapps(JarScanningCordappLoader.kt:106)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.access$loadCordapps(JarScanningCordappLoader.kt:44)
at net.corda.node.internal.cordapp.JarScanningCordappLoader$cordapps$2.invoke(JarScanningCordappLoader.kt:56)
at net.corda.node.internal.cordapp.JarScanningCordappLoader$cordapps$2.invoke(JarScanningCordappLoader.kt:44)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.getCordapps(JarScanningCordappLoader.kt)
at net.corda.node.internal.cordapp.CordappLoaderTemplate$cordappSchemas$2.invoke(JarScanningCordappLoader.kt:422)
at net.corda.node.internal.cordapp.CordappLoaderTemplate$cordappSchemas$2.invoke(JarScanningCordappLoader.kt:389)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at net.corda.node.internal.cordapp.CordappLoaderTemplate.getCordappSchemas(JarScanningCordappLoader.kt)
at net.corda.node.internal.AbstractNode.<init>(AbstractNode.kt:153)
at net.corda.node.internal.AbstractNode.<init>(AbstractNode.kt:126)
at net.corda.node.internal.Node.<init>(Node.kt:98)
at net.corda.node.internal.Node.<init>(Node.kt:97)
at net.corda.node.internal.NodeStartup.createNode(NodeStartup.kt:194)
at net.corda.node.internal.NodeStartup$initialiseAndRun$5.invoke(NodeStartup.kt:186)
at net.corda.node.internal.NodeStartup$initialiseAndRun$5.invoke(NodeStartup.kt:137)
at net.corda.node.internal.NodeStartupLogging$DefaultImpls.attempt(NodeStartup.kt:509)
at net.corda.node.internal.NodeStartup.attempt(NodeStartup.kt:137)
at net.corda.node.internal.NodeStartup.initialiseAndRun(NodeStartup.kt:185)
at net.corda.node.internal.NodeStartupCli.runProgram(NodeStartup.kt:128)
at net.corda.cliutils.CordaCliWrapper.call(CordaCliWrapper.kt:190)
at net.corda.node.internal.NodeStartupCli.call(NodeStartup.kt:83)
at net.corda.node.internal.NodeStartupCli.call(NodeStartup.kt:64)
at picocli.CommandLine.execute(CommandLine.java:1056)
Corda's memory usage has been slowly creeping upwards. It is possible that your machine does not have enough memory to run 3-4 or more nodes at the same time after upgrading to 4.
I recommend trying to run a single node with the CorDapps installed and seeing what happens. If it is still happening then, something else could be going wrong.
Looking at the stack trace, it is also possible that your CorDapp itself is really, really big and the node has run out of memory while reading and loading it.

Pintos - UserProg: all tests fail with is_kernel_vaddr()

I am doing the Pintos project on the side to learn more about operating systems. I had tons of DevOps trouble at first with it not running well on an 18.04 Ubuntu droplet. I am now running it on the VirtualBox image that UCCS tells students to download for Pintos.
I finished Project 1 and started to map out my solution to Project 2. Following the instructions to create a filesystem disk, I ran
pintos-mkdisk filesys.dsk --filesys-size=2
pintos -- -f -q
but am getting this error:
Kernel PANIC at ../../threads/vaddr.h:87 in vtop(): assertion
`is_kernel_vaddr (vaddr)' failed.
I then tried running make check (all the tests). They are all failing for the same reason.
Am I missing something? Is there something I need to implement to fix this? I reread the instructions and didn't see anything.
Would appreciate help!
Thanks
I had a similar problem. My code for Project 1 ran fine, but I could not format the filesystem for Project 2.
The failure for me came from the following call chain:
thread_init() -> ... -> thread_schedule_tail() -> process_activate() -> pagedir_activate() -> vtop()
The problem is that init_page_dir is still NULL when pagedir_activate() is called. init_page_dir should have been initialized in paging_init() but this is called after thread_init().
The root cause was that my scheduler was being called too early, i.e. before the call to thread_start(). The reason was that I had built in a call to thread_yield() upon completion of every call to lock_release(), which makes sense from a priority-donation standpoint. Unfortunately, locks are used before the scheduler is ready! To fix this, I installed a flag called threading_started that bails out in the first line of my thread_block() and thread_yield() functions if thread_start() has not yet been called.
Good luck!

Multiple versions of Spark on CDH 5.10 failing to launch spark-submit

I have installed Spark 2.0 on CDH 5.10 by following the link https://www.cloudera.com/documentation/spark2/latest/topics/spark2_installing.html
After all the configuration, when I run spark2-submit --version it gives me the correct version, which is 2.0.
However, when I submit a Spark job, it first says
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
This clearly indicates that the Hadoop libs are not on the classpath. My question is: is something wrong with my installation of Spark 2? Also, once we add jars with sparkExtralibCLasspath for driver and core, it then says SPARK_HADOOP_CONF is not set.
How can I verify that my installation is correct?
I am also trying to understand where my spark2 conf dirs are.
I saw a few previous questions on Stack Overflow, like https://community.cloudera.com/t5/Cloudera-Manager-Installation/CHD-5-7-spark-shell-java-lang-ClassNotFoundException-org-apache/td-p/42209 and "NoClassDefFoundError com.apache.hadoop.fs.FSDataInputStream when execute spark-shell", but these don't help.
I am using the spark2-shell and spark2-submit commands.
Some more investigation with https://community.cloudera.com/t5/Cloudera-Manager-Installation/CDH-5-5-pyspark-java-lang-NoClassDefFoundError-org-apache-hadoop/td-p/34424 suggests that if I can correctly set SPARK_EXTRA_LIB_PATH for spark2, then I can fix this issue. Can somebody guide me please? Thanks.

SBT configuration failed to load in Typesafe Activator

I'm currently trying to start a play-slick application through the Typesafe Activator, but it fails to load the SBT configuration and I get this error:
/play-slick/build.sbt:30: error: reference to fork is ambiguous;
it is imported twice in the same scope by
import _root_.play.Project._
and import Keys._
fork in run := true
^
Type error in expression
Failed to load project.
Does this mean I have SBT downloaded twice, and what can I do to resolve it? Thanks.
Just wanted to say that I came across this exact same issue when trying to use the Play-Slick example linked from the Play Tutorials page.
The solution to get it working seems to have indeed been to follow the suggestion in the Github link that Seth Tisue included in a comment above, where corruptmemory suggested removing the following line from build.sbt:
fork in run := true
In my case, this was enough to convince IntelliJ to open the project and let me tinker with it. (Just in case this is the first result for anyone else coming across this problem)
Just remove
fork in run := true
from build.sbt and run activator clean run from the command line.
