Need help fixing error "IOUContractNull attachments cannot be found" - Corda

I'm currently working on part 2 of Corda's Hello World tutorial, but I'm encountering an error that I don't know how to fix. Whenever I try to start an IOUFlow between two nodes from the Corda shell:
>>> start IOUFlow iouValue: 99, otherParty: "O=PartyB,L=New York,C=US"
I get an error:
Cannot find contract attachments for com.template.IOUContractNull.
Any help in fixing this?

This error normally happens when your contract ID is not in sync with your package name.
You should check the contract ID declared in your IOUContract file.
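For reference, in the Hello World template the contract ID is declared as a constant inside the contract class. A minimal sketch (assuming the standard com.template package from the tutorial) looks like this:

package com.template

import net.corda.core.contracts.Contract
import net.corda.core.transactions.LedgerTransaction

class IOUContract : Contract {
    companion object {
        // Must match the contract's fully qualified name exactly; a mismatch
        // (e.g. an ID built from a variable that is null) produces errors like
        // "Cannot find contract attachments for com.template.IOUContractNull".
        const val ID = "com.template.IOUContract"
    }

    override fun verify(tx: LedgerTransaction) {
        // verification logic from the tutorial goes here
    }
}

Your states and flows should reference this same ID constant rather than a hand-typed string.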

Related

Uploading manifest file failing

After making some changes in a JSON manifest file, I was trying to update it following the Amazon documentation:
ask smapi update-skill-manifest -g development -s amzn1.ask.skill.xxxx --manifest "skillManifest.json" --debug
I kept getting an error.
The message did not point to what was actually wrong; my guess was that it was related to the parameters, but that was strange, as I was following the documentation to the letter.
I then tried, instead of passing the path to the JSON file, to pass the file's contents inline, which would be either:
For PowerShell: --manifest "$(type skillmanifest.json)"
For Linux: --manifest "$(cat skillmanifest.json)"
I still kept getting the same error.
Firstly, to debug and get a more accurate error, I checked my ASK CLI version, which was outdated.
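For reference, I updated it with the usual npm commands (assuming an npm-based install; adjust if you installed the CLI differently):
npm update -g ask-cli
ask --version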
After updating the ASK CLI to the latest version I was still getting the same error.
At that point it started including an error object, which said "Parsing error due to invalid body." with the code INVALID_REQUEST_PARAMETER.
Looking those up in the error code documentation only said that the body of the request could not be parsed.
After more research and experimenting, the problem turned out to be the manifest parameter: changing it to "file:FILENAME" solved the issue:
--manifest "file:skillmanifest.json"
The documentation does not state this, but it seems to be necessary for the request to go through.
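Putting it together, the full command that worked was along these lines (skill ID elided as above):
ask smapi update-skill-manifest -g development -s amzn1.ask.skill.xxxx --manifest "file:skillManifest.json"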
I hope this helps someone out there avoid spending a full day troubleshooting.

Docker container failed to start when deploying to Google Cloud Run

I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
job = insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND",
                        values=Sys.time() %>% as_tibble(), billing=project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial_forQuestion.R /home/rstudio
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully run the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console I get this error: 
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL:https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two entries that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787) Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
John's comment points to the right source of the error: you need to expose a web server that listens on the port defined by the $PORT environment variable and answers HTTP/1 and HTTP/2 requests.
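For example (a sketch of my own, not part of the tutorial), in R you could serve a trivial endpoint with the plumber package so the container satisfies Cloud Run's port check:

# minimal sketch using plumber (an assumption; any web server listening on $PORT would do)
library(plumber)
pr <- pr()
pr <- pr_get(pr, "/", function() "ok")   # respond to GET / so the revision is considered healthy
pr_run(pr, host = "0.0.0.0", port = as.integer(Sys.getenv("PORT", "8080")))  # Cloud Run injects PORT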
However, I have a workaround: you can use Cloud Build for this. Simply define a build step with your container name and the args if needed.
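A minimal sketch of such a cloudbuild.yaml (the image name and script path below are placeholders for your own):

# Cloud Build just runs the container as a build step, so no web server is needed
steps:
  - name: 'gcr.io/YOUR_PROJECT/your-r-image'
    args: ['Rscript', '/home/rstudio/big-query-tutorial.R']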
Let me know if you need more guidance on this (strange) workaround.
The log message "Container Sandbox: Unsupported syscall setsockopt" from Google Cloud Run is documented as a known gVisor issue.

BigQuery Crashlytics dataset schedule interval

We're currently looking into Firebase<>BigQuery (not sandboxed) for monitoring purposes. We've hooked up one of our projects using the Firebase integration and have gathered a few days worth of data.
The only issue is that the data is always a day behind, which makes sense since the transfer only runs every 24 hours. But trying to change the schedule through the bq CLI:
bq update --transfer_config \
--target_dataset='crashlytics' \
--schedule='every 2 hours' \
projects/p/locations/l/transferConfigs/c
results in a 400 error:
Bigquery service returned an invalid reply in update operation: Error reported by server with missing error fields. Server returned: {u'error': {u'status': u'INVALID_ARGUMENT',
u'message': u'Request contains an invalid argument.', u'code': 400}}.
Please make sure you are using the latest version of the bq tool and try again. If this problem persists, you may have encountered a bug in the bigquery client. Please file a bug
report in our public issue tracker:
https://issuetracker.google.com/issues/new?component=187149&template=0
Please include a brief description of the steps that led to this issue, as well as any rows that can be made public from the following information:
========================================
== Platform ==
CPython:2.7.16:Darwin-19.2.0-x86_64-i386-64bit
== bq version ==
2.0.53
== Command line ==
['/path/bq/bq.py', '--application_default_credential_file', '/path/e#mail.com/adc.json', '--credential_file', '/path/e#email.com/singlestore_bq.json', '--project_id=tde-psv-app', 'update', '--transfer_config', '--target_dataset=crashlytics', '--schedule=every 2 hours', 'projects/p/locations/l/transferConfigs/c']
== UTC timestamp ==
2020-02-24 08:47:23
== Error trace ==
Traceback (most recent call last):
File "/path/bq/bq.py", line 1116, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "/path/bq/bq.py", line 4615, in RunWithArgs
schedule_args=schedule_args)
File "/path/bq/bigquery_client.py", line 3984, in UpdateTransferConfig
x__xgafv='2').execute()
File "/path/bq/bigquery_client.py", line 810, in execute
BigqueryHttp.RaiseErrorFromHttpError(e)
File "/path/bq/bigquery_client.py", line 788, in RaiseErrorFromHttpError
BigqueryClient.RaiseError(content)
File "/path/bq/bigquery_client.py", line 2385, in RaiseError
raise BigqueryError.Create(error, result, [])
BigqueryInterfaceError: Error reported by server with missing error fields. Server returned: {u'error': {u'status': u'INVALID_ARGUMENT', u'message': u'Request contains an invalid argument.', u'code': 400}}
========================================
Unexpected exception in update operation: Bigquery service returned an invalid reply in update operation: Error reported by server with missing error fields. Server returned:
{u'error': {u'status': u'INVALID_ARGUMENT',
u'message': u'Request contains an invalid argument.', u'code': 400}}.
Please make sure you are using the latest version of the bq tool and try again. If this problem persists, you may have encountered a bug in the bigquery client. Please file a bug
report in our public issue tracker:
https://issuetracker.google.com/issues/new?component=187149&template=0
Please include a brief description of the steps that led to this issue, as well as any rows that can be made public from the following information:
We get the impression that this is not possible for this kind of dataset / Firebase project, but we can't seem to find a clear answer on that.
Right now the data export is only available once per 24 hours. We are looking into changing this behavior. Please stay up to date on the Firebase blog for any announcements.

Not able to create a table using ore.create

I executed the R program, and when I try to push the result to a table using
ore.create(score, table="xyz")
I'm getting the following error:
Error in .oci.GetQuery(conn, statement, data = data, prefetch = prefetch, :
ORA-12801: error signaled in parallel query server P007, instance XY.ab.dc.cd:abc (2)
ORA-06520: PL/SQL: Error loading external library
ORA-06522: /app/oracle/product/11.2.0/dbhome_1/lib/librqe.so: cannot open shared object file: No such file or directory
ORA-06512: at "RQSYS.RQROWEVALIMPL", line 20
ORA-06512: at "RQSYS.RQROWEVALIMPL", line 16
ORA-06512: at line 4
Please help me solve this issue; I have been trying to fix it for the past week and haven't been able to, as I am new to this.
Any help is much appreciated.
This looks like a problem with your Oracle R Enterprise (ORE) installation.
The message indicates you are running on 11gR2. ORE requires 11.2.0.3 or higher, or 11.2.0.1 with a specific patch applied. Check this OTN Forum thread for details.
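If you are unsure of your exact database version and patch level, a quick generic check (plain Oracle SQL, nothing ORE-specific) is:

-- run in SQL*Plus or SQL Developer to confirm the exact version string
SELECT banner FROM v$version;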
You need an Oracle Support contract to get hold of these patches. If you don't have a contract you will need to migrate to database 12c in order to use R.

Getting InvalidProtocolBufferException while running oozie job

I'm getting the below exception while running the sample oozie examples.
I've modified the job.properties located at /examples/apps/map-reduce with the appropriate nameNode and jobTracker details.
I'm using the below command to run the oozie job:
"sudo oozie job -oozie http://ip-10-0-20-143.ec2.internal:11000/oozie -config examples/apps/map-reduce/job.properties -run"
Error: E0501 : E0501: Could not perform authorization operation, Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "ip-10-0-20-143.ec2.internal/10.0.20.143"; destination host is: "ip-10-0-20-144.ec2.internal":50070;
The Hadoop core-site.xml also has the correct proxyuser details for the oozie user.
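(For reference, those are entries of the standard form below; the host/group values here are just illustrative:)

<!-- core-site.xml proxyuser entries for the oozie service user -->
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>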
I really don't know where it is going wrong. :(
I will answer in case someone googles their way to this page.
In my case the cause was using the HTTP address for the NameNode.
Check your job configuration, and if it contains something like:
nameNode=yourhostname:50070
You should change it to something like this:
nameNode=hdfs://yourhostname:8020
Check your ports first of course!
Please note that the jobTracker parameter uses a different notation (no hdfs:// scheme). In my case it's:
jobTracker=yourhostname:8021
and it works fine.
Hope it helps someone.
