Infrastructure error launching Blueprint template on FIWARE Cloud Lab - networking

For testing purposes I'm trying to launch a Blueprint instance with the following specs:
Tier 1: base_centos_7 + tomcat6, minimum instances: 0, maximum: 1 (initial value: 0)
Tier 2: base_debian_7 + mysql1.2.4, minimum instances: 1, maximum: 2 (initial value: 1)
When I launch this Blueprint template I receive the following error:
Error: Infrastructure error No entity NetworkInstance found with the following criteria: [name = mynetname]
But if I go to the Networks tab, I can find mynetname in the list, and it shows:
Shared: Yes
Status: ACTIVE
Admin State: UP
Any idea why I receive that error?
Thanks a lot in advance.

I recommend contacting the administrators of the specific FIWARE Lab region; they can check whether there is any issue with the selected network.

Related

Why does sanity start fail to compile in my Next.js app?

As a beginner I'm trying to follow a tutorial that uses Sanity for the backend. The problem is that when I get to the step of starting Sanity, many errors show up. Here is the first one, which I think is the main one blocking compilation:
Error in ./schemas/schema.js (part:@sanity/base/schema)
Module build failed: Error: [BABEL]: Cannot find module 'C:\Users\ghars\tiktik-app\sanity-backend\node_modules\@babel\helper-plugin-utils\lib\index.js'. Please verify that the package.json has a valid "main" entry (While processing: C:\Users\ghars\tiktik-app\sanity-backend\node_modules\@babel\plugin-proposal-class-properties\lib\index.js)
I've searched about this and I think Sanity has a problem with Node version 17, which I'm using. Is there any solution without going back to Node 16?

Matic Mumbai deployment failed at migration with Replay-Protected (EIP-155) error

I'm trying to deploy a dApp onto Polygon's Matic Mumbai test network, but I keep getting errors. The contracts deploy fine on all Ethereum networks, and I've made sure to have some MATIC (just in case, even though it's not asking for any). Here's what I get:
Compiling your contracts...
=============================
All good, no issues. Then migration starts as usual:
Starting migrations...
======================
> Network name: 'matic'
> Network id: 80001
> Block gas limit: 20000000 (0x1312d00)
1_initial_migration.js
======================
Deploying 'Migrations'
----------------------
Error: *** Deployment Failed ***
"Migrations" -- only replay-protected (EIP-155) transactions allowed over RPC.
In the terminal I'm following their "how to" guide verbatim. My Truffle config:
matic: {
  provider: () =>
    new HDWalletProvider(mnemonic, `https://rpc-mumbai.matic.today`),
  network_id: 80001,
  confirmations: 2,
  timeoutBlocks: 200,
  skipDryRun: true,
},
And in the terminal:
truffle migrate --network matic
Any ideas of what I'm doing wrong and how to fix the issue? Thank you.
I ran into the same issue, and thanks to the folks here:
https://github.com/trufflesuite/truffle/issues/3913
I figured out that I just needed to update this npm package:
"truffle-hdwallet-provider": "^1.0.17"
to:
"@truffle/hdwallet-provider": "^1.4.0"

Docker container failed to start when deploying to Google Cloud Run

I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
job = insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND",
values=Sys.time() %>% as_tibble(), billing=project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial_forQuestion.R /home/rstudio
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully run the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console I get this error: 
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL:https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two bugs that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787) Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
John's comment identifies the source of the errors: you need to expose a web server that listens on $PORT and answers HTTP/1 or HTTP/2 requests.
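For illustration, a minimal sketch of such a server in R, assuming the plumber package (which is not part of the original tutorial); the endpoint is hypothetical:

```r
# server.R (sketch): expose an HTTP endpoint so Cloud Run's startup
# probe finds a listener on $PORT. Assumes the plumber package.
library(plumber)

#* @get /
function() {
  # The BigQuery upload from the original script could be triggered here.
  list(status = "ok", time = as.character(Sys.time()))
}
```

It would be started with something like plumb("server.R")$run(host = "0.0.0.0", port = as.numeric(Sys.getenv("PORT", "8080"))), so the container binds whatever port Cloud Run injects instead of exiting immediately.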
However, I have a workaround: you can use Cloud Build for this. Simply define a build step with your container name, and the args if needed.
Let me know if you need more guidance on this (strange) workaround.
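A sketch of that workaround (the image path below is a placeholder, not taken from the original post):

```yaml
# cloudbuild.yaml (sketch): run the container as a Cloud Build step
# instead of deploying it as a Cloud Run service. The image name is
# hypothetical.
steps:
  - name: "gcr.io/YOUR_PROJECT/your-r-image"
```

The build can then be triggered on a schedule (for example, Cloud Scheduler invoking the Cloud Build API) to mimic a cron job, since a batch script like this one has no server to keep alive.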
The log entry "Container Sandbox: Unsupported syscall setsockopt" from Google Cloud Run is documented as a gVisor issue and can be ignored here.

MarkLogic 8: Scheduled task does not start and there are no logs

I scheduled a data extraction with an XQuery query on ML 8.0.6 using "scheduled tasks".
My XQuery query (it works if I copy/paste it into the ML web console, and I get a file on AWS S3):
xdmp:save("s3://XX.csv",let $nl := "
"
return
document {
for $book in collection("books")/books
return (root($book)/bookId||","||
$optin/updatedDate||$nl
)
})
My scheduled task :
Task enabled : yes
Task path : /home/bob/extraction.xqy
task root : /
task type: hourly
task period : 1
task start time: 8 past the hour
task database : mydatabase
task modules : file system
task user : admin
task host: XX
task priority : higher
Unfortunately, my script is not executed: no file is generated on AWS S3 (the storage used) and I do not have any logs.
Any idea how to:
1/ debug a job in the scheduled tasks?
2/ see the job running at the expected time?
Thanks,
Romain.
First, I would take a look at ErrorLog.txt, because it will probably show you where to look for the problem:
xdmp:filesystem-file(concat(xdmp:data-directory(),"/","Logs","/","ErrorLog.txt"))
Where is the script located logically: has it been uploaded to the content database, the modules database, or the ./MarkLogic/Modules directory?
If this is a cluster, have you specified which host it runs on? If so, and you are using filesystem modules, ensure the script exists in the ./MarkLogic/Modules directory of that host. If not, and you are using filesystem modules, ensure the script exists in the ./MarkLogic/Modules directory of every host in the cluster.
As for seeing the job running, you can check http://servername:8002/dashboard/ and look at the Query Execution tab to see the running processes, or you can get a snapshot by looking at the Status page of the task server (Configure > Groups > [group name] > Task Server: Status, then click the "show more" button).
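To double-check how the task is registered, you could also query the Admin API from Query Console (a sketch; it assumes the task lives in the Default group):

```xquery
xquery version "1.0-ml";
(: List the scheduled tasks configured for a group, to verify that the
   task path, database, and host are what you expect. :)
import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
let $group  := admin:group-get-id($config, "Default")
return admin:group-get-scheduled-tasks($config, $group)
```

Comparing this output against the settings listed in the question (task path, task database, task host) can surface a mismatch that would silently prevent the task from firing.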

ESP8266: WPA Enterprise login without CA certificate

Thanks a lot for giving your time and reading this.
I have gone through the WPA Enterprise question posted on the ESP8266 forum and the related links, but it couldn't help me, so I'm starting another topic.
I am trying to connect my ESP to my office network.
To connect through a mobile device, these are usually the settings:
SSID:PanLAN
EAP method: PEAP
Phase 2 authentication: None
CA certificate: None
Identity: Asia-Pacific\SLAIK
password: Badonkadong
I took eduroam.ino from joostd's GitHub and tried to run it:
https://github.com/genomics-admin/esp8266-eduroam/tree/master/Arduino
I got the following errors:
error: 'wifi_station_set_username' was not declared in this scope
wifi_station_set_username(identity, sizeof(identity));
^
ESP8266_PEAPAuth:323: error: 'wifi_station_set_cert_key' was not declared in this scope
if( wifi_station_set_cert_key(testuser_cert_pem, testuser_cert_pem_len, testuser_key_pem, testuser_key_pem_len, NULL, 0) == 0 ) {
^
ESP8266_PEAPAuth:337: error: 'wifi_station_clear_cert_key' was not declared in this scope
wifi_station_clear_cert_key();
^
ESP8266_PEAPAuth:338: error: 'wifi_station_clear_username' was not declared in this scope
wifi_station_clear_username();
^
exit status 1
'wifi_station_set_username' was not declared in this scope.
I'm new to ESP/Arduino, but to my eyes it looks like I'm missing the user_interface.h and c_type.h files.
The GitHub repo doesn't have them, so where can I get them?
I know user_interface.h is available on GitHub in other projects, but can I use those? If yes, where should I put user_interface.h after downloading it?
I found ctype.h on my computer at "C:\Program Files (x86)\Arduino\hardware\tools\avr\avr\include". Is it the same as the c_type.h mentioned in the code? Can I rename and use ctype.h?
Is there any other/better example available for me to follow to get my ESP connected to the network?
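On the compile errors specifically: those SDK functions are C symbols, so in an Arduino (C++) sketch they typically need to be declared inside an extern "C" block. A sketch of the include pattern (header availability depends on your ESP8266 core/SDK version, so treat this as an assumption, not a confirmed fix):

```cpp
// Wrap the Espressif NONOS SDK headers in extern "C" so the C++
// compiler sees the C declarations; otherwise calls like
// wifi_station_set_username() are "not declared in this scope".
extern "C" {
  #include "user_interface.h"  // SDK station/softAP API
  #include "c_types.h"         // SDK typedefs (likely the "c_type.h" in the code)
}
```

The ESP8266 Arduino core bundles these SDK headers (under tools/sdk/include), so they usually don't need to be downloaded separately; the AVR ctype.h you found is an unrelated C library header. If the enterprise functions are still undeclared after this, they may simply be missing from the SDK version your core bundles.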
