Error running source run-gateway for Google IoT - google-cloud-iot

I have been trying to find help for this problem without much success, and I keep getting the error below. I was following this guide: https://cloud.google.com/community/tutorials/cloud-iot-gateways-rpi and I haven't been able to get past step 14.
source run-gateway
Creating JWT using RS256 from private key file rsa_private.pem
on_publish, userdata None, mid 1
Unable to find key 1
connect status False
on_connect Connection Refused: not authorised.
on_disconnect 5: The connection was refused.
connect status False
connect status False
connect status False
^CTraceback (most recent call last):
File "./cloudiot_mqtt_gateway.py", line 356, in <module>
main()
File "./cloudiot_mqtt_gateway.py", line 284, in main
time.sleep(1)

I was able to get the gateway running, but I had to modify the script manually. Make sure that you have updated the run-gateway shell script to point to your registry ID and project ID. If any of the parameters are incorrect (e.g. device, project, region), your device will be disconnected from the device bridge. The sketch below shows which values have to line up.
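The MQTT bridge answers "Connection Refused: not authorised" when the client ID or the JWT audience does not match the registry. As a minimal sketch (all IDs below are hypothetical placeholders), these are the values the tutorial's script derives from the run-gateway parameters:

import datetime
import jwt  # PyJWT, as used by the script's create_jwt helper

# Hypothetical values; substitute your own project, region, registry and gateway IDs.
project_id = "my-project"
cloud_region = "us-central1"
registry_id = "my-registry"
gateway_id = "my-gateway"

# The bridge only authorises the connection if this client ID names an
# existing gateway device in the registry above.
client_id = "projects/{}/locations/{}/registries/{}/devices/{}".format(
    project_id, cloud_region, registry_id, gateway_id)

# The JWT audience must be the project ID, signed with rsa_private.pem.
token = {
    "iat": datetime.datetime.utcnow(),
    "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=60),
    "aud": project_id,
}
with open("rsa_private.pem", "r") as f:
    password = jwt.encode(token, f.read(), algorithm="RS256")

A mismatch in any of these (wrong registry, wrong project in the audience, wrong region) produces exactly the not-authorised/refused loop shown above.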

Related

"Bad Request-Error" when trying to connect to Azure Data Lake with Airflow

I am trying to connect to Azure Data Lake using Airflow, setting up the connection via the web UI.
When I press the test button, I get a Bad Request error.
I am using the correct UUIDs; these UUIDs have been verified in other cases. I have also checked the firewall.
When I execute the DAG, I use the Azure Data Lake connection ID to check whether a file exists, applying the method described here: What is the best way to check if a file exists on an Azure Datalake using Apache Airflow?
This is the error I get:
[2022-05-06, 17:27:33 UTC] {log.py:127} ERROR - 99ec1d77-e91c-4fd3-a1c7-fa751ca1e779 - OAuth2Client:The token response from the server is unparseable as JSON: ***
Traceback (most recent call last):
File "/opt/airflow/lib/python3.8/site-packages/adal/oauth2_client.py", line 168, in _validate_token_response
wire_response = json.loads(body)
File "/usr/lib/python3.8/json/init.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.8/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 3 column 1 (char 4)
[2022-05-06, 17:27:33 UTC] {log.py:127} ERROR - 99ec1d77-e91c-4fd3-a1c7-fa751ca1e779 - OAuth2Client:Error validating get token response: ***
Traceback (most recent call last):
File "/opt/airflow/lib/python3.8/site-packages/adal/oauth2_client.py", line 238, in _handle_get_token_response
return self._validate_token_response(body)
File "/opt/airflow/lib/python3.8/site-packages/adal/oauth2_client.py", line 168, in _validate_token_response
Authenticating to Azure Data Lake is done with token credentials, i.e. you add the specific credentials (client_id, secret, tenant) and the account name to the Airflow connection.
Information about how to set it up can be found in this doc.
You can see a code example in the source code's test function.
Other methods of authentication are currently not supported.
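Once such a connection exists, a task can exercise it through the provider's hook. A minimal sketch, assuming the apache-airflow-providers-microsoft-azure package is installed and that the connection ID and file path below (both hypothetical) point at your setup:

# Assumes a connection of type Azure Data Lake with login=client_id,
# password=secret, and tenant/account name filled in per the doc above.
from airflow.providers.microsoft.azure.hooks.data_lake import AzureDataLakeHook

hook = AzureDataLakeHook(azure_data_lake_conn_id="azure_data_lake_default")
print(hook.check_for_file("folder/myfile.csv"))  # True if the file exists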
I was trying to get the connection running using the Airflow implementation; my impression was that it was buggy and did not work out well. The situation above happened with Airflow 2.2.5, and when I upgraded to Airflow 2.3.0, the test button was grayed out.
The final solution was to use Access Tokens instead.

Google Cloud Endpoint Extensible Service Proxy does not start

I am trying to install the Extensible Service Proxy on my Compute Engine instance. I am following this guide for installing the ESP Nginx service: https://cloud.google.com/endpoints/docs/quickstart-compute-engine#running_the_extensible_service_proxy. I was able to install ESP without problems, but when I try to start the service with the command service nginx start, it does not start.
First it gave this error in /var/log/nginx/error.log:
Traceback (most recent call last):
File "/usr/sbin/start_esp.py", line 48, in <module>
from mako.template import Template
ImportError: No module named mako.template
The error went away after I installed the mako template module using the command pip install mako.
Now it is giving this error:
INFO:Fetching the service name from the metadata service
ERROR:Fetching service name failed (status code 404)
Any help would be much appreciated. Thanks
Did you forget to set your service name in your instance metadata?
From https://cloud.google.com/endpoints/docs/quickstart-compute-engine:
In the Metadata section, add the following Endpoints metadata key/value pairs:
Specify endpoints-service-name as a key and YOUR-PROJECT-ID.appspot.com as its value, replacing YOUR-PROJECT-ID with your project ID.
Click Add item.
Specify endpoints-service-version as a key and the service version returned when you deployed as the key's value.
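You can confirm from inside the VM that the key is actually visible; a 404 here reproduces the ESP error. A small check (only the key name comes from the quickstart; the rest is standard metadata-server usage):

# Query the GCE metadata server for the Endpoints service name.
# A 404 status means the endpoints-service-name key is not set.
import requests

resp = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/endpoints-service-name",
    headers={"Metadata-Flavor": "Google"},  # header required by the metadata server
)
print(resp.status_code, resp.text)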

Error Connecting to Google Datastore outside GAE

Hi, I am trying to connect to Datastore using the code below:
DatastoreOptions options = DatastoreOptions.builder().projectId("project-id").authCredentials(AuthCredentials.createForJson(new FileInputStream("c:\\GoogleCredentials\\.json"))).build();
I have created a folder GoogleCredentials on my C: drive and downloaded the service account's JSON key file into this folder.
But I am getting the following error:
Line 38, Column 67: No applicable constructor/method found for actual parameters "java.lang.String"; candidates are: "com.google.cloud.ServiceOptions$Builder com.google.cloud.ServiceOptions$Builder.projectId(java.lang.String)"

Can't bind port in Google App Engine Launcher

When I try to deploy my app locally I can't get it running. This is what it's telling me:
2014-10-24 13:16:08 Running command: "['C:\\Python27\\python.exe', 'C:\\Program Files (x86)\\Google\\google_appengine\\dev_appserver.py', '--skip_sdk_update_check=yes', '--port=3306', '--admin_port=8000', 'D:\\Documents\\Clever-CV Project\\wp39 - Copy']"
INFO 2014-10-24 13:16:15,315 devappserver2.py:733] Skipping SDK update check.
WARNING 2014-10-24 13:16:15,345 api_server.py:383] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2014-10-24 13:16:15,368 api_server.py:171] Starting API server at: http://localhost:49717
INFO 2014-10-24 13:16:15,381 api_server.py:583] Applying all pending transactions and saving the datastore
INFO 2014-10-24 13:16:15,381 api_server.py:586] Saving search indexes
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 82, in <module>
_run_file(__file__, globals())
<-- omitted -->
raise BindError('Unable to bind %s:%s' % self.bind_addr)
google.appengine.tools.devappserver2.wsgi_server.BindError: Unable to bind localhost:3306
2014-10-24 13:16:15 (Process exited with code 1)
I'm a novice, so there's a very good chance I did something wrong, but I'm at my wits' end and have tried everything I could find online.
My SQL instance is running, and the database passwords work and connect both locally and to the remote CloudSQL instance.
My app.yaml file has a new version name.
The WordPress config file has the root names/passwords set up correctly.
The issue with running your app locally is in this part: '--port=3306'. That is MySQL's default port, and you said MySQL is already running, so that port is already taken and cannot be used by your app. Instead of 3306, try the default port 8080, and increase it by one (i.e. 8081, 8082, etc.) if you need to run multiple applications at once. You can verify that a port is free with the snippet below.
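As a quick sanity check (standard library only; the port numbers are the ones from this question), trying to bind the port shows whether something else already owns it:

# Attempt to bind the port dev_appserver would use; failure means it's taken.
import socket

def port_is_free(port, host="localhost"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except socket.error:  # OSError on Python 3
        return False
    finally:
        s.close()

print(port_is_free(3306))  # False while MySQL is listening there
print(port_is_free(8080))  # a better candidate for dev_appserver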

Error creating container in OpenStack Swift

I am trying to install the latest version of Swift following the instructions from http://docs.openstack.org/icehouse/install.../general-installation-steps-swift.html. I am able to authenticate with Keystone and can successfully run the command swift stat, but when I run the command swift upload myfiles temp, I get the following error:
Error trying to create container 'myfiles': 404 Not Found: {"error": {"message": "The resource could not be found.", "c
Object PUT failed: 9.109.124.109:5000:5000/v2.0/myfiles/temp 400 Bad Request
[first 60 chars of response] {"error": {"message": "Expecting to find application/json in
In /var/log/syslog, I find the following information:
May 28 18:11:40 datafed3 account-server: ERROR __call__ error with PUT /sdb1/100869/AUTH_system/myfiles :
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/swift/account/server.py", line 284, in __call__
    res = method(req)
  File "/usr/lib/python2.7/dist-packages/swift/common/utils.py", line 2217, in wrapped
    return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/swift/common/utils.py", line 837, in _timing_stats
    resp = func(ctrl, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/swift/account/server.py", line 128, in PUT
    req.headers['x-bytes-used'])
  File "/usr/lib/python2.7/dist-packages/swift/account/backend.py", line 210, in put_container
    raise DatabaseConnectionError(self.db_file, "DB doesn't exist")
DatabaseConnectionError: DB connection error (/srv/node/sdb1/accounts/100869/80d/62816079be0fc97a4557f52b3b12380d/62816079be0fc97a4557f52b3b12380d.db, 0):
DB doesn't exist
One situation that can cause this problem: when the tenant was created, one or more storage nodes were down. Then, when you upload an object, the proxy gets a 404 from at least one storage node.
In my tests, even once all storage nodes were back up after tenant creation, the 404 error persisted. So make sure all storage nodes are up, then create another tenant to test, for example with the snippet below.
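A quick way to retest container creation against a fresh tenant is python-swiftclient. This is only a sketch; the auth URL, tenant, user and password below are placeholders for your own values:

# Create and inspect a container through a Keystone v2 endpoint.
from swiftclient import client as swift_client

conn = swift_client.Connection(
    authurl="http://9.109.124.109:5000/v2.0/",  # your Keystone endpoint
    user="newuser",                             # hypothetical credentials
    key="password",
    tenant_name="newtenant",                    # the freshly created tenant
    auth_version="2.0",
)
conn.put_container("myfiles")   # raises ClientException if the PUT fails
print(conn.head_container("myfiles"))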
