Where does iot_v1.types.FieldMask() come from? - google-cloud-iot

code:
device_path = client.device_path(
    project_id, cloud_region, registry_id, device_id)
mask = iot_v1.types.FieldMask()
mask.paths.append('config')
mask.paths.append('gateway_config')
device = client.get_device(client.list_devices(parent=path, field_mask=mask))
Where does this FieldMask come from?

The field mask specifies the fields of the Device resource to be returned in the response. The fields id and num_id are always returned, along with any other fields specified (source: the field_mask documentation in the Cloud IoT DeviceManager API reference).
FieldMask itself is one of protobuf's well-known types (google.protobuf.FieldMask); the google-cloud-iot package re-exports the generated protobuf message types, including FieldMask, under iot_v1.types.
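As a minimal sketch of how the mask is built and used (assuming the older google-cloud-iot client surface; exact method signatures vary between library versions, and the project, region, and registry names below are placeholders):

from google.cloud import iot_v1
from google.protobuf import field_mask_pb2

client = iot_v1.DeviceManagerClient()

# Both of these build the same message; iot_v1.types.FieldMask is the
# well-known protobuf FieldMask re-exported by the client library.
mask = iot_v1.types.FieldMask(paths=['config', 'gateway_config'])
same_mask = field_mask_pb2.FieldMask(paths=['config', 'gateway_config'])

# Hypothetical names, for illustration only.
parent = client.registry_path('my-project', 'us-central1', 'my-registry')
for device in client.list_devices(parent=parent, field_mask=mask):
    # Only id, num_id, and the fields listed in the mask are populated.
    print(device.id, device.gateway_config)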

Related

Web2py: Sending JSON Data via a Rest API post call in Web2py scheduler

I have a form in which one field uses the IS_JSON validator:
db.define_table('vmPowerOpsTable',
    Field('launchId', label=T('Launch ID'), default=datetime.datetime.now().strftime("%d%m%y%H%M%S")),
    Field('launchDate', label=T('Launched On'), default=datetime.datetime.now()),
    Field('launchBy', label=T('Launched By'), default=auth.user.email if auth.user else "Anonymous"),
    Field('inputJson', 'text', label=T('Input JSON*'),
          requires=[IS_NOT_EMPTY(error_message='Input JSON is required'), IS_JSON(error_message='Invalid JSON')]),
    migrate=True)
When the user submits this form, the data is also inserted into another table at the same time.
db.opStatus.insert(launchId=vmops_launchid, launchDate=vmops_launchdate,
                   launchBy=vmops_launchBy, opsType=operation_type,
                   opsData=vmops_inputJson,
                   statusDetail="Pending")
db.commit()
Now from the scheduler, I am trying to retrieve this data and make a POST request.
vm_power_opStatus_row_data = vm_power_opStatus_row.opsData
Note that in the step above I am able to retrieve the data (I inserted it into a DB table and saw that the field exactly matches what the user entered).
Then from the scheduler, I do a POST.
power_response = requests.post(vm_power_op_url, json=vm_power_opStatus_row_data)
The POST request is handled by a function in my controller.
Controller Function:
@request.restful()
def vmPowerOperation():
    response.view = 'generic.json'
    si = None

    def POST(*args, **vars):
        jsonBody = request.vars
        print "Debug 1" + str(jsonBody)  # -> Here it returns blank in jsonBody.
But if I make the same request from outside (a Postman client or even Python requests), I get the desired result.
Is something going wrong with the data type when I fetch it from the table?
power_response = requests.post(vm_power_op_url,
                               json=vm_power_opStatus_row_data)
It appears that vm_power_opStatus_row_data is already a JSON-encoded string. However, the json argument to requests.post() should be a Python object, not a string (requests will automatically encode the Python object to JSON and set the content type appropriately). So, the above should be:
power_response = requests.post(vm_power_op_url,
                               json=json.loads(vm_power_opStatus_row_data))
Alternatively, you can use the data argument and set the content type to JSON:
power_response = requests.post(vm_power_op_url,
                               data=vm_power_opStatus_row_data,
                               headers={'Content-Type': 'application/json'})
Also, note that in your REST POST function, request.vars is already passed to the function as **vars, so within the function, you can simply refer to vars rather than request.vars.
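Putting it together, a rough sketch of both sides under the same assumptions as the question (the variable names, URL, and return value are placeholders, not a drop-in implementation):

# Scheduler task (sketch): decode the stored JSON string before posting.
import json
import requests

row_data = vm_power_opStatus_row.opsData  # JSON-encoded string from the DB
power_response = requests.post(vm_power_op_url,
                               json=json.loads(row_data))

# Controller (sketch): with @request.restful(), the parsed request body is
# available both as request.vars and as the **vars argument of POST.
@request.restful()
def vmPowerOperation():
    response.view = 'generic.json'

    def POST(*args, **vars):
        # vars now holds the posted JSON fields, e.g. vars.get('someField')
        return dict(received=vars)

    return locals()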

How to properly make POST request

I'm trying to make my first POST request to an API. For some reason, I always get status 403 in return. I suspect the signature is being generated incorrectly. The API key and client id are definitely correct.
My code
nonce <- as.integer(Sys.time())
post_message <- paste0(nonce, data_client.id, data_key) # data_client.id = client id # data_key = key
sha.message <- toupper(digest::hmac(data_secret, object = post_message, algo = 'sha256', serialize = TRUE))
url <- 'https://www.bitstamp.net/api/v2/balance/'
body = list('API-KEY' = data_key, 'nonce' = nonce, 'signature' = sha.message)
httr::POST(url, body = body, verbose())
Output
<- HTTP/1.1 403 Authentication Failed
I'm trying to access the Bitstamp API: https://www.bitstamp.net/api/?package=Rbitcoin&version=0.9.2
All private API calls require authentication. For a successful
authentication you need to provide your API key, a signature and a
nonce parameter.
API KEY
To get an API key, go to "Account", "Security" and then "API Access".
Set permissions and click "Generate key".
NONCE
Nonce is a regular integer number. It must be increased with every
request you make. Read more about it here. Example: if you set nonce
to 1 in your first request, you must set it to at least 2 in your
second request. You are not required to start with 1. A common
practice is to use unix time for that parameter.
SIGNATURE
Signature is a HMAC-SHA256 encoded message containing nonce, customer
ID (can be found here) and API key. The HMAC-SHA256 code must be
generated using a secret key that was generated with your API key.
This code must be converted to its hexadecimal representation (64
uppercase characters).
I'm not sure if your question is still standing, but based on your code, I managed to get it working. In fact, the main problem is in the body: the API documentation shows it expects 'key' instead of 'API-KEY'.
Also, serialize should be FALSE instead of TRUE.
At the moment this works (but the API may change):
nonce <- as.integer(Sys.time())
post_message <- paste0(nonce, data_client.id, data_key) # data_client.id = client id # data_key = key
sha.message <- toupper(digest::hmac(data_secret, object = post_message, algo = 'sha256', serialize = FALSE))
url <- 'https://www.bitstamp.net/api/v2/balance/'
body = list('key' = data_key, 'nonce' = nonce, 'signature' = sha.message)
httr::POST(url, body = body, verbose())
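If you want to sanity-check the signature outside of R, the same computation (HMAC-SHA256 of nonce + customer id + API key, keyed with the API secret, upper-cased hex digest) can be reproduced with Python's standard library; all values below are placeholders:

import hashlib
import hmac
import time

nonce = str(int(time.time()))
customer_id = '123456'        # placeholder
api_key = 'YOUR_API_KEY'      # placeholder
api_secret = b'YOUR_SECRET'   # placeholder

message = (nonce + customer_id + api_key).encode()
signature = hmac.new(api_secret, message, hashlib.sha256).hexdigest().upper()
print(signature)  # 64 uppercase hex characters, as the docs require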

How to get the table name in AWS dynamodb trigger function?

I am new to AWS and am working on creating a Lambda function in Python. The function will read the DynamoDB table stream and write it to a file in S3, where the name of the file should be the name of the table.
Can someone please tell me how to get the table name from the trigger that is invoking the lambda function?
Thanks for the help.
Since you mentioned you are new to AWS, I am going to answer descriptively.
I am assuming that you have set the 'Stream enabled' setting for your DynamoDB table to 'Yes' and have set the stream up as an event source for your lambda function.
This is how I got the table name from the stream that invoked my lambda function -
import json

def lambda_handler(event, context):
    print(json.dumps(event, indent=2))  # Shows what's in the event object
    for record in event['Records']:
        ddbARN = record['eventSourceARN']
        ddbTable = ddbARN.split(':')[5].split('/')[1]
        print("DynamoDB table name: " + ddbTable)
    return 'Successfully processed records.'
Basically, the event object, which contains all the information about the DynamoDB stream that invoked the lambda function, includes a parameter called eventSourceARN. This eventSourceARN is the ARN (Amazon Resource Name) that uniquely identifies the DynamoDB table from which the event originated.
This is a sample value for eventSourceARN -
arn:aws:dynamodb:us-east-1:111111111111:table/test/stream/2020-10-10T08:18:22.385
Notice the segment after table/ in the ARN above, test; this is the table name you are looking for.
In the line ddbTable = ddbARN.split(':')[5].split('/')[1] above, I have tried to split the entire ARN by ':' first, and then by '/' in order to get the value test. Once you have this value, you can call S3 APIs to write to a file in S3 with the same name.
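For example, a minimal boto3 sketch of that last step (the bucket name and key format are placeholders, not part of the original answer):

import json
import boto3

s3 = boto3.client('s3')

def write_stream_to_s3(event, table_name, bucket='my-stream-dumps'):  # hypothetical bucket
    # Write the whole stream event under a key named after the table.
    s3.put_object(Bucket=bucket,
                  Key=table_name + '.json',
                  Body=json.dumps(event))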
Hope this helps.
Please note that eventSourceARN is not always provided. From my testing today, I didn't see eventSourceARN present in the record. You can also refer to these links:
Issue: https://github.com/aws/aws-sdk-js/issues/2226
API: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_Record.html
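Given that caveat, it is worth guarding against a missing eventSourceARN when parsing records; a small defensive sketch:

def table_name_from_record(record):
    # eventSourceARN may be absent depending on how the record was delivered.
    ddb_arn = record.get('eventSourceARN')
    if not ddb_arn:
        return None
    return ddb_arn.split(':')[5].split('/')[1]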
One way to do it is via pattern matching in Scala, using a regex:
import scala.util.matching.Regex

val ddbArnRegex: Regex = """arn:aws:dynamodb:(.+):(.+):table/(.+)/stream/(.+)""".r

def parseTableName(ddbARN: String): Option[String] = {
  if (ddbARN == null) None
  else ddbARN match {
    case ddbArnRegex(_, _, table, _) => Some(table)
    case _ => None
  }
}

// e.g. parseTableName("arn:aws:dynamodb:us-east-1:111111111111:table/test/stream/2020-10-10T08:18:22.385")
// returns Some("test")

Convert string to map in lua

I am quite new to Lua. I am trying to convert a string of the form
{"result": "success", "data": {"shouldLoad": "true"}}
into a Lua table, so that I can access it like JSON, e.g. someMap["data"]["shouldLoad"] => "true".
I don't have any JSON bindings in Lua. I also tried loadstring to convert a string of the form {"result" = "success", "data" = {"shouldLoad" = "true"}}, which is not working.
Following is the code snippet where I call the getLocation hook, which in turn returns a JSON-stringified map. I want to access some keys from this response body and make decisions accordingly.
access_by_lua "
    local res = ngx.location.capture('/getLocation')
    -- res.body = {"result" = "success", "data" = {"shouldLoad" = "true"}}
    local resData = loadstring('return ' .. res.body)()
    local shouldLoad = resData['data']['shouldLoad']
"
When I try to read the shouldLoad value, the nginx error log reports an error saying it is trying to index a nil value.
How do I access the key values with either of the string formats? Please help.
The best answer is to consider a pre-existing JSON module, as suggested by Alexey Ten. Here's the list of JSON modules from Alexey.
I also wrote a short pure-Lua json module that you are free to use however you like. It's public domain, so you can use it, modify it, sell it, and don't need to provide any credit for it. To use that module you would write code like this:
local json = require 'json'  -- At the top of your script.
local jsonStr = '{"result": "success", "data": {"shouldLoad": "true"}}'
local myTable = json.parse(jsonStr)
-- Now you can access your table in the usual ways:
if myTable.result == 'success' then
  print('shouldLoad =', myTable.data.shouldLoad)
end

Lucene.Net MoreLikeThis returns 0 interesting terms and no clauses in the query

I'm trying to implement the Lucene.Net MoreLikeThis query, but it doesn't seem to find anything interesting in the document to search the index with.
In my scenario, the user has clicked a "More Like This" link on the search results webpage, which passes the document id on the query string. My Lucene.Net code looks like this:
var similarSearch = new MoreLikeThis(reader);
similarSearch.SetFieldNames(new[] { "Place", "Subject", "Description", "Name", "Town", "Occupation" });
similarSearch.MinWordLen = 3;
similarSearch.Boost = true;
var terms = similarSearch.RetrieveInterestingTerms(docid);
var doc = reader[docid];
var searchQuery = similarSearch.Like(docid);
Following execution, the terms variable is an empty array, the doc variable contains the document, and the searchQuery has no clauses. When I run the search using the query, it returns no documents.
My conclusion is I am able to get the document from the reader, but the MoreLikeThis object is unable to find anything to build a query from.
Any idea why?
I think you may need to set one or more of the following parameters on the MoreLikeThis object: Analyzer, MinTermFreq, and/or MinDocFreq.
I had the same issue - no results being returned. Once I set the above parameters (try setting both of the minimums to 1), it worked.
