gRPC property order not matching proto? - grpc

I'm starting out with gRPC; my proto looks like:
message Customer {
  int64 customerId = 1;
  string firstName = 2;
  string lastName = 3;
  repeated string roles = 4;
}
but BloomRPC is displaying it as:
{
  "roles": [
    "User",
    "Admin"
  ],
  "customerId": "100000",
  "firstName": "Bob",
  "lastName": "Jenkins"
}
Shouldn't roles be last?

The order in which fields are declared has no effect on how messages are serialized: on the wire each field is identified by its field number, and the JSON mapping makes no guarantee about key order, so BloomRPC is free to display the properties in any order.
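As a quick sanity check, parsing the same JSON with the keys in either order yields an identical message. A minimal sketch in Python, assuming a customer_pb2 module generated with protoc from the proto above:
from google.protobuf import json_format
import customer_pb2  # assumed: generated from the Customer proto above
# Same fields, different key order (the second mirrors what BloomRPC displays).
json_a = '{"customerId": "100000", "firstName": "Bob", "lastName": "Jenkins", "roles": ["User", "Admin"]}'
json_b = '{"roles": ["User", "Admin"], "customerId": "100000", "firstName": "Bob", "lastName": "Jenkins"}'
a = json_format.Parse(json_a, customer_pb2.Customer())
b = json_format.Parse(json_b, customer_pb2.Customer())
# Both parse to the same message; on the wire each field is keyed by its number (1-4).
assert a == b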

Related

Adding multiple DynamoDB items through terraform

How to add multiple items in a DynamoDB table?
resource "aws_dynamodb_table_item" "mapping_config" {
  table_name = "${var.environment}-kaleidos-dynamodb-MappingConfig"
  hash_key   = "eventType"
  item       = <<EOF
json
EOF
}
The aws_dynamodb_table_item resource always expects a single item. Is there a way to provide multiple items?
There's no way to add multiple items using a single resource "aws_dynamodb_table_item". You can have multiple resource statements in the same file, as long as you give them different names, for example:
resource "aws_dynamodb_table_item" "item1" {
...
}
resource "aws_dynamodb_table_item" "item2" {
...
}
If you are trying to create items based on a list, a map, or a specific number, you can use count or for_each (for_each was introduced in Terraform 0.12.6).
count example:
resource "aws_dynamodb_table_item" "items" {
  count      = 4
  # table_name and hash_key are required; values taken from the question's table
  table_name = "${var.environment}-kaleidos-dynamodb-MappingConfig"
  hash_key   = "eventType"
  item       = <<EOF
{
  "eventType": {"S": "${count.index}"}
}
EOF
}
for_each example:
resource "aws_dynamodb_table_item" "items" {
  for_each = {
    item1 = {
      something = "hello"
    }
    item2 = {
      something = "hello2"
    }
  }
  # table_name and hash_key are required; values taken from the question's table
  table_name = "${var.environment}-kaleidos-dynamodb-MappingConfig"
  hash_key   = "eventType"
  item       = <<EOF
{
  "eventType": {"S": "${each.key}"},
  "something": {"S": "${each.value.something}"}
}
EOF
}
If someone is looking to insert a set data type, here is an example:
resource "aws_dynamodb_table_item" "employee" {
  table_name = "${var.environment}-employee"
  hash_key   = "id"
  for_each = {
    "1234abcd" = {
      firstName = "Jack"
      lastName  = "Dorsey"
      projects  = ["Twitter", "Square"]
    }
    "34589fsd" = {
      firstName = "Jack"
      lastName  = "MA"
      projects  = ["Alibaba", "Ant"]
    }
  }
  # "SS" is the DynamoDB string-set type
  item = <<ITEM
{
  "id": {"S": "${each.key}"},
  "firstName": {"S": "${each.value.firstName}"},
  "lastName": {"S": "${each.value.lastName}"},
  "projects": {"SS": ${jsonencode(each.value.projects)}}
}
ITEM
}

How to filter google calendar REST API events by attendee's email

I'm accessing the Google Calendar REST API for Calendar Events, trying to figure out the proper notation for the q parameter to filter all events where one of the attendees is identified by email (let's say foo@bar.com).
I've tried: q=attendee.email:foo@bar.com, q=attendee.email=foo@bar.com, q=attendees.email=foo@bar.com, q=attendees.email="foo@bar.com"...
but with no results (an empty list once the q parameter is filled in).
Is it supported at all?
Is there a list of valid q parameter fields to filter by?
You cannot use any Calendar API call to directly search for attendees.
However, you can achieve this in code: list all the events, loop through them, and keep only the events where the email you are looking for matches one of the attendees' emails. For example:
function searchEvents() {
  var calendarId = "primary";
  var email = "test@email.com";
  var result = Calendar.Events.list(calendarId).items;
  for (var i = 0; i < result.length; i++) {
    if (result[i].attendees != undefined) { // Filters out the events without attendees
      for (var j = 0; j < result[i].attendees.length; j++) {
        if (result[i].attendees[j].email == email) {
          Logger.log(result[i]); // It returns all the event information
        }
      }
    }
  }
}
The full resource object returned:
{
  "kind": "calendar#calendarListEntry",
  "etag": etag,
  "id": string,
  "summary": string,
  "description": string,
  "location": string,
  "timeZone": string,
  "summaryOverride": string,
  "colorId": string,
  "backgroundColor": string,
  "foregroundColor": string,
  "hidden": boolean,
  "selected": boolean,
  "accessRole": string,
  "defaultReminders": [
    {
      "method": string,
      "minutes": integer
    }
  ],
  "notificationSettings": {
    "notifications": [
      {
        "type": string,
        "method": string
      }
    ]
  },
  "primary": boolean,
  "deleted": boolean,
  "conferenceProperties": {
    "allowedConferenceSolutionTypes": [
      string
    ]
  }
}
REFERENCES:
Events List
List Resource
The q parameter works as a free-text search over the event list:
Free text search terms to find events that match these terms in the
following fields: summary, description, location, attendee's
displayName, attendee's email. Optional.
So it is possible to search for events with a specified email:
calendar.events.list({
  q: 'attendee@email.test',
  calendarId: 'primary',
  timeMin: new Date().toISOString(),
  maxResults: 10,
  singleEvents: true,
  orderBy: 'startTime',
});
It should return the events where 'attendee@email.test' is listed as an attendee.

Querying Cosmos Nested JSON documents

I would like to turn this resultset
[
  {
    "Document": {
      "JsonData": "{\"key\":\"value1\"}"
    }
  },
  {
    "Document": {
      "JsonData": "{\"key\":\"value2\"}"
    }
  }
]
into this
[
  {
    "key": "value1"
  },
  {
    "key": "value2"
  }
]
I can get close by using a query like
select value c.Document.JsonData from c
however, I end up with
[
"{\"key\":\"value1\"}",
"{\"key\":\"value2\"}"
]
How can I cast each value to an individual JSON fragment using the SQL API?
As David Makogon said above, we need to transform such data within our app. We can do as below:
// The SELECT VALUE query returns an array of JSON-encoded strings,
// so each element has to be parsed back into an object on the client.
string data = "[\"{\\\"key\\\":\\\"value1\\\"}\",\"{\\\"key\\\":\\\"value2\\\"}\"]";
List<string> t = JsonConvert.DeserializeObject<List<string>>(data);
List<JObject> parsed = t.ConvertAll(JObject.Parse);
string jsonData = JsonConvert.SerializeObject(parsed); // [{"key":"value1"},{"key":"value2"}]

Passing object as array in body in proto3

I'm wondering how I can pass an array as the body of the message without having to specify a key. I can easily do:
message TypeResponse {
  message Type {
    string ID = 1;
    string Name = 2;
    string Description = 3;
    string IsMobile = 4;
    string IsTablet = 5;
    string IsDesktop = 6;
  }
  repeated Type types = 1;
}
That would respond with:
{
  "types": [
    {
      "ID": 1
      ...
    }
  ]
}
I'd like to structure my response as the following to match my REST API:
[
  {
    "ID": 1
    ...
  },
  {
    "ID": 2
    ...
  }
]
Proto requires that the top-level concept is a message, and that requirement carries over into the JSON mapping.
Something you could do is just strip the wrapper: keep everything from the first [ character through the last ], dropping the leading {"types": and the trailing }. The JSON output format is specified by the proto3 spec, so you can reasonably depend on it.
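A minimal sketch of that post-processing step in Python (the payload below is an assumed example shaped like the TypeResponse above):
import json
# JSON produced by the proto3 JSON mapping: a wrapper object with a "types" key.
wrapped = '{"types": [{"ID": "1", "Name": "Desktop"}, {"ID": "2", "Name": "Mobile"}]}'
# Keep everything from the first '[' through the last ']', dropping the wrapper.
start = wrapped.index('[')
end = wrapped.rindex(']')
bare_array = wrapped[start:end + 1]
print(json.loads(bare_array))  # a plain JSON array, matching the REST response shape
Parsing the JSON and taking the value under "types" achieves the same result without relying on character positions.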

Creating a custom tokenizer in ElasticSearch NEST

I have a custom class in ES 2.5 of the following:
Title
DataSources
Content
Running a search is fine, except for the middle field; it's built/indexed using a delimiter of '|'.
ex: "|4|7|8|9|10|12|14|19|20|21|22|23|29|30"
I need to build a query that matches the search words in all fields AND matches at least one number in the DataSources field.
So to summarize what I currently have:
QueryBase query = new SimpleQueryStringQuery
{
    //DefaultOperator = !operatorOR ? Operator.And : Operator.Or,
    Fields = LearnAboutFields.FULLTEXT,
    Analyzer = "standard",
    Query = searchWords.ToLower()
};
_boolQuery.Must = new QueryContainer[] { query };
That's the search words query.
foreach (var datasource in dataSources)
{
    // Add DataSources with an OR
    queryContainer |= new WildcardQuery { Field = LearnAboutFields.DATASOURCE, Value = string.Format("*{0}*", datasource) };
}
// Add this Boolean Clause to our outer clause with an AND
_boolQuery.Filter = new QueryContainer[] { queryContainer };
That's for the datasources query. There can be multiple datasources.
It doesn't work, and returns no results with the filter query added on. I think I need some work on the tokenizer/analyzer, but I don't know enough about ES to figure that out.
EDIT: Per Val's comments below I have attempted to recode the indexer like this:
_elasticClientWrapper.CreateIndex(_DataSource, i => i
    .Mappings(ms => ms
        .Map<LearnAboutContent>(m => m
            .Properties(p => p
                .String(s => s.Name(lac => lac.DataSources)
                    .Analyzer("classic_tokenizer")
                    .SearchAnalyzer("standard")))))
    .Settings(s => s
        .Analysis(an => an.Analyzers(a => a.Custom("classic_tokenizer", ca => ca.Tokenizer("classic"))))));
var indexResponse = _elasticClientWrapper.IndexMany(contentList);
It builds successfully, with data. However the query still isn't working right.
New query for DataSources:
foreach (var datasource in dataSources)
{
    // Add DataSources with an OR
    queryContainer |= new TermQuery { Field = LearnAboutFields.DATASOURCE, Value = datasource };
}
// Add this Boolean Clause to our outer clause with an AND
_boolQuery.Must = new QueryContainer[] { queryContainer };
And the JSON:
{"learnabout_index":{"aliases":{},"mappings":{"learnaboutcontent":{"properties":{"articleID":{"type":"string"},"content":{"type":"string"},"dataSources":{"type":"string","analyzer":"classic_tokenizer","search_analyzer":"standard"},"description":{"type":"string"},"fileName":{"type":"string"},"keywords":{"type":"string"},"linkURL":{"type":"string"},"title":{"type":"string"}}}},"settings":{"index":{"creation_date":"1483992041623","analysis":{"analyzer":{"classic_tokenizer":{"type":"custom","tokenizer":"classic"}}},"number_of_shards":"5","number_of_replicas":"1","uuid":"iZakEjBlRiGfNvaFn-yG-w","version":{"created":"2040099"}}},"warmers":{}}}
The Query JSON request:
{
  "size": 10000,
  "query": {
    "bool": {
      "must": [
        {
          "simple_query_string": {
            "fields": [
              "_all"
            ],
            "query": "\"housing\"",
            "analyzer": "standard"
          }
        }
      ],
      "filter": [
        {
          "terms": {
            "DataSources": [
              "1"
            ]
          }
        }
      ]
    }
  }
}
One way to achieve this is to create a custom analyzer with a classic tokenizer which will break your DataSources field into the numbers composing it, i.e. it will tokenize the field on each | character.
So when you create your index, you need to add this custom analyzer and then use it in your DataSources field:
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "number_analyzer": {
          "type": "custom",
          "tokenizer": "number_tokenizer"
        }
      },
      "tokenizer": {
        "number_tokenizer": {
          "type": "classic"
        }
      }
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "DataSources": {
          "type": "string",
          "analyzer": "number_analyzer",
          "search_analyzer": "standard"
        }
      }
    }
  }
}
As a result, if you index the string "|4|7|8|9|10|12|14|19|20|21|22|23|29|30", your DataSources field will effectively contain the following array of tokens: [4, 7, 8, 9, 10, 12, 14, 19, 20, 21, 22, 23, 29, 30]
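If you want to verify the tokenization, you can run the custom analyzer through the _analyze API once the index exists. A minimal sketch with the official Python client (index and analyzer names taken from the example above, local cluster assumed):
from elasticsearch import Elasticsearch
es = Elasticsearch("http://localhost:9200")  # assumed local cluster
# Run the custom analyzer over the raw field value and inspect the resulting tokens.
resp = es.indices.analyze(index="my_index", body={
    "analyzer": "number_analyzer",
    "text": "|4|7|8|9|10|12|14|19|20|21|22|23|29|30"
})
print([t["token"] for t in resp["tokens"]])  # expected: ['4', '7', '8', ..., '30']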
Then you can get rid of your WildcardQuery and simply use a TermsQuery instead:
var terms = new TermsQuery { Field = LearnAboutFields.DATASOURCE, Terms = dataSources };
// Add this Boolean Clause to our outer clause with an AND
_boolQuery.Filter = new QueryContainer[] { terms };
At an initial glance at your code, I think one problem you might have is that the terms query is a term-level query, so its values are not analysed: they are not broken down into tokens and are compared verbatim against the indexed tokens.
It's easy to forget this, so any values that need to go through analysis should be queried with a full-text query (such as match) in the must or should clauses.
