I use Dapper with dynamics because the table to be queried is not known until runtime, so POCO classes aren't possible.
I am returning Dapper's results via WebAPI. To save bandwidth, I need to return just the values from Dapper, not the property names, e.g.:
{
7,"2013-10-01T00:00:00",0,"AC",null,"ABC","SOMESTAGE"
},...
And not:
{
TID: 7,
CHANGE_DT: "2013-10-01T00:00:00",
EFFECTIVE_APPTRANS: 0,
EFFECTIVE_APPTRANS_STATUS: "AC",
DEVICE: null,
PROCESS: "ABC",
STAGE: "SOMESTAGE"
},...
I'm having some trouble figuring out a reasonable way to do this. I've tried abusing Dapper's mapping feature:
var tableData = Database.Query<dynamic, dynamic, dynamic>(connectInfo, someResource.sql,
    (x, y) =>
    {
        List<object> l = new List<object>();
        object o = x;
        foreach (var propertyName in o.GetType().GetProperties().Select(p => p.Name))
        {
            object value = o.GetType().GetProperty(propertyName).GetValue(o, null);
            l.Add(value);
        }
        return l as dynamic;
    },
    new { implantId, pendingApptrans },
    splitOn: "the_last_column");
I've also tried having Dapper return a List using the same base code.
The idea was that I'd extract property values within the map function because it allows me to play with rows before anything is returned, but I get empty results without error:
[
[ ], [ ], [ ], [ ], [ ], [ ], [ ]
]
Additionally, I don't know any column names to split on, which the mapping feature wants. However, even when I enter the last column name from a test query, the results are the same.
How can I return only values from Dapper's dynamic return value?
Do I need to resort to post-processing the dynamic after the Dapper call?
For that particular set of requirements, yes, you would need to post-process; for example, you could cast to DapperRow or IDictionary<string, object>. Alternatively, you could use the IDataReader API that Dapper now exposes.
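For example, here's a minimal post-processing sketch, assuming a plain IDbConnection, Dapper's non-generic Query, System.Linq, and a Web API 2 controller (the Ok(...) helper); the connection, SQL and parameter names are placeholders taken from the question:

// Sketch only: Dapper's dynamic rows (DapperRow) implement IDictionary<string, object>,
// so the column values can be projected out after the query runs.
var rows = connection.Query(someResource.sql, new { implantId, pendingApptrans });

var valuesOnly = rows
    .Cast<IDictionary<string, object>>()
    .Select(row => row.Values.ToList())   // keeps column order, drops the column names
    .ToList();

// Serializes as [[7,"2013-10-01T00:00:00",0,"AC",null,"ABC","SOMESTAGE"], ...]
return Ok(valuesOnly);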
I've got state with a nested array that looks like the following:
{
  list: [
    {
      id: '3546f44b-457e-4f87-95f6-c6717830294b',
      title: 'First Nest',
      key: '0',
      children: [
        {
          id: '71f034ea-478b-4f33-9dad-3685dab09171',
          title: 'Second Nest',
          key: '0-0',
          children: [
            {
              id: '11d338c6-f222-4701-98d0-3e3572009d8f',
              title: 'Q. Third Nest',
              key: '0-0-0',
            }
          ],
        }
      ],
    }
  ],
  selectedItemKey: '0'
}
Where the goal of the nested array is to mimic a tree and the selectedItemKey/key is how to access the tree node quickly.
I wrote code to update the title of a nested item with the following logic:
let list = [...state.list];
let keyArr = state.selectedItemKey.split('-');
let idx = keyArr.shift();
let currItemArr = list;

while (keyArr.length > 0) {
  currItemArr = currItemArr[idx].children;
  idx = keyArr.shift();
}

currItemArr[idx] = {
  ...currItemArr[idx],
  title: action.payload
};

return {
  ...state,
  list
};
Things work properly for the first nested item, but for the second and third levels of nesting I get the following Immer console error:
An immer producer returned a new value *and* modified its draft.
Either return a new value *or* modify the draft.
I feel like I'm messing up something pretty big here with regard to my nested array access/update logic, or in the way I'm trying to make a new copy of state.list and modify that. Please note that the nesting depth is dynamic; I do not know how deep the tree is before modifying it.
Thanks again in advance!
Immer allows you to modify the existing draft state OR return a new state, but not both at once.
It looks like you are trying to return a new state, which is fine so long as there is no mutation. However, you do make a modification when you assign to currItemArr[idx]. This is a mutation because the elements of list and currItemArr are the very same objects as those in state.list; the spread only makes a "shallow copy".
But you don't need to worry about shallow copies and mutations because the easier approach is to just modify the draft state and not return anything.
You just need to find the correct object and set its title property. I came up with a shorter way to do that using array.reduce().
const keyArr = state.selectedItemKey.split("-");
const target = keyArr.reduce(
(accumulator, idx) => accumulator.children[idx],
{ children: state.list }
);
target.title = action.payload;
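The { children: state.list } seed is what lets the first key index into state.list the same way the deeper keys index into children, so the top level needs no special case.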
How can we store a nested JSON object in an SQLite database? In Android Room we used to have @Embedded and @Relation to store and retrieve complex data objects. How can we achieve the same in Flutter?
I tried exploring sqflite, floor, and moor. None seem to help except for Moor, which lets us map the values to objects using joins, something like the code below.
Stream<List<TaskWithTag>> watchAllTasks() {
return (select(tasks)
..orderBy(
[
(t) =>
OrderingTerm(expression: t.dueDate, mode: OrderingMode.desc),
(t) => OrderingTerm(expression: t.name),
],
))
.join(
[
leftOuterJoin(tags, tags.name.equalsExp(tasks.tagName)),
],
)
.watch()
.map((rows) => rows.map(
(row) {
return TaskWithTag(
task: row.readTable(tasks),
tag: row.readTable(tags),
);
},
).toList());
}
So what exactly is the right way to do this?
Consider the following item in a DynamoDB table:
{
  "id": "0f00b15e-83ee-4340-99ea-6cb890830d96",
  "name": "region-1",
  "controllers": [
    {
      "id": "93014cf0-bb05-4fbb-9466-d56ff51b1d22",
      "routes": [
        {
          "direction": "N",
          "cars": 0,
          "sensors": [
            {
              "id": "e82c45a3-d356-41e4-977e-f7ec947aad46",
              "light": true
            },
            {
              "id": "78a6883e-1ced-4727-9c94-2154e0eb6139"
            }
          ]
        }
      ]
    }
  ]
}
My goal is to update a single attribute in this JSON representation, in this case cars.
My approach
I know all the sensor IDs. So the easiest way to reach that attribute is to find, in the array, the route which has a sensor with any of those IDs. Having found that sensor, Dynamo should know which object in the routes array it has to update. However, I cannot run this code without my condition being rejected.
In this case, update attribute cars, where the route has a sensor with id e82c45a3-d356-41e4-977e-f7ec947aad46 or 78a6883e-1ced-4727-9c94-2154e0eb6139.
var params = {
  TableName: table,
  Key: {
    "id": "0f00b15e-83ee-4340-99ea-6cb890830d96",
    "name": "region-1"
  },
  UpdateExpression: "set controllers.intersections.routes.cars = :c",
  ConditionExpression: "controllers.intersections.routes.sensors.id = :s",
  ExpressionAttributeValues: {
    ":c": 1,
    ":s": "e82c45a3-d356-41e4-977e-f7ec947aad46"
  },
  ReturnValues: "UPDATED_NEW"
};

docClient.update(params, ...);
How can I achieve this?
Unfortunately, you can't achieve this in DynamoDB without knowing the array indexes. You have a very complex nested structure, and the DynamoDB API doesn't have a feature to handle this scenario.
I think you need the array indexes for controllers, routes and sensors to get the update to work.
Your approach may work in other databases like MongoDB. However, it doesn't work on DynamoDB. Generally, it is not recommended to keep such a complex structure in DynamoDB, especially if your use case involves updates.
var params = {
  TableName: 'tablename',
  Key: { id: id },
  ReturnValues: 'ALL_NEW',
  UpdateExpression: 'set someitem[' + index + '].somevalue = :reply_content',
  ExpressionAttributeValues: { ':reply_content': updateddata }
};
To update a nested array element you need to find out its array index first. Then you can update that element in DynamoDB.
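Applied to the item above, that means first resolving the positions of the matching controller and route, and then building an indexed document path such as set controllers[0].routes[0].cars = :c in the UpdateExpression.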
Let's say I have a DocumentDB collection populated with documents that have this shape:
[{ "Name": "KT", "Dob": "5/25/1990", "Children": [], "IsMale": false },
{ "Name": "Chris", "Dob": "10/1/1980", "Children": [], "IsMale": true }]
Now let's say I don't know the structure of the documents above.
Is there a query I can write that will return a distinct list of those property names ("Name", "Dob", "Children", "IsMale")?
In other words, is there a way for me to sniff out the schema of those documents?
This might be a duplicate of this question. In any case, the answers there might give you some ideas.
tl;dr: The only way to do it is to read all of the docs. You can pull them all back to your machine, or you can read them inside a stored procedure and send only the calculated schema back to your machine.
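If you go the client-side route, a rough sketch of the idea could look like the following (assuming the .NET DocumentDB SDK and Newtonsoft.Json; the database and collection names are placeholders):

using System.Collections.Generic;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json.Linq;

public HashSet<string> GetTopLevelPropertyNames(DocumentClient client)
{
    // Placeholders: MyDatabase / MyCollection
    var collectionUri = UriFactory.CreateDocumentCollectionUri("MyDatabase", "MyCollection");
    var names = new HashSet<string>();

    // Enumerating the query reads every document back, so it costs RUs accordingly.
    foreach (JObject doc in client.CreateDocumentQuery<JObject>(collectionUri))
    {
        foreach (var property in doc.Properties())
        {
            names.Add(property.Name);
        }
    }

    return names;
}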
You need a dynamic ORM or ODM for Azure DocumentDB like Slazure to do something like this. Example follows:
using SysSurge.Slazure.AzureDocumentDB.Linq;
using SysSurge.Slazure.Core;
using SysSurge.Slazure.Core.Linq.QueryParser;
public void EnumProperties()
{
    // Get a reference to the collection
    dynamic storage = new QueryableStorage<DynDocument>("URL=https://contoso.documents.azure.com:443/;DBID=DDBExample;TOKEN=VZ+qKPAkl9TtX==");
    QueryableCollection<DynDocument> collection = storage.TestCustomers;

    // Build the collection query
    var queryResult = collection.Where("SignedUpForNewsletter = true and Age < 22");

    foreach (DynDocument document in queryResult)
    {
        foreach (KeyValuePair<string, IDynProperty> keyValuePair in document)
        {
            Console.WriteLine(keyValuePair.Key);
        }
    }
}
I've created an index in Sense which I'm happy with, and I am trying to implement a typed query in the NEST client as follows:
var node = new Uri("http://elasticsearch-blablablamrfreeman");
var settings = new ConnectionSettings(node)
.SetTimeout(300000)
.SetDefaultIndex("films")
.MapDefaultTypeIndices(d => d
.Add(typeof(film), "films"))
.SetDefaultPropertyNameInferrer(p=>p);
Inject it (amongst the searcher and indexer) with my DI:
builder.Register(c => new ElasticClient(settings)).Named<ElasticClient>("esclient");
Search using any query, such as the below:
var result = _client.Search<film>(s => s
.AllIndices()
.From(0)
.Size(10)
.Query(q => q
.Term(p => p.Title, query)
));
The indexer seems to work fine, so that code isn't included here. I've swapped in any number of settings parameters, so I know there's some redundancy in the settings above (or at least the default index alone would have sufficed).
The result var contains nothing whatsoever, with a big fat 0 across all its properties, despite my having a wealth of data across my indices (including the "films" index).
I've even tried a raw QueryRaw method with a match_all and nada!
EDIT (Chris Pratt was along the right lines here)
Running:
var result = _client.Search<film>(s => s
.From(0)
.Size(10)
.QueryRaw(#"{ ""match_all"": {} }"));
And having:
var settings = new ConnectionSettings(node)
.SetTimeout(300000)
.MapDefaultTypeIndices(d => d
.Add(typeof (film), "chosen_index"))
.MapDefaultTypeNames(t => t
.Add(typeof (film), "en"));
Returns debug info as:
[Elasticsearch.Net.ElasticsearchResponse<Nest.SearchResponse<film>>] = {StatusCode: 200,
Method: POST,
Url: http://elasticsearch-blablablamrfreeman/chosen_index/film/_search,
Request: {
"from": 0,
"size": 10,
"query": { "match_all": {} }
},
Response: <Response stream not captured or already read...
My question being: it seems I was in fact querying the wrong URL, as per Chris Pratt's comment, but why isn't the type inference working for the type when it does work for the index?
/chosen_index/film/_search
should read
/chosen_index/en/_search
If my inferencing is correct.
Should it POST or GET? I usually GET via the search API in Sense. And finally, what if I want to write my queries against my native film type but have it override the ES type in the URL in some instances?
For example, say I inject a different language parameter and now wish to query the same index but both the "en" and "de" ES types (which are all valid types under the same index, as already constructed via Sense).
Thanks in advance!
Nothing obvious is jumping out at me for why this isn't working for you. However, I can give you a few avenues to pursue to attempt to resolve the issue.
I'm not familiar with the particular DI container that you're using, but it's possible that it's not binding properly, resulting in some of your settings options not actually being utilized in the instance that's created. It might be a long shot, but I'd recommend digging in and at least verifying that the client instance you're getting is set up the way it should be.
It sort of side-steps the issue in a way, but Elasticsearch explicitly recommends you don't handle localization via different types. You should either use different indexes, i.e. chosen_index_en, chosen_index_es, etc., or use multifields:
"title": {
"type": "string",
"fields": {
"en": {
"type": "string",
"analyzer": "english"
},
"es": {
"type": "string",
"analyzer": "spanish"
}
}
Then you can search on things like title.en or title.es.
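With the NEST 1.x syntax used in the question, a query against one of those sub-fields could look roughly like this (the "title.en" field name assumes the multi-field mapping above):

var result = _client.Search<film>(s => s
    .From(0)
    .Size(10)
    .Query(q => q
        .Match(m => m
            .OnField("title.en")   // or "title.es", chosen per request language
            .Query(query))));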
As far as I can see, you are using the default mapping for the film type. That is, the data is analyzed by the standard analyzer before being indexed.
In your query you are using a term query, which finds documents that contain the exact term (not analyzed) specified in the inverted index (see here). So be careful what your query contains.
Try to use a match query like below:
var result = _client.Search<film>(s => s
    .AllIndices()
    .From(0)
    .Size(10)
    .Query(q => q
        .Match(m => m
            .OnField(p => p.Title)
            .Query(query))));
The query is now analyzed by the standard analyzer before being applied (see here).