Firebase database set method appending to array instead of replacing the elements

I'm using firebase database to store a complex data structure, it looks like this:
../thousand_island/
  genomes: [
    0: {
      "something": "some content"
    },
    ... 49 more
  ],
  version: 1223
The genomes array always has 50 elements. Every time I call set to replace all the data in /my_record_name, 50 new records are inserted into the genomes array instead of replacing the existing ones.
Code sample:
// set the ref on the instance
// (note: database is a method, so it has to be called - firebase.database())
this.ref = firebase.database().ref('/genomes/thousand_island');
...
// record = { genomes: [...], version: *** }
if (version > record.version) {
    this.ref.set(record, function () {
        console.log('updated thousand_island ', version, '->', record.version);
    });
}
So how do I make it replace the array instead of appending to it? I was thinking of deleting the data and then inserting it again, but that sounds tedious and costs two requests.
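For reference, the Realtime Database documentation says set() overwrites all data at the given location in a single request, so a delete-then-insert pair shouldn't be necessary. A minimal sketch of a plain replacement (v8-style JavaScript SDK; newGenomes and newVersion are hypothetical placeholders):
// set() replaces everything under the path, including all 50 existing
// genome entries, in one request - nothing is appended or merged
const ref = firebase.database().ref('/genomes/thousand_island');
ref.set({ genomes: newGenomes, version: newVersion }) // placeholders for your data
    .then(() => console.log('replaced thousand_island'))
    .catch((err) => console.error('replace failed', err));
If new entries still show up, it may be worth checking that no push() call is involved and that the write and read use the same path.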

Related

Entity Framework reverse poco on table without primary key and null column

I am using Entity Framework Reverse POCO generator v2.37.5.
I need to map an external database. It is not possible for me to modify the schema, but the tables don't have primary keys and all columns are nullable.
However, the combination of certain columns will always be unique.
For example, the following 3 columns can be combined to form the primary key:
EnrollNumb
CubNumb
SeqVal
Is there any setting in the template that helps me set a combination of columns as the primary key?
Any suggestion / direction will be greatly appreciated.
First, find your Entities.ttinclude file.
Protip / entirely optional side-quest and distraction for better T4 editing:
Extract all the C# code (inside the T4 <#+ #> blocks) and move it to a new file named Entities.ttinclude.cs
Add <#@ include file="Entities.ttinclude.cs" #> to the Entities.ttinclude file.
Change the project build-action for Entities.ttinclude.cs to None.
Now you can get (basic) syntax-coloring for the C# code.
Now back to your regularly scheduled stackoverflowing:
Look for this somewhere around lines 250-320:
Settings.UpdateColumn = (Column column, Table table) =>
Inside that function you can tell EF that there totally is a PK defined on this table (pinky-swear!), like so:
Settings.UpdateColumn = (Column column, Table table) =>
{
    // ...
    if( column.ParentTable.Name == "Memb" )
    {
        switch( column.Name )
        {
            case "EnrollNumb":
            case "CubNumb":
            case "SeqVal":
                column.IsNullable = false; // PK columns cannot be NULLable.
                column.IsPrimaryKey = true;
                column.PrimaryKeyOrdinal = column.Ordinal;
                column.ParentTable.HasPrimaryKey = true;
                break;
        }
    }
    // ...
};
Run Model.tt
Assuming no errors happened, take a look at the folder with the generated .cs files, look for Memb.Configuration.cs. It should look something like this:
[System.CodeDom.Compiler.GeneratedCode("EF.Reverse.POCO.Generator", "2.37.1.0")]
internal class MembConfiguration : System.Data.Entity.ModelConfiguration.EntityTypeConfiguration<Memb>
{
    public MembConfiguration( String schema )
    {
        ToTable( "Memb", schema );
        HasKey( x => new { x.EnrollNumb, x.CubNumb, x.SeqVal } );
        Property( x => x.EnrollNumb ).HasColumnName( "EnrollNumb" ).HasColumnType( "nvarchar" ).IsRequired().MaxLength(10);
        Property( x => x.CubNumb ).HasColumnName( "CubNumb" ).HasColumnType( "nvarchar" ).IsRequired().MaxLength(50);
        Property( x => x.SeqVal ).HasColumnName( "SeqVal" ).HasColumnType( "nvarchar" ).IsRequired().MaxLength(5);
        // other columns here
    }
}
And you should be able to build your project, then run it, and it should just work.
You can also define fake foreign-key constraints and set up other kinds of relationships, and EF will believe you. This is handy when you want EF to handle a VIEW as though it were a TABLE, especially as a VIEW in SQL Server cannot be imbued with PK and FK constraints (you can even get DML working, provided your VIEW is updatable).
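As a hedged sketch of that last point (MembSummary, Memb, and their navigation properties are made-up names, not part of the generated model): the same fluent API the template emits can declare a fake key and a fake relationship for a view by hand.
// Hypothetical EF6 configuration mapping a SQL Server VIEW as if it were a table.
// EF never verifies PK/FK constraints against the database, so declaring them
// here is enough for querying (and for DML, if the view is updatable).
internal class MembSummaryConfiguration : System.Data.Entity.ModelConfiguration.EntityTypeConfiguration<MembSummary>
{
    public MembSummaryConfiguration( string schema )
    {
        ToTable( "MembSummary", schema ); // actually a VIEW in the database
        HasKey( x => new { x.EnrollNumb, x.CubNumb } );
        // Fake foreign key back to Memb; EF takes it on faith.
        HasRequired( x => x.Memb )
            .WithMany( m => m.Summaries )
            .HasForeignKey( x => new { x.EnrollNumb, x.CubNumb } );
    }
}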

Is there a way to input multiple environment variables into a url for testing on Postman?

I am testing an endpoint in Postman using a URL like this: {{api_url}}/stackoverflow/help/{{customer_id}}/{{client_id}}.
I have api_url, customer_id, and client_id stored in my environment variables. I would like to test multiple customer_id and client_id values without having to change the environment variables manually each time. I created one CSV to store a list of customer_id values and another for client_id. When I go to Run Collection, it only allows me to add one file. Is there another way to do this if I want to iterate through my tests and automate them?
You can add both customer_id and client_id to one CSV file. Postman will iterate n times (n = the number of CSV lines, excluding the header).
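For example, a single data file along these lines (values are made up), with a header row that matches the variable names exactly:
customer_id,client_id
1001,ab12
1002,cd34
1003,ef56
Each row becomes one iteration of the collection run, and {{customer_id}} and {{client_id}} resolve from the data file; data-file variables take precedence over environment variables of the same name.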
You can also use postman.setNextRequest to control the flow. The code below re-runs the request with a different value from the arrays on each iteration:
url:
{{api_url}}/stackoverflow/help/{{customer_id}}/{{client_id}}
Now add this pre-request script:
// seed arrays of values for the variables
const tempArraycustomer_id = pm.variables.get("tempArraycustomer_id")
const tempArrayclient_id = pm.variables.get("tempArrayclient_id")
// modify the arrays to the values you want
const arrcustomer_id = tempArraycustomer_id ? tempArraycustomer_id : ["value1", "value2", "value3"]
const arrclient_id = tempArrayclient_id ? tempArrayclient_id : ["value1", "value2", "value3"]
// assign the next value to each variable and keep sending the request until all values are used
pm.variables.set("customer_id", arrcustomer_id.pop())
pm.variables.set("client_id", arrclient_id.pop())
pm.variables.set("tempArraycustomer_id", arrcustomer_id)
pm.variables.set("tempArrayclient_id", arrclient_id)
// end the iteration when no more elements are left
if (arrcustomer_id.length !== 0) {
    postman.setNextRequest(pm.info.requestName)
}

Can't scan on DynamoDB map nested attributes

I'm new to DynamoDB and I'm trying to query a table from JavaScript using the Dynamoose library. I have a table with a primary partition key of type String called "id", which is basically a long string with a user id. I have a second column in the table called "attributes", which is a DynamoDB map used to store arbitrary user attributes (I can't change the schema, as this is how a predefined persistence adapter works and I'm stuck with it for convenience).
This is an example of a record in the table:
Item
  attributes (Map):
    10 (Number): 2
    11 (Number): 4
    12 (Number): 6
    13 (Number): 8
  id (String): YVVVNIL5CB5WXITFTV3JFUBO2IP2C33BY
The numeric keys in the map, such as "12", can be interpreted as "week10", "week11", "week12" and "week13", and the numeric values 2, 4, 6 and 8 are the number of times the application was launched in that week.
What I need to do is get all user ids of the records that have more than 4 launches in a specific week (e.g. week 12), and I also need to get the list of user ids with a sum of 20 launches across a range of four weeks (e.g. from week 10 to 13).
With Dynamoose I have to use the following model:
dynamoose.model(
    DYNAMO_DB_TABLE_NAME,
    { id: String, attributes: Map },
    { useDocumentTypes: true, saveUnknown: true }
);
(to match the table structure generated by the persistence adapter I'm using).
I assume I will need a DynamoDB "scan" for this rather than a "query". To get started I tried the following, to fetch the records where week 12 equals 6, to no avail (I get an empty set as the result):
const filter = {
    FilterExpression: 'contains(#attributes, :val)',
    ExpressionAttributeNames: {
        '#attributes': 'attributes',
    },
    ExpressionAttributeValues: {
        ':val': {'12': 6},
    },
};
model.scan(filter).all().exec(function (err, result, lastKey) {
    console.log('query result: ' + JSON.stringify(result));
});
If you don't know Dynamoose but can help with solving this via the AWS SDK, by running a DynamoDB scan directly, that might also be helpful for me.
Thanks!!
Try the following.
const filter = {
    FilterExpression: '#attributes.#12 = :val',
    ExpressionAttributeNames: {
        '#attributes': 'attributes',
        '#12': '12',
    },
    ExpressionAttributeValues: {
        ':val': 6,
    },
};
It sounds like what you are really trying to do is filter the items where attributes.12 = 6, which is what the expression above does.
contains() can't be used to match objects or arrays.
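For the AWS SDK route mentioned in the question, roughly the same filter can be run directly with the SDK for JavaScript v2 DocumentClient (the table name is a placeholder; swap = for > and :val for 4 to get the "more than 4 launches" case):
// Scan with a filter on a nested map attribute using the AWS SDK v2.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
    TableName: 'YOUR_TABLE_NAME', // placeholder
    FilterExpression: '#attributes.#week = :val',
    ExpressionAttributeNames: {
        '#attributes': 'attributes',
        '#week': '12', // map keys starting with a digit must be aliased
        '#id': 'id',
    },
    ExpressionAttributeValues: { ':val': 6 },
    ProjectionExpression: '#id', // only return the user ids
};

docClient.scan(params, (err, data) => {
    if (err) console.error(err);
    else console.log(data.Items.map((item) => item.id));
    // note: a scan reads at most 1 MB per call; follow data.LastEvaluatedKey
    // in a loop to cover the whole table
});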

All worksheets in spreadsheet appear to be a reference to sheet 0

I am writing a price import script which reads from an Excel spreadsheet.
The spreadsheet is generated using Office 365 Excel; however, I am using LibreOffice Calc on Ubuntu 18.04 to view it during development - no issues here.
I'm using phpoffice/phpspreadsheet at version 1.10.1:
"name": "phpoffice/phpspreadsheet",
"version": "1.10.1",
"source": {
"type": "git",
"url": "https://github.com/PHPOffice/PhpSpreadsheet.git",
"reference": "1648dc9ebef6ebe0c5a172e16cf66732918416e0"
},
I am trying to convert the data of each worksheet within the spreadsheet to an array.
There are 3 worksheets, each representing 'Zones' - Zone 1, Zone 2 and Zone 3.
I appear to be getting the same data for Zone 2 and Zone 3 as for Zone 1 - the worksheet title is correctly returned, but the data does not change between worksheets.
/**
 * @param Spreadsheet $spreadsheet
 *
 * @return array
 */
private function parseZones(Spreadsheet $spreadsheet): array
{
    $zones = [];
    foreach ([0, 1, 2] as $sheetIndex) {
        $sheet = $spreadsheet->getSheet($sheetIndex);
        // this correctly reports 'Zone 1', 'Zone 2' and 'Zone 3' - the sheet title is accurate
        $sheetName = $sheet->getTitle();
        // sheet 0 is accurate
        $sheetData = $sheet->toArray();
        // on sheet index 1 and 2, $sheetData is identical to that of sheet index 0
        // the XLSX file in OpenOffice / Excel has distinctly different row data - 50% fewer rows in both cases
        // feels like a memory cache issue / some mis-referencing?
    }
    // retrieving rows using this approach yields the same result:
    foreach ($spreadsheet->getAllSheets() as $sheet) {
        // this correctly reports 'Zone 1', 'Zone 2' and 'Zone 3' - the sheet title is accurate
        $sheetName = $sheet->getTitle();
        // on sheet index 1 and 2, $sheetData is identical to that of sheet index 0
        $sheetData = $sheet->toArray();
    }
    return $zones;
}
Any ideas?
Thanks
I'm a numpty - I completely failed to see / check the row filtering in the spreadsheet.
It's returning the correct data.
Non-issue, sorry!
I've since started to investigate how to read a worksheet whilst obeying the filters embedded in the spreadsheet, and it appears Worksheet::toArray() does not automatically take filters into account - nor does iterating columns and rows manually; see:
https://phpspreadsheet.readthedocs.io/en/latest/topics/autofilters/
You must manually test each row's visibility settings, as per the docs.
Hope this helps!
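Based on the autofilter documentation linked above, the visibility test looks roughly like this (a sketch, not tested against your file):
// Apply the stored autofilter rules, then collect only the visible rows.
$sheet = $spreadsheet->getActiveSheet();
$sheet->getAutoFilter()->showHideRows();

$visibleRows = [];
foreach ($sheet->getRowIterator() as $row) {
    if ($sheet->getRowDimension($row->getRowIndex())->getVisible()) {
        $visibleRows[] = $row->getRowIndex();
    }
}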
Try changing the current active sheet before reading:
$spreadsheet->setActiveSheetIndex($sheetIndex);
$sheet = $spreadsheet->getActiveSheet();
$dataArray = $sheet->rangeToArray(
    'A4:O07', // The worksheet range that we want to retrieve
    NULL,     // Value that should be returned for empty cells
    TRUE,     // Should formulas be calculated (the equivalent of getCalculatedValue() for each cell)
    TRUE,     // Should values be formatted (the equivalent of getFormattedValue() for each cell)
    TRUE      // Should the array be indexed by cell row and cell column
);

Filemaker Pro - Using Script to populate report layout

I have a problem where I have a list of fields from a table (not static, it can be modified by the user), and I need to generate a report using these user-selected fields. The report can show all the rows; there is no need for aggregation or filtering.
I thought I could create a report layout and then use a FileMaker script to populate it, but I can't seem to find the right commands. Can someone let me know how I could achieve this?
I'm using FileMaker Pro 18 Advanced.
Thanks in advance!
EDIT: Since you want a dynamic report, I recommend you look up a technique called "Virtual List" for rendering the data.
Here's an example script that iterates over a found set of records and builds the virtual list data in a variable (it doesn't show how to render it though):
# Field names and delimiter
Set Variable [ $delim ; Value: Char(9) // tab character ]
# Set these dynamically with a script parameter
Set Variable [ $fields ; Value: List ( "Contacts::nameFirst" ; "Contacts::nameCompany" ; "Contacts::nameLast" ) ]
Set Variable [ $fieldCount ; Value: ValueCount ( $fields ) ]
Go to Layout [ “Contacts” (Contacts) ; Animation: None ]
Show All Records
Go to Record/Request/Page [ First ]
# Loop over all the records and append a row in the $data variable for each
Set Variable [ $data ; Value: "" ]
Loop
    # Get the delimited field values
    Set Variable [ $i ; Value: 0 ]
    Set Variable [ $row ; Value: "" ]
    Loop
        Exit Loop If [ Let ( $i = $i + 1 ; $i > $fieldCount ) ]
        Set Variable [ $value ; Value: GetField ( GetValue ( $fields ; $i ) ) ]
        Insert Calculated Result [ Target: $row ; If ( $i > 1 ; $delim ) & $value ]
    End Loop
    # Append the new row of data to the list variable
    Insert Calculated Result [ Target: $data ; If ( Get ( RecordNumber ) > 1 ; ¶ ) & $row ]
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
# Save to a global variable to show in a virtual list layout
Set Variable [ $$DATA ; Value: $data ]
Exit Script [ Text Result: ]
Please note this code is just one of many possible formats the virtual list can take. A lot of people, myself included, prefer to use JSON objects or arrays for each row of the list, since that automatically handles field values with carriage returns. This is sort of the old-fashioned way. Kevin Frank at FileMaker Hacks has some good recent articles about virtual list techniques if you're interested.
PS, another great technique for rendering table data dynamically is to collect the data in a JSON array and render it in a webviewer with https://datatables.net/
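For reference, the JSON-per-row variant mentioned above could replace the inner loop with something like this (a sketch assuming the FileMaker 16+ JSON functions and the same Contacts fields as in the example; initialize $data to "[]" before the outer loop):
# Build the row as a JSON object instead of a tab-delimited string
Set Variable [ $row ; Value: JSONSetElement ( "{}" ;
    [ "nameFirst" ; Contacts::nameFirst ; JSONString ] ;
    [ "nameCompany" ; Contacts::nameCompany ; JSONString ] ;
    [ "nameLast" ; Contacts::nameLast ; JSONString ]
) ]
# Append the row to the JSON array held in $data
Set Variable [ $data ; Value: JSONSetElement ( $data ; "[" & Get ( RecordNumber ) - 1 & "]" ; $row ; JSONObject ) ]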
I did something like this for the oncology department of UM in 1980 or so, using 4th Dimension and a new plug-in that used one line of code to create a web browser with all the functions a doctor might want. The data was placed inside a variable as it was sent/returned, and 4D could use a variable in the report to display the data.
FileMaker does not have this ability built in as 4D did, so you will have to do it yourself. JSON is the most likely tool that I am familiar with. YouTube has many videos on JSON.
You have two classes of variables for your report: column headers and the column data to display. Fortunately FileMaker is quite good and very easy to design with. Just make a typical report and replace the static text/headers or field names with variables, e.g. $ColumnName = a JSON variable.
Create a JSON calculated field in the database. In that calculated field, set the JSON variable; this can be used for all of the columns.
This is the essence of the idea, with the final result to be determined by you. What you are asking for is not easy and would require serious work by a skilled JSON scripter.
