Show pivot columns - mikro-orm

I have 3 tables:

accounts
    id
    account

accounts_has_devices
    account_id
    device_id
    status
    is_master

devices
    id
    name
    info
The accounts_has_devices table (aka the pivot) holds some information about the device as associated with the account, and the devices table holds info about the device itself.
Is there a way to get the pivot data plus the device?

Once you have more columns than the FKs in your pivot table, it is no longer a pivot table. You need to model this as three entities, not two.
https://mikro-orm.io/docs/composite-keys/#use-case-3-join-table-with-metadata
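A language-agnostic sketch (plain Python dataclasses, not MikroORM code) of the three-entity shape the answer describes: the join table becomes its own entity whose composite primary key is the pair of foreign keys, carrying its extra metadata columns. All names below come from the question's schema.

```python
from dataclasses import dataclass

@dataclass
class Account:
    id: int
    account: str

@dataclass
class Device:
    id: int
    name: str
    info: str

@dataclass
class AccountDevice:
    # Composite PK: (account_id, device_id)
    account_id: int   # FK -> accounts.id
    device_id: int    # FK -> devices.id
    status: str       # the extra columns that make this a real entity
    is_master: bool

# "Pivot data + device" is then simply the link entity joined to Device
# (sample values are made up for illustration):
devices = {1: Device(1, "sensor", "v2 firmware")}
links = [AccountDevice(account_id=10, device_id=1, status="active", is_master=True)]
pairs = [(link, devices[link.device_id]) for link in links]
print(pairs[0][0].status, pairs[0][1].name)
```

In MikroORM terms, the linked docs show the same idea: declare the join table as an entity with two `@ManyToOne` relations forming a composite key, then query it directly with the device relation populated.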

Related

DynamoDB - query based on multiple columns

I have a requirement to find all users in a table that have a matching Id, Email or Phone.
Right now the data looks like this:
Id //hash
Market //sort
Email //gsi
Phone //gsi
I want to be able to do a query and say:
Get all items that have matching Id, email or phone.
From the docs it seems that you can only do a single query based on keys or one index. And it seems that even if I were to combine phone and email into one column and put a GSI on that column, I would still be limited to a begins_with filter expression. Is this correct? Are there any alternatives?
it seems that you can only do a single query based on keys or one index
Yes.
if I were to combine phone and email into one [GSI] I would still be limited to a begins_with filter expression, is this correct?
Essentially, yes. Query constraints apply equally to indexes and the table keys. You must specify one-and-only-one Partition Key value, and optionally a range of Sort Key values.
Are there any alternatives?
Overload the Partition Key and denormalise the data. Redefine the Partition Key column (renamed PK) to hold Id, Email and Phone values. Each record is (fully or partially) repeated 3 times, each time with a different PK type.
PK              Market   Id     More fields
Id-1            A        Id-1   foo
zaphod@42.com   A        Id-1   foo (or blank)
13015552572     A        Id-1   foo (or blank)
Querying PK = <something> AND Market > "" will return any matching id, email or phone number value.
If justified by your query patterns, repeat all fields 3x. Alternatively, use a hit on a truncated email/phone record to identify the Id, then query other fields using the Id.
There are different flavours of this pattern. For instance, you could also overload the Sort Key column (renamed to SK) with the Id value for Email and Phone records, which would permit multiple Ids per email/phone.
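The overloaded-PK pattern above can be sketched in plain Python, with the query condition (PK equality plus a non-empty Sort Key range) simulated over an in-memory list. Item shapes and values are made up for illustration; no DynamoDB call is made.

```python
# Each logical record is written three times, once per PK flavour
# (Id, Email, Phone); all three rows point back at the same Id.
items = [
    {"PK": "Id-1",          "Market": "A", "Id": "Id-1", "More": "foo"},
    {"PK": "zaphod@42.com", "Market": "A", "Id": "Id-1", "More": "foo"},
    {"PK": "13015552572",   "Market": "A", "Id": "Id-1", "More": "foo"},
]

def query(pk: str) -> list:
    """Simulates: Query where PK = <pk> AND Market > ""."""
    return [it for it in items if it["PK"] == pk and it["Market"] > ""]

# Any of the three identifiers resolves to the same logical record:
print(query("zaphod@42.com")[0]["Id"])
print(query("13015552572")[0]["Id"])
```

A real implementation would write the three copies in one `BatchWriteItem` (or a transaction) so the denormalised rows stay in sync.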

Show values instead of ids for multiple fields sqlite

How do I resolve user IDs to user names for multiple fields in an SQLite query? I have 2 tables, "Tickets" and "Users": "Tickets" has the user IDs, and "Users" links each ID to the user's name. I have the query below, but how do I show the user names instead of the ID numbers in the "created_by" and "assigned_to" columns?
SELECT tickets.id, tickets.summary, tickets.created_by, tickets.assigned_to
FROM tickets
I don't think joining is the solution as joining on one field leaves me with a problem with the other.
You can join the users table twice, using two different aliases:
SELECT tickets.id, tickets.summary,
       u1.name AS created_by, u2.name AS assigned_to
FROM tickets
LEFT JOIN users AS u1 ON tickets.created_by = u1.id
LEFT JOIN users AS u2 ON tickets.assigned_to = u2.id
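The double join can be verified end to end with Python's built-in sqlite3 module and an in-memory database; the table and column names follow the question, the sample rows are made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, summary TEXT,
                          created_by INTEGER, assigned_to INTEGER);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO tickets VALUES (100, 'printer on fire', 1, 2);
""")

# Join users twice, once per user-id column, under different aliases.
rows = con.execute("""
    SELECT tickets.id, tickets.summary,
           u1.name AS created_by, u2.name AS assigned_to
    FROM tickets
    LEFT JOIN users AS u1 ON tickets.created_by = u1.id
    LEFT JOIN users AS u2 ON tickets.assigned_to = u2.id
""").fetchall()
print(rows)   # -> [(100, 'printer on fire', 'alice', 'bob')]
```

The LEFT JOINs keep tickets visible even when created_by or assigned_to is NULL or points at a missing user; an INNER JOIN would silently drop them.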

How to cluster raw events tables from Firebase Analytics in BQ in field event_name?

I would like to cluster the raw table of Firebase event data in BQ, but without reprocessing or creating other tables (keeping costs at a minimum).
The main idea is to find a way to cluster the daily tables as they are created from the intraday table.
I tried to create empty tables with a pre-defined schema (the same as the previous events tables), partitioned by the _partition_time column (NULL partition) and clustered by the event_name column.
After Firebase inserts all the data from the intraday table, event_name still appears as the clustering field in the table's Details tab, but no cost reduction happens when querying.
What could be another solution, or a way to make this work?
Thanks in advance.
/edit:
[Screenshot: the table's Details tab, showing event_name as the clustering field.]
After running this query:
SELECT * FROM `ooooooo.ooooooo_ooooo.events_20181222`
WHERE event_name = 'screen_view'
the result is:
[Screenshot: the query statistics show the whole table was processed.]
So, no cost reduction.
But if I try to create the same table clustered by event_name manually with:
CREATE TABLE `aaaa.aaaa.events_20181222`
PARTITION BY DATE(event_timestamp)
CLUSTER BY event_name
AS
SELECT * FROM `ooooooo.ooooooo_ooooo.events_20181222`
Then the same query as in the first screenshot, applied to the created table, processes only 5 MB, so clustering really works.

How to design DynamoDB table to facilitate searching by time ranges, and deleting by unique ID

I'm new to DynamoDB - I already have an application where the data gets inserted, but I'm getting stuck on extracting the data.
Requirement:
There must be a unique table per customer
Insert documents into the table (each doc has a unique ID and a timestamp)
Get X number of documents based on timestamp (ordered ascending)
Delete individual documents based on unique ID
So far I have created a table with a composite key (S:id, N:timestamp). However, when I come to query it, I realise that since my id is unique and I can't do a wildcard search on it, I won't be able to extract a range of items...
So, how should I design my table to satisfy this scenario?
Edit: Here's what I'm thinking:
Primary index will be composite: (s:customer_id, n:timestamp), where the customer ID will be the same within a table. This will enable me to extract data based on a time range.
Secondary index will be hash (s: unique_doc_id) whereby I will be able to delete items using this index.
Does this sound like the correct solution? Thank you in advance.
You can satisfy the requirements like this:
Your primary key will be h:customer_id and r:unique_id. This makes sure all the elements in the table have different keys.
You will also have an attribute for timestamp and will have a Local Secondary Index on it.
You will use the LSI for requirement 3, and the batchWrite API call to do a batch delete for requirement 4.
This solution doesn't require (1): all the customers can stay in the same table. (Heads up: there is a default limit of 256 tables per account before you have to contact AWS.)
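The design above can be sketched as boto3-style request parameters built as plain dicts. The table name, LSI name and sample values are assumptions for illustration; no AWS call is made here.

```python
TABLE = "documents"   # hypothetical table name

# Requirement 3: get X documents for a customer, ascending by timestamp,
# via a Local Secondary Index on the timestamp attribute.
query_params = {
    "TableName": TABLE,
    "IndexName": "timestamp-lsi",                  # assumed LSI name
    "KeyConditionExpression": "customer_id = :c",
    "ExpressionAttributeValues": {":c": {"S": "cust-1"}},
    "ScanIndexForward": True,                      # ascending by timestamp
    "Limit": 10,                                   # "X number of documents"
}

# Requirement 4: delete one document by its full primary key
# (hash: customer_id, range: unique_id).
delete_params = {
    "TableName": TABLE,
    "Key": {
        "customer_id": {"S": "cust-1"},
        "unique_id": {"S": "doc-42"},
    },
}

print(query_params["IndexName"], delete_params["Key"]["unique_id"]["S"])
```

Note the delete still needs both key parts; if callers only know the unique_id, they must first look up the customer_id (for example via a GSI keyed on unique_id).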

ASP.NET Pivot Table: How to use it with just two tables in the database

I have an Excel sheet that lists all the employees in the company along with some required training courses. The list is very long and I need to incorporate it into the company website. Therefore, I am thinking of using a pivot table with stored procedures, to keep the table flexible for adding new employees or courses in the future.
The main problem now is how to use it with just two tables in the database which are Employee table and Courses Table.
Employee table consists of: employee name, id, organization, course id
Courses table consists of: course name, course id
I want a pivot table that lists employee names in the first column and courses in the first row. It should then show a yes/no value under each course for each employee, indicating whether the employee has taken that course. Finally, I want a total of the yes values in the last row of the table.
I know the syntax of the pivot table and I tried to understand it and make it work for this case, but I failed.
I am using this valuable resource:
http://www.kodyaz.com/articles/t-sql-pivot-tables-in-sql-server-tutorial-with-examples.aspx
How do I use it in this case? Any hints, please? I just want to know the structure of the query.
My initial query is:
select *
from
(
    select employee.Name, employee.id, employee.Organization, courses.id, courses.name
    from employee, courses
) DataTable
PIVOT
(
    SUM(ID)
    FOR Name IN ([safety awareness], [general safety orientation], [sms orientation], [emergency responses])
) PivotTable
I would definitely use a PivotGrid control such as the one DevExpress offers for WinForms and ASP.NET.
With such a control you can create pivots at design time, and even allow end users to drag and drop fields around at runtime to decide the pivoting logic, then save their preferences. I used this for some advanced reporting tools and users loved it.
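Independent of T-SQL PIVOT or any grid control, the yes/no matrix with a totals row that the question describes can be sketched in plain Python; the employee and course data below are made up for illustration.

```python
employees = ["Ann", "Bob"]
courses = ["safety awareness", "sms orientation"]
# Which (employee, course) pairs exist in the join of the two tables:
taken = {("Ann", "safety awareness"),
         ("Bob", "safety awareness"),
         ("Bob", "sms orientation")}

# One row per employee, a 'yes'/'no' cell per course...
matrix = {e: {c: ("yes" if (e, c) in taken else "no") for c in courses}
          for e in employees}
# ...and a totals row counting the 'yes' cells per course.
totals = {c: sum(1 for e in employees if matrix[e][c] == "yes")
          for c in courses}

print(matrix["Ann"]["safety awareness"])   # -> yes
print(totals["safety awareness"])          # -> 2
```

In T-SQL the same shape usually comes from pivoting on the course name (not the employee name) with a COUNT or MAX aggregate over the joined rows, with the totals produced by a separate aggregate query or a ROLLUP.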
