I want to combine two tables and display them as one table - google-app-maker

I want to combine ① and ②, but I don't know how.
I created models like this:
① CalendarMaster (fields: Index, Date, CalenderHoliday, CompanyHoliday)
② AttendanceMaster (fields: Index, EmployeeId, Date, GoingTime, LeavingTime)
I want to join them, using the Date of the CalendarMaster and the Date of the AttendanceMaster as the key.
I want to know which type of model to use and where to write the SQL query script.
Once the tables are joined, I want to display [Date, CalenderHoliday, CompanyHoliday, GoingTime, LeavingTime] in one table.
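In SQL terms, the result I'm after would come from a join like this (a sketch only, assuming a Cloud SQL backend; the aliases are illustrative):

-- Combine both tables on Date, keeping every calendar day even when
-- there is no attendance record for it:
SELECT c.Date, c.CalenderHoliday, c.CompanyHoliday,
       a.GoingTime, a.LeavingTime
FROM CalendarMaster c
LEFT JOIN AttendanceMaster a
  ON a.Date = c.Date;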
I looked at various sites and tried relations, but I couldn't get it to work, so I'm hoping someone can help.
I have been stuck on this for a week now.
Waiting for advice.

Related

Lost on .rds files/pulling data from tables

I'm very new to using R for anything database related, much less with AWS.
I'm currently trying to work with this set of code here, specifically the '### TEST SPECIFIC TABLES' section.
I'm able to get the code to run, but now I'm not sure how to actually pull data from the tables. I assume I have to do something with 'groups', but I'm not sure what I need to do next to pull the data out.
Even more specifically: how would I pull out particular data, like revenue for all organizations within the year 2018, for example? I've tried readRDS to pull a table as a data frame, but I get no observations or variables for any table. So I'm sort of lost as to what I need to do here to pull the data out of the tables.
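In other words, I'm trying to do the equivalent of the following (a sketch using DBI; the driver, connection details, and the table/column names are all illustrative guesses):

library(DBI)

# Connect to the AWS RDS instance (MySQL assumed here; swap in the right driver):
con <- dbConnect(RMySQL::MySQL(),
                 host     = "my-instance.rds.amazonaws.com",
                 dbname   = "mydb",
                 user     = "user",
                 password = "password")

# Pull revenue for all organizations in 2018 into a data frame:
revenue_2018 <- dbGetQuery(con,
  "SELECT organization, revenue FROM revenue_table WHERE year = 2018")

dbDisconnect(con)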
Thanks in advance!

Kusto: Is it possible to prefix the columns coming from a specific table?

I'm performing a query and joining data from 3 different tables. In the final table, I would like to know which table each individual column comes from, since some of the column names repeat themselves. At the moment, I am doing a very long:
| project prefix1_column1=column1, prefix1_column2=column2
for each join.
In a perfect universe, I could add a parameter to the joins to specify a prefix but that doesn't exist. Is there a cleaner way to do this?
What you are currently doing is the best way to do it. For the feature you are asking for, please open a suggestion on the Azure Data Explorer user voice.
You should also look at the lookup operator, which does not repeat the columns that are part of the join keys; while this will not solve your use case, it will reduce the noise.
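A minimal sketch of lookup, assuming two tables named Sales and Customers joined on CustomerId (the names are illustrative):

// Unlike join, lookup emits the CustomerId key column only once,
// so there is no duplicate CustomerId1 column to rename away:
Sales
| lookup kind=leftouter Customers on CustomerId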

Power BI equivalent to the SQL HAVING clause

I'm a trainee working with databases.
I'm working on a Power BI report based on a SQL query in which all of the joins needed to obtain my data are already included, so I'm working within one dataset.
I have made a table that shows a transaction number (like an invoice number) and the name of the person who made that transaction. My problem lies in creating a measure that will influence that table. It should work like a HAVING clause in SQL (well, at least that's what my boss said).
I would like this measure to force the table to show only data for people who have made more than 2 transactions (they have more than 2 invoice numbers, so there are more than two rows for that person).
I tried to do it by writing a measure like this:
Measure = COUNTAX(Query1; COUNTA([Salesman]) > 2)
Or like this:
Measure 2 = FILTER(Query1; COUNTA(Query1[Salesman]) > 2)
But all I got was a bar graph showing me how many transactions were made by each person. When I add this measure to the table, I see the value 1 in every row.
I'm new to Power BI and DAX, so this is quite a big hurdle for me. Can someone share their knowledge to help solve this problem? I would be much obliged.
I found a solution to my problem.
I created a second query that counts transactions for each person, together with their names. I created a relationship between my two queries. Next, I added the counting attribute to my table with the data from query one and put a filter on that counting attribute. After that, the attribute can simply be hidden, and it works perfectly.
On top of that, I created a measure and made a chart using it. It looks nice and clear.
The measure looks like this:
Measure =
COUNTAX(
    Query1; COUNTA([Salesman])
)
I filtered this measure too to get the wanted result.
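A more compact having-style pattern would be a plain row-count measure used as a visual-level filter; this is only a sketch, assuming the data sits in Query1 (the names are illustrative):

-- Count the rows (invoices) visible in the current filter context:
TransactionCount = COUNTROWS ( Query1 )

-- Or bake the threshold in, returning BLANK for people with 2 or fewer
-- transactions so they drop out of the visual:
TransactionsAbove2 =
VAR Tx = COUNTROWS ( Query1 )
RETURN IF ( Tx > 2, Tx )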

Adding new fields to historical tables in BigQuery

I'm getting daily exports of Google Analytics data into BigQuery, and these form the basis of our main reporting dataset.
Over time I need to add new columns for additional things we use to enrich the data, say a mapping from URL to 'reporting category', for example.
This is easy to just add as a new column onto the processed tables (there are about 10 processing steps at the moment for all the enrichment we do).
The issue is when stakeholders then ask: can we add that new column to the historical data?
Currently I then need to rerun all the daily jobs, which is very slow and costly.
This is coming up frequently enough that I'm seriously thinking about redesigning my data pipelines to cater for the fact that I often need to essentially drop and recreate ALL the data from time to time, whenever I need to add a new field or correct old dirty data.
I'm just wondering if there are better ways to:
Add a new column to an old table in BQ (I would be happy to do this by hand in those instances, since I can just join the new column on the GA [hit_key] I have defined, which is basically a row key).
(Less common) Update existing tables based on some WHERE condition.
I'm just wondering what best practices are, and whether anyone has had similar issues where you basically need to update a historic schema, and if there are ways to do it without just dropping and recreating everything, which is essentially what I'm currently doing.
To be clearer on my current approach: I'm taking the [ga_sessions_yyyymmdd] table and making a series of [ga_data_prepN_yyyymmdd] tables, where each step either adds new columns or reduces the data in some way. There are now 11 of these steps, and each time I'm taking all 100 or more columns along for the ride. This is what I'm going to try to design away from, as currently 90% of the columns at each stage don't even need to be touched; they could just be joined back on at the end, maybe based on hit_key or something.
It's a little bit messy to try and pick apart, though.
Adding new columns to the schema of the existing historical tables is possible, but the values for the newly added columns will be NULL. If you do need to populate values into these columns, probably the best approach is to use an UPDATE DML statement. More details on how to try it out are here: Does BigQuery support UPDATE, DELETE, and INSERT (SQL DML) statements?
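A sketch of the two steps, assuming a mapping table that carries the new values and that hit_key uniquely identifies rows (all table and column names here are illustrative):

-- 1. Add the new column to a historical table; existing rows get NULL:
ALTER TABLE `project.dataset.ga_data_prep_20180101`
ADD COLUMN reporting_category STRING;

-- 2. Backfill it with an UPDATE joined on the row key:
UPDATE `project.dataset.ga_data_prep_20180101` t
SET t.reporting_category = m.reporting_category
FROM `project.dataset.url_category_mapping` m
WHERE t.hit_key = m.hit_key;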

How to write an edit query for a grid view

I am using a grid view to fetch data from two tables.
It has name, quantity, required quantity, and lot number among its columns. Each name can have several different lot numbers, so the same name appears more than once with different lot numbers.
Now I would like to subtract the required quantity from the existing quantity, drawing down the oldest lot number first before moving on to a newer one.
Can someone please suggest how to do it?
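One way to express that deduction in SQL is sketched below, assuming SQL Server and a Stock table with (Name, LotNumber, Quantity); all object names are illustrative:

-- Walk the lots in order (oldest lot number first) and consume the
-- required quantity from each lot until it is covered:
DECLARE @Name nvarchar(50) = N'Item A', @Required int = 30;

WITH lots AS (
    SELECT LotNumber, Quantity,
           SUM(Quantity) OVER (ORDER BY LotNumber
                               ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM Stock
    WHERE Name = @Name
)
UPDATE lots
SET Quantity = CASE
        WHEN RunningTotal <= @Required THEN 0                    -- lot fully consumed
        WHEN RunningTotal - Quantity >= @Required THEN Quantity  -- lot untouched
        ELSE RunningTotal - @Required                            -- lot partially drawn down
    END;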
