I would like to have a view of the number of orders that were set to the completed state within the last months. I do not need much detail, just a very simple list with two columns:
"Month" & "Number of completed orders"
No further details are required. The only important point is that this list should contain only orders that are in the completed state. Furthermore, I do not care when the order was placed; I need to know when the order was set to completed, so the timing of the status change is what matters.
Example: a customer buys a product in May, and the business owner manually sets the order to completed in June. Then this purchase should be counted in June, not May.
Any chance I can create such a query?
Solved it myself:
SELECT
    YEAR(pm.meta_value)  AS year,
    MONTH(pm.meta_value) AS month,
    COUNT(p.ID)          AS completed_orders
FROM wp_posts p
JOIN wp_postmeta pm ON p.ID = pm.post_id
WHERE p.post_status = 'wc-completed'
  AND p.post_type = 'shop_order'
  AND pm.meta_key = '_completed_date'
GROUP BY DATE_FORMAT(pm.meta_value, '%Y-%m')
ORDER BY DATE_FORMAT(pm.meta_value, '%Y-%m') DESC
I'm trying to flatten Google Analytics data in BigQuery. I've seen the other answers about unnesting both hits and products, but as soon as I include unnest(product) I get fewer results, receiving only the rows where hit.type = 'EVENT'.
If I comment out unnest(product) then I do receive more rows, including both
hit.type = 'PAGE' and
hit.type = 'EVENT',
but then I cannot reference any product-level data in the select because unnest(product) is commented out.
This only happens for a particular period of the data set; I am seeing PAGE hit types in earlier data. I don't get it!
What could be happening?
select
h.type as hits_type
-- ,product.productSKU
-- ,product.v2ProductName as product_name
from `bigquery-public-data.google_analytics_sample.ga_sessions_20170801`,
unnest(hits) h
--,unnest(h.product) as p
I solved my own problem.
It needed a left join, because I think there are hits that don't contain product records:
from `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` s, unnest(hits) h left join unnest(h.product) p
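For context, the complete query with the left-joined unnest might look like this (a sketch against the public GA sample table; the selected product columns are just the ones commented out in the question):

```sql
SELECT
  h.type AS hits_type,
  p.productSKU,
  p.v2ProductName AS product_name
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` s,
  UNNEST(hits) h
LEFT JOIN UNNEST(h.product) p
```

With the left join, hits that have no product records are kept (with NULL product columns) instead of being dropped by the implicit cross join.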
I've integrated my Firebase project with BigQuery. Now I'm facing a data discrepancy while trying to get 1-day active users for a selected date, i.e. 20190210, with the following query in BigQuery:
SELECT COUNT(DISTINCT user_pseudo_id) AS one_day_active_users_count
FROM `MY_TABLE.events_*`
WHERE event_name = 'user_engagement' AND _TABLE_SUFFIX = '20190210'
But the figures returned from BigQuery don't match the ones reported on the Firebase Analytics dashboard for the same date. Any clue what's possibly going wrong here?
The following sample query, provided by the Firebase team here: https://support.google.com/firebase/answer/9037342?hl=en&ref_topic=7029512, is not so helpful, as it takes the current time into consideration and counts users accordingly.
N-day active users
/**
* Builds an audience of N-Day Active Users.
*
* N-day active users = users who have logged at least one user_engagement
* event in the last N days.
*/
SELECT
COUNT(DISTINCT user_id) AS n_day_active_users_count
FROM
-- PLEASE REPLACE WITH YOUR TABLE NAME.
`YOUR_TABLE.events_*`
WHERE
event_name = 'user_engagement'
-- Pick events in the last N = 20 days.
AND event_timestamp >
UNIX_MICROS(TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 20 DAY))
-- PLEASE REPLACE WITH YOUR DESIRED DATE RANGE.
AND _TABLE_SUFFIX BETWEEN '20180521' AND '20240131';
So given the small discrepancy here, I believe the issue is one of timezones.
When you're looking at a "day" in the Firebase Console, you're looking at the time interval from midnight to midnight in whatever time zone you've specified when you first set up your project. When you're looking at a "day" in BigQuery, you're looking at the time interval from midnight to midnight in UTC.
If you want to make sure you're looking at the events that match up with what's in your console, you should filter on the event_timestamp value in your BigQuery table (and remember that a local day might span multiple daily tables), converting it to your project's time zone.
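As a sketch, a day filter aligned to a project time zone (assuming America/Los_Angeles here; substitute your project's reporting time zone, and note MY_TABLE is the placeholder from the question) could look like:

```sql
SELECT COUNT(DISTINCT user_pseudo_id) AS one_day_active_users_count
FROM `MY_TABLE.events_*`
WHERE event_name = 'user_engagement'
  -- the daily tables are sharded by UTC date, so scan the neighbouring shard too
  AND _TABLE_SUFFIX BETWEEN '20190210' AND '20190211'
  -- then keep only events that fall on the local calendar day
  AND DATE(TIMESTAMP_MICROS(event_timestamp), 'America/Los_Angeles') = '2019-02-10'
```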
I would like to build the following table every day, to store some aggregate data on page performance of a website. However, each day's worth of data is over 15 million rows.
What steps can I take to improve performance? I am intending to save them as sharded tables, but I would like to improve further, could I nest the data within each table to improve performance further? What would be the best way to do this?
SELECT
device.devicecategory AS device,
hits_product.productListName AS list_name,
UPPER(hits_product.productSKU) AS SKU,
AVG(hits_product.productListPosition) AS avg_plp_position
FROM `mindful-agency-136314.43786551.ga_sessions_20*` AS t
CROSS JOIN UNNEST(hits) AS hits
CROSS JOIN UNNEST(hits.product) AS hits_product
WHERE PARSE_DATE('%y%m%d', _TABLE_SUFFIX) BETWEEN
DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) AND
DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
AND hits_product.productListName != "(not set)"
GROUP BY
device,
list_name,
SKU
Since you're using productSKU and productListName as dimensions/groups, there is no way around cross joining with the product array.
Cross joining with product can be dangerous, because sometimes the array is missing and the cross join then drops the whole row; typically you'd use a left join. But in this case it's fine, because you're only interested in product fields.
You should, however, be clear about whether you want to see list clicks or list impressions, using hits.product.isImpression and hits.product.isClick. At the moment I don't see a distinction there. Maybe filter with WHERE hits_product.isImpression in the case of list views?
Instead of shards you might want to consider adding a date field and using PARTITION BY date as well as CLUSTER BY list_name. See the INSERT statement for updating the table and the DDL syntax for creating it. This is more performant than shards when it comes to querying the table later.
Starting the table could look something like this:
CREATE TABLE `x.y.z`
PARTITION BY date
CLUSTER BY list_name
AS (
SELECT
PARSE_DATE('%Y%m%d',date) AS date,
device.devicecategory AS device,
hits_product.productListName AS list_name,
UPPER(hits_product.productSKU) AS SKU,
AVG(IF(hits_product.isClick, hits_product.productListPosition, NULL)) AS avg_plp_click_position,
AVG(IF(hits_product.isImpression, hits_product.productListPosition, NULL)) AS avg_plp_view_position
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20*` AS t
CROSS JOIN UNNEST(hits) AS hits
CROSS JOIN UNNEST(hits.product) AS hits_product
WHERE
PARSE_DATE('%y%m%d', _TABLE_SUFFIX)
BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
AND DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
AND hits_product.productListName != "(not set)"
GROUP BY
date,
device,
list_name,
SKU
)
Inserting new records is quite similar; you just need to list the fields up front, as described in the documentation.
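A daily insert along those lines might look like this (a sketch; the table name x.y.z and the one-day window are carried over from the CREATE TABLE example above):

```sql
INSERT INTO `x.y.z` (date, device, list_name, SKU, avg_plp_click_position, avg_plp_view_position)
SELECT
  PARSE_DATE('%Y%m%d', date) AS date,
  device.deviceCategory AS device,
  hits_product.productListName AS list_name,
  UPPER(hits_product.productSKU) AS SKU,
  AVG(IF(hits_product.isClick, hits_product.productListPosition, NULL)) AS avg_plp_click_position,
  AVG(IF(hits_product.isImpression, hits_product.productListPosition, NULL)) AS avg_plp_view_position
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20*` AS t
CROSS JOIN UNNEST(hits) AS hits
CROSS JOIN UNNEST(hits.product) AS hits_product
WHERE PARSE_DATE('%y%m%d', _TABLE_SUFFIX) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
  AND hits_product.productListName != "(not set)"
GROUP BY date, device, list_name, SKU
```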
I currently have a database that keeps track of projects, project updates, and the update dates. I have a form with a subform that displays the project name and the most recent update made to that project. It was brought to my attention, however, that the most recent update to a project does not display correctly. For example, it shows an update date of 4/6/2017, but the update text shown is actually from 3/16/2017.
Doing some research, I learned that Access does not store records in any particular order, and that the Last function does not actually give you the last record.
I am currently scouring Google for a solution, but to no avail so far, and have turned here in hopes of a solution or idea. Thank you in advance for any insight you can provide!
Other details:
tblProjects has fields
ID
Owner
Category_ID
Project_Name
Description
Resolution_Date
Priority
Resolution_Category_ID
tblUpdates has these fields:
ID
Project_ID
Update_Date
Update
There is no built-in Last function that I am aware of in Access or VBA; where exactly are you seeing that used?
If your subform is bound directly to tblUpdates, then you ought to be able to just sort the subform in descending order on either ID or Update_Date.
If you have a query joining the two tables and are only trying to get a single row back from tblUpdates, then the following will do that, assuming the ID column in tblUpdates is an AutoNumber. If not, just replace ORDER BY b.ID DESC with ORDER BY b.Update_Date DESC.
SELECT a.*,
(SELECT TOP 1 b.[Update] FROM tblUpdates b WHERE a.ID = b.Project_ID ORDER BY b.ID DESC) AS last_update
FROM tblProjects AS a;
Good afternoon, everyone.
My question is that I have one table named Purchase&Sales with two different fields:
Purchase
Sales
The data in the Purchase text box will be fetched from the Total Purchase table, and the data in the Sales field will be fetched from the Total Sales table. In other words, both values will come from different tables into one table.
So please give me the syntax or some idea.
Hoping for your positive response.
select sum(Purchase) Result from PurchaseTable
union all
select sum(Sales) Result from SalesTable
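If you want the two totals side by side in one row instead of stacked in two rows, scalar subqueries are an option (a sketch; the table and column names PurchaseTable, SalesTable, Purchase, and Sales are carried over from the answer above):

```sql
SELECT
  (SELECT SUM(Purchase) FROM PurchaseTable) AS TotalPurchase,
  (SELECT SUM(Sales) FROM SalesTable) AS TotalSales;
```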
Use a JOIN, or try the foreign-key approach if one exists:
SELECT
S.Total, -- selecting from one table
P.Total -- selecting from another table
FROM
Sales S
INNER JOIN -- inner join if you can or similar
Purchase P
ON
S.PurchaseId = P.ID
See here for more info: http://www.techrepublic.com/article/sql-basics-query-multiple-tables/