I am updating transactions via the Measurement Protocol when revenue changes. Unfortunately, I haven't found a way to keep the date of the original transaction and just update its revenue.
How it is:
12/12/2017 ID#12345 $50
Update on 12/01/2018: revenue -$20
12/01/2018 ID#12345 -$20
= two transactions (and dates) with the correct combined revenue
How it should be:
12/12/2017 ID#12345 $50
Update on 12/01/2018: -$20
12/12/2017 ID#12345 $30
= one transaction with the correct revenue
That way, comparisons to other months or years wouldn't be skewed, because the original date of the transaction matters more than the date of the update.
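For context, the update described above presumably goes in as a second Measurement Protocol transaction hit with negative revenue, along the lines of this minimal Python sketch (tid and cid are placeholder values; the transaction ID and amount follow the example above):

import requests

# Second hit sent when revenue changes: same transaction ID, negative revenue.
requests.post("https://www.google-analytics.com/collect", data={
    "v": "1",              # Measurement Protocol version
    "tid": "UA-XXXXX-Y",   # property ID (placeholder)
    "cid": "555",          # client ID (placeholder)
    "t": "transaction",    # ecommerce transaction hit
    "ti": "12345",         # same transaction ID as the original
    "tr": "-20",           # revenue adjustment
})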
Is there a way to update the original transaction data rather than pushing a new one?
Thanks in advance!
I use WooCommerce and add orders manually from the admin. Most of my clients pay 15 to 90 days after the order is created, in some cases longer than 90 days.
I have noticed that WooCommerce Analytics always shows revenue based on the order creation date. From one point of view I agree with that: the order was created on that day, so the revenue belongs there.
I think the Orders tab in Analytics shows this correctly: date, order, customer, amount, etc.
But I think revenue should be based on order->get_paid_date() rather than the creation date, since the money only comes in on the paid date. If WooCommerce changed the formula, it would make little to no difference for stores whose orders are paid online immediately, and it would take care of those whose orders are paid later on.
Just curious, since logically revenue is the money coming into the account, and the Analytics > Orders tab already shows orders by creation date well.
Thanks for your input to help me understand how WooCommerce thinks about this.
Currently, WooCommerce Analytics does not take the paid-date meta into account; it uses the order creation date when calculating revenue.
It depends on how one looks at it; it's both right and wrong.
For my use case, where customers pay days or months later, it's wrong. But the order was still placed on the creation date, so arguably the revenue should be linked to the creation date; in that sense it's right too.
One thing I did was exclude some custom order statuses from Analytics in the Analytics settings. That way, until an order is actually in Pending or Completed status, it is not counted in revenue. Not a perfect solution, but it helps me keep cancelled orders and custom statuses like Quotation out of revenue.
I ended up programming my own custom admin page that pulls all orders in Completed status with a paid date between a start and end date and totals them manually, so I can see my actual income for the year.
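The same idea, sketched from outside WordPress via the WooCommerce REST API instead of a custom admin page (a minimal Python sketch; the store URL, API keys, and date range are placeholders, and pagination is omitted for brevity):

import requests

# Fetch completed orders and sum the totals of those paid within a date range.
resp = requests.get(
    "https://example.com/wp-json/wc/v3/orders",
    params={"status": "completed", "per_page": 100},
    auth=("ck_placeholder", "cs_placeholder"),  # REST API consumer key/secret
)

total = 0.0
for order in resp.json():
    paid = order.get("date_paid")  # ISO 8601 string, or None if not yet paid
    if paid and "2023-01-01" <= paid[:10] <= "2023-12-31":
        total += float(order["total"])

print(total)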
Just mentioning it in case it helps someone else with a similar question.
Currently, I can only find examples that specify the retention period in days. Can we specify the retention period of a table in years in Kusto? In other words, will the command below set the retention period to 10 years?
.alter-merge table Table1 policy retention softdelete = 10y recoverability = disabled
Also, soft delete keeps the data in the database and just marks the records as deleted, right? Is there a way to do a hard delete, and are there any issues with using it? My records do not refer to old data, so I want to completely delete them after the retention period.
The retention policy command takes a timespan literal, and years are not one of the supported formats, so the command in your example does not work. You need to specify the period in days instead (for example, softdelete = 3650d for roughly ten years).
If recoverability is set to Enabled, the storage artifacts are deleted from storage 14 days after the retention period expires. If it is set to Disabled, deletion happens one day after the retention period.
Background:
I have Firebase Analytics data exported to BigQuery, and I'm using cron jobs to crunch the data in BigQuery to get insights.
Problem:
To be able to crunch only delta data, i.e. the data that has arrived since the last time my cron job ran, I need a way to figure out when the data arrived at the server, since event_timestamp is generated on the client and events can be cached on the client before being sent.
Insights:
I have experimented with event_server_timestamp_offset (the offset), which I thought I could use together with event_timestamp. But while I expected the offset to only be positive, it can also be negative. And when I look at the MAX and MIN of the offset across the entire exported Firebase Analytics dataset and convert it from microseconds to days, the maximum corresponds to an offset of more than 18 years.
Query:
SELECT
  MAX(event_server_timestamp_offset) / (1000000*60*60*24) AS max_days,
  MIN(event_server_timestamp_offset) / (1000000*60*60*24) AS min_days
FROM
  `analytics_<project_id>.events_*`
Result: max_days = 6784.485790436655, min_days = -106.95833052104166
Question:
How can I figure out the server arrival time for my Firebase data exported to BigQuery, so that my cron jobs can crunch only delta data?
Can I use event_server_timestamp_offset together with event_timestamp? If so, how?
Best regards,
Daniel
Surprisingly, this question has gone without a clear answer for almost 2 years, so I am leaving here the answers I got from the Firebase support team. The format is: the question asked, followed by the support staff's answer.
Q1. event_date - The date on which the event was logged (YYYYMMDD format in the registered timezone of your app). Does it mean that the event occurred on that date, or that it was actually collected on that date?
A1. Per documentation, event_date refers to the date on which the event is logged/occurred. Note that event_date is based on the Analytics timezone setting of your Firebase Project.
Q2. event_timestamp - The time (in microseconds, UTC) at which the event was logged on the client. Is it safe to assume that this is the exact timestamp at which the event occurred on the client side (in the app timezone, of course)?
A2. Yes, this is based on the device timezone setting. However, event_timestamp may be skewed if the device time is incorrect.
Q3. event_server_timestamp_offset - Timestamp offset between collection time and upload time in micros. This is the main field that causes all the misunderstandings - in our BigQuery table for the year 2020 this field takes values in a range between 5 days and -2 days. I mean, how can the collection time be 2 days ahead?
A3. The event_server_timestamp_offset field in the export schema is the time difference between when the event took place and the app uploaded it to our server. In other words, this is the estimated difference between the client's local time and the actual time, according to our servers. The values of this field are usually positive, but can be negative as well if the device time setting is incorrect.
Q4. One last question is very important - can we ignore the event_server_timestamp_offset field and just rely on event_timestamp as the exact date and time the event occurred on the client side (not collected, not uploaded, etc.)? If not, please explain how we can get the exact datetime of the event occurring on the client side. But if yes, please let me know why we need the event_server_timestamp_offset field at all.
A4. Yes, you may actually ignore it and use event_timestamp alone. However, as mentioned earlier, event_timestamp could be off if the device time setting is incorrect, but it shouldn't really affect the bigger picture of your analytics data, as cases like this are usually one-off.
We use the event_date as the indicator and load the data once a day.
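A minimal sketch of that approach, assuming the standard daily events_YYYYMMDD export tables and the google-cloud-bigquery client (the aggregation query itself is just an example):

import datetime
from google.cloud import bigquery

client = bigquery.Client()

# Crunch only yesterday's daily export table, selected via its _TABLE_SUFFIX,
# which corresponds to the event_date of the exported data.
yesterday = (datetime.date.today() - datetime.timedelta(days=1)).strftime("%Y%m%d")
sql = f"""
SELECT event_name, COUNT(*) AS events
FROM `analytics_<project_id>.events_*`
WHERE _TABLE_SUFFIX = '{yesterday}'
GROUP BY event_name
"""

for row in client.query(sql).result():
    print(row.event_name, row.events)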
I have an issue with refund imports through the standard property menu "Data Import". I use refund import by ga:transactionId only.
As I have read in the help, in this case there is no need to supply any additional information about the transaction (SKU, quantity, revenue, etc.).
The whole transaction will be refunded.
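For reference, such an upload is then presumably just a one-column CSV with the transaction ID dimension as the header (the ID value below is a placeholder):

ga:transactionId
10001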
In fact, I see in the Sales Performance report that all refunds are linked to the correct transactionId, but have a different date, source / medium, etc.
To me the attribution seems absolutely random (except perhaps the date, which is the date of import).
For example, take one transactionId.
The original transaction happened on June 28 with yamarket / cpc, while the refund is linked to June 30 and webvisor.com / referral. The refund date is the same for all transactions, and it is the date of import (today). The source / medium looks absolutely random.
Maybe I need to wait one day, but I doubt it.
And because of the different dates, if I choose a date range that does not include both dates, I will see only the refund or only the original transaction.
Is the system working correctly?
If it is, can I remove all the refunds I uploaded?
Everything works as it is supposed to, although the logic seems strange to me.
Google will update the help content to cover these topics.
Refunds are always linked to the date of import, not to the date of the initial transaction.
And, unexpectedly for me, they are also linked to the buyer's last session at the moment of import. This session does not necessarily coincide with the purchase session if the buyer visited the site after the transaction.
That's why I get a different source / medium for the refund and the transaction.
I am investigating whether to use Firebase for my next project. I spent several days reading and building a proof-of-concept project. In the demo project I built a shopping cart. In the admin section I can create products, and the client can buy them. When the client checks out, I push to the closed-orders node a complex object that stores all the data for the deal, like this simplified version:
closed-orders
  -order_id
    -date
    -client_id
    -products
      -product_id
        -sale_price
        -delivery_price
        -quantity
      -product_id
        -sale_price
        -delivery_price
        -quantity
      ....more products sold in this order
  next order....
It is easy to do it that way, and I can access every individual order and show it in the admin, but I also want to run queries about total sales, sales by product, and profit.
Example questions asked:
1. What are the total quantities sold for every product from date1 to date2?
2. What is the total turnover from date1 to date2?
3. What is the profit from date1 to date2?
I want to answer these questions without downloading the whole dataset to the browser, of course, because I do not think I can afford to pay for that much bandwidth. Orders for one year could number in the tens of thousands :)
I read about Elasticsearch and keen.io, but I am not sure exactly what functionality they offer and whether they would answer my questions in a bandwidth-friendly way.
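One bandwidth-friendly pattern is to maintain small running aggregate nodes next to closed-orders and update them whenever an order closes, so reports read only the aggregates instead of every order. Below is a minimal server-side sketch using the Firebase Admin SDK for Python; the aggregates/daily node and all field names are assumptions layered on top of the structure above, not an existing Firebase feature:

from firebase_admin import db

# Assumes firebase_admin.initialize_app() has already been called with a databaseURL.

def record_closed_order(order_date, products):
    # products: list of dicts with product_id, sale_price, delivery_price and quantity,
    # mirroring the simplified closed-orders structure above.
    day_ref = db.reference("aggregates/daily/" + order_date.strftime("%Y%m%d"))

    def apply(current):
        current = current or {"turnover": 0, "delivery": 0, "quantity_by_product": {}}
        for p in products:
            current["turnover"] += p["sale_price"] * p["quantity"]
            current["delivery"] += p["delivery_price"] * p["quantity"]
            sold = current["quantity_by_product"].get(p["product_id"], 0)
            current["quantity_by_product"][p["product_id"]] = sold + p["quantity"]
        return current

    # Atomic read-modify-write on the small per-day aggregate node.
    day_ref.transaction(apply)

Questions 1 and 2 then only require reading the daily aggregate nodes between date1 and date2; profit (question 3) would additionally need a cost figure per product, which the structure above does not store.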