Count number of requests in a 24h range to determine user sessions? - azure-data-explorer

Imagine a web server log containing lines like:
<timestamp> <ip> <user-agent> <product page>
I would like a report that counts the number of requests for each product page per user session within a 24h window, with the following criteria:
a unique user is defined as the combination of a number of columns (here, ip and user-agent)
the 24h window starts at the timestamp of the first request for a product page (so the window can start at any hour)
if more than 24h elapse between request timestamps, it is considered a new user session
For the following logs:
2019-1-1 01:00 1.2.3.4 Netscape product 5
2019-1-1 01:01 1.2.3.4 Netscape product 5
2019-1-1 01:00 1.2.3.5 Chrome product 5
2019-1-1 01:01 1.2.3.5 Chrome product 5
2019-1-1 01:59 1.2.3.4 Netscape product 5
2019-1-1 02:00 1.2.3.4 Netscape product 4
2019-1-1 02:01 1.2.3.4 Netscape product 4
2019-1-1 02:02 1.2.3.4 Netscape product 4
2019-1-1 07:43 1.2.3.5 Chrome product 5
2019-1-2 02:01 1.2.3.4 Netscape product 5
would produce:
1.2.3.4/Netscape, product 4, 1
1.2.3.4/Netscape, product 5, 2
1.2.3.5/Chrome, product 5, 1
and perhaps a second query would output:
1.2.3.4/Netscape, 6
1.2.3.4/Netscape, 1
1.2.3.5/Chrome, 3
(the number of requests per user per 24h window; hence 1.2.3.4/Netscape is listed twice)
What would example queries look like that deliver both of the above result sets?
Bonus/optional: if requests within a 24h period are more than 30m apart, that should also be considered a new session

Here's something which may point you in a direction (not necessarily performant/efficient, though, depending on the size of the input data set).
datatable(timestamp:datetime, ip:string, user_agent:string, product_page:string)
[
datetime(2019-01-01 01:00), '1.2.3.4', 'Netscape', 'product 5',
datetime(2019-01-01 01:01), '1.2.3.4', 'Netscape', 'product 5',
datetime(2019-01-01 01:00), '1.2.3.5', 'Chrome', 'product 5',
datetime(2019-01-01 01:01), '1.2.3.5', 'Chrome', 'product 5',
datetime(2019-01-01 01:59), '1.2.3.4', 'Netscape', 'product 5',
datetime(2019-01-01 02:00), '1.2.3.4', 'Netscape', 'product 4',
datetime(2019-01-01 02:01), '1.2.3.4', 'Netscape', 'product 4',
datetime(2019-01-01 02:02), '1.2.3.4', 'Netscape', 'product 4',
datetime(2019-01-01 07:43), '1.2.3.5', 'Chrome', 'product 5',
datetime(2019-01-02 02:01), '1.2.3.4', 'Netscape', 'product 5',
]
| extend user = strcat(ip, "/", user_agent)
| order by user asc, timestamp asc
| extend session_start = row_window_session(timestamp, 24h, 24h, user_agent != prev(user_agent) or product_page != prev(product_page) or ip != prev(ip))
| summarize session_count = dcount(session_start) by user, product_page
-->
| user | product_page | session_count |
|------------------|--------------|---------------|
| 1.2.3.4/Netscape | product 5 | 2 |
| 1.2.3.4/Netscape | product 4 | 1 |
| 1.2.3.5/Chrome | product 5 | 1 |
for the second query, the following could work:
datatable(timestamp:datetime, ip:string, user_agent:string, product_page:string)
[
datetime(2019-01-01 01:00), '1.2.3.4', 'Netscape', 'product 5',
datetime(2019-01-01 01:01), '1.2.3.4', 'Netscape', 'product 5',
datetime(2019-01-01 01:00), '1.2.3.5', 'Chrome', 'product 5',
datetime(2019-01-01 01:01), '1.2.3.5', 'Chrome', 'product 5',
datetime(2019-01-01 01:59), '1.2.3.4', 'Netscape', 'product 5',
datetime(2019-01-01 02:00), '1.2.3.4', 'Netscape', 'product 4',
datetime(2019-01-01 02:01), '1.2.3.4', 'Netscape', 'product 4',
datetime(2019-01-01 02:02), '1.2.3.4', 'Netscape', 'product 4',
datetime(2019-01-01 07:43), '1.2.3.5', 'Chrome', 'product 5',
datetime(2019-01-02 02:01), '1.2.3.4', 'Netscape', 'product 5',
]
| extend user = strcat(ip, "/", user_agent)
| summarize count() by user, startofday(timestamp)
| project-away timestamp
-->
| user | count_ |
|------------------|--------|
| 1.2.3.4/Netscape | 6 |
| 1.2.3.5/Chrome | 3 |
| 1.2.3.4/Netscape | 1 |
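As a cross-check outside Kusto, the sessionization rule (a session starts at the first request, and any request more than 24h after the session start opens a new one) can be sketched in Python; names like `sessionize` are illustrative, not part of any API:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Sample rows from the question: (timestamp, ip, user_agent, product_page)
rows = [
    (datetime(2019, 1, 1, 1, 0),  "1.2.3.4", "Netscape", "product 5"),
    (datetime(2019, 1, 1, 1, 1),  "1.2.3.4", "Netscape", "product 5"),
    (datetime(2019, 1, 1, 1, 0),  "1.2.3.5", "Chrome",   "product 5"),
    (datetime(2019, 1, 1, 1, 1),  "1.2.3.5", "Chrome",   "product 5"),
    (datetime(2019, 1, 1, 1, 59), "1.2.3.4", "Netscape", "product 5"),
    (datetime(2019, 1, 1, 2, 0),  "1.2.3.4", "Netscape", "product 4"),
    (datetime(2019, 1, 1, 2, 1),  "1.2.3.4", "Netscape", "product 4"),
    (datetime(2019, 1, 1, 2, 2),  "1.2.3.4", "Netscape", "product 4"),
    (datetime(2019, 1, 1, 7, 43), "1.2.3.5", "Chrome",   "product 5"),
    (datetime(2019, 1, 2, 2, 1),  "1.2.3.4", "Netscape", "product 5"),
]

def sessionize(rows, window=timedelta(hours=24)):
    """Tag each request with its session start; a new session begins when a
    request falls outside `window` of the current session's first request."""
    by_user = defaultdict(list)
    for ts, ip, ua, page in sorted(rows):
        by_user[f"{ip}/{ua}"].append((ts, page))
    tagged = []  # (user, session_start, product_page)
    for user, requests in by_user.items():
        session_start = None
        for ts, page in requests:
            if session_start is None or ts - session_start > window:
                session_start = ts
            tagged.append((user, session_start, page))
    return tagged

tagged = sessionize(rows)

# First report: distinct sessions per (user, product_page)
sessions = defaultdict(set)
for user, start, page in tagged:
    sessions[(user, page)].add(start)
q1 = {k: len(v) for k, v in sessions.items()}

# Second report: request count per (user, 24h session)
q2 = defaultdict(int)
for user, start, _ in tagged:
    q2[(user, start)] += 1
```

The same tagging loop could also cover the 30m bonus rule by additionally restarting the session whenever the gap to the previous request exceeds 30 minutes.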


MariaDB JSON_ARRAYAGG gives wrong result

I have 2 problems in MariaDB 15.1 when using JSON_ARRAYAGG:
The brackets [] are omitted
Incorrect results: values are duplicated or omitted
My database is the following:
user:
+----+------+
| id | name |
+----+------+
| 1 | Jhon |
| 2 | Bob |
+----+------+
car:
+----+---------+-------------+
| id | user_id | model |
+----+---------+-------------+
| 1 | 1 | Tesla |
| 2 | 1 | Ferrari |
| 3 | 2 | Lamborghini |
+----+---------+-------------+
phone:
+----+---------+----------+--------+
| id | user_id | company | number |
+----+---------+----------+--------+
| 1 | 1 | Verzion | 1 |
| 2 | 1 | AT&T | 2 |
| 3 | 1 | T-Mobile | 3 |
| 4 | 2 | Sprint | 4 |
| 5 | 1 | Sprint | 2 |
+----+---------+----------+--------+
1. The brackets [] are omitted
For example this query that gets users with their list of cars:
SELECT
user.id AS id,
user.name AS name,
JSON_ARRAYAGG(
JSON_OBJECT(
'id', car.id,
'model', car.model
)
) AS cars
FROM user
INNER JOIN car ON user.id = car.user_id
GROUP BY user.id;
Result: the brackets [] were omitted in cars (JSON_ARRAYAGG behaves similarly to GROUP_CONCAT):
+----+------+-----------------------------------------------------------+
| id | name | cars |
+----+------+-----------------------------------------------------------+
| 1 | Jhon | {"id": 1, "model": "Tesla"},{"id": 2, "model": "Ferrari"} |
| 2 | Bob | {"id": 3, "model": "Lamborghini"} |
+----+------+-----------------------------------------------------------+
However, when adding the filter WHERE user.id = 1, the brackets [] are not omitted:
+----+------+-------------------------------------------------------------+
| id | name | cars |
+----+------+-------------------------------------------------------------+
| 1 | Jhon | [{"id": 1, "model": "Tesla"},{"id": 2, "model": "Ferrari"}] |
+----+------+-------------------------------------------------------------+
2. Incorrect results: values are duplicated or omitted
This error is strange, as the following conditions must all be met:
The query involves more than 2 tables
The DISTINCT option is used
A user has at least 2 cars and at least 3 phones.
Duplicate values
For example, this query gets users with their car list and their phone list:
SELECT
user.id AS id,
user.name AS name,
JSON_ARRAYAGG( DISTINCT
JSON_OBJECT(
'id', car.id,
'model', car.model
)
) AS cars,
JSON_ARRAYAGG( DISTINCT
JSON_OBJECT(
'id', phone.id,
'company', phone.company,
'number', phone.number
)
) AS phones
FROM user
INNER JOIN car ON user.id = car.user_id
INNER JOIN phone ON user.id = phone.user_id
GROUP BY user.id;
I will show the output in JSON format, keeping only the elements of interest.
Result: the brackets [] were omitted and the Verzion object is duplicated:
{
"id": 1,
"name": "Jhon",
"phones": // [ Opening bracket expected
{
"id": 5,
"company": "Sprint",
"number": 2
},
{
"id": 1,
"company": "Verzion",
"number": 1
},
{
"id": 1,
"company": "Verzion",
"number": 1
}, // Duplicate object with the DISTINCT option
{
"id": 2,
"company": "AT&T",
"number": 2
},
{
"id": 3,
"company": "T-Mobile",
"number": 3
}
// ] Closing bracket expected
}
Omitted values
This error occurs when phone.id is omitted from the query:
SELECT
user.id AS id,
user.name AS name,
JSON_ARRAYAGG( DISTINCT
JSON_OBJECT(
'id', car.id,
'model', car.model
)
) AS cars,
JSON_ARRAYAGG( DISTINCT
JSON_OBJECT(
-- 'id', phone.id,
'company', phone.company,
'number', phone.number
)
) AS phones
FROM user
INNER JOIN car ON user.id = car.user_id
INNER JOIN phone ON user.id = phone.user_id
GROUP BY user.id;
Result: the brackets [] were omitted and Sprint was omitted.
Apparently this happens because DISTINCT is applied per column of the JSON_OBJECT rather than per object: the same company already appears in a different row, and the same number in yet another row.
{
"id": 1,
"name": "Jhon",
"phones": // [ Opening bracket expected
//{
// "company": "Sprint",
// "number": 2
//}, `Sprint` was omitted
{
"company": "Verzion",
"number": 1
},
{
"company": "AT&T",
"number": 2
},
{
"company": "T-Mobile",
"number": 3
}
// ] Closing bracket expected
}
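What DISTINCT was expected to do here is deduplicate whole objects, not individual columns. For comparison, an object-level dedupe is straightforward to state in Python (the literals mirror the broken output above):

```python
import json

# Phone objects as returned in the broken result, including the duplicate
phones = [
    {"id": 1, "company": "Verzion", "number": 1},
    {"id": 1, "company": "Verzion", "number": 1},
    {"id": 2, "company": "AT&T", "number": 2},
    {"id": 3, "company": "T-Mobile", "number": 3},
    {"id": 5, "company": "Sprint", "number": 2},
]

# Deduplicate by the serialized object as a whole, keeping one copy of each
unique = list({json.dumps(p, sort_keys=True): p for p in phones}.values())
```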
Using GROUP_CONCAT instead of JSON_ARRAYAGG solves the problem of duplicated or omitted objects.
Also, when adding the filter WHERE user.id = 1, the brackets [] are not omitted and the duplicated/omitted objects problem goes away as well:
{
"id": 1,
"name": "Jhon",
"phones": [
{
"id": 1,
"company": "Verzion",
"number": 1
},
{
"id": 2,
"company": "AT&T",
"number": 2
},
{
"id": 3,
"company": "T-Mobile",
"number": 3
},
{
"id": 5,
"company": "Sprint",
"number": 2
}
]
}
What am I doing wrong?
So far my workaround is the following, but I would like to use JSON_ARRAYAGG since that query is cleaner:
-- 1
SELECT
user.id AS id,
user.name AS name,
CONCAT(
'[',
GROUP_CONCAT( DISTINCT
JSON_OBJECT(
'id', car.id,
'model', car.model
)
),
']'
) AS cars
FROM user
INNER JOIN car ON user.id = car.user_id
GROUP BY user.id;
-- 2
SELECT
user.id AS id,
user.name AS name,
CONCAT(
'[',
GROUP_CONCAT( DISTINCT
JSON_OBJECT(
'id', car.id,
'model', car.model
)
),
']'
) AS cars,
CONCAT(
'[',
GROUP_CONCAT( DISTINCT
JSON_OBJECT(
'id', phone.id,
'company', phone.company,
'number', phone.number
)
),
']'
) AS phones
FROM user
INNER JOIN car ON user.id = car.user_id
INNER JOIN phone ON user.id = phone.user_id
GROUP BY user.id;
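Since the CONCAT workaround assembles the JSON array text by hand, it is worth verifying on the client that the result actually parses. A minimal Python sketch (the string literal mimics the expected `cars` value for user 1, not an actual driver call):

```python
import json

# Simulated `cars` column produced by CONCAT('[', GROUP_CONCAT(...), ']')
cars = '[{"id": 1, "model": "Tesla"},{"id": 2, "model": "Ferrari"}]'
parsed = json.loads(cars)

models = [car["model"] for car in parsed]
```

One caveat with this workaround: GROUP_CONCAT output is capped by the group_concat_max_len variable, and a truncated string will fail to parse, which is exactly what a check like this catches.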

Is there a place to report API data errors for the HERE browse search API?

When using requests.get('https://browse.search.hereapi.com/v1/browse?apiKey=' + YOUR_API_KEY + '&at=53.544348,-113.500571&in=circle:46.827727,-114.000519;r=3000&limit=10&categories=800-8200-0174') I get a response that shows Canadian postal codes - but only the first 3 characters.
For example, I get this data:
{'title': 'Boyle Street Education Centre', 'id': 'here:pds:place:124c3x29-d6c9cbd3d53a4758b8c953132db92244', 'resultType': 'place', 'address': {'label': 'Boyle Street Education Centre, 10312 105 St NW, Edmonton, AB T5J, Canada', 'countryCode': 'CAN', 'countryName': 'Canada', 'stateCode': 'AB', 'state': 'Alberta', 'county': 'Alberta', 'city': 'Edmonton', 'district': 'Downtown', 'street': '105 St NW', 'postalCode': 'T5J', 'houseNumber': '10312'}, 'position': {'lat': 53.54498, 'lng': -113.5016}, 'access': [{'lat': 53.54498, 'lng': -113.50105}], 'distance': 98, 'categories': [{'id': '800-8200-0174', 'name': 'School', 'primary': True}, {'id': '800-8200-0295', 'name': 'Training & Development'}], 'references': [{'supplier': {'id': 'core'}, 'id': '36335982'}, {'supplier': {'id': 'yelp'}, 'id': 'r3BvVKqluzrZeae9FE4tAw'}], 'contacts': [{'phone': [{'value': '+17804281420'}], 'fax': [{'value': '(780) 429-1458', 'categories': [{'id': '800-8200-0174'}]}], 'www': [{'value': 'http://www.bsec.ab.ca', 'categories': [{'id': '800-8200-0174'}]}]}], 'openingHours': [{'categories': [{'id': '800-8200-0174'}], 'text': ['Mon-Sat: 09:00 - 17:00', 'Sun: 10:00 - 16:00'], 'isOpen': False, 'structured': [{'start': 'T090000', 'duration': 'PT08H00M', 'recurrence': 'FREQ:DAILY;BYDAY:MO,TU,WE,TH,FR,SA'}, {'start': 'T100000', 'duration': 'PT06H00M', 'recurrence': 'FREQ:DAILY;BYDAY:SU'}]}]}
Notice that the postal code listed is "T5J". This is incorrect. Canadian postal codes are 3 characters, a space, and then 3 more characters. I'm guessing this is a parsing error that occurred when the data was captured. The correct postal code is "T5J 1E6".
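Until the data itself is fixed, truncated values like this can at least be detected when post-processing responses. A small format check in Python (the regex covers the standard "A1A 1A1" shape of Canadian postal codes):

```python
import re

# Letter-digit-letter, a space, then digit-letter-digit
CA_POSTAL = re.compile(r"^[A-Za-z]\d[A-Za-z] \d[A-Za-z]\d$")

def is_full_ca_postal_code(value):
    """True only for a complete Canadian postal code such as 'T5J 1E6'."""
    return bool(CA_POSTAL.match(value))
```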
Yes, HERE has a tool to modify the POI address information.
I reported this POI's postal code to be updated to "T5J 1E6".
Please visit the web tool below:
https://mapcreator.here.com/place:124c3x29-d6c9cbd3d53a4758b8c953132db92244/?l=53.5450,-113.5016,18,normal
Thank you!

Kusto query for getting cummulative count up to a given date [closed]

Closed 2 years ago.
I have a database with a set of events with a user id and timestamp, and I am trying to write a query that will give me the count of distinct users that have triggered an event up to each day. So if we have the following data:
Event | UID | Time Stamp
event 1 | 0 | 9/25/19 9:00 AM
event 2 | 1 | 9/25/19 3:00 PM
event 3 | 2 | 9/26/19 2:00 PM
event 4 | 1 | 9/28/19 5:00 PM
event 5 | 3 | 9/29/19 7:00 AM
Then the output should be:
9/25/19 : 2
9/26/19 : 3
9/27/19 : 3 (since there are no new events on the 27th)
9/28/19 : 3 (since user with UID=1 has already been counted)
9/29/19 : 4
I have a query which gets the number of events per day, but not the cumulative count over all days leading up to that day. Any help would be greatly appreciated!
There are several built-in user analytics plugins in Kusto/ADX: https://learn.microsoft.com/en-us/azure/kusto/query/useranalytics
One of them is the activity_engagement plugin: https://learn.microsoft.com/en-us/azure/kusto/query/activity-engagement-plugin
For example:
let T = datatable(Event:string, UID:int, Timestamp:datetime)
[
'event 1', 0, datetime(9/25/19 9:00 AM),
'event 2', 1, datetime(9/25/19 3:00 PM),
'event 3', 2, datetime(9/26/19 2:00 PM),
'event 4', 1, datetime(9/28/19 5:00 PM),
'event 5', 3, datetime(9/29/19 7:00 AM),
]
;
let min_date_time = toscalar(T | summarize startofday(min(Timestamp)));
let max_date_time = toscalar(T | summarize startofday(max(Timestamp)));
T
| evaluate activity_engagement (UID, Timestamp, 1d, 1d + max_date_time - min_date_time)
| project Timestamp, dcount_activities_outer
and, if you want to "fill the gap" for Sep-27, you can do the following:
let T = datatable(Event:string, UID:int, Timestamp:datetime)
[
'event 1', 0, datetime(9/25/19 9:00 AM),
'event 2', 1, datetime(9/25/19 3:00 PM),
'event 3', 2, datetime(9/26/19 2:00 PM),
'event 4', 1, datetime(9/28/19 5:00 PM),
'event 5', 3, datetime(9/29/19 7:00 AM),
]
;
let min_date_time = toscalar(T | summarize startofday(min(Timestamp)));
let max_date_time = toscalar(T | summarize startofday(max(Timestamp)));
range Timestamp from min_date_time to max_date_time step 1d
| join kind=leftouter (
T
| evaluate activity_engagement (UID, Timestamp, 1d, 1d + max_date_time - min_date_time)
| project Timestamp, dcount_activities_outer
) on Timestamp
| order by Timestamp asc
| project Timestamp, coalesce(dcount_activities_outer, prev(dcount_activities_outer))
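For a sample this small, the cumulative distinct-user count the plugin computes can be restated directly in Python, which is handy for verifying the Kusto output (variable names are illustrative):

```python
from datetime import date, timedelta

# (event, uid, day) rows from the question
events = [
    ("event 1", 0, date(2019, 9, 25)),
    ("event 2", 1, date(2019, 9, 25)),
    ("event 3", 2, date(2019, 9, 26)),
    ("event 4", 1, date(2019, 9, 28)),
    ("event 5", 3, date(2019, 9, 29)),
]

first = min(day for _, _, day in events)
last = max(day for _, _, day in events)

seen, cumulative = set(), {}
day = first
while day <= last:
    # Add every user active on this day, then record the running distinct count
    seen |= {uid for _, uid, d in events if d == day}
    cumulative[day] = len(seen)
    day += timedelta(days=1)
```

Gap days (like Sep-27 here) simply carry the previous count forward, matching the leftouter-join version of the query.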

Compare two lists to find date and time overlaps in Elixir

As part of some code that runs a booking system, we have a list of time_slots, which are tuples containing {start_time, end_time}. These are the available time slots that can be booked:
time_slots = [
{~T[09:00:00], ~T[13:00:00]},
{~T[09:00:00], ~T[17:00:00]},
{~T[09:00:00], ~T[21:00:00]},
{~T[13:00:00], ~T[17:00:00]},
{~T[13:00:00], ~T[21:00:00]},
{~T[17:00:00], ~T[21:00:00]}
]
Then we also have a list of bookings, which contains lists of tuples containing each {booking_start, booking_end}.
bookings = [
[
{~N[2019-06-13 09:00:00], ~N[2019-06-13 17:00:00]},
{~N[2019-06-13 17:00:00], ~N[2019-06-13 21:00:00]}
],
[{~N[2019-06-20 09:00:00], ~N[2019-06-20 21:00:00]}],
[
{~N[2019-06-22 13:00:00], ~N[2019-06-22 17:00:00]},
{~N[2019-06-22 17:00:00], ~N[2019-06-22 21:00:00]}
]
]
In this case, we would want the results to be the two bookings with all of their time_slots filled up:
2019-06-13
2019-06-20
As they have all of their time slots filled up, and then return these results as Dates.
To provide a bit more information:
For a time slot to be filled up it would require either a booking’s start or finish to overlap inside of it (regardless of how small that overlap is):
E.g. a booking of 0900–1000 would fill the 0900–1300, 0900–1700 and 0900–2100 time slots
A time slot can be filled with more than one booking:
E.g. we can have bookings of 0900–1000 and 1000–1200, which would both fit inside the 0900–1300 time slot.
If there is a booking that extends beyond the largest time slot, it counts as being filled:
E.g. a booking of 0800—2200 would fill the 0900–2100 time slot (along with all the others)
So my understanding of the question is: for a list of bookings, do all time slots conflict with at least one booking?
A conflicting booking can be answered by checking two things:
If the booking starts BEFORE the time slot starts, it conflicts if the booking finishes AFTER the time slot starts.
If the booking starts ON OR AFTER the time slot starts, it conflicts if the BOOKING starts before the time slot finishes.
A working code therefore would look like:
time_slots = [
{~T[09:00:00], ~T[13:00:00]},
{~T[09:00:00], ~T[17:00:00]},
{~T[09:00:00], ~T[21:00:00]},
{~T[13:00:00], ~T[17:00:00]},
{~T[13:00:00], ~T[21:00:00]},
{~T[17:00:00], ~T[21:00:00]}
]
bookings = [
[
{~N[2019-06-13 09:00:00], ~N[2019-06-13 17:00:00]},
{~N[2019-06-13 17:00:00], ~N[2019-06-13 21:00:00]}
],
[{~N[2019-06-20 09:00:00], ~N[2019-06-20 21:00:00]}],
[
{~N[2019-06-22 13:00:00], ~N[2019-06-22 17:00:00]},
{~N[2019-06-22 17:00:00], ~N[2019-06-22 21:00:00]}
]
]
bookings
|> Enum.filter(fn booking ->
Enum.all?(time_slots, fn {time_start, time_end} ->
Enum.any?(booking, fn {booking_start, booking_end} ->
if Time.compare(booking_start, time_start) == :lt do
Time.compare(booking_end, time_start) == :gt
else
Time.compare(booking_start, time_end) == :lt
end
end)
end)
end)
|> Enum.map(fn [{booking_start, _} | _] -> NaiveDateTime.to_date(booking_start) end)
PS: note you should not compare time/date/datetime with >, < and friends. Always use the relevant compare functions.
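The two conflict rules above translate almost one-to-one into other languages as well; here is a Python restatement (times only, applied to one day's bookings; helper names are illustrative):

```python
from datetime import time

time_slots = [
    (time(9), time(13)), (time(9), time(17)), (time(9), time(21)),
    (time(13), time(17)), (time(13), time(21)), (time(17), time(21)),
]

def conflicts(b_start, b_end, s_start, s_end):
    """A booking starting before the slot conflicts iff it ends after the
    slot starts; otherwise it conflicts iff it starts before the slot ends."""
    if b_start < s_start:
        return b_end > s_start
    return b_start < s_end

def fully_booked(day_bookings):
    """True when every slot overlaps at least one of the day's bookings."""
    return all(
        any(conflicts(bs, be, ss, se) for bs, be in day_bookings)
        for ss, se in time_slots
    )
```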
Although this might not cover all cases, given the sample data you provided, this would work:
defmodule BookingsTest do
@slots [
{~T[09:00:00], ~T[13:00:00]},
{~T[09:00:00], ~T[17:00:00]},
{~T[09:00:00], ~T[21:00:00]},
{~T[13:00:00], ~T[17:00:00]},
{~T[13:00:00], ~T[21:00:00]},
{~T[17:00:00], ~T[21:00:00]}
]
def booked_days(bookings, time_slots \\ @slots) do
Enum.reduce(bookings, [], fn(day_bookings, acc) ->
Enum.reduce(day_bookings, time_slots, fn({%{hour: s_time}, %{hour: e_time}}, ts) ->
Enum.reduce(ts, [], fn
({%{hour: slot_s}, %{hour: slot_e}} = slot, inner_acc) ->
case is_in_slot(s_time, e_time, slot_s, slot_e) do
true -> inner_acc
_ -> [slot | inner_acc]
end
end)
end)
|> case do
[] -> [day_bookings | acc]
_ -> acc
end
end)
|> Enum.reduce([], fn([{arb, _} | _], acc) -> [NaiveDateTime.to_date(arb) | acc] end)
end
def is_in_slot(same_start, _, same_start, _), do: true
def is_in_slot(s_time, e_time, slot_s, slot_e) when s_time < slot_s and e_time > slot_s, do: true
def is_in_slot(s_time, e_time, slot_s, slot_e) when s_time > slot_s and s_time < slot_e, do: true
def is_in_slot(_, _, _, _), do: false
end
> bookings = [
[
{~N[2019-06-13 10:00:00], ~N[2019-06-13 17:00:00]},
{~N[2019-06-13 17:00:00], ~N[2019-06-13 21:00:00]}
],
[{~N[2019-06-20 09:00:00], ~N[2019-06-20 21:00:00]}],
[
{~N[2019-06-22 13:00:00], ~N[2019-06-22 17:00:00]},
{~N[2019-06-22 17:00:00], ~N[2019-06-22 21:00:00]}
]
]
> BookingsTest.booked_days(bookings)
[~D[2019-06-13], ~D[2019-06-20]]
The idea is: reduce over the bookings list, accumulating into an empty list; each element is the list of bookings for one day.
For each day, reduce over its bookings, starting from the list of all available time slots.
Inside that, reduce over the remaining time slots into an empty list.
For each slot, check whether the current booking overlaps it. If it does, return the inner accumulator as is; if it doesn't, add the slot back to the accumulator.
At the end of the day's reduction, an empty list means no slot remains available for that day, so the day is added to the outer accumulator, which becomes the list of fully booked days.
Finally, reduce over the results once more to reverse them and, in the process, map each element to its Date instead of the day's list of bookings.
Assuming you have a typo in the second booking, and it does not start almost a week after its own end, the solution might be much simpler than careful reducing.
The slots are filled when the booking starts and ends exactly at:
{slot_start, slot_end} =
time_slots
|> Enum.flat_map(&Tuple.to_list/1)
|> Enum.min_max()
#⇒ {~T[09:00:00], ~T[21:00:00]}
Which makes the check almost trivial:
Enum.filter(bookings, fn booking ->
{s, e} = {Enum.map(booking, &elem(&1, 0)), Enum.map(booking, &elem(&1, 1))}
with {[s], [e]} <- {s -- e, e -- s} do
same_date =
[s, e]
|> Enum.map(&NaiveDateTime.to_date/1)
|> Enum.reduce(&==/2)
full = Enum.map([s, e], &NaiveDateTime.to_time/1)
same_date and full == [slot_start, slot_end]
else
_ -> false
end
end)
Note that end cannot be used as a variable name in Elixir (it is a reserved word), hence slot_start/slot_end. The else clause of with makes sure that anything unexpected is filtered out.

Time loop for WooCommerce checkout select option

I need your help please.
I have time picker on my WordPress site, it looks like this:
woocommerce_form_field( 'time', array(
'type' => 'select',
'label' => __('Delivery Time', 'woocommerce'),
'placeholder' => _x('', 'placeholder', 'woocommerce'),
'required' => 'true',
'options' => array(
'9 AM - 10 AM' => __('9 AM - 10 AM', 'woocommerce' ),
'10 AM - 11 AM' => __('10 AM - 11 AM', 'woocommerce' ),
'11 AM - 12 PM' => __('11 AM - 12 PM', 'woocommerce' ),
'12 PM - 1 PM' => __('12 PM - 1 PM', 'woocommerce' ),
'1 PM - 2 PM' => __('1 PM - 2 PM', 'woocommerce' ),
'2 PM - 3 PM' => __('2 PM - 3 PM', 'woocommerce' ),
'3 PM - 4 PM' => __('3 PM - 4 PM', 'woocommerce' ),
'4 PM - 5 PM' => __('4 PM - 5 PM', 'woocommerce' ),
'5 PM - 6 PM' => __('5 PM - 6 PM', 'woocommerce' ),
'6 PM - 7 PM' => __('6 PM - 7 PM', 'woocommerce' ),
'7 PM - 8 PM' => __('7 PM - 8 PM', 'woocommerce' ),
'8 PM - 9 PM' => __('8 PM - 9 PM', 'woocommerce' ),
'9 PM - 10 PM' => __('9 PM - 10 PM', 'woocommerce' ),
'10 PM - 11 PM' => __('10 PM - 11 PM', 'woocommerce' ),
'11 PM - 12 AM' => __('11 PM - 12 AM', 'woocommerce' )
),
), $checkout->get_value( 'time' ));
However, I want the options to be based on some rules:
1. If the client chooses today's date on the date picker, no options will be available.
2. If the client chooses tomorrow's date, it will only show options later than the current time + 24 hours.
3. If the client chooses a date after tomorrow, all options will be available.
Please, I need your help.
Thank you.
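The three rules can be sketched independently of WooCommerce first. A hypothetical Python model of the option filtering (function names and the time handling are assumptions, not part of the WooCommerce API; the PHP field callback would apply the same filtering when building the options array):

```python
from datetime import date, datetime, timedelta

def hour_label(h):
    """12-hour label such as '9 AM' for an hour in 0..24 (24 wraps to 12 AM)."""
    suffix = "AM" if h % 24 < 12 else "PM"
    return f"{h % 12 or 12} {suffix}"

def delivery_options(chosen, now):
    """Slots '9 AM - 10 AM' ... '11 PM - 12 AM', filtered by the three rules."""
    slots = [(h, f"{hour_label(h)} - {hour_label(h + 1)}") for h in range(9, 24)]
    if chosen <= now.date():
        return []  # rule 1: today (or earlier) -> no options
    if chosen == now.date() + timedelta(days=1):
        cutoff = now + timedelta(hours=24)  # rule 2: tomorrow -> current time + 24h
        return [label for h, label in slots
                if datetime(chosen.year, chosen.month, chosen.day, h) >= cutoff]
    return [label for _, label in slots]  # rule 3: later dates -> all options
```

In the checkout field, the date chosen in the date picker would be passed in as `chosen` and the server time as `now`, and the returned labels would become the select's 'options' array.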