I'm trying to query for documents older than 900 seconds, but I'm stuck. This is what I have tried so far:
r.table("bar")
.filter(r.expr(r.now() - 900).gt(r.row("updated_at")))
and
r.table("bar")
.filter(r.row("updated_at")
.during(r.time(1970, 1, 1, 'Z'), r.row("updated_at") - 900))
both throw TypeError: Illegal non-finite number 'NaN' for some reason. The following does not throw, but returns no results:
r.table("bar")
.filter(900 < r.now() - r.row("updated_at"))
updated_at is a secondary index and holds RqlDateTime objects; the RethinkDB version is 2.3.0.
You need to write r.now().sub(900) instead of r.now() - 900, because JavaScript doesn't allow you to overload binary operators.
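For reference, a minimal sketch of the corrected filter (not tested against your data; same table and field names as in the question):
r.table("bar")
  .filter(r.now().sub(900).gt(r.row("updated_at")))
And since updated_at is a secondary index, a between query can use that index instead of scanning the whole table (again just a sketch):
r.table("bar")
  .between(r.minval, r.now().sub(900), {index: "updated_at"})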
I'm playing around with neo4j and cypher and trying to find out why my query is slow.
Here is my original query that is marked as deprecated:
match (h:Hierarchy)-[r:PARENT_OF*1..4]->(s:SoldTo)
where h.id='0100001709'
and all(x in r
where x.to>=date('2022-02-15')
and x.from <= date('2022-02-15')
and x.hvkorg='S000'
and x.hvtweg='D1'
)
and s.loevm=""
and s.aufsd=""
and s.cassd=""
and s.faksd=""
and s.lifsd=""
and s.sperr=""
and s.sperz=""
return distinct h.id,s.id
This one works fine and returns a result quite quickly: Started streaming 60 records after 1 ms and completed after 17 ms. However, Neo4j gives the warning below:
This feature is deprecated and will be removed in future versions.
Binding relationships to a list in a variable length pattern is deprecated. (Binding a variable length relationship pattern to a variable ('r') is deprecated and will be unsupported in a future version. The recommended way is to bind the whole path to a variable, then extract the relationships:
MATCH p = (...)-[...]-(...)
WITH *, relationships(p) AS r)
Now, I've tried this:
match p=(h:Hierarchy)-[:PARENT_OF*1..4]->(s:SoldTo)
with *, relationships(p) as r
where h.id='0100001709'
and all(x in r
where x.to>=date('2022-02-15')
and x.from <= date('2022-02-15')
and x.hvkorg='S000'
and x.hvtweg='D1'
)
and s.loevm=""
and s.aufsd=""
and s.cassd=""
and s.faksd=""
and s.lifsd=""
and s.sperr=""
and s.sperz=""
return distinct h.id,s.id
But this is very slow: Started streaming 60 records in less than 1 ms and completed after 36931 ms.
17 ms vs 36931 ms
Would any of you have any recommendation to speed things up using relationships()?
The problem is the placement of the filter in your query. Instead of starting with a scan on the first line, move the where clause on h.id (from line #3 up to line #2, directly after the match). This gives the planner a starting node at h and will make your query much faster.
Original:
match p=(h:Hierarchy)-[:PARENT_OF*1..4]->(s:SoldTo)
with *, relationships(p) as r
where h.id='0100001709'
Updated:
match p=(h:Hierarchy)-[:PARENT_OF*1..4]->(s:SoldTo)
where h.id='0100001709'
with h, s, relationships(p) as r
...
...
return distinct h.id,s.id
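Putting it together, the full rewritten query would look roughly like this (just the original predicates reassembled around the moved where clause; not re-benchmarked):
match p=(h:Hierarchy)-[:PARENT_OF*1..4]->(s:SoldTo)
where h.id='0100001709'
with h, s, relationships(p) as r
where all(x in r
    where x.to>=date('2022-02-15')
    and x.from <= date('2022-02-15')
    and x.hvkorg='S000'
    and x.hvtweg='D1'
)
and s.loevm=""
and s.aufsd=""
and s.cassd=""
and s.faksd=""
and s.lifsd=""
and s.sperr=""
and s.sperz=""
return distinct h.id,s.id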
We have the variables:
"Unique User"
"Version" (Plus, Light in a ratio 79:21 from all Unique User)
"total Events"
"Eventkatagories".
And the following scenario:
We can't get exact numbers for how many users are Plus or Light users.
But we do know how many events are triggered per version (Plus/Light).
Now we want to know the relative frequency of triggered events, grouped by Version and event category.
So in a pivot table the row dimension = Version and the column dimension = event category.
The measurement should be the relative frequency.
So the simple custom calculated field should be "total events / users"... But remember, we can't get the absolute number of users by Version; we only know the ratio (roughly 80:20).
So I built another calculated field called UsersbyVersion with the following statement:
CASE
WHEN (Version = "light") THEN SUM(User) * 0.21
WHEN (Version = "Plus") THEN SUM(User) * 0.79
END
But this formula gives the following error:
Invalid formula - Invalid input expression. - Failed to parse CASE
statement
If I use absolute numbers in the statement, it works.
Example:
CASE
WHEN (Version = "Normal") THEN 5000
WHEN (Version = "Plus") THEN 25000
END
But we need the statement to use "User * ratio"... the ratio won't change much, but the user count depends on the date range we set on the Data Studio report.
So I guess the problem is that the statement won't work with a combination of metrics and dimensions.
I already tried putting "User * 0.79" and "User * 0.21" into custom metrics, but that doesn't work either.
Is there a way to combine dimensions and metrics in a calculated field as a measurement?
Thx for your help
Create 2 metrics -
users * 0.2 (let's call this UsersP2)
users * 0.8 (let's call this UsersP8)
Now this should work
CASE
WHEN (Version = "light") THEN UserP2
WHEN (Version = "Plus") THEN UserP8
END
[Win 10; R 3.4.3; RStudio 1.1.383; Rfacebook 0.6.15]
Hi!
I would like to ask two questions concerning Rfacebook's getPost function:
Even though I have tried all possible combinations of the logical values for the arguments "comments", "reactions" and "likes", the best result I could get so far was a list of 3 components for each post ("post", "comments", and "likes") - that is, without the "reactions" component. Nevertheless, according to the rdocumentation page for getPost, "getPost returns a list with up to four components: post, likes, comments, and reactions".
Besides the (somewhat strange) fact that, according to the same documentation, the argument "reactions" should be FALSE (the default) in order to retrieve info on the total reactions to the post(s), I noticed a seemingly odd result: if I simultaneously set "reactions" and "likes" to either TRUE or FALSE, R returns neither an error nor a warning message. I find this a bit odd because likes = !reactions by its very definition.
Here is the code:
#packageVersion("Rfacebook")
#[1] '0.6.15'
## temporary access token
fb_oauth <- "user access token"
qtd <- 5000
#pag_loop$id[1]
#[1] "242862559586_10156144461009587"
# arguments with default value (reactions = F, likes = T, comments = T)
x <- getPost(pag_loop$id[1], token = fb_oauth, n = qtd)
str(x)
# retrieves a list of 3: posts, likes, comments
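For example, even requesting reactions explicitly (argument names as documented for Rfacebook 0.6.15; same placeholder token and post id as above) still gives only the same three components:
y <- getPost(pag_loop$id[1], token = fb_oauth, n = qtd,
             comments = TRUE, likes = TRUE, reactions = TRUE)
str(y)
# still a list of 3: post, likes, comments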
Can someone please explain to me why I don't get the reaction's component?
Best,
Luana
Man, this is caused by the new version of the Facebook API. It worked fine up to v2.10 of the API; from v2.11 onward it no longer works well.
I also cannot capture the reactions, and the user's name comes back null. I am on Windows 10 with R 3.4.2. Could it be the R version? Please, if you manage to resolve this issue, send me the answer by email.
I find it confusing what the difference is between the version-valid-for number (offset 92) and the file change counter (offset 96) in the database file header.
The entries at offsets 92 and 96 were added in a later version of the SQLite library.
When an older version modifies the file, it will change the change counter (offset 24), but not adjust the version-valid-for number or the write library version number. So the library version number is no longer correct, because a different version last wrote to the file.
The version-valid-for number allows a new library to detect this case: if the change counter and the version-valid-for number do not match, then the write library version number is outdated, and must be ignored.
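As an illustration, a reader could apply that rule by comparing the two fields directly; here is a minimal Python sketch (field offsets and big-endian encoding per the SQLite file-format documentation; "test.db" is just a placeholder path):
import struct

with open("test.db", "rb") as f:   # placeholder path
    header = f.read(100)           # the SQLite header is the first 100 bytes

change_counter    = struct.unpack(">I", header[24:28])[0]   # file change counter (offset 24)
version_valid_for = struct.unpack(">I", header[92:96])[0]   # version-valid-for number (offset 92)
write_version     = struct.unpack(">I", header[96:100])[0]  # library version of last writer (offset 96)

# If the counters do not match, the write library version number is outdated
# and must be ignored, as described above.
if change_counter == version_valid_for:
    print("last writer library version:", write_version)
else:
    print("write library version number is stale")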
I recently updated Pandas and found this strange behaviour which broke some of my existing code.
I was using a column of datetime.date objects as the second level of a two-level MultiIndex.
However, when setting the index with the latest version, the datetime.date objects are converted to Timestamp objects with 00:00:00 as the time component:
>>> pd.__version__
'0.15.1'
>>> df
          0  ID        date
0  0.486567  10  2014-11-12
1  0.214374  20  2014-11-13
>>> df.date[0]
datetime.date(2014, 11, 12)
>>> df.set_index(['ID', 'date']).index[0]
(10, Timestamp('2014-11-12 00:00:00'))
This doesn't happen with version 0.14 or older, nor does it happen for a single column of dates set as the index; it only happens for MultiIndexes.
There is a hack to get around it: set the dates as a single-level index, append the other level, and then swap the levels:
>>> df.set_index('date').set_index('ID', append=True).index.swaplevel(0, 1)[0]
(10, datetime.date(2014, 11, 12))
This seems strange, and I wondered whether it was intentional and whether there is a proper way to use datetime.date objects in the new version.
see here
There was an inconsistency in how date-likes (datetime.date, datetime.datetime, Timestamp) were inferred in a MultiIndex level. This led to the creation of an object-dtyped Index rather than a DatetimeIndex. datetime.date objects are second-class objects in pandas, as they are not efficiently represented.
If you really really want to create this, you can do this:
In [8]: pd.MultiIndex.from_arrays([Index([datetime.date(2013,1,1)]),['a']])
Out[8]:
MultiIndex(levels=[[2013-01-01], [u'a']],
labels=[[0], [0]])
We came across the same issue, and it is still a problem in 0.16. We consider it a bug, as it is inconsistent with the behaviour of creating a single index and only occurs with a MultiIndex. Why silently change the type if we choose to have it as datetime.date? set_index should just set the index without changing things.
We don't need the time component. If we wanted to speed things up and be more efficient by using a Timestamp, we should be able to choose that.
It breaks all the code where data is converted back and forth between columns and index as the table is manipulated (pivoting etc.), because of the silent type conversion. It also breaks interaction with other applications and code we have no control over.
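For what it's worth, here is a small sketch of one way to live with the conversion, recovering plain datetime.date values from the coerced level (column and index names as in the original question; exact behaviour may differ between pandas versions):
import datetime
import pandas as pd

df = pd.DataFrame({0: [0.486567, 0.214374],
                   "ID": [10, 20],
                   "date": [datetime.date(2014, 11, 12),
                            datetime.date(2014, 11, 13)]})

idx = df.set_index(["ID", "date"]).index
dates = idx.get_level_values("date")
# If the level was coerced to Timestamps, .date converts back to datetime.date objects
if isinstance(dates, pd.DatetimeIndex):
    dates = dates.date
print(dates[0])   # datetime.date(2014, 11, 12)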