Is there a way to make the console use the data imported into the graph database?
I tried this address:
http://console.neo4j.org/?init=http://localhost:7474/db/data/cypher
while running the server, but I can't get any data to show up when I run basic queries (MATCH n RETURN n LIMIT 25).
The Toggle Viz button doesn't produce anything either.
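One thing worth double-checking, in case the console is backed by a recent Neo4j: newer Cypher versions require parentheses around node patterns, so the bare form above would be rejected and the query would need to be:

MATCH (n) RETURN n LIMIT 25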
I am working on a dashboard where, in the backend, Kusto queries run and plot a graph on the dashboard based on the results.
I am trying to print a custom message like:
| extend CustomColumn = iff(isempty(expectedExpiration), "Expiration data is not available for this ", expectedExpiration)
I tried the isempty, isnull, and isnan functions as well, but I am not getting this custom message as output.
Can you help me find what is going wrong here, or what I am missing?
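One thing worth checking (an assumption, since the column's type isn't shown): iff() requires both branches to have the same type, so if expectedExpiration is a datetime, mixing it with a string literal will not work. Casting both branches to string sidesteps that:

| extend CustomColumn = iff(isempty(tostring(expectedExpiration)),
                            "Expiration data is not available for this ",
                            tostring(expectedExpiration))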
I am trying to kick off a Google Cloud Function when two tables, ga_sessions and events, have been successfully created in BigQuery (these tables can be created at any time within a gap of 3-4 hours).
I have written the following Stackdriver log sinks/log routers, to which a Pub/Sub topic is subscribed (which in turn kicks off the Google Cloud Function). However, it is not working. If I use a sink/router individually for ga_sessions or events it works fine, but when I combine them together it doesn't work.
So my question is: how do I take two different events from Stackdriver logging, combine them together, and pass them to the Pub/Sub topic?
Filter for the events table:

protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
protoPayload.authenticationInfo.principalEmail="firebase-measurement@system.gserviceaccount.com"
protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.writeDisposition:"WRITE_TRUNCATE"
protoPayload.serviceData.jobCompletedEvent.job.jobStatus.state:"DONE"
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events_intraday"

Filter for the ga_sessions table:

protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
protoPayload.authenticationInfo.principalEmail="analytics-processing-dev@system.gserviceaccount.com"
protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions"
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions_intraday"
Thanks in advance for your help/guidance.
The trick here is to create a metric that actually shows 1 only when both conditions are met.
Try creating a new logs-based metric and switch to the "query editor". There you can create your own metric using MQL.
To be able to create a single metric from two "metrics", you need to use something like this:
{ fetch gce_instance :: compute.googleapis.com/instance/cpu/utilization ;
  fetch gce_instance :: compute.googleapis.com/instance/cpu/reserved_cores }
| join | div
And here's some useful info on how to create an alerting policy using MQL.
The code for the alerting policy can look like this:
{ fetch gce_instance :: compute.googleapis.com/instance/cpu/utilization ;
  fetch gce_instance :: compute.googleapis.com/instance/cpu/reserved_cores }
| join | div
| condition val() > 1
This is just an example to demonstrate that it is very likely possible to create a metric that monitors the creation of BigQuery tables, but you will have to test it yourself.
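As for combining the two filters into a single sink: the Logging query language does support OR with parentheses, so a sketch like the one below (assembled from the question's two filters) should route both completion events through one sink. Note, though, that it fires when either job finishes, so the Cloud Function would still need to check that both tables now exist before proceeding:

protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
(
  protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events" OR
  protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions"
)
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events_intraday"
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions_intraday"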
At work we use Shiny to develop management dashboards. The information source used to be flat files extracted from the database, but now we are trying to develop a "live" dashboard that lets the user query the database from within the dashboard. The dashboard is reached via a URL and no login is required to view it: just point the browser at it and the dashboard loads.
I've got it working fine except for one problem. If two users use the dashboard at the same time, running the same queries and creating the same tables, the results can become mixed: one user will see the results of another.
Queries are run using an actionButton that calls a function such as:
dataTbl <- function(<select criteria>) {
  sqlQuery(connection, "select * from ... ")
}
How can each user have their own unique session so that regardless of the number of users at any one time, each is separate and independent of the others?
You can do string search-and-replace to build the query. I would also suggest using the promises package to do the querying: https://rstudio.github.io/promises/articles/shiny.html
dataTbl <- function(table, products) {
  # Substitute the placeholders into the query template.
  # Note: gsub() gives no protection against SQL injection, so only use
  # this with trusted input (or switch to parameterized queries).
  qr <- "select * from TABLE where product = PRODUCTS"
  qr <- gsub("TABLE", table, qr)
  qr <- gsub("PRODUCTS", products, qr)
  sqlQuery(connection, qr)
}
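A minimal sketch of the promises approach from the linked article, assuming UI inputs named table, products, and run exist, and that dataTbl() opens its own database connection (connections generally cannot be shared across processes):

library(shiny)
library(promises)
library(future)
plan(multisession)

server <- function(input, output, session) {
  # server() runs once per browser session, so state created here is
  # already private to each user.
  result <- eventReactive(input$run, {
    tbl <- input$table       # read reactive inputs before entering the future
    prods <- input$products
    future_promise({
      dataTbl(tbl, prods)    # the query runs in a separate worker process
    })
  })
  output$tbl <- renderTable(result())
}

With this, one user's long-running query no longer blocks the other users' sessions.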
Below is a picture of the Firebase database tree. I have items a, b, c, and I want totalresult = a + b + c.
My requirement: as the value of a, b, or c gets updated, the change should automatically be reflected in the totalresult value.
Is there a way to set up Firebase to do this automatically, instead of running a piece of code every time to add these values and update Firebase?
I am able to run a piece of code that adds these and updates the value in totalresult, but I have to run it manually every time, which is not an ideal solution.
There isn't a built-in way to do this in the Firebase Realtime Database.
That said, while you still have to write code, you can write a Firebase Cloud Function that triggers on updates to those fields and then applies the update to totalresult. This will be automatic instead of manual, as the trigger fires for every matching event on the database.
The documentation describes how to create such a trigger (probably using the onWrite event); a sketch follows the list below.
Of course, there are a few things to be aware of:
There will be a period of time, while the function is running, during which the total is not yet updated. In other words, you should be tolerant of inconsistencies. (You will likely also want to do the actual write to the total inside a transaction.)
You need to be careful not to run the function (or to exit early) when totalresult itself is being updated, or you could get into an infinite loop of function invocations (it's best to keep the result node elsewhere in your tree).
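A minimal sketch of such a trigger, using the first-generation Cloud Functions for Firebase JavaScript API; the paths /items/{key} and /totalresult are assumptions based on the question:

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

// Fires for writes under /items (i.e. a, b, c) but not for /totalresult,
// which avoids the infinite-loop problem described above.
exports.updateTotal = functions.database
  .ref("/items/{key}")
  .onWrite(async (change, context) => {
    const snap = await admin.database().ref("/items").once("value");
    let total = 0;
    snap.forEach((child) => {
      total += Number(child.val()) || 0;
    });
    // Write the sum inside a transaction, as suggested above.
    return admin.database().ref("/totalresult").transaction(() => total);
  });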
I have models in SQL that update the data every day automatically. Info about the models is stored in a table, and one of its columns, called "error", holds zeros that change to 1 if the performance of the model falls below a certain threshold. My question is whether it is possible to get notified when an entry in the "error" column becomes 1.
The models I mentioned are R scripts that I run in SQL. The scripts predict whether a customer makes a purchase or not.
I'm using Microsoft SQL Server.
To get a more immediate notification than an email, you can also use a tool like Notify17, which sends a push notification to your browser/mobile phone (it is very easy to miss emails).
There is a basic R recipe for it, which boils down to:
notify17 <- function(rawAPIKey, title, content = '') {
  hookURL <- paste0("https://hook.notify17.net/api/raw/", rawAPIKey)
  query <- list(title = title, content = content)
  resp <- httr::POST(hookURL, body = query, encode = "form")
  print(httr::content(resp))
}

# Usage:
notify17('RAW_API_KEY', "Model training finished")
All you need to do is get a raw API key from the dashboard and use it in your script.
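In this setup the call could go at the end of the R script that SQL Server runs. A sketch, where the table name models and the use of RODBC-style sqlQuery() are assumptions based on the question:

# Hypothetical check at the end of the daily scoring script:
failed <- sqlQuery(connection, "select * from models where error = 1")
if (nrow(failed) > 0) {
  notify17('RAW_API_KEY',
           "Model performance alert",
           paste(nrow(failed), "model(s) fell below the threshold"))
}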
I used an R script to send the email. This question explains how to do it:
how do you send email from R
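For instance, with the mailR package (all addresses and the SMTP host below are placeholders):

library(mailR)

send.mail(
  from = "alerts@example.com",
  to = "me@example.com",
  subject = "Model error flag set",
  body = "At least one model's error column changed to 1.",
  smtp = list(host.name = "smtp.example.com", port = 25),
  authenticate = FALSE,
  send = TRUE
)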