Get time from remote time server in R

How can the current timestamp be queried from a remote server (e.g., NIST, a Windows time server) using R?
I'm guessing it is possible to obtain the time using the NIST Internet Time Service (https://www.nist.gov/pml/time-and-frequency-division/services/internet-time-service-its), but I'm unsure how this would be done.
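One possible approach (a minimal sketch, not an official answer): NIST's ITS servers expose the DAYTIME protocol (RFC 867) on TCP port 13, which R can read with a plain socket. The host name and the fixed-width parsing offsets below are assumptions based on the documented DAYTIME reply format:

# Sketch: query NIST's DAYTIME service and parse the UTC timestamp
con <- socketConnection(host = "time.nist.gov", port = 13, blocking = TRUE, open = "r", timeout = 5)
reply <- readLines(con, warn = FALSE)
close(con)
# A NIST DAYTIME reply looks like: "60385 24-03-16 14:22:05 00 0 0 886.7 UTC(NIST) *"
line <- reply[nzchar(reply)][1]
as.POSIXct(substr(line, 7, 23), format = "%y-%m-%d %H:%M:%S", tz = "UTC")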

Related

InfluxDB and Grafana on server without datetime sync

I'm running an app on a Raspberry Pi. The Pi is not connected to the internet and has no RTC, so its datetime is essentially arbitrary.
The app pushes data to InfluxDB, so the data are stored with the machine's timestamp, which is an arbitrary one.
Grafana is connected to InfluxDB to read the data.
Problem:
Data are stored at the server's datetime, for example 2020-01-01 00:00:00.
Grafana uses the browser's datetime, which is synced with internet time, for example 2020-01-01 14:30:12.
My device is not connected to the internet, but all apps (data producer app / InfluxDB / Grafana) communicate over localhost. The clock on my device is set arbitrarily; presumably it resumes from the last known datetime?
So data are stored at the device's datetime, but Grafana takes the datetime of the browser (my phone / my PC), which has been synced with UTC. When I try to visualize the data in Grafana, they are absent because they are stored at a datetime in the past.
Question:
How do I tell Grafana to use the server's datetime and not the browser's?
My problem is not the common one where data are stored at UTC and Grafana displays at UTC+1.

How can I perform normal R functions for Hadoop remotely on SQL Server?

How can I run normal R code on a SQL Server without using the Microsoft rx functions? I think the compute context "RxInSqlServer" isn't the right one, but I couldn't find good information about the other compute context options.
Is this possible with this statement?
rxSetComputeContext(ComputeContext)
Or can it only be used to run rx functions? Another option could be to set the server connection in RStudio or Visual Studio?
My problem is: I want to analyse data from Hadoop via an ODBC connection on the SQL Server, so I would like to use the performance of the remote SQL Server, not the data stored in SQL Server. And then I want to analyse the Hadoop data with sparklyr.
Summary: I want to use the performance of the remote server, not the SQL Server data. So RStudio should not run locally; the work should execute on, and use the memory of, the remote server.
Thanks!
The concept of a compute context in Microsoft R Server is, “Where will the computation be performed?”
When setting compute context, you are telling Microsoft R Server that computation will occur on either the local machine (with either “local” or “localpar” compute contexts), or, the script will be executed on a remote machine which has Microsoft R Server installed on it. Remote compute contexts are defined by creating a compute context object, and then setting the context to that object.
For SQL Server, you would create an RxInSqlServer() object, and then call rxSetComputeContext() on that object. For Hadoop, the object would be created via the RxHadoopMR() call.
In code, it would look something like:
# Argument values here are illustrative placeholders; see ?RxHadoopMR for the full list
CC <- RxHadoopMR(sshUsername = "hdfsuser", sshHostname = "hadoop-edge-node")
rxSetComputeContext(CC)
To see usage on defining a context, please see documentation (Enter "?RxHadoopMR" in the R Client, no quotes).
Any call to an "rx" function after this will be performed on the Hadoop cluster, with no data other than the results being transferred back to the client.
RxInSqlServer() would follow the same pattern.
Note: To perform any remote computation, Microsoft R Server must be installed on that machine.
If you wish to run a standard R function on a remote compute context, you must wrap that function in a call to rxExec(). rxExec() is designed as an interface to parallelize any Open Source R function and allow for its execution on a remote context. Please see documentation (enter "?rxExec" in the R Client, no quotes) for usage.
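As a rough illustration (the connection string and the toy function below are assumptions, not from the original answer):

# Hedged sketch: ordinary R code wrapped in rxExec() runs in the current remote context
conStr <- "Driver=SQL Server;Server=myServer;Database=myDb;Trusted_Connection=Yes"  # assumed
rxSetComputeContext(RxInSqlServer(connectionString = conStr))
res <- rxExec(function(n) summary(rnorm(n)), n = 1000)  # executes on the SQL Server machine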
For information on efficient parallelization, please see this blog: https://blogs.msdn.microsoft.com/microsoftrservertigerteam/2016/11/14/performance-optimization-when-using-rxexec-to-parallelize-algorithms/
You called out "without using the Microsoft rx functions", and I am interpreting this as "I would like to use Open Source R algorithms on data in SQL Server". With Microsoft R Server, you must use rxExec() as the interface to run Open Source R. If you want to use no rx functions at all, you will need to query the data to your local machine and then use Open Source R. To interface with a remote context using Microsoft R Server, the bare minimum is rxExec().
This is how you will be able to achieve the first part of your ask: "How can I run normal R code on a SQL Server without using the Microsoft rx functions? I think the compute context 'RxInSqlServer' isn't the right one?"
For your second ask, "My problem is: I want to analyse data from Hadoop via an ODBC connection on the SQL Server, so I would like to use the performance of the remote SQL Server, not the data stored in SQL Server. And then I want to analyse the Hadoop data with sparklyr."
First, I'd like to comment that with the release of Microsoft R Server 9.1, you can use sparklyr in-line with an MRS Spark connection, for some examples, please see this blog: https://blogs.msdn.microsoft.com/microsoftrservertigerteam/2017/04/19/new-features-in-9-1-microsoft-r-server-with-sparklyr-interoperability/
Secondly, what you are trying to do is very involved. I can think of two ways that this is possible.
One is, if you have SQL Server PolyBase, you can configure SQL Server to make a virtual table referencing data in Hadoop, similar to Hive. After you have referenced your Hadoop data in SQL Server, you would use an RxInSqlServer() compute context on these tables. This would analyse the data in SQL Server and return the results to the client.
Here is a detailed blog explaining an end-to-end setup on Cloudera and SQL Server: https://blogs.msdn.microsoft.com/microsoftrservertigerteam/2016/10/17/integrating-polybase-with-cloudera-using-active-directory-authentication/
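Once the external (virtual) tables exist, the in-database analysis might look roughly like this; the table name and connection string are placeholders for illustration:

conStr <- "Driver=SQL Server;Server=myServer;Database=myDb;Trusted_Connection=Yes"  # assumed
rxSetComputeContext(RxInSqlServer(connectionString = conStr))
hadoopData <- RxSqlServerData(table = "dbo.HadoopExternalTable", connectionString = conStr)  # assumed table
rxSummary(~., data = hadoopData)  # computed in SQL Server; results return to the client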
The second, which I would NOT recommend, is untested and hacky, and has the following prerequisites:
1) Your Hadoop cluster must have OpenSSH installed and configured
2) Your SQL Server Machine must have the ability to SSH into your Hadoop Cluster
3) You must be able to place an SSH Key on your SQL Server machine in a directory which the R Services process has the ability to access
And I need to add another disclaimer here: there is no guarantee of this working, and it likely will not work. The software was not designed to operate in this fashion.
You would then do the following:
On your client machine, you would define a custom function which contains the analysis that you wish to perform; this can be Open Source R functions, rx functions, or a mix.
In this custom function, before calling any other R or rx functions, you would define an RxHadoopMR compute context object which points to your cluster, referencing the SSH key in the directory on the SQL Server machine as if you were executing from that machine (in the same way that you would define the RxHadoopMR object if you were to do a remote Hadoop operation from your client machine).
Within this custom function, immediately after RxHadoopMR() is defined, you would call rxSetComputeContext() on your defined RxHadoopMR() object.
Still in this custom function, write the actual script which will operate on the data in Hadoop.
After this function is defined, you would define an RxInSqlServer() compute context object on the client machine.
You would set your compute context to RxInSqlServer()
Then you would call rxExec() with your custom function as an input.
What this will do is execute your custom function on the SQL Server machine, which would (hopefully) cause it to set its compute context to your Hadoop cluster and pull the data over SSH for analysis on the SQL Server machine, returning the results to the client.
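Put together, the shape of the approach is roughly the following untested sketch; every host name and path is a placeholder, and per the disclaimers above it may simply not work:

analyzeOnHadoop <- function() {
  # Defined inside the function so it is evaluated on the SQL Server machine
  hadoopCC <- RxHadoopMR(sshUsername = "hdfsuser", sshHostname = "hadoop-edge-node")  # assumed
  rxSetComputeContext(hadoopCC)
  rxSummary(~., data = RxTextData("/data/input.csv", fileSystem = RxHdfsFileSystem()))  # assumed path
}
rxSetComputeContext(RxInSqlServer(connectionString = conStr))  # conStr as defined earlier
rxExec(analyzeOnHadoop)  # the custom function executes on the SQL Server machine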
With that said, this is not how Microsoft R Server was designed to be used, and if you wish to optimize performance, please use Option One and configure PolyBase.

R web server that handles sessions

I am not sure if this is the right place to ask this question; if not, please point me to where it belongs.
I must build a multi-user, stateful (sessions; object persistence) web application that will use .NET in the backend and must connect to R in order to perform calculations on data that lives in a SQL Server 2016 DB. Basically, I need to connect an MS-based backend with R.
Everything is clear except for one problem: I need to find an R server that handles sessions. I know Shiny, but I can't use it (long story).
rApache and OpenCPU do not handle sessions.
Rserve for Windows is very limited (no parallel connections are supported; subsequent connections share the same namespace, and sessions are not supported, which is a consequence of the fact that parallel connections are not supported).
Finally, I have seen Rook (i.e. Run R/Rook as a web server on startup), but I can't find anywhere, even in the docs, whether it can deal with sessions. My question is: is there a non-stateless R web server, or does anyone know if Rook is stateless?
EDIT:
Apparently, this question has been around for a while: http://jeffreyhorner.tumblr.com/about#comment-789093732

PingFederate OpenToken Sample Application

I'm trying out the sample applications provided with the PingFederate .NET Integration Kit. I was able to make them work for the single-server setup (my machine served as both the IdP and the SP).
But when I tried setting up two machines as specified in this link:
https://documentation.pingidentity.com/display/NETIK/Deploying+the+Sample+Applications
A more realistic scenario is to deploy the applications on a separate IIS server machine
I was able to edit the Adapter Instance and the Default URL, but there's the problem of clock skew between servers:
Verify that your server clocks are synchronized. If they are not synchronized, you can account for this by adjusting the Not Before Tolerance value in the OpenToken adapter configuration, which is the amount of time (in seconds) to allow for clock skew between servers. The default and recommended value is 0.
I checked the possible values and the max is 3600 seconds.
Question: What if my servers have more than an hour of time difference between them? Is that setup still possible? (The servers are actually in different time zones.)
The OpenToken uses GMT, so timezones are taken out of the picture. As long as each server is set to the proper time, and the proper timezone for where it is, it should work just fine. For example, you can have serverA in New York City and serverB in Los Angeles. If serverA is set to Eastern Time, and serverB is set to Pacific Time, then the OpenToken will work: since it converts times to GMT, the times on the token will be the "same".
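To illustrate the same point in R (a hedged sketch, not part of the original answer): two local wall-clock times in different zones that name the same instant render identically once converted to GMT:

t_east <- as.POSIXct("2020-01-01 09:00:00", tz = "America/New_York")
t_west <- as.POSIXct("2020-01-01 06:00:00", tz = "America/Los_Angeles")
format(t_east, tz = "GMT")  # "2020-01-01 14:00:00"
format(t_west, tz = "GMT")  # "2020-01-01 14:00:00"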
Hope that makes sense - I need another cup of coffee this morning. :)

How to synchronize the time between two Win7 machines to get the same timestamp

I developed an MFC application in C++ which I use to capture data from some USB sensors in order to save the information in a TXT file. For each reading, I also save the timestamp using a Boost function.
I need to use my application with a second PC as well, since I have to acquire data twice, and I'd like to know the best method to synchronize the data between these two PCs.
I'm using the first PC to create a Wi-Fi network, and I'm able to connect the second PC to the first one, but I've been trying to use "net time" to set up the NTP service without success.
Is there a tutorial, or can someone explain to me how to synchronize the datetime between two Win7 PCs, please?
Perhaps this command? Run from the cmd prompt.
net time \\MasterPC /set /yes
where MasterPC is the network name of the computer you want to take the time from.
