I have some data stored in Redis which I am accessing through R using the rredis package. Is it possible to find the time at which a piece of data stored in Redis (in my particular case a hash) was last changed, from within the R terminal?
Thanks
No.
Redis does not maintain the timestamp of a key's last update. It is possible, however, to implement this type of behavior "manually" by adding this logic to your application's code.
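For illustration, a minimal sketch of that manual approach from R with rredis, assuming a hash named "myhash" and a data field named "price" (both names are just placeholders): keep a companion "last_modified" field that your code updates every time it writes to the hash, then read that field back when you need the time of the last change. rredis serializes R objects by default, so the POSIXct timestamp round-trips as-is.

    library(rredis)
    redisConnect()  # defaults to localhost:6379

    # On every write, also record the update time in a companion field.
    redisHSet("myhash", "price", 42)                  # the actual data
    redisHSet("myhash", "last_modified", Sys.time())  # timestamp maintained by your code

    # Later, from the R terminal, read the time of the last change:
    last_changed <- redisHGet("myhash", "last_modified")
    print(last_changed)

    redisClose()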
I want to issue something, e.g. a new option. Inside the flow where I'm issuing this new option, I need to get information from two separate oracles that provide data for the output state.
How should I do this? Should I have one output and three commands: a command with data from oracle 1, a command with data from oracle 2, and then the issue command? Or can this be done with one command?
It's entirely up to you - your command can contain whatever data you want, so in theory you could do the whole thing with one command.
Having said that, I would probably split it out into at least two commands for clarity and privacy. The privacy element is that you can build a filtered transaction for the oracle to sign that contains only the oracle command.
If you don't mind the two oracles seeing the data sent to each for signing, you could encapsulate the data in one command e.g.
class OracleCommand(val spotPrice: SpotPrice, val volatility: Volatility) : CommandData
where one oracle attests to spotPrice and the other to volatility.
However, you would find it hard to determine what part of the data they attested to since they will both sign over the entire filtered transaction.
Unless you know that the oracle's design lets it pick out exactly the data it attests to, you're probably better off going with three separate commands.
I need to capture session-level details in a table. I have 150 workflows for which I need to maintain an audit table that will hold info like session start time, session end time, applied rows, rejected rows, etc.
I cannot use workflow variables with the Assignment task method, because I need a reusable solution and I have 150 workflows.
I tried using the Post-Session Success Command task with built-in variables such as $PM#TableName etc.
But there is no built-in variable to capture the session start and end times.
The last option I thought could help is extracting the stats from the session logs. Can anyone please explain how to achieve this?
Please let me know if there is any way to deal with this issue.
Thanks in advance
Here's the Framework to get this done:
http://powercenternotes.blogspot.com/2014/01/an-etl-framework-for-operational.html
Each session has a $SessionName.StartTime as well as a $SessionName.EndTime variable available.
It's a terrible idea to try and parse the logs. Feasible, but I'd avoid it.
All the information is already available in the Informatica repository, for every workflow and every task of the workflow. That should be used instead.
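As a hedged sketch of that approach (the view and column names below come from the repository MX views and should be verified against your PowerCenter version; the DSN name is made up), you could pull the audit data into R with a plain query against REP_SESS_LOG:

    library(DBI)

    # DSN, view and column names are assumptions -- check your repository's MX views.
    con <- dbConnect(odbc::odbc(), dsn = "INFA_REPO")

    audit <- dbGetQuery(con, "
      SELECT session_name,
             actual_start,        -- session start time
             session_timestamp,   -- session end time
             successful_rows,     -- applied rows
             failed_rows          -- rejected rows
      FROM   rep_sess_log")

    dbDisconnect(con)
    head(audit)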
I am trying to determine whether I can keep data in memory with RStudio to be used by multiple sessions, or at least have the session preserved. Searching for information about the existence or nonexistence of this feature has proven to be challenging.
The test is this:
In a session with RStudio create a variable and assign a value to it.
In another session run a script that refers to that variable.
If the variable has been assigned a value then the script will work; otherwise it fails with "Error: object 'variable' not found".
Is it possible to make a cross-session variable in RStudio Server that will work with this procedure without engaging file I/O? Or is it simply unavailable as a server function?
Unfortunately this is not possible because of the way R itself is designed.
Each R session has its own private memory space which contains values and data (the global environment for that session, etc.).
In order to create a cross-session variable, the R sessions would have to share memory, and they would also have to coordinate access to that memory, so that (for instance) if one session was changing the value of the variable, the other session could not read the value until the first session was done changing it. This sort of coordination mechanism just doesn't exist in R.
If you want to do this, there are a couple of workarounds:
Keep your data in a place that both sessions can read and write to safely, for instance in a database, or
As you mentioned, engaging file I/O is an option, and this isn't too hard: use a .RData file; when you wish to publish data to other sessions, write the private variables to an R data file (using e.g. save), and when the other session wishes to synchronize, it can load the data into its own private space (using e.g. load); a short sketch follows below.
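A minimal sketch of that save/load workaround, assuming both sessions can see a shared path such as ~/shared_state.RData (the path is just an example):

    # Session 1: publish a variable for other sessions to pick up.
    shared_value <- 42
    save(shared_value, file = "~/shared_state.RData")

    # Session 2: synchronize by loading the file into its own private environment.
    load("~/shared_state.RData")
    print(shared_value)  # 42, provided session 1 has already written the file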
Is there any way to query a SQLite database for basic meta data such as:
Last date/time updated
Hash of database to indicate "state"
I am just looking for a simple, infrastructural way to have a script evaluate different databases and take a reasonable point of view on whether they are the same "state" as other databases in a different environment (PROD and DEV for instance).
In my experience, if no update, new record, or any other change is made to the SQLite database file, the last-modified time of the file doesn't change. So the last-modified time should suffice as the time of the last change made to the database.
If two database files with the same state are only accessed for reading, their modified times stay the same.
Similarly, you can use the file sizes for comparison.
You can calculate a hash over the whole file. If you consider the same data in the database to be the same "state" regardless of any differences in the past, then you may instead want a hash of all the records in the database, which is probably not simple.
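A small sketch of the file-level comparison in R, assuming two database files named prod.db and dev.db (placeholder names), using the modification time, the size, and an MD5 hash of the whole file:

    files <- c("prod.db", "dev.db")

    info <- data.frame(
      file     = files,
      modified = file.mtime(files),            # last modification time
      size     = file.size(files),             # size in bytes
      md5      = unname(tools::md5sum(files))  # hash of the whole file
    )
    print(info)

    # An identical whole-file hash means byte-identical files, hence the same "state".
    identical(info$md5[[1]], info$md5[[2]])

Keep in mind that two logically identical databases can still differ byte-for-byte (page layout, free pages), so an equal file hash is sufficient but not necessary for equal content.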
I am collecting data every second and storing it in a ":memory:" database. The inserts into this database happen inside a transaction.
Every time a request is sent to the server, the server reads data from the first in-memory database, does some calculations, stores the result in a second database, and sends it back to the client. For this, I am creating another ":memory:" database to store the aggregated information from the first one. I cannot use the same database because I need to do some large calculations to get the aggregated result. This cannot be done inside the transaction (because if one calculation takes 5 seconds, I will lose 4 seconds of data). I also cannot create the table in the same database, because I would not be able to write the aggregated data while the original data is being collected and inserted (that happens inside a transaction, every second).
-- Sometimes I want to retrieve data from both databases. How can I link these two in-memory databases? Using the ATTACH DATABASE statement, I can attach the second database to the first one. But the problem is: the next time a request comes in, how will I check whether the second database exists or not?
-- Suppose I attach the second in-memory database to the first one. Will the second database be locked when we write data to the first one?
-- Is there any other way to store this aggregated data?
As far as I understand your idea, I don't think you need two databases at all. I suspect you are misinterpreting the idea of transactions in SQL.
If you begin a transaction, other processes are still allowed to read data. And if you are only reading data, you probably don't need a database lock at all.
A possible workflow could look like the following:
Insert some data into the database (use a transaction just for the insertion process).
Perform the heavy calculations on the database (but do not use a transaction, otherwise it will prevent other processes from inserting any data into your database). Even if this step involves really heavy computation, you can still insert and read data from another process, as SELECT statements will not lock your database.
Write results to the database (again, by using a transaction)
Just make sure that heavy calculations are not performed within a transaction.
If you want a more detailed description of this solution, look at the documentation about the file locking behaviour of sqlite3: http://www.sqlite.org/lockingv3.html
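A rough sketch of that workflow with R and RSQLite, kept to a single connection for brevity (table and column names are made up; in a real setup the collector and the aggregation step would use their own connections):

    library(DBI)

    con <- dbConnect(RSQLite::SQLite(), ":memory:")
    dbExecute(con, "CREATE TABLE samples (ts REAL, value REAL)")
    dbExecute(con, "CREATE TABLE aggregates (ts REAL, avg_value REAL)")

    # 1. Insert the per-second data inside a short transaction.
    dbWithTransaction(con, {
      dbExecute(con, "INSERT INTO samples VALUES (?, ?)",
                params = list(as.numeric(Sys.time()), runif(1)))
    })

    # 2. The heavy aggregation is a plain SELECT, outside any explicit transaction.
    agg <- dbGetQuery(con, "SELECT AVG(value) AS avg_value FROM samples")

    # 3. Write the result back inside another short transaction.
    dbWithTransaction(con, {
      dbExecute(con, "INSERT INTO aggregates VALUES (?, ?)",
                params = list(as.numeric(Sys.time()), agg$avg_value))
    })

    dbDisconnect(con)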